Results 1 - 20 of 58
1.
J Healthc Manag ; 69(3): 219-230, 2024.
Article in English | MEDLINE | ID: mdl-38728547

ABSTRACT

GOAL: Boarding emergency department (ED) patients is associated with reductions in quality of care, patient safety and experience, and ED operational efficiency. However, ED boarding ultimately reflects inefficiencies in hospital capacity management. The ability of a hospital to accommodate variability in patient flow presumably affects its financial performance, but this relationship is not well studied. We investigated the relationship between ED boarding and hospital financial performance measures; our objective was to determine whether key financial measures of business performance were associated with limitations in patient progression efficiency, as evidenced by ED boarding. METHODS: Cross-sectional ED operational data were collected from the Emergency Department Benchmarking Alliance, a voluntarily self-reporting operational database that includes 54% of EDs in the United States. Freestanding EDs, pediatric EDs, and EDs with missing boarding data were excluded. The key operational outcome variable was boarding time. We reviewed the financial information of these nonprofit institutions by accessing their Internal Revenue Service Form 990 filings. We examined standard measures of financial performance, including return on equity, total margin, total asset turnover, and equity multiplier (EM). We studied these associations using quantile regressions, with ED volume, ED admission percentage, urban versus nonurban ED site location, trauma status, and percentage of the population receiving Medicare and Medicaid as covariates in the regression models. PRINCIPAL FINDINGS: Operational data were available for 892 EDs from 31 states. Of those, 127 filed a Form 990 in the year corresponding to the ED boarding measures. Median boarding time across EDs was 148 min (interquartile range [IQR]: 100-216). A significant relationship exists between boarding and the EM, along with a negative association between boarding and total profit margin in the highest-performing hospitals (by profit margin percentage). After adjusting for the covariates in the regression model, we found that for every 10 min of boarding above 90 min, the mean EM for the top quartile increased from 245.8% to 249.5% (p < .001). In hospitals in the top 90th percentile of total margin, every 10 min beyond the median ED boarding interval was associated with a 0.24% decrease in total margin. PRACTICAL APPLICATIONS: In the largest available national registry of ED operational data with concordant nonprofit financial reports, higher boarding among the highest-profitability hospitals (i.e., the top 10%) was associated with a drag on profit margin, while hospitals with the highest boarding showed the highest leverage (i.e., as indicated by the EM). These relationships suggest an association between a key ED indicator of hospital capacity management and overall institutional financial performance.
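For readers unfamiliar with the method, the quantile-regression analysis described above can be sketched in Python with statsmodels. The data frame, column names, and coefficient values below are synthetic, illustrative assumptions, not the study's data or code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for the merged EDBA / Form 990 data (illustrative only).
rng = np.random.default_rng(0)
n = 127
df = pd.DataFrame({
    "boarding_min": rng.gamma(4, 40, n),         # boarding time, minutes
    "ed_volume": rng.normal(50_000, 15_000, n),  # annual ED visits
    "admit_pct": rng.uniform(10, 35, n),         # ED admission percentage
})
df["equity_multiplier"] = (200 + 0.3 * df["boarding_min"]
                           + rng.normal(0, 30, n))  # toy relationship

# Quantile regression at the 75th percentile (top quartile), with
# operational covariates, analogous to the paper's quantile-specific model.
X = sm.add_constant(df[["boarding_min", "ed_volume", "admit_pct"]])
res = sm.QuantReg(df["equity_multiplier"], X).fit(q=0.75)

# Estimated change in the 75th-percentile EM per 10 minutes of boarding,
# mirroring the paper's "per 10 min above 90 min" framing.
print(10 * res.params["boarding_min"])
```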


Subject(s)
Efficiency, Organizational , Emergency Service, Hospital , Emergency Service, Hospital/statistics & numerical data , Emergency Service, Hospital/economics , Cross-Sectional Studies , United States , Humans , Efficiency, Organizational/economics , Benchmarking
2.
West J Emerg Med ; 25(1): 61-66, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38205986

ABSTRACT

Introduction: Big data and improved analytic techniques, such as triple exponential smoothing (TES), allow for prediction of emergency department (ED) volume. We sought to determine 1) which method of TES was most accurate in predicting ED volume before, during, and after the coronavirus disease 2019 (COVID-19) pandemic; 2) how the pandemic would affect TES prediction accuracy; and 3) whether TES would regain its pre-COVID-19 accuracy in the early post-pandemic period. Methods: We studied monthly volumes of four EDs with a combined annual census of approximately 250,000 visits in the two years prior to, the 25 months during, and the 14 months following the COVID-19 pandemic. We compared the accuracy of four TES forecasting models by measuring the mean absolute percentage error (MAPE), mean square error (MSE), and mean absolute deviation (MAD) between actual and predicted monthly volume. Results: In the 23 months prior to COVID-19, the overall average MAPE across the four forecasting methods was 3.88% ± 1.88% (range 2.41-6.42% across the four ED sites), rising to 15.21% ± 6.67% during the 25-month COVID-19 period (range 9.97-25.18% across the four sites), and falling to 6.45% ± 3.92% in the 14 months after (range 3.86-12.34% across the four sites). The 12-month Holt-Winters method had the greatest accuracy prior to COVID-19 (3.18% ± 1.65%) and during the pandemic (11.31% ± 4.81%), while the 24-month Holt-Winters method offered the best performance following the pandemic (5.91% ± 3.82%). The pediatric ED had an average MAPE more than twice that of the three adult EDs (6.42% ± 1.54% prior to COVID-19, 25.18% ± 9.42% during the pandemic, and 12.34% ± 0.55% after COVID-19). Forecasting accuracy did not begin to improve until roughly two years after the onset of the pandemic, and even then the models had not returned to baseline accuracy levels. Conclusion: We identified the TES model that was most accurate in each period. Most of the models saw an approximately four-fold increase in MAPE after the onset of the pandemic. In the months following the most severe waves of COVID-19, forecasting accuracy improved but did not return to pre-COVID-19 levels.
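A minimal sketch of one Holt-Winters (TES) configuration scored with MAPE, the study's primary accuracy metric, follows. The synthetic monthly series and the additive trend/seasonality choices are assumptions for illustration, not the authors' actual models or data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly ED volume with a trend and yearly seasonality.
rng = np.random.default_rng(1)
months = pd.date_range("2016-01-01", periods=60, freq="MS")
visits = (5000 + 10 * np.arange(60)
          + 400 * np.sin(2 * np.pi * np.arange(60) / 12)
          + rng.normal(0, 100, 60))
series = pd.Series(visits, index=months)

# Hold out the final 12 months, fit triple exponential smoothing
# (additive trend + additive 12-month seasonality), and forecast.
train, test = series[:-12], series[-12:]
model = ExponentialSmoothing(train, trend="add", seasonal="add",
                             seasonal_periods=12).fit()
forecast = model.forecast(12)

# MAPE: mean absolute percentage error between actual and predicted volume.
mape = np.mean(np.abs((test.values - forecast.values) / test.values)) * 100
print(f"MAPE: {mape:.2f}%")
```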


Subject(s)
COVID-19 , Pandemics , Adult , Child , Humans , COVID-19/epidemiology , Accidental Falls , Emergency Service, Hospital , Seasons
3.
J Emerg Nurs ; 49(2): 294-304.e5, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36567152

ABSTRACT

INTRODUCTION: Unrealistic patient expectations for wait times can lead to poor satisfaction. This study had a dual purpose: (1) to assess disparities between patients' perceived priority level and the Emergency Severity Index (ESI) level assigned by emergency department triage nurses; and (2) to evaluate the validity and reliability of the Patient Perception of Priority to be Seen Survey (PPPSS) for investigating patient expectations of emergency department urgency. METHODS: A two-group pretest-posttest quasi-experimental approach compared patient urgency opinions with nurse urgency ratings, with and without a scripted educational intervention, testing how closely patient perceptions matched triage nurse ratings. RESULTS: Reliability of the PPPSS was acceptable (reliability = 0.75). Patients rated as lower urgency on the ESI by triage nurses tended to self-report higher urgency (rho = -0.44, P < .01). Attitudes were more consistent in the posttest group, which was exposed to the scripted verbal description of emergency department procedures (χ2 (1, N = 352) = 8.09, P < .01). Patients who disagreed with emergency nurse scores tended to be younger on average (eg, <40 years old; rho = 0.69, P < .01). Male-identified patients tended to be rated as higher urgency both by nurses and by themselves (beta = 0.18, P = .02). DISCUSSION: We recommend the PPPSS to nurses and researchers as a quick assessment of patient expectations. Additionally, promoting patient understanding of the ESI system through a scripted educational strategy may improve communication between patients and nurses.
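The headline correlation (Spearman's rho between nurse-assigned ESI level and patient self-rated urgency) can be reproduced in form with scipy. The toy ratings below are illustrative, not study data.

```python
from scipy.stats import spearmanr

# Hypothetical paired ratings; on both scales, 1 = most urgent.
esi_level = [2, 3, 3, 4, 5, 2, 4, 5, 3, 4]   # triage nurse (ESI)
self_rated = [1, 2, 4, 4, 2, 1, 5, 2, 3, 2]  # patient (PPPSS)

rho, p = spearmanr(esi_level, self_rated)
# A negative rho would mirror the study's finding that patients triaged as
# lower urgency tend to self-report higher urgency (rho = -0.44 in the paper).
print(f"rho = {rho:.2f}, p = {p:.3f}")
```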


Subject(s)
Emergency Nursing , Triage , Humans , Male , Adult , Triage/methods , Reproducibility of Results , Psychometrics , Emergency Service, Hospital , Surveys and Questionnaires
5.
Mil Med ; 187(5-6): e558-e561, 2022 05 03.
Article in English | MEDLINE | ID: mdl-33580799

ABSTRACT

INTRODUCTION: The surge of patients infected with the SARS-CoV-2 virus (COVID-19) presenting to New York City (NYC) hospitals quickly overwhelmed the available acute care and intensive care resources in NYC in early March 2020. Upon the arrival of military medical assets at the Javits Convention Center in NYC, the planned mission to care for non-COVID-19 patients was immediately changed to manage patients with COVID-19 and their comorbid conditions. Healthcare professionals from every branch of the uniformed services, augmented by state and local resources, staffed the Javits New York Medical Station (JNYMS) from April 2020. METHODS: This data review reports aggregated summary statistics and participant observations collected by New York State and U.S. military officials. RESULTS: During the 28 days of patient intake at the JNYMS, 1,095 SARS-CoV-2-positive patients were transferred from NYC hospitals to the JNYMS. At its peak, the JNYMS accepted 119 patients in a single day, had a maximum census of 453, and had a peak intensive care unit census of 35. The median length of stay was 4.6 days (interquartile range: 3.1-6.9 days). A total of 103 patients were transferred back to local hospitals, and there were 6 deaths, for an overall mortality rate of 0.6% (95% CI, 0.3-1.2). DISCUSSION AND CONCLUSIONS: This is the first report of the care provided at the JNYMS. Within 2 weeks, this multi-agency effort mobilized to care for over 1,000 SARS-CoV-2 patients with varying degrees of illness in a 1-month period. This was the largest field hospital mobilization in U.S. medical history in response to a non-wartime pandemic. Its high patient throughput, successful dispositions, and low mortality relieved critical overcrowding and supply deficiencies throughout NYC hospitals, likely saving hundreds of additional lives and reducing stress on the system during this healthcare crisis.
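As a check on the reported figures, a Wilson binomial confidence interval for 6 deaths among 1,095 patients reproduces the published 95% CI of 0.3-1.2%. The choice of the Wilson method is an assumption, since the abstract does not state which interval was used.

```python
from statsmodels.stats.proportion import proportion_confint

deaths, n = 6, 1095
rate = deaths / n  # about 0.55%, reported as 0.6% in the abstract
lo, hi = proportion_confint(deaths, n, alpha=0.05, method="wilson")
print(f"mortality = {rate:.2%} (95% CI {lo:.1%}-{hi:.1%})")  # CI ~0.3%-1.2%
```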


Subject(s)
COVID-19 , SARS-CoV-2 , COVID-19/epidemiology , Humans , Mobile Health Units , New York City/epidemiology , Pandemics
6.
Ann Emerg Med ; 79(2): 158-167, 2022 02.
Article in English | MEDLINE | ID: mdl-34119326

ABSTRACT

STUDY OBJECTIVE: People with opioid use disorder are vulnerable to disruptions in access to addiction treatment and social support during the COVID-19 pandemic. Our study objective was to understand changes in emergency department (ED) utilization following a nonfatal opioid overdose during COVID-19 compared with historical controls in 6 health care systems across the United States. METHODS: Opioid overdoses were retrospectively identified among adult visits to 25 EDs in Alabama, Colorado, Connecticut, North Carolina, Massachusetts, and Rhode Island from January 2018 to December 2020. Overdose visit counts and rates per 100 all-cause ED visits during the COVID-19 pandemic were compared with the levels predicted from 2018 and 2019 visits using graphical analysis and an epidemiologic outbreak detection cumulative sum (CUSUM) algorithm. RESULTS: Overdose visit counts increased by 10.5% (n=3486; 95% confidence interval [CI] 4.18% to 17.0%) in 2020 compared with the counts in 2018 and 2019 (n=3020 and n=3285, respectively), despite a 14% decline in all-cause ED visits. Opioid overdose rates increased by 28.5% (95% CI 23.3% to 34.0%), from 0.25 per 100 ED visits in 2018 to 2019 to 0.32 per 100 ED visits in 2020. Although all 6 studied health care systems experienced overdose ED visit rates above the 95th percentile prediction in 6 or more weeks of 2020 (compared with the 2.6 weeks expected by chance), 2 health care systems experienced sustained outbreaks during the COVID-19 pandemic. CONCLUSION: Despite decreases in ED visits for other medical emergencies, the numbers and rates of opioid overdose-related ED visits in 6 health care systems increased during 2020, suggesting a widespread increase in opioid-related complications during the COVID-19 pandemic. Expanded community- and hospital-based interventions are needed to support people with opioid use disorder and save lives during the COVID-19 pandemic.
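The outbreak-detection step names a cumulative sum (CUSUM) algorithm. A minimal upper-CUSUM sketch follows; the baseline, allowance (k), and threshold (h) values are chosen purely for illustration and are not the study's parameters.

```python
import numpy as np

def cusum_alarms(rates, baseline, k=0.01, h=0.05):
    """Return indices of weeks where the upper CUSUM exceeds threshold h.

    rates    : weekly overdose visits per 100 all-cause ED visits
    baseline : expected rate from the pre-pandemic (2018-2019) fit
    k        : slack allowance before deviations start to accumulate
    h        : decision threshold for raising an outbreak alarm
    """
    s, alarms = 0.0, []
    for week, rate in enumerate(rates):
        s = max(0.0, s + (rate - baseline - k))  # accumulate upward drift only
        if s > h:
            alarms.append(week)
    return alarms

# Ten baseline weeks at 0.25 per 100 visits, then a sustained rise to 0.40.
weekly_rates = np.concatenate([np.full(10, 0.25), np.full(6, 0.40)])
print(cusum_alarms(weekly_rates, baseline=0.25))  # alarms in the later weeks
```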


Subject(s)
COVID-19/epidemiology , Delivery of Health Care/statistics & numerical data , Emergency Service, Hospital/statistics & numerical data , Facilities and Services Utilization/statistics & numerical data , Health Services Accessibility/statistics & numerical data , Opiate Overdose/therapy , Adult , Cross-Sectional Studies , Humans , Pandemics , Retrospective Studies , SARS-CoV-2 , United States/epidemiology
7.
Am J Emerg Med ; 47: 115-118, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33794473

ABSTRACT

OBJECTIVE: Concussions and chronic traumatic encephalopathy (CTE) related to professional football have received much attention within emergency care and sports medicine. Research suggests that some of this risk may be due to a greater likelihood of initial helmet contact (IHC); however, this association has not been studied across all age groups. This study aims to investigate the association between player age and IHC in American football. METHODS: Retrospective review of championship games between 2016 and 2018 at 6 levels of amateur tackle football as well as the National Football League (NFL). Trained raters classified plays as IHC using pre-specified criteria. An a priori power analysis established the number of impacts needed to establish non-inferiority of the incidence rate of IHC across the levels of play. RESULTS: Thirty-seven games comprising 2912 hits were rated. The overall incidence of IHC was 16% across all groups, ranging from 12.6% to 18.9%. All but 2 of the non-NFL divisions had a statistically reduced risk of IHC compared with the NFL, with relative risk ratios ranging from 0.55 to 0.92. The rate of IHC initiated by defensive participants was twice that of offensive participants (RR 2.04, p < 0.01), and 6% [95% CI 5.4-7.2] of all hits involved helmet-on-helmet contact. CONCLUSIONS: There is a high rate of IHC overall, with a lower relative risk of IHC at most levels of play compared with the NFL. Further research is necessary to determine the impact of IHC; the high rates across all age groups suggest an important role for education and prevention.
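The level-versus-NFL comparison rests on relative risk ratios. A small sketch of the standard log-method relative risk with a 95% CI follows; the hit counts are made up for illustration, not taken from the study.

```python
import math

def relative_risk(a, n1, b, n2):
    """RR and 95% CI for an exposure group (a/n1) vs. a reference (b/n2)."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# e.g., 60 IHC in 480 rated hits at one amateur level vs. 95 in 500 NFL hits.
rr, lo, hi = relative_risk(60, 480, 95, 500)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # RR < 1: lower than NFL
```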


Subject(s)
Football/statistics & numerical data , Head Protective Devices , Adolescent , Adult , Brain Concussion/etiology , Child , Humans , Male , Retrospective Studies , Risk Assessment , Young Adult
8.
West J Emerg Med ; 21(5): 1048-1053, 2020 Aug 17.
Article in English | MEDLINE | ID: mdl-32970553

ABSTRACT

INTRODUCTION: The unfolding COVID-19 pandemic has predictably followed the familiar contours of well-established socioeconomic health inequities, exposing and often amplifying preexisting disparities. People living in homeless shelters are at higher risk of infection with severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) than the general population. The purpose of this study was to identify shelter characteristics that may be associated with higher transmission of SARS-CoV-2. METHODS: We conducted a cross-sectional assessment of five congregate shelters in Rhode Island. Shelter residents aged 18 years and older were tested for SARS-CoV-2 from April 19 to April 24, 2020. At the time of testing, we collected participant characteristics, symptomatology, and vital signs. Shelter characteristics and infection control strategies were collected through a structured phone questionnaire with shelter administrators. RESULTS: A total of 299 shelter residents (99%, 299/302) participated. Thirty-five (11.7%) tested positive for SARS-CoV-2. Shelter-level prevalence ranged from zero to 35%. Symptom prevalence did not vary by test result. Shelters with positive cases of SARS-CoV-2 were in more densely populated areas, had more transient resident populations, and instituted fewer physical distancing practices than shelters with no cases. CONCLUSION: SARS-CoV-2 prevalence varies with shelter characteristics but not individual symptoms. Policies that promote resident stability and physical distancing may help reduce SARS-CoV-2 transmission. Symptom screening alone is insufficient to prevent SARS-CoV-2 transmission. Frequent universal testing and congregate housing alternatives that promote stability may help reduce the spread of infection.


Subject(s)
Betacoronavirus , Coronavirus Infections/epidemiology , Health Status Disparities , Housing/statistics & numerical data , Ill-Housed Persons/statistics & numerical data , Pneumonia, Viral/epidemiology , Adolescent , Adult , Aged , Aged, 80 and over , COVID-19 , Coronavirus Infections/diagnosis , Coronavirus Infections/prevention & control , Coronavirus Infections/transmission , Cross-Sectional Studies , Female , Health Policy , Health Surveys , Humans , Infection Control/methods , Male , Middle Aged , Pandemics/prevention & control , Pneumonia, Viral/diagnosis , Pneumonia, Viral/prevention & control , Pneumonia, Viral/transmission , Prevalence , Rhode Island/epidemiology , SARS-CoV-2 , Young Adult
9.
West J Emerg Med ; 21(3): 647-652, 2020 Apr 21.
Article in English | MEDLINE | ID: mdl-32421514

ABSTRACT

INTRODUCTION: Boarding of patients in the emergency department (ED) is associated with decreased ED efficiency. The provider-in-triage (PIT) model has been shown to improve ED throughput, but it is unclear how these improvements are affected by boarding. We sought to assess the effects of boarding on ED throughput and whether implementation of a PIT model mitigated those effects. METHODS: We performed a multi-site retrospective review of 955 days of ED operations data at a tertiary care academic ED (AED) and a high-volume community ED (CED) before and after implementation of PIT. Key outcome variables were door-to-provider time (D2P), total length of stay of discharged patients (LOSD), and boarding time (admit request to ED departure [A2D]). RESULTS: Implementation of PIT was associated with a decrease in median D2P of 22 minutes (43%) at the AED (p < 0.01) and 18 minutes (31%) at the CED (p < 0.01). LOSD also decreased, by 19 minutes (5.9%) at the AED and 8 minutes (3.3%) at the CED (p < 0.01). After adjusting for variations in daily census, the effect of boarding (A2D) on D2P and LOSD was unchanged despite the implementation of PIT. At the AED, every 7.7 minutes of boarding increased median D2P by one minute (p < 0.01), and every four minutes of boarding increased median LOSD by one minute (p < 0.01). At the CED, every 7.1 minutes of boarding added one minute to D2P (p < 0.01), and every 4.8 minutes of boarding added one minute to median LOSD (p < 0.01). CONCLUSION: In this retrospective, observational multicenter study, ED operational efficiency improved with the implementation of a PIT model but worsened with boarding. The PIT model was unable to mitigate the effects of boarding. This suggests that PIT is associated with increased efficiency of ED intake and throughput, but boarding continues to have the same effect on ED efficiency regardless of upstream efficiency measures designed to minimize its impact.


Subject(s)
Efficiency, Organizational , Emergency Service, Hospital/organization & administration , Length of Stay/statistics & numerical data , Models, Organizational , Patient Admission/statistics & numerical data , Triage/organization & administration , Academic Medical Centers/organization & administration , Academic Medical Centers/statistics & numerical data , Emergency Service, Hospital/statistics & numerical data , Humans , Retrospective Studies , Tertiary Care Centers/organization & administration , Tertiary Care Centers/statistics & numerical data
10.
Acad Emerg Med ; 27(7): 600-611, 2020 07.
Article in English | MEDLINE | ID: mdl-32248605

ABSTRACT

BACKGROUND: A shared language and vocabulary are essential for managing emergency department (ED) operations. This Fourth Emergency Department Benchmarking Alliance (EDBA) Summit brought together experts in the field to review, update, and add to key definitions and metrics of ED operations. OBJECTIVE: Summit objectives were to review and revise existing definitions, define and characterize new practices related to ED operations, and introduce financial and regulatory definitions affecting ED reimbursement. METHODS: Forty-six ED operations, data management, and benchmarking experts were invited to participate in the EDBA summit. Before arrival, experts were provided with documents from the three prior summits and assigned terminology to update. Materials and publications related to standards of ED operations were considered and discussed. Each group submitted a revised set of definitions before the summit. Significantly revised, topical, or controversial recommendations were discussed among all summit participants, with the goal of the in-person discussion being to reach consensus on definitions. Work group leaders then made changes to reflect the discussion, and the document was further revised based on public and stakeholder feedback. RESULTS: The entire EDBA dictionary was updated and expanded. This article focuses on an update and discussion of definitions related to topics that have changed since the last summit, specifically ED intake, boarding, diversion, and observation care. In addition, an extensive new glossary of financial and regulatory terminology germane to the practice of emergency medicine is included. CONCLUSIONS: A complete and precise set of operational definitions, time intervals, and utilization measures is necessary for timely and effective ED care. A common language of financial and regulatory definitions that affect ED operations is included for the first time. This article and its companion dictionary should serve as a resource for ED leadership, researchers, informatics and health policy leaders, and regulatory bodies.


Subject(s)
Benchmarking/methods , Emergency Service, Hospital/standards , Consensus Development Conferences as Topic , Humans , Leadership
13.
J Crit Care ; 41: 130-137, 2017 10.
Article in English | MEDLINE | ID: mdl-28525778

ABSTRACT

PURPOSE: Measurement of inferior vena cava collapsibility (cIVC) by point-of-care ultrasound (POCUS) has been proposed as a viable, non-invasive means of assessing fluid responsiveness. We aimed to determine the ability of cIVC to identify patients who will respond to additional intravenous fluid (IVF) administration among spontaneously breathing critically ill patients. METHODS: Prospective observational trial of spontaneously breathing critically ill patients. cIVC was obtained 3 cm caudal to the right atrium-IVC junction using POCUS. Fluid responsiveness was defined as a ≥10% increase in cardiac index following a 500 mL IVF bolus, measured using bioreactance (NICOM™, Cheetah Medical). cIVC was compared with fluid responsiveness, and an optimal cIVC cutoff value was identified. RESULTS: Of the 124 participants, 49% were fluid responders. cIVC was able to detect fluid responsiveness: AUC = 0.84 [0.76, 0.91]. The optimal cutoff point for cIVC was 25% (LR+ 4.56 [2.72, 7.66]; LR- 0.16 [0.08, 0.31]). A cIVC cutoff of 25% produced a lower misclassification rate (16.1%) for determining fluid responsiveness than the previously suggested cutoff value of 40% (34.7%). CONCLUSION: IVC collapsibility, as measured by POCUS, performs well in distinguishing fluid responders from non-responders and may be used to guide IVF resuscitation among spontaneously breathing critically ill patients.
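A sketch of the cutoff analysis follows, on synthetic cIVC values: AUC via the ROC curve, and an optimal threshold chosen by Youden's J statistic (one common criterion; the paper reports a misclassification rate and does not state exactly how its cutoff was selected).

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Synthetic stand-in data: 124 patients, as in the study, with responders
# tending toward higher IVC collapsibility. Illustrative only.
rng = np.random.default_rng(0)
responder = rng.integers(0, 2, 124)           # 1 = fluid responder
civc = np.where(responder == 1,
                rng.normal(40, 15, 124),      # responders collapse more
                rng.normal(15, 10, 124)).clip(0, 100)

auc = roc_auc_score(responder, civc)
fpr, tpr, thresholds = roc_curve(responder, civc)
best = thresholds[np.argmax(tpr - fpr)]       # Youden's J = sensitivity + specificity - 1
print(f"AUC = {auc:.2f}, optimal cIVC cutoff ~ {best:.0f}%")
```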


Subject(s)
Critical Illness/therapy , Fluid Therapy/methods , Resuscitation/methods , Ultrasonography/methods , Vena Cava, Inferior/diagnostic imaging , Administration, Intravenous , Adult , Aged , Female , Humans , Male , Middle Aged , Point-of-Care Systems , Prospective Studies , Vena Cava, Inferior/physiopathology
14.
Crit Pathw Cardiol ; 16(1): 15-21, 2017 03.
Article in English | MEDLINE | ID: mdl-28195938

ABSTRACT

OBJECTIVES: Nearly 40% of all previously admitted chest pain patients re-present to the emergency department (ED) within 1 year regardless of stress testing, and nearly 5% of patients return with a major adverse cardiac event (MACE). The primary objective of this study was to determine the prevalence of return visits to the ED among patients previously admitted to an ED chest pain observation unit (CPU). We also identified the patient characteristics and health risk factors associated with these return ED visits. METHODS: This was a prospective cohort study of patients admitted to a CPU in a large-volume academic urban ED who were subsequently followed over a period of 1 year. Inclusion criteria were age ≥18 years, American Heart Association low-to-intermediate assessed risk, electrocardiogram nondiagnostic for acute coronary syndrome (ACS), and a negative initial troponin I. Excluded patients were those aged >75 years with a history of coronary artery disease. Patients were followed throughout their observation unit stay and then for 1 year. On all repeat ED evaluations, standardized chart abstraction forms were used, charts were reviewed by 2 trained abstractors blinded to the study hypothesis, and a random sample of charts was examined for interrater reliability. Return visits were categorized as MACE, cardiac non-MACE, or noncardiac based on a priori criteria. Social Security Death Index searches were performed on all patients. Univariate and multivariate ordinal logistic regressions were conducted to determine the demographics, medical procedures, and comorbid conditions that predicted return visits to the ED. RESULTS: A total of 2139 patients were enrolled over 17 months. The median age was 52 years, and 55% were female. Forty-four patients (2.1%) had ACS on the index visit. A total of 36.2% of CPU patients returned to the ED within 1 year vs. 5.4% of all ED patients (P < 0.01). However, the overall incidence of MACE at 1 year in all patients and in those without an index visit diagnosis of ACS was 0.5% (95% confidence interval [CI], 0.4%-0.6%) and 0.4% (95% CI, 0.2%-0.7%), respectively. Patients who received a stress test on the index visit were less likely to return (adjusted odds ratio [AOR] = 0.64 [95% CI, 0.51-0.80]), but patients who smoked (AOR = 1.51 [95% CI, 1.16-1.96]) or had diabetes (AOR = 1.36 [95% CI, 1.07-1.87]) were more likely to return. Hispanic and African-American patients had increased odds of multiple return ED visits (AOR = 1.23 [95% CI, 1.04-1.46] and AOR = 1.74 [95% CI, 1.45-2.13], respectively). CONCLUSION: Patients treated in an ED CPU have a very low rate of MACE at 1 year. However, these same patients have very high rates of subsequent ED utilization. The associations between certain demographics and ED utilization suggest the need for further research to identify and address the needs of the patient populations that precipitate this higher-than-expected return rate.
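The ordinal logistic regression step can be sketched with statsmodels' OrderedModel. The outcome categories, covariates, and effect sizes below are simplified, synthetic stand-ins for the study's variables, not its actual model.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Synthetic cohort: ordered outcome = number of return ED visits.
rng = np.random.default_rng(0)
n = 2139
X = pd.DataFrame({
    "smoker": rng.integers(0, 2, n),
    "diabetes": rng.integers(0, 2, n),
    "stress_test_index": rng.integers(0, 2, n),
})
# Toy latent propensity to return, echoing the directions reported above.
latent = (0.4 * X["smoker"] + 0.3 * X["diabetes"]
          - 0.45 * X["stress_test_index"])
y = pd.Series(pd.cut(latent + rng.logistic(0, 1, n),
                     bins=[-np.inf, 0.5, 2.0, np.inf],
                     labels=["none", "single", "multiple"]))  # ordered

res = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
print(np.exp(res.params[X.columns]))  # exponentiated coefficients ~ adjusted ORs
```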


Subject(s)
Acute Coronary Syndrome/diagnosis , Chest Pain/diagnosis , Emergency Service, Hospital/statistics & numerical data , Risk Assessment/methods , Acute Coronary Syndrome/complications , Acute Coronary Syndrome/epidemiology , Adolescent , Adult , Aged , Aged, 80 and over , Chest Pain/etiology , Chest Pain/therapy , Electrocardiography , Exercise Test , Female , Follow-Up Studies , Hospitalization/trends , Humans , Incidence , Male , Middle Aged , Odds Ratio , Prospective Studies , Reproducibility of Results , Survival Rate/trends , Young Adult
15.
J Racial Ethn Health Disparities ; 4(4): 680-686, 2017 08.
Article in English | MEDLINE | ID: mdl-27553054

ABSTRACT

BACKGROUND/OBJECTIVES: The objective of this study was to investigate potential racial disparities in time to antibiotics among patients presenting to the emergency department (ED) with severe sepsis or septic shock. METHODS: This was a retrospective observational study of adults >18 years of age with severe sepsis or septic shock presenting to a large, urban, academic ED and admitted to the ICU from 10/2005 to 2/2012. Time-to-antibiotic data were abstracted by ICU research staff; other data were abstracted by blinded, trained research assistants using standardized abstraction forms. Time from ED arrival to antibiotics was compared in white vs. non-white patients using cumulative events curves followed by Cox proportional hazards regression, controlling for age, gender, ethnicity, source of infection, and SOFA score. RESULTS: Seven hundred sixty-eight patients were included; 19.5% (n = 150) were non-white. Median minutes to antibiotics was 131 in white patients vs. 158 in non-white patients (p = 0.03, log-rank test). The unadjusted hazard ratio for non-white patients was 0.82 (95% CI 0.58-0.98). After adjustment, the hazard ratio for race was not significant (0.90, 95% CI 0.73-1.10). CONCLUSIONS: In a single-center sample of patients with severe sepsis or septic shock, adjustment for factors including age and infectious source eliminated the difference in time to antibiotics by race. Further research should investigate disparities in sepsis care between hospitals with differing patient populations.
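A sketch of the Cox proportional-hazards step follows, using the lifelines library (one possible tool; the paper does not name its software) on a synthetic stand-in cohort with a reduced covariate set.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic stand-in for the sepsis cohort (illustrative only).
rng = np.random.default_rng(0)
n = 768
df = pd.DataFrame({
    "non_white": rng.integers(0, 2, n),
    "age": rng.normal(65, 15, n),
    "sofa": rng.integers(0, 15, n),
})
# Toy time-to-antibiotics: higher SOFA -> faster, non_white -> slower.
df["minutes_to_abx"] = rng.exponential(
    140 * np.exp(0.1 * df["non_white"] - 0.02 * df["sofa"]))
df["received_abx"] = 1  # event indicator: all patients received antibiotics

cph = CoxPHFitter()
cph.fit(df, duration_col="minutes_to_abx", event_col="received_abx")
cph.print_summary()  # HR < 1 for non_white means slower time to antibiotics
```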


Subject(s)
Anti-Bacterial Agents/therapeutic use , Healthcare Disparities/ethnology , Racial Groups/statistics & numerical data , Sepsis/ethnology , Shock, Septic/ethnology , Time-to-Treatment/statistics & numerical data , Academic Medical Centers , Aged , Emergency Service, Hospital , Female , Hospitals, Urban , Humans , Intensive Care Units , Male , Middle Aged , Retrospective Studies , Rhode Island , Sepsis/drug therapy , Shock, Septic/drug therapy
16.
Shock ; 46(2): 132-8, 2016 08.
Article in English | MEDLINE | ID: mdl-26925867

ABSTRACT

OBJECTIVE: Fluid responsiveness is proposed as a physiology-based method to titrate fluid therapy based on preload dependence. The objectives of this study were to determine whether a fluid responsiveness protocol would (1) decrease progression of organ dysfunction and (2) facilitate a more aggressive resuscitation. METHODS: Prospective, 10-center, randomized interventional trial. INCLUSION CRITERIA: suspected sepsis and lactate 2.0 to 4.0 mmol/L. EXCLUSION CRITERIA (abbreviated): systolic blood pressure less than 90 mmHg and contraindication to aggressive fluid resuscitation. INTERVENTION: fluid responsiveness protocol using the Non-Invasive Cardiac Output Monitor (NICOM) to assess for fluid responsiveness (>10% increase in stroke volume in response to a 5 mL/kg fluid bolus), with the balance of a liter given to responsive patients. CONTROL: standard clinical care. OUTCOMES: primary, an increase in the Sepsis-related Organ Failure Assessment (SOFA) score of at least 1 over 72 h; secondary, fluids administered. The trial was initially powered at 600 patients but was stopped early due to a change in the sponsor's funding priorities. RESULTS: Sixty-four patients were enrolled, with 32 in the treatment arm. There were no significant differences between arms in age, comorbidities, baseline vital signs, or SOFA scores (P > 0.05 for all). Comparing treatment versus standard of care, there was no difference in the proportion with an increase in SOFA score of at least 1 point (30% vs. 33%; nota bene, underpowered; P = 1.0) or in mean preprotocol fluids (1,050 mL [95% confidence interval (CI): 786-1,314] vs. 1,031 mL [95% CI: 741-1,325]; P = 0.93); however, treatment patients received more fluids during the protocol (2,633 mL [95% CI: 2,264-3,001] vs. 1,002 mL [95% CI: 707-1,298]; P < 0.001). CONCLUSIONS: In this study of a "preshock" population, there was no change in progression of organ dysfunction with a fluid responsiveness protocol. A noninvasive fluid responsiveness protocol did facilitate delivery of an increased volume of fluid. Additional properly powered and enrolled outcome studies are needed.
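The protocol's responsiveness rule (>10% stroke-volume increase after a 5 mL/kg bolus) reduces to a one-line check. A sketch follows; the stroke-volume readings are hypothetical inputs, not NICOM output parsing.

```python
def fluid_responsive(sv_before: float, sv_after: float,
                     threshold: float = 0.10) -> bool:
    """True if stroke volume rose by more than `threshold` (fractional)
    after the bolus, i.e., the patient is classified fluid responsive."""
    return (sv_after - sv_before) / sv_before > threshold

print(fluid_responsive(60.0, 68.0))  # 13% rise -> True: give balance of liter
print(fluid_responsive(60.0, 63.0))  # 5% rise  -> False: withhold further bolus
```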


Subject(s)
Cardiac Output/physiology , Emergency Service, Hospital/statistics & numerical data , Fluid Therapy/methods , Sepsis/physiopathology , Sepsis/therapy , Adult , Aged , Female , Humans , Lactic Acid/therapeutic use , Male , Middle Aged , Monitoring, Physiologic/methods , Multicenter Studies as Topic , Prospective Studies , Shock, Septic/physiopathology , Shock, Septic/therapy , Stroke Volume/physiology
17.
Oncotarget ; 7(15): 19111-23, 2016 Apr 12.
Article in English | MEDLINE | ID: mdl-26992232

ABSTRACT

Aging produces cellular, molecular, and behavioral changes affecting many areas of the brain. The dopamine (DA) system, which regulates behavioral functions such as locomotor activity, body weight, and reward and cognition, is known to be vulnerable to the effects of aging. Age-related DA D2 receptor (D2R) changes have been of particular interest given the receptor's relationship with addiction and other rewarding behavioral properties. Male and female wild-type (Drd2 +/+), heterozygous (Drd2 +/-), and knockout (Drd2 -/-) mice were reared post-weaning in either an enriched environment (EE) or a deprived environment (DE). Over the course of their lifespan, body weight and locomotor activity were assessed. While an EE was generally correlated with longer lifespan, these increases were found only in mice with normal or decreased expression of the D2 gene. Drd2 +/+ EE mice lived nearly 16% longer than their DE counterparts. Drd2 +/+ and Drd2 +/- EE mice lived 22% and 21% longer than Drd2 -/- EE mice, respectively. Moreover, both body weight and locomotor activity were moderated by environmental factors. In addition, EE mice showed greater behavioral variability between genotypes than DE mice with respect to body weight and locomotor activity.


Subject(s)
Gene Expression , Gene-Environment Interaction , Longevity/genetics , Motor Activity/genetics , Receptors, Dopamine D2/genetics , Animals , Body Weight/genetics , Dopamine/metabolism , Dopaminergic Neurons/metabolism , Female , Genotype , Male , Mice, Knockout , Mice, Transgenic , Receptors, Dopamine D2/metabolism
18.
J Health Psychol ; 21(5): 690-8, 2016 05.
Article in English | MEDLINE | ID: mdl-24913009

ABSTRACT

The purpose of this mixed methods study was to identify participants' attributions for their global impression of change ratings in a behavioral intervention for unexplained chronic fatigue and chronic fatigue syndrome. At 3-month follow-up, participants (N = 67) were asked, "Why do you think you are (improved, unchanged, worse)?" Improved patients pointed to specific behavioral changes, unchanged patients referred to a lack of change in lifestyle, and worsened patients invoked stress and/or specific life events. Identifying the behaviors that patients associate with improvement and non-improvement on the patient global impression of change may assist in developing more effective management strategies in clinical care.


Subject(s)
Fatigue Syndrome, Chronic/therapy , Patient Reported Outcome Measures , Self-Management/methods , Adult , Fatigue Syndrome, Chronic/psychology , Female , Follow-Up Studies , Humans , Male , Middle Aged
19.
Crit Pathw Cardiol ; 14(4): 154-6, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26569656

ABSTRACT

BACKGROUND: Cardiology consensus guidelines recommend use of the Diamond & Forrester (D&F) score to augment the decision to pursue stress testing. We have recently shown that it may have value in safely reducing stress-test utilization in an emergency department chest pain unit (CPU). However, full application requires demonstrating good inter-rater reliability of the D&F score in the CPU setting. We hypothesized that D&F pretest probability would have good inter-rater reliability in CPU patients. METHODS: This was a chart review of randomly selected patients from a previously collected prospective observational trial of admitted CPU patients in a large-volume academic urban emergency department. Inclusion criteria were: age > 18 years, American Heart Association low/intermediate risk, nondynamic electrocardiograms, and normal initial troponin I. Exclusion criteria were: age > 75 years with coronary artery disease. A D&F score for likelihood of coronary artery disease was calculated for each patient by 2 trained chart abstractors using a standardized data abstraction instrument. Abstractors were trained to categorize presenting symptoms as fitting 1 of 3 types of chest pain, nonanginal, atypical, or anginal, based on previously published prespecified criteria. Approximately 20% of charts in a CPU registry were abstracted by both chart abstractors, who were blind to each other's categorization, the patient outcomes, and the study hypothesis. The primary outcome was the kappa statistic for agreement between the 2 raters. RESULTS: The charts of 705 randomly selected patients were reviewed. The mean age was 55.1 ± 11.8 years, and 52% were female. Forty-four percent of patients received stress testing, and 2.4% of patients had acute coronary syndrome. The mean D&F score was 39 ± 24. There was good inter-rater agreement on chest pain characteristics (κ = 0.77; 95% confidence interval, 0.72-0.81; P < 0.01). CONCLUSION: This study supports the use of the D&F score as a reliable indicator of pretest probability in CPU patients by demonstrating good inter-rater reliability. Prospective validation at the point of patient assessment is necessary, in conjunction with application of the D&F score to augment stress-testing decisions.
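The primary outcome is a kappa statistic for two raters. A minimal sketch follows, with hypothetical chest-pain categorizations rather than study data; the study's reported value was κ = 0.77.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical categorizations of 10 charts by the two blinded abstractors.
rater1 = ["anginal", "atypical", "nonanginal", "atypical", "anginal",
          "nonanginal", "atypical", "anginal", "nonanginal", "atypical"]
rater2 = ["anginal", "atypical", "nonanginal", "anginal", "anginal",
          "nonanginal", "atypical", "anginal", "atypical", "atypical"]

kappa = cohen_kappa_score(rater1, rater2)
print(f"kappa = {kappa:.2f}")  # chance-corrected agreement between the raters
```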


Subject(s)
Acute Coronary Syndrome/diagnosis , Chest Pain/diagnosis , Decision Support Techniques , Echocardiography, Stress/statistics & numerical data , Emergency Service, Hospital , Exercise Test/statistics & numerical data , Myocardial Perfusion Imaging/statistics & numerical data , Acute Coronary Syndrome/complications , Adult , Aged , Chest Pain/etiology , Female , Humans , Male , Middle Aged , Observer Variation , Reproducibility of Results , Retrospective Studies , Risk Assessment