1.
Resuscitation ; 167: 227-232, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34480975

ABSTRACT

AIMS: To describe neurological and functional outcomes among out-of-hospital cardiac arrest (OHCA) patients who survived to hospital discharge, and to determine the association between neurological outcome at hospital discharge and 12-month survival. METHODS: Our cohort comprised adult OHCA patients (≥18 years) attended by St John WA (SJWA) paramedics in Perth, Western Australia (WA), who survived to hospital discharge between 1st January 2004 and 31st December 2019. Neurological and functional status at hospital discharge (and before the arrest) was determined by medical record review using the five-point 'Cerebral Performance Category (CPC)' and 'Overall Performance Category (OPC)' scores. Multivariable logistic regression analysis was used to estimate the association of CPC score at hospital discharge with 12-month survival, adjusted for prognostic variables. RESULTS: Over the study period, SJWA attended 23,712 OHCAs. Resuscitation was attempted in 43.4% of cases (n = 10,299), with 2,171 subsequent hospital admissions; 99.4% (n = 2,158) of these were to a study hospital. Of the 1,062 hospital survivors, 71.3% (n = 757) were CPC1 (highest category of neurological performance), 21.4% (n = 227) CPC2, 6.3% (n = 67) CPC3 and 1.0% (n = 11) CPC4. OPC scores followed a similar distribution. Of the 1,011 WA residents who survived to hospital discharge, 92.3% (n = 933) survived to 12 months. A CPC1-2 at hospital discharge was significantly associated with 12-month survival (adjusted odds ratio 3.28, 95% confidence interval 1.69-6.39). CONCLUSION: Whilst overall survival is low, most survivors of OHCA have a good neurological outcome at hospital discharge and are alive at 12 months.
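The adjusted odds ratio above comes from a multivariable logistic regression. As a minimal sketch of the underlying quantity, the following computes a crude odds ratio with a Woolf (log-scale) 95% confidence interval from a 2x2 table; the counts are entirely hypothetical and are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio with a Woolf (log-scale) 95% CI from a 2x2 table:
    a = exposed with event, b = exposed without event,
    c = unexposed with event, d = unexposed without event."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for illustration only (NOT the study's data):
# CPC1-2 alive/dead at 12 months vs CPC3-4 alive/dead.
or_, lo, hi = odds_ratio_ci(900, 60, 33, 18)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

Note that a crude odds ratio like this differs from the study's adjusted estimate, which conditions on prognostic covariates in the regression model.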


Subject(s)
Cardiopulmonary Resuscitation , Emergency Medical Services , Out-of-Hospital Cardiac Arrest , Adult , Allied Health Personnel , Cohort Studies , Humans , Out-of-Hospital Cardiac Arrest/therapy , Patient Discharge
3.
Resuscitation ; 162: 82-90, 2021 May.
Article in English | MEDLINE | ID: mdl-33571603

ABSTRACT

PURPOSE: International guidelines recommend targeting normocapnia in mechanically ventilated out-of-hospital cardiac arrest (OHCA) survivors, but the optimal arterial carbon dioxide (PaCO2) target remains controversial. We hypothesised that the relationship between PaCO2 and survival is non-linear, and that targeting an intermediate level of PaCO2, compared to a low or high PaCO2, in the first 24 h of ICU admission is associated with improved survival to hospital discharge (STHD) and at 12 months. METHODS: We conducted a retrospective multi-centre cohort study of adults with non-traumatic OHCA requiring admission to one of four tertiary hospital intensive care units for mechanical ventilation. A four-knot restricted cubic spline function was used to allow for non-linearity between the mean PaCO2 within the first 24 h of ICU admission after OHCA and survival, and optimal PaCO2 cut-points were identified from the spline curve to generate corresponding odds ratios. RESULTS: We analysed 3,769 PaCO2 results within the first 24 h of ICU admission, from 493 patients. PaCO2 and survival had an inverted U-shaped association; normocapnia was associated with significantly improved STHD compared to either hypocapnia (<35 mmHg) (adjusted odds ratio [aOR] 0.45, 95% confidence interval [CI] 0.24-0.83) or hypercapnia (>45 mmHg) (aOR 0.45, 95% CI 0.24-0.84). Of the twelve predictors assessed, PaCO2 was the third most important, explaining >11% of the variability in survival. The survival benefit of normocapnia extended to 12 months. CONCLUSIONS: Normocapnia within the first 24 h of intensive care admission after OHCA was associated with improved survival compared to hypocapnia or hypercapnia.
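The four-knot restricted cubic spline in METHODS is just a small basis expansion of PaCO2 that is then fed into an ordinary regression model. A sketch of Harrell's truncated-power basis in NumPy, with illustrative knot placements (not those used in the study):

```python
import numpy as np

def rcs_basis(x, knots):
    """Restricted cubic spline (Harrell) basis: cubic between the knots,
    linear beyond the outer knots. For k knots, returns the columns
    [x, s_1, ..., s_{k-2}] of the design matrix."""
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    norm = (t[-1] - t[0]) ** 2            # scaling for numerical stability
    cube = lambda u: np.maximum(u, 0.0) ** 3
    cols = [x]
    for j in range(len(t) - 2):
        s = (cube(x - t[j])
             - cube(x - t[-2]) * (t[-1] - t[j]) / (t[-1] - t[-2])
             + cube(x - t[-1]) * (t[-2] - t[j]) / (t[-1] - t[-2])) / norm
        cols.append(s)
    return np.column_stack(cols)

# Illustrative knots over a plausible PaCO2 range (mmHg) -- assumptions,
# not the study's knot locations.
paco2 = np.linspace(20, 70, 11)
X = rcs_basis(paco2, knots=[30, 38, 45, 60])
```

With four knots the basis contributes three columns (x plus two non-linear terms); entering these into a logistic regression is what lets the fitted log-odds of survival bend non-linearly with PaCO2, producing the inverted U-shape described in RESULTS.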


Subject(s)
Carbon Dioxide , Out-of-Hospital Cardiac Arrest , Adult , Cohort Studies , Humans , Hypercapnia , Out-of-Hospital Cardiac Arrest/therapy , Retrospective Studies
4.
Resuscitation ; 158: 130-138, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33232752

ABSTRACT

BACKGROUND: Studies to identify safe oxygenation targets after out-of-hospital cardiac arrest (OHCA) have often assumed a linear relationship between arterial oxygen tension (PaO2) and survival, or have dichotomised PaO2 at a supra-physiological level. We hypothesised that abnormalities in mean PaO2 (both high and low) would be associated with decreased survival after OHCA. METHODS: We conducted a retrospective multicentre cohort study of adult OHCA patients who received mechanical ventilation on admission to the intensive care unit (ICU). The potential non-linear relationship between the mean PaO2 within the first 24 h of ICU admission and survival to hospital discharge (STHD) was assessed by a four-knot restricted cubic spline function with adjustment for potential confounders. RESULTS: 3,764 arterial blood gas results were available for 491 patients in the first 24 h of ICU admission. The relationship between mean PaO2 over the first 24 h and STHD was an inverted U-shape, with the highest survival for those with a mean PaO2 between 100 and 180 mmHg (reference category) compared to a mean PaO2 of <100 mmHg (adjusted odds ratio [aOR] 0.50, 95% confidence interval [CI] 0.30-0.84) or >180 mmHg (aOR 0.41, 95% CI 0.18-0.92). Mean PaO2 within 24 h was the third most important predictor and explained 9.1% of the variability in STHD. CONCLUSION: The mean PaO2 within the first 24 h after admission for OHCA has a non-linear association with survival, with the highest STHD seen between 100 and 180 mmHg. Randomised controlled trials are now needed to validate the optimal oxygenation targets in mechanically ventilated OHCA patients.


Subject(s)
Cardiopulmonary Resuscitation , Out-of-Hospital Cardiac Arrest , Adult , Blood Gas Analysis , Cohort Studies , Humans , Out-of-Hospital Cardiac Arrest/therapy , Oxygen , Retrospective Studies
6.
Circulation ; 137(20): 2125-2127, 2018 May 15.
Article in English | MEDLINE | ID: mdl-29760225
7.
J Intensive Care ; 4: 43, 2016.
Article in English | MEDLINE | ID: mdl-27366324

ABSTRACT

BACKGROUND: This cohort study compared the prognostic significance of the strong ion gap (SIG) with other acid-base markers in the critically ill. METHODS: The relationships between SIG, lactate, anion gap (AG), albumin-corrected anion gap (AG-corrected), base excess, and effective strong ion difference (SIDe), all obtained within the first hour of intensive care unit (ICU) admission, and the hospital mortality of 6,878 patients were analysed. The prognostic significance of each acid-base marker, both alone and in combination with the Admission Mortality Prediction Model (MPM0 III) predicted mortality, was assessed by the area under the receiver operating characteristic curve (AUROC). RESULTS: Of the 6,878 patients included in the study, 924 (13.4%) died after ICU admission. Except for plasma chloride concentrations, all acid-base markers were significantly different between survivors and non-survivors. SIG (with lactate: AUROC 0.631, 95% confidence interval [CI] 0.611-0.652; without lactate: AUROC 0.521, 95% CI 0.500-0.542) had only a modest ability to predict hospital mortality, no better than lactate concentration alone (AUROC 0.701, 95% CI 0.682-0.721). Adding AG-corrected or SIG to a combination of lactate and MPM0 III predicted risks also did not substantially improve the latter's ability to differentiate between survivors and non-survivors. Arterial lactate concentrations explained about 11% of the variability in observed mortality and were more important than SIG (0.6%) and SIDe (0.9%) in predicting hospital mortality after adjusting for MPM0 III predicted risks. Lactate remained the strongest predictor of mortality in a sensitivity multivariate analysis allowing for non-linearity of all acid-base markers. CONCLUSIONS: The prognostic significance of SIG was modest and inferior to that of arterial lactate concentration in the critically ill. Lactate concentration should always be considered, regardless of whether a physiological, base-excess, or physical-chemical approach is used to interpret acid-base disturbances in critically ill patients.
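An AUROC like those reported above has a direct rank-based (Mann-Whitney) interpretation: it is the probability that a randomly chosen non-survivor's marker value exceeds a randomly chosen survivor's, counting ties as one half. A self-contained sketch with made-up lactate values, purely for illustration:

```python
def auroc(pos, neg):
    """AUROC via the Mann-Whitney identity: the fraction of
    (positive, negative) pairs the positive wins, ties counted as 1/2."""
    wins = ties = 0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical admission lactate values (mmol/L), illustrative only
non_survivors = [4.1, 6.0, 2.5, 8.2]
survivors = [1.0, 1.8, 2.5, 0.9, 1.2]
print(auroc(non_survivors, survivors))  # 0.975
```

An AUROC of 0.5 means the marker discriminates no better than chance, which is why values such as 0.521 for SIG without lactate indicate almost no prognostic value.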

8.
Aust Crit Care ; 29(1): 27-34, 2016 Feb.
Article in English | MEDLINE | ID: mdl-25939546

ABSTRACT

INTRODUCTION: Reflecting on researchers' experiences during follow-up of patients enrolled in research may lead to improved understanding of the challenges faced in maintaining contact when patients leave hospital. AIMS: (1) To describe the challenges researchers face when following up patients who survive ICU. (2) To identify issues that influenced our ability to follow up patients. METHODS: This sub-study was part of a larger "case-control" study investigating the quality of life of ICU survivors with and without pre-existing chronic disease. Patients completed a self-assessed quality-of-life questionnaire (QLQ) and symptom assessment before hospital discharge and at six months, and were asked to keep a paper diary of the healthcare services they used. Patient contact was maintained by monthly telephone calls. Each telephone call was logged and summaries of conversations documented. Our experience of conducting the study was reviewed by identifying common issues that arose during patient follow-up. RESULTS: Thirty patients with a history of chronic disease and 30 patients without underlying chronic disease were followed up. A total of 582 telephone calls were made for the 60 patients discharged from hospital, of which 261 (45%) led to a telephone interview. Only 19 (30%) of the diaries were completed and returned. We identified six challenges associated with the follow-up of patients. CONCLUSION: We underestimated the number of telephone calls required for follow-up after discharge. Diaries were unreliable sources of data, suggesting strategies are needed to improve compliance. How patients respond to follow-up is not always predictable. Processes are needed to deal with unexpected information provided during telephone follow-up.


Subject(s)
Chronic Disease/therapy , Continuity of Patient Care , Quality of Life , APACHE , Case-Control Studies , Female , Humans , Intensive Care Units , Length of Stay/statistics & numerical data , Male , Middle Aged , Patient Discharge , Survivors
9.
JPEN J Parenter Enteral Nutr ; 38(7): 809-16, 2014 Sep.
Article in English | MEDLINE | ID: mdl-23976770

ABSTRACT

BACKGROUND: Enteral nutrition (EN) tolerance is often monitored by aspirating stomach contents by syringe at prescribed intervals. No studies have been conducted to assess the most appropriate interval for aspirating gastric tubes. We compared gastric tube aspirations every 4 hours (usual care) with a variable regimen (aspirations up to every 8 hours). METHODS: This randomized controlled trial (RCT) enrolled patients who stayed in the intensive care unit (ICU) for >48 hours, had a gastric tube, and were likely to receive EN for 3 or more days. Patients were randomized (computer-generated randomization) to either the control (every 4 hours) or the intervention group (variable regimen). The primary outcome was the number of gastric tube aspirations per day from randomization until EN was ceased, or up to 2 weeks postrandomization. RESULTS: Following Institutional Ethics Committee approval, 357 patients were recruited (control group, n = 179; intervention group, n = 178). No differences were found in age, sex, worst APACHE II score, or time to start of EN. In the intention-to-treat analysis, the intervention group had fewer tube aspirations per day (3.4 versus 5.4 in the control group, P < .001). Vomiting/regurgitation was increased in the intervention group (2.1% versus 3.6%, P = .02). There were no other differences in complications. CONCLUSION: This is the first RCT to examine the frequency of gastric tube aspirations. The frequency of gastric tube aspirations was reduced in the variable-regimen group with no increase in risk to the patient. Reducing the frequency of aspirations saves nursing time, decreases the risk of contamination of the feeding circuit, and minimizes the risk of body fluid exposure.


Subject(s)
Enteral Nutrition/methods , Intensive Care Units , Intubation, Gastrointestinal , Monitoring, Physiologic/methods , Adult , Aged , Enteral Nutrition/adverse effects , Female , Humans , Intention to Treat Analysis , Intubation, Gastrointestinal/adverse effects , Male , Middle Aged , Monitoring, Physiologic/adverse effects , Time Factors , Vomiting/etiology
10.
Crit Care Med ; 40(5): 1635-44, 2012 May.
Article in English | MEDLINE | ID: mdl-22511141

ABSTRACT

OBJECTIVES: To review all published clinical studies of thyroid hormone administration to brain-dead potential organ donors. METHODS: A search of PubMed using multiple search terms retrieved 401 publications including 35 original reports describing administration of thyroid hormone to brain-dead potential organ donors. Detailed review of the 35 original reports led to identification of two additional publications not retrieved in the original search. The 37 original publications reported findings from 16 separate case series or retrospective audits and seven randomized controlled trials, four of which were placebo-controlled. Meta-analysis was restricted to the four placebo-controlled randomized controlled trials. RESULTS: Whereas all case series and retrospective audits reported a beneficial effect of thyroid hormone administration, all seven randomized controlled trials reported no benefit of thyroid hormone administration either alone or in combination with other hormonal therapies. In four placebo-controlled trials including 209 donors, administration of thyroid hormone (n=108) compared with placebo (n=101) had no significant effect on donor cardiac index (pooled mean difference, 0.15 L/min/m²; 95% confidence interval -0.18 to 0.48). The major limitation of the case series and retrospective audits was the lack of consideration of uncontrolled variables that confound interpretation of the results. A limitation of the randomized controlled trials was that the proportion of donors who were hemodynamically unstable or marginal in other ways was too small to exclude a benefit of thyroid hormone in this subgroup. CONCLUSIONS: The findings of this systematic review do not support a role for routine administration of thyroid hormone in the brain-dead potential organ donor. Existing recommendations regarding the use of thyroid hormone in marginal donors are based on low-level evidence.
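The pooled mean difference quoted above is a fixed-effect, inverse-variance-weighted average of the per-study mean differences. A sketch of that pooling step with hypothetical study inputs (the mean differences and standard errors below are assumptions for illustration, not the review's extracted data):

```python
import math

def pooled_mean_difference(studies, z=1.96):
    """Fixed-effect (inverse-variance) pooled mean difference.
    Each study is a (mean_difference, standard_error) pair; weights
    are 1/SE^2, so more precise studies count for more."""
    w = [1.0 / se ** 2 for _, se in studies]
    md = sum(wi * d for wi, (d, _) in zip(w, studies)) / sum(w)
    se_pooled = math.sqrt(1.0 / sum(w))
    return md, md - z * se_pooled, md + z * se_pooled

# Hypothetical cardiac-index mean differences (L/min/m^2) and SEs
# from four trials -- illustrative numbers only.
studies = [(0.10, 0.30), (0.25, 0.25), (-0.05, 0.40), (0.20, 0.35)]
md, lo, hi = pooled_mean_difference(studies)
print(f"pooled MD = {md:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

A confidence interval that spans zero, as in the review's pooled estimate, is what supports the conclusion of no significant treatment effect.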


Subject(s)
Brain Death , Thyroid Hormones/therapeutic use , Tissue Donors , Humans , Randomized Controlled Trials as Topic , Thyroid Hormones/administration & dosage , Thyroxine/administration & dosage , Thyroxine/therapeutic use , Triiodothyronine/administration & dosage , Triiodothyronine/therapeutic use
11.
J Neurosurg ; 115(5): 1040-6, 2011 Nov.
Article in English | MEDLINE | ID: mdl-21800964

ABSTRACT

OBJECT: Ventriculitis associated with extraventricular drains (EVDs) increases rates of morbidity and mortality as well as costs. Surveillance samples of CSF are taken routinely from EVDs, but there is no consensus on the optimum frequency of sampling. The goal of this study was to assess whether the incidence of ventriculitis changed when CSF sampling frequency was reduced to once every 3 days. METHODS: After receiving institutional ethics committee approval for their project, the authors compared a prospective sample of EVD-treated patients (admitted 2008-2009) and a historical comparison group (admitted 2005-2007) at two tertiary hospital ICUs. A broad definition of ventriculitis included suspected ventriculitis (that is, treated with antibiotics for ventriculitis) and proven ventriculitis (positive CSF culture). Adult ICU patients with no preexisting neurological infection were enrolled in the study. After staff were provided with an education package, sampling of CSF was changed from daily to once every 3 days. All other management of the EVD remained unchanged. More frequent sampling was permitted if clinically indicated during the every-3rd-day sampling phase. RESULTS: Two hundred seven patients were recruited during the daily sampling phase and 176 patients when sampling was reduced to once every 3 days. The Acute Physiology and Chronic Health Evaluation (APACHE) II score was lower for the daily sampling group than for the every-3rd-day group (18.6 vs 20.3, respectively; p < 0.01), but there was no difference in mean age (47 and 45 years, respectively; p = 0.14), male/female sex distribution (61% and 59%, respectively; p = 0.68), or median EVD duration in the ICU (4.9 and 5.8 days, respectively; p = 0.14). Most patients were admitted with subarachnoid hemorrhage (42% in the daily group and 33% in the every-3rd-day group) or traumatic head injuries (29% and 36%, respectively). The incidence of ventriculitis decreased from 17% to 11% overall, and for proven ventriculitis from 10% to 3%, once sampling frequency was reduced. Sampling of CSF once every 3 days was independently associated with a reduced risk of ventriculitis (OR 0.44, 95% CI 0.22-0.88, p = 0.02). CONCLUSIONS: Reducing the frequency of CSF sampling to once every 3 days was associated with a significant decrease in the incidence of ventriculitis. The authors suggest that CSF sampling should therefore be performed once every 3 days in the absence of clinical indicators of ventriculitis.


Subject(s)
Cerebral Ventriculitis/prevention & control , Drainage/adverse effects , Hydrocephalus/surgery , Ventriculostomy/adverse effects , Adolescent , Adult , Aged , Aged, 80 and over , Cerebral Ventriculitis/etiology , Female , Humans , Male , Middle Aged , Prospective Studies
12.
Crit Care ; 14(3): R102, 2010.
Article in English | MEDLINE | ID: mdl-20525247

ABSTRACT

INTRODUCTION: The benefits and use of low-dose corticosteroids (LDCs) in severe sepsis and septic shock remain controversial. Surviving Sepsis Campaign guidelines suggest LDC use for septic shock patients poorly responsive to fluid resuscitation and vasopressor therapy. Their use is suspected to be widespread, but there is a paucity of data on global practice. The purpose of this study was to compare the baseline characteristics and clinical outcomes of patients treated or not treated with LDC in the international PROGRESS (PROmoting Global Research Excellence in Severe Sepsis) cohort study of severe sepsis. METHODS: Patients enrolled in the PROGRESS registry were evaluated for use of vasopressors and LDC (equivalent or lesser potency to hydrocortisone 50 mg six-hourly plus 50 microg 9-alpha-fludrocortisone) for treatment of severe sepsis at any time in intensive care units (ICUs). Baseline characteristics and hospital mortality were analyzed, and logistic regression techniques were used to develop propensity score and outcome models adjusted for baseline imbalances between groups. RESULTS: A total of 8,968 patients with severe sepsis and sufficient data for analysis were studied. A total of 79.8% (7,160/8,968) of patients received vasopressors, and 34.0% (3,051/8,968) received LDC. Regional use of LDC was highest in Europe (51.1%) and lowest in Asia (21.6%). Country-level use was highest in Brazil (62.9%) and lowest in Malaysia (9.0%). A total of 14.2% of patients on LDC were not receiving any vasopressor therapy. LDC patients were older, had more co-morbidities, and had higher disease severity scores. Patients receiving LDC spent longer in the ICU than patients who did not (median of 12 versus 8 days; P < 0.001). Overall hospital mortality was greater in the LDC group than in the non-LDC group (58.0% versus 43.0%; P < 0.001). After adjusting for baseline imbalances, in all mortality models (with vasopressor use), a consistent association remained between LDC and hospital mortality (odds ratios varying from 1.30 to 1.47). CONCLUSIONS: Use of LDC for the treatment of severe sepsis is widespread, with significant regional and country-level variation. In this study, 14.2% of patients received LDC despite the absence of evidence of shock. Hospital mortality was higher in the LDC group and remained higher after adjustment for key determinants of mortality.


Subject(s)
Adrenal Cortex Hormones/administration & dosage , Adrenal Cortex Hormones/therapeutic use , Registries , Shock, Septic/drug therapy , Vasoconstrictor Agents/therapeutic use , Adrenal Cortex Hormones/pharmacology , Adult , Aged , Dose-Response Relationship, Drug , Humans , Intensive Care Units , Middle Aged , Practice Patterns, Physicians'/statistics & numerical data , Propensity Score , Prospective Studies , Shock, Septic/mortality , Treatment Outcome , Vasoconstrictor Agents/administration & dosage , Vasoconstrictor Agents/pharmacology
13.
J Crit Care ; 24(1): 101-7, 2009 Mar.
Article in English | MEDLINE | ID: mdl-19272545

ABSTRACT

PURPOSE: The aim of this study was to assess the effect of comorbidities on the risk of readmission to an intensive care unit (ICU) and the excess hospital mortality associated with ICU readmissions. MATERIALS AND METHODS: A cohort study used clinical data from a 22-bed multidisciplinary ICU in a university hospital and comorbidity data from the Western Australian hospital morbidity database. RESULTS: Of 16,926 consecutive ICU admissions between 1987 and 2002, 654 patients (3.9%) were readmitted to the ICU within the same hospitalization. Patients with readmission were older, more likely to have been admitted from the operating theatre or a hospital ward, had a higher Acute Physiology and Chronic Health Evaluation (APACHE)-predicted mortality, and had more comorbidities than patients without readmission. The number of Charlson comorbidities was significantly associated with late readmission (>72 hours) but not early readmission (

Subject(s)
Comorbidity , Hospital Mortality/trends , Intensive Care Units/statistics & numerical data , Patient Readmission/statistics & numerical data , APACHE , Adult , Age Distribution , Aged , Cohort Studies , Data Collection , Female , Hospitals, University , Humans , Logistic Models , Male , Middle Aged , Morbidity , Multivariate Analysis , Risk Assessment , Risk Factors , Sensitivity and Specificity , Time Factors , Western Australia/epidemiology
14.
Crit Care Resusc ; 10(4): 312-5, 2008 Dec.
Article in English | MEDLINE | ID: mdl-19049482

ABSTRACT

OBJECTIVE: To examine the practice patterns and workload of practising Australian intensivists and to investigate the risk and prevalence of "burnout syndrome". DESIGN AND SETTING: An online survey was emailed to 324 intensivists listed on the database of the Australian and New Zealand Intensive Care Society (ANZICS) and practising in Australia. MAIN OUTCOME MEASURES: Prospectively recorded workload during a specific week in October 2007, self-reported 12-weekly averaged work pattern, and prevalence of burnout syndrome assessed by a modified Maslach Burnout Inventory-General Survey (MBI-GS). RESULTS: 115 intensivists (36%) responded; respondents were representative of mainstream tertiary intensive care practitioners. On average over a 12-week period, intensivists spent 42% of working days in bedside patient management, 16% in administration, 11% in locum positions, 9% in research and 9% on recreational leave. During 1 week of prospective recording of actual workload, 26% of intensivists managed more than nine ventilated patients, and most admitted more than two new patients per day. Most were involved in more than two family conferences, with a median duration of 1 h. The MBI-GS showed that 80% of respondents had signs of psychological stress and discomfort, 42% showed signs of emotional exhaustion, 32% had negative feelings and cynicism, and 37% considered that they underachieved in terms of personal accomplishment. CONCLUSIONS: Intensivists are at high risk of burnout syndrome. Recognising the drivers and early signs of burnout and identifying preventive strategies are a professional priority for ANZICS and the intensive care community.


Subject(s)
Burnout, Professional/epidemiology , Intensive Care Units , Professional Practice/statistics & numerical data , Workload/statistics & numerical data , Australia , Health Surveys , Humans , Job Satisfaction , Prevalence , Professional Role/psychology , Risk Factors , Syndrome , Workforce , Workload/psychology
15.
Med J Aust ; 189(1): 26-30, 2008 Jul 07.
Article in English | MEDLINE | ID: mdl-18601637

ABSTRACT

OBJECTIVE: To investigate the association between socioeconomic status (SES) and outcomes for seriously ill patients. DESIGN AND SETTING: A retrospective cohort study based on data from an intensive care unit clinical database linked with data from the Western Australian hospital morbidity and mortality databases over a 16-year period (1987-2002). MAIN OUTCOME MEASURES: In-hospital and long-term mortality. RESULTS: Data on 15,619 seriously ill patients were analysed. The in-hospital mortality rate for all seriously ill patients was 14.8%, and the incidence of death after critical illness was 7.4 per 100 person-years (4.8 per 100 person-years after hospital discharge). Patients from the most socioeconomically disadvantaged areas were more likely to be younger, to be Indigenous, to live in a remote area, to be admitted non-electively, and to have more severe acute disease and comorbidities. SES was not significantly associated with in-hospital mortality, but long-term mortality was significantly higher in patients from the lowest SES group than in those from the highest SES group, after adjusting for age, ethnicity, comorbidities, severity of acute illness, and geographical accessibility to essential services (hazard ratio for death in lowest SES group v highest SES group was 1.21 [95% CI, 1.04-1.41]; P = 0.014). The attributable incidence of death after hospital discharge between patients from the lowest and highest SES groups was 1.0 per 100 person-years (95% CI, 0.3-1.6 per 100 person-years). CONCLUSION: Lower SES was associated with worse long-term survival after critical illness over and above the background effects of age, acuity of acute illness, comorbidities, Indigenous status and geographical access to essential services.


Subject(s)
Critical Care , Critical Illness/mortality , Social Class , Adult , Aged , Female , Hospital Mortality , Humans , Longitudinal Studies , Male , Middle Aged , Retrospective Studies , Survival Analysis , Western Australia/epidemiology
16.
Am J Crit Care ; 17(4): 349-56, 2008 Jul.
Article in English | MEDLINE | ID: mdl-18593834

ABSTRACT

BACKGROUND: Sedation and analgesia scales promote a less-distressing experience in the intensive care unit and minimize complications for patients receiving mechanical ventilation. OBJECTIVES: To evaluate outcomes before and after introduction of scales for sedation and analgesia in a general intensive care unit. METHOD: A before-and-after design was used to evaluate introduction of the Richmond Agitation-Sedation Scale and the Behavioral Pain Scale for patients receiving mechanical ventilation. Data were collected for 6 months before and 6 months after training in and introduction of the scales. RESULTS: A total of 769 patients received mechanical ventilation for at least 6 hours (369 patients before and 400 patients after implementation). Age, scores on the Acute Physiology and Chronic Health Evaluation (APACHE) II, and diagnostic groups were similar in the 2 groups, but the after group had more men than did the before group. Duration of mechanical ventilation did not change significantly after the scales were introduced (median, 24 vs 28 hours). For patients who received mechanical ventilation for 96 hours or longer (24%), mechanical ventilation lasted longer after implementation of the scales (P=.03). Length of stay in the intensive care unit was similar in the 2 groups (P= .18), but patients received sedatives for longer after implementation (P=.01). By logistic regression analysis, APACHE II score (P<.001) and diagnostic group (P<.001) were independent predictors of mechanical ventilation lasting 96 hours or longer. CONCLUSION: Sedation and analgesia scales did not reduce duration of ventilation in an Australian intensive care unit.


Subject(s)
Analgesia/methods , Conscious Sedation/methods , Data Collection/methods , Intensive Care Units , Respiration, Artificial/methods , APACHE , Adult , Aged , Female , Humans , Length of Stay , Male , Middle Aged , Pain Measurement , Time Factors
17.
Crit Care Med ; 36(5): 1523-30, 2008 May.
Article in English | MEDLINE | ID: mdl-18434893

ABSTRACT

OBJECTIVE: To identify prognostic determinants of long-term survival for patients treated in intensive care units (ICUs) who survived to hospital discharge. DESIGN: An ICU clinical cohort linked to state-wide hospital records and death registers. SETTING AND PATIENTS: Adult patients admitted to a 22-bed ICU at a major teaching hospital in Perth, Western Australia, between 1987 and 2002 who survived to hospital discharge (n = 19,921) were followed up until December 31, 2003. MEASUREMENTS: The main outcome measures were crude and adjusted survival. MAIN RESULTS: The risk of death in the first year after hospital discharge was high for ICU survivors compared with the general population (standardized mortality ratio [SMR] at 1 yr = 2.90, 95% confidence interval [CI] 2.73-3.08) and remained higher than that of the general population for every year during 15 yrs of follow-up (SMR at 15 yrs = 2.01, 95% CI 1.64-2.46). Factors independently associated with survival during the first year were older age (hazard ratio [HR] = 4.09; 95% CI 3.20-5.23), severe comorbidity (HR = 5.23; 95% CI 4.25-6.43), ICU diagnostic group (HR range 2.20 to 8.95), new malignancy (HR = 4.60; 95% CI 3.68-5.76), high acute physiology score on admission (HR = 1.55; 95% CI 1.23-1.96), and peak number of organ failures (HR = 1.51; 95% CI 1.11-2.04). All of these factors were independently associated with subsequent survival for patients who were alive 1 yr after hospital discharge, with the addition of male gender (HR = 1.17; 95% CI 1.10-1.25) and prolonged length of stay in the ICU (HR = 1.42; 95% CI 1.29-1.55). CONCLUSIONS: Patients who survived an admission to the ICU have worse survival than the general population for at least 15 yrs. The factors that determine long-term survival include age, comorbidity, and primary diagnosis.
Severity of illness was also associated with long-term survival and this suggests that an episode of critical illness, or its treatment, may shorten life-expectancy.
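A standardised mortality ratio (SMR) such as those quoted above divides the observed number of deaths by the number expected from population life tables. A sketch of that calculation with Byar's Poisson approximation for the confidence interval; the counts below are illustrative assumptions, not the study's figures:

```python
import math

def smr_byar(observed, expected, z=1.96):
    """Standardised mortality ratio with Byar's approximate 95% CI,
    treating the observed death count as Poisson-distributed."""
    o = observed
    smr = o / expected
    lo = (o / expected) * (1 - 1 / (9 * o) - z / (3 * math.sqrt(o))) ** 3
    hi = ((o + 1) / expected) * (
        1 - 1 / (9 * (o + 1)) + z / (3 * math.sqrt(o + 1))) ** 3
    return smr, lo, hi

# Illustrative only: 290 observed deaths against 100 expected deaths
# derived from age- and sex-matched population life tables.
smr, lo, hi = smr_byar(290, 100.0)
print(f"SMR = {smr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An SMR of 1.0 would mean mortality matching the general population, so a lower confidence bound above 1.0 is what establishes the excess mortality reported in the abstract.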


Subject(s)
Critical Care , Critical Illness/mortality , Critical Illness/therapy , Aged , Female , Humans , Male , Middle Aged , Prospective Studies , Survival Analysis , Time Factors
18.
Intensive Care Med ; 34(3): 481-7, 2008 Mar.
Article in English | MEDLINE | ID: mdl-17992507

ABSTRACT

OBJECTIVE: The objective was to assess the ability of potential clinical predictors and inflammatory markers, measured within 24 h of intensive care unit (ICU) discharge, to predict subsequent in-hospital mortality. DESIGN AND SETTING: A prospective cohort study of 603 consecutive patients who survived their first ICU admission, between 1 June and 31 December 2005, in a 22-bed multidisciplinary ICU of a university hospital. MEASUREMENTS AND RESULTS: A total of 26 in-hospital deaths after ICU discharge (4.3%) were identified. C-reactive protein (CRP) concentrations at ICU discharge were associated with subsequent in-hospital mortality in the univariate analysis (mean CRP concentration: non-survivors 174 vs. survivors 85.6 mg/l, p = 0.001). CRP concentrations remained significantly associated with post-ICU mortality (a 10-mg/l increment in CRP concentration increased the odds of death: odds ratio [OR] 1.09, 95% confidence interval [CI] 1.03-1.16) after adjusting for age, the Acute Physiology and Chronic Health Evaluation (APACHE) II predicted mortality, and the Delta Sequential Organ Failure Assessment (Delta SOFA) score. The area under the receiver operating characteristic curve of this multivariate model for discriminating between survivors and non-survivors after ICU discharge was 0.85 (95% CI 0.73-0.96). The destination and timing of ICU discharge, and the discharge SOFA score, white cell count and fibrinogen concentration at ICU discharge, were not significantly associated with in-hospital mortality after ICU discharge. CONCLUSIONS: A high CRP concentration at ICU discharge was an independent predictor of in-hospital mortality after ICU discharge in our ICU.


Subject(s)
C-Reactive Protein/analysis , Critical Illness/mortality , Hospital Mortality , APACHE , Adult , Aged , Biomarkers/blood , Female , Humans , Inflammation/blood , Intensive Care Units , Male , Middle Aged , Patient Discharge , Patient Transfer , Predictive Value of Tests , Sepsis/blood , Severity of Illness Index
19.
Crit Care Resusc ; 9(1): 19-25, 2007 Mar.
Article in English | MEDLINE | ID: mdl-17352662

ABSTRACT

PURPOSE: To assess the ability of potential clinical predictors and inflammatory markers to predict in-hospital mortality after patient discharge from the intensive care unit. SETTING AND PARTICIPANTS: 1,272 patients who survived their index admission to a 22-bed multidisciplinary ICU of a university hospital in 2004. DESIGN: Nested case-control study with two concurrent control patients for each case of in-hospital death after ICU discharge. RESULTS: There were 29 unexpected in-hospital deaths after ICU discharge (2.3%). C-reactive protein (CRP) concentrations within 24 hours of ICU discharge were available for 14 of these 29 patients and 22 concurrent control patients. CRP concentration at ICU discharge was associated with subsequent mortality (mean CRP concentrations: cases, 204 mg/L v controls, 63 mg/L; P = 0.001). CRP concentration remained significantly associated with post-ICU mortality after adjustment for other potential predictors of mortality (odds ratio [OR] of death for a 10 mg/L increase in CRP concentration, 1.27; 95% CI, 1.09-1.49; P = 0.005) and with propensity score (OR, 1.19; 95% CI, 1.05-1.33; P = 0.004). The area under the receiver operating characteristic curve for CRP concentrations to predict in-hospital mortality was 0.87 (95% CI, 0.73-0.99; P = 0.001). The destination and timing of ICU discharge, SOFA (Sequential Organ Failure Assessment) score, white cell count and fibrinogen concentration at ICU discharge were not significantly associated with in-hospital mortality after ICU discharge. CONCLUSIONS: A high CRP concentration at ICU discharge is an independent predictor of subsequent in-hospital mortality. Prospective cohort studies in ICUs with different casemix, discharge criteria and post-ICU mortality rates are needed to validate and generalise our findings.


Subject(s)
C-Reactive Protein/analysis , Hospital Mortality , Intensive Care Units , Patient Discharge , Adult , Aged , Case-Control Studies , Female , Humans , Logistic Models , Male , Middle Aged , Predictive Value of Tests , ROC Curve , Time Factors