Results 1 - 20 of 115
1.
Heliyon ; 10(11): e32655, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38961987

ABSTRACT

This study investigated the accuracy of a machine learning algorithm for predicting mortality in patients receiving rapid response system (RRS) activation. This retrospective cohort study used data from the In-Hospital Emergency Registry in Japan, which collects nationwide data on patients receiving RRS activation. Missing values in the dataset were replaced using multiple imputation methods (mode imputation, scikit-learn's BayesianRidge linear model, and a K-nearest neighbor model), and the enrolled patients were randomly assigned to training and test cohorts. We established prediction models for 30-day mortality using four types of machine learning classifiers: Light Gradient Boosting Machine (LightGBM), eXtreme Gradient Boosting, random forest, and neural network. Fifty-two variables (patient characteristics, details of RRS activation, reasons for RRS initiation, and hospital capacity) were used to construct the prediction algorithm. The primary outcome was the accuracy of the prediction model for 30-day mortality. Overall, data from 4,997 patients across 34 hospitals were analyzed. The LightGBM algorithm demonstrated the highest predictive value for 30-day mortality (area under the receiver operating characteristic curve, 0.860 [95% confidence interval, 0.825-0.895]). The SHapley Additive exPlanations summary plot indicated that hospital capacity, site of incidence, code status, and abnormal vital signs within 24 h were important variables in the prediction model for 30-day mortality.
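The imputation step above can be illustrated with the simplest of the three strategies, mode imputation. A minimal pure-Python sketch (the study itself used scikit-learn implementations; the records and the field name are hypothetical):

```python
from collections import Counter

def impute_mode(rows, key):
    """Fill missing (None) values of `key` with the most frequent observed value."""
    observed = [r[key] for r in rows if r[key] is not None]
    mode = Counter(observed).most_common(1)[0][0]
    return [dict(r, **{key: r[key] if r[key] is not None else mode}) for r in rows]

# Hypothetical registry records with a missing code-status field
records = [
    {"code_status": "full"},
    {"code_status": "full"},
    {"code_status": "DNAR"},
    {"code_status": None},
]
filled = impute_mode(records, "code_status")  # last record becomes "full"
```

Mode imputation preserves the most common category but ignores relationships between variables, which is one reason model-based imputers such as BayesianRidge or K-nearest neighbors are also used.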

2.
Patient Educ Couns ; 128: 108368, 2024 Jul 06.
Article in English | MEDLINE | ID: mdl-39018781

ABSTRACT

OBJECTIVE: This study aimed to examine self-reported code-status practice patterns among emergency clinicians from Japan and the U.S. METHODS: A cross-sectional questionnaire was distributed to emergency clinicians from one academic medical center and four general hospitals in Japan and two academic medical centers in the U.S. The questionnaire was based on a hypothetical case involving a critically ill patient with end-stage lung cancer. The questionnaire items assessed whether respondent clinicians would be likely to pose questions to patients about their preferences for medical procedures and about their values and goals. RESULTS: A total of 176 emergency clinicians from Japan and the U.S. participated. After adjusting for participants' backgrounds, emergency clinicians in Japan were less likely to pose procedure-based questions than those in the U.S. Conversely, emergency clinicians in Japan showed a statistically higher likelihood of asking 10 of 12 value-based questions. CONCLUSION: Significant differences were found between emergency clinicians in Japan and the U.S. in their reported practices of posing procedure-based and patient value-based questions. PRACTICE IMPLICATIONS: Serious illness communication training based in the U.S. must be adapted to the Japanese context, considering the cultural characteristics and practical responsibilities of Japanese emergency clinicians.

3.
Cureus ; 16(5): e60478, 2024 May.
Article in English | MEDLINE | ID: mdl-38882989

ABSTRACT

AIM: As society ages, more elderly patients are admitted to intensive care units (ICUs). Elderly patients may have increased ICU mortality and are thought to have a high incidence of post-intensive care syndrome (PICS), yet few studies of PICS have focused on the elderly. This study hypothesized that the elderly have an increased incidence of PICS compared with the non-elderly. METHODS: This is a subgroup analysis of a previous multicenter prospective observational study (Prevalence of post-intensive care syndrome among Japanese intensive care unit patients: the Japan-PICS study) conducted from April 2019 to September 2019. Ninety-six patients were included who were over 18 years old, admitted to the ICU, and expected to require mechanical ventilation for more than 48 hours. Physical component scale (PCS), mental component scale (MCS), and Short-Memory Questionnaire (SMQ) scores of included patients were compared before ICU admission and six months later. The diagnosis of PICS required at least one of the following: (1) the PCS score decreased ≥10 points, (2) the MCS score decreased ≥10 points, or (3) the SMQ score decreased by >40 points. Patients were classified as non-elderly (<65 years old) or elderly (≥65 years old), and the incidence of PICS was compared between these two groups. RESULTS: The non-elderly (N=27) and elderly (N=69) groups had PICS incidences of 67% and 62%, respectively (p=0.69). CONCLUSION: There was no statistically significant difference in the incidence of PICS between the non-elderly and the elderly.
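The composite PICS definition above is a simple disjunction of three score drops; a sketch of it as a predicate (drops taken as pre-ICU value minus six-month value):

```python
def meets_pics_criteria(pcs_drop, mcs_drop, smq_drop):
    """PICS if the PCS or MCS score fell by >=10 points,
    or the SMQ score fell by >40 points."""
    return pcs_drop >= 10 or mcs_drop >= 10 or smq_drop > 40

# A patient whose physical score alone declined by 12 points qualifies
print(meets_pics_criteria(12, 0, 0))  # -> True
```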

4.
Microbiol Spectr ; 12(7): e0034224, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38864641

ABSTRACT

Whether empirical therapy with carbapenems positively affects the outcomes of critically ill patients with bacterial infections remains unclear. This study aimed to investigate whether the use of carbapenems as the initial antimicrobial administration reduces mortality and whether the duration of carbapenem use affects the detection of multidrug-resistant (MDR) pathogens. This was a post hoc analysis of data acquired from Japanese participating sites from a multicenter, prospective observational study [Determinants of Antimicrobial Use and De-escalation in Critical Care (DIANA study)]. A total of 268 adult patients with clinically suspected or confirmed bacterial infections from 31 Japanese intensive care units (ICUs) were analyzed. The patients were divided into two groups: patients who were administered carbapenems as initial antimicrobials (initial carbapenem group, n = 99) and those who were not administered carbapenems (initial non-carbapenem group, n = 169). The primary outcomes were mortality at day 28 and detection of MDR pathogens. Multivariate logistic regression analysis revealed that mortality at day 28 did not differ between the two groups [18 (18%) vs 27 (16%), respectively; odds ratio: 1.25 (95% confidence interval (CI): 0.59-2.65), P = 0.564]. The subdistribution hazard ratio for detecting MDR pathogens on day 28 per additional day of carbapenem use was 1.08 (95% CI: 1.05-1.13, P < 0.001, using the Fine-Gray model with death regarded as a competing event). In conclusion, in-hospital mortality was similar between the groups, and a longer duration of carbapenem use as the initial antimicrobial therapy resulted in a higher risk of detection of new MDR pathogens. IMPORTANCE: We found no statistically significant difference in mortality with the empirical use of carbapenems as initial antimicrobial therapy among critically ill patients with bacterial infections.
Our study revealed a lower proportion of inappropriate initial antimicrobial administrations than those reported in previous studies. This result suggests the importance of appropriate risk assessment for the involvement of multidrug-resistant (MDR) pathogens and the selection of suitable antibiotics based on risk. To the best of our knowledge, this study is the first to demonstrate that a longer duration of carbapenem use as initial therapy is associated with a higher risk of subsequent detection of MDR pathogens. This finding underscores the importance of efforts to minimize the duration of carbapenem use as initial antimicrobial therapy when it is necessary.
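Because the reported subdistribution hazard ratio is per additional day of carbapenem use, the relative hazard compounds multiplicatively with duration. Illustrative arithmetic only, holding other covariates fixed:

```python
def relative_subdistribution_hazard(shr_per_day, extra_days):
    """Relative hazard of MDR detection after `extra_days` additional days,
    assuming the per-day SHR applies multiplicatively."""
    return shr_per_day ** extra_days

# A week of carbapenem therapy at SHR 1.08/day:
print(round(relative_subdistribution_hazard(1.08, 7), 2))  # -> 1.71
```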


Subject(s)
Anti-Bacterial Agents , Bacterial Infections , Carbapenems , Critical Illness , Drug Resistance, Multiple, Bacterial , Intensive Care Units , Humans , Carbapenems/therapeutic use , Male , Prospective Studies , Female , Aged , Anti-Bacterial Agents/therapeutic use , Middle Aged , Intensive Care Units/statistics & numerical data , Bacterial Infections/drug therapy , Bacterial Infections/mortality , Bacterial Infections/microbiology , Japan , Aged, 80 and over , Bacteria/drug effects , Bacteria/isolation & purification , Bacteria/classification , Bacteria/genetics
5.
Resusc Plus ; 18: 100628, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38617440

ABSTRACT

Aim: Although early detection of patients' deterioration may improve outcomes, most detection criteria use on-the-spot values of vital signs. We investigated whether adding trend values over time enhanced the ability to predict adverse events among hospitalized patients. Methods: Patients who experienced adverse events, such as unexpected cardiac arrest or unplanned ICU admission, were enrolled in this retrospective study. The association between the events and combinations of vital signs was evaluated at the time of the worst vital signs 0-8 hours before events (near the event) and at 24-48 hours before events (baseline). Multivariable logistic analysis was performed, and the area under the receiver operating characteristic curve (AUC) was used to assess the predictive power for adverse events among various combinations of vital sign parameters. Results: Among 24,509 in-patients, 54 patients who experienced adverse events (cases) and 3,116 eligible control patients were included in the data analysis. At the timepoint near the event, systolic blood pressure (SBP) was lower, and heart rate (HR) and respiratory rate (RR) were higher, in the case group; this tendency was also observed at baseline. The AUC for event occurrence with reference to SBP, HR, and RR was lower when evaluated at baseline than at the timepoint near the event (0.85 [95%CI: 0.79-0.92] vs. 0.93 [0.88-0.97]). When the trend in RR was added to the formula constructed from baseline values of SBP, HR, and RR, the AUC increased to 0.92 [0.87-0.97]. Conclusion: Trends in RR may enhance the accuracy of predicting adverse events in hospitalized patients.
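The AUC used to compare the parameter combinations has a rank-based interpretation: the probability that a randomly chosen event case receives a higher risk score than a randomly chosen control. A self-contained sketch (the scores are hypothetical):

```python
def auc_mann_whitney(scores_cases, scores_controls):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of case/control pairs ranked correctly (ties count half)."""
    wins = sum(
        (c > k) + 0.5 * (c == k)
        for c in scores_cases
        for k in scores_controls
    )
    return wins / (len(scores_cases) * len(scores_controls))

# Hypothetical risk scores for event cases vs. controls
print(round(auc_mann_whitney([0.9, 0.8, 0.4], [0.5, 0.3, 0.2]), 2))  # -> 0.89
```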

6.
Acute Med Surg ; 11(1): e951, 2024.
Article in English | MEDLINE | ID: mdl-38638890

ABSTRACT

Aim: We aimed to evaluate the clinical characteristics and outcomes of elderly critically ill patients and to identify prognostic factors for mobility disability at discharge. Methods: This single-center, retrospective cohort study covered the period from April 2020 to January 2021. Patients ≥75 years old transferred to our emergency department and admitted to the intensive care unit (ICU) or intermediate unit in our hospital were eligible. Demographics, clinical characteristics, nutritional indicators, and nutritional screening scores were collected from chart reviews and analyzed. The primary outcome was the prevalence of mobility disability, compared with that of no mobility disability. Results: A total of 124 patients were included in the present study. Median age was 83.0 years (interquartile range [IQR], 79.8-87.0 years) and 48 patients (38.7%) were female. Fifty-two patients (41.9%) could not walk independently at discharge (mobility disability group); the remaining 72 patients formed the no mobility disability group. Multiple logistic regression analyses revealed that a Clinical Frailty Scale (CFS) score ≥5 (odds ratio [OR] = 6.63, 95% confidence interval [CI] = 2.51-17.52, p < 0.001), a SOFA score ≥6 (OR = 6.11, 95% CI = 1.57-23.77, p = 0.009), and a neurological disorder as the main cause on admission (OR = 4.48, 95% CI = 1.52-13.20, p = 0.006) were independent and significant prognostic factors for mobility disability at discharge. Conclusion: Among elderly patients admitted through the emergency department, CFS ≥5, SOFA ≥6, and neurological disorders were associated with mobility disability at hospital discharge.

7.
Clin Pharmacol Ther ; 115(6): 1372-1382, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38441177

ABSTRACT

With the coronavirus disease 2019 (COVID-19) pandemic, there is growing interest in utilizing adaptive platform clinical trials (APTs), in which multiple drugs are compared with a single common control group, such as a placebo or standard-of-care group. APTs evaluate several drugs for one disease and accept additions or exclusions of drugs as the trials progress; however, little is known about the efficiency of APTs relative to multiple stand-alone trials. In this study, we simulated the total development period, total sample size, and statistical operating characteristics of APTs and multiple stand-alone trials in drug development settings for hospitalized patients with COVID-19. Simulation studies using selected scenarios reconfirmed several findings regarding the efficiency of APTs. APTs without staggered addition of drugs showed a shorter total development period than stand-alone trials, but the difference rapidly diminished if patient enrollment accelerated during the trials owing to the spread of infection. APTs with staggered addition of drugs may still reduce the total development period compared with multiple stand-alone trials in some cases. Our study demonstrated that APTs can improve efficiency relative to multiple stand-alone trials in terms of total development period and total sample size without undermining statistical validity; however, this improvement varies with the speed of patient enrollment, sample size, presence or absence of family-wise error rate adjustment, allocation ratio between drug and placebo groups, and interval of staggered addition of drugs. Given the complexity of planning and implementing an APT, the decision to deploy one during a pandemic must be made carefully.


Subject(s)
COVID-19 Drug Treatment , COVID-19 , Computer Simulation , Drug Development , Humans , Drug Development/methods , COVID-19/epidemiology , Sample Size , Pandemics , SARS-CoV-2 , Clinical Trials as Topic/methods , Antiviral Agents/therapeutic use , Adaptive Clinical Trials as Topic , Research Design
8.
Resusc Plus ; 17: 100527, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38188596

ABSTRACT

Objective: This study investigated temporal muscle atrophy in out-of-hospital cardiac arrest patients post-resuscitation, seeking associations with neurological outcomes and factors associated with atrophy. Methods: Using data from six Japanese intensive care units, adult post-resuscitation patients who underwent head computed tomography scans on admission and two to five days post-admission were assessed. Temporal muscle area, thickness, and density were quantified from a single cross-sectional image. Patients were categorized into 'atrophy' or 'no atrophy' groups based on median daily temporal muscle atrophy rates. The primary outcome was change in temporal muscle dimensions between admission and follow-up two to five days later. Secondary outcomes included the impact of temporal muscle atrophy on 30-day survival and the identification of any clinical factors associated with temporal muscle atrophy. Results: A total of 185 patients were analyzed. Measurements at follow-up revealed significant decreases in temporal muscle area (214 vs. 191 mm2, p < 0.001), thickness (4.9 vs. 4.7 mm, p < 0.001), and density (46 vs. 44 HU, p < 0.001) compared with those at admission. The median daily rate of temporal muscle area atrophy was 2.0% per day. There was no significant association between temporal muscle atrophy and 30-day survival (hazard ratio, 0.71; 95% CI, 0.41-1.23; p = 0.231). Multivariable logistic regression found no clinical factors significantly associated with temporal muscle atrophy. Conclusions: Temporal muscle atrophy in post-resuscitation patients occurs rapidly, at 2.0% per day. However, there was no significant association with 30-day mortality or any identified clinical factors. Further investigation into its long-term functional implications is warranted.
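The 2.0%-per-day figure is a daily rate of relative area loss between the two scans. A sketch of the computation using the reported median areas and a hypothetical 5-day scan interval (the actual interval varied from two to five days):

```python
def daily_atrophy_rate(area_admission, area_followup, days_between):
    """Percent loss of temporal muscle area per day between the two scans."""
    total_loss_pct = (area_admission - area_followup) / area_admission * 100
    return total_loss_pct / days_between

# Reported medians: 214 mm^2 at admission, 191 mm^2 at follow-up
print(round(daily_atrophy_rate(214, 191, 5), 1))  # -> 2.1
```

With a 5-day interval the result (about 2.1%/day) is close to the reported 2.0%/day median; shorter intervals would give proportionally higher daily rates.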

9.
Crit Care Med ; 52(2): 314-330, 2024 02 01.
Article in English | MEDLINE | ID: mdl-38240510

ABSTRACT

RATIONALE: Clinical deterioration of patients hospitalized outside the ICU is a source of potentially reversible morbidity and mortality. To address this, some acute care hospitals have implemented systems aimed at detecting and responding to such patients. OBJECTIVES: To provide evidence-based recommendations for hospital clinicians and administrators to optimize recognition and response to clinical deterioration in non-ICU patients. PANEL DESIGN: The 25-member panel included representatives from medicine, nursing, respiratory therapy, pharmacy, patient/family partners, and clinician-methodologists with expertise in developing evidence-based Clinical Practice Guidelines. METHODS: We generated actionable questions using the Population, Intervention, Control, and Outcomes (PICO) format and performed a systematic review of the literature to identify and synthesize the best available evidence. We used the Grading of Recommendations Assessment, Development, and Evaluation Approach to determine certainty in the evidence and to formulate recommendations and good practice statements (GPSs). RESULTS: The panel issued 10 statements on recognizing and responding to non-ICU patients with critical illness. Healthcare personnel and institutions should ensure that all vital sign acquisition is timely and accurate (GPS). We make no recommendation on the use of continuous vital sign monitoring among unselected patients. We suggest focused education for bedside clinicians in signs of clinical deterioration, and we also suggest that patient/family/care partners' concerns be included in decisions to obtain additional opinions and help (both conditional recommendations). We recommend hospital-wide deployment of a rapid response team or medical emergency team (RRT/MET) with explicit activation criteria (strong recommendation). 
We make no recommendation about RRT/MET professional composition or inclusion of palliative care members on the responding team but suggest that the skill set of responders should include eliciting patients' goals of care (conditional recommendation). Finally, quality improvement processes should be part of a rapid response system. CONCLUSIONS: The panel provided guidance to inform clinicians and administrators on effective processes to improve the care of patients at risk of developing critical illness outside the ICU.


Subject(s)
Clinical Deterioration , Critical Care , Humans , Critical Care/standards , Critical Illness/therapy , Evidence-Based Practice , Intensive Care Units
10.
Crit Care Med ; 52(2): 307-313, 2024 02 01.
Article in English | MEDLINE | ID: mdl-38240509

ABSTRACT

RATIONALE: Clinical deterioration of patients hospitalized outside the ICU is a source of potentially reversible morbidity and mortality. To address this, some acute care facilities have implemented systems aimed at detecting and responding to such patients. OBJECTIVES: To provide evidence-based recommendations for hospital clinicians and administrators to optimize recognition and response to clinical deterioration in non-ICU patients. PANEL DESIGN: The 25-member panel included representatives from medicine, nursing, respiratory therapy, pharmacy, patient/family partners, and clinician-methodologists with expertise in developing evidence-based clinical practice guidelines. METHODS: We generated actionable questions using the Population, Intervention, Control, and Outcomes format and performed a systematic review of the literature to identify and synthesize the best available evidence. We used the Grading of Recommendations Assessment, Development, and Evaluation approach to determine certainty in the evidence and to formulate recommendations and good practice statements (GPSs). RESULTS: The panel issued 10 statements on recognizing and responding to non-ICU patients with critical illness. Healthcare personnel and institutions should ensure that all vital sign acquisition is timely and accurate (GPS). We make no recommendation on the use of continuous vital sign monitoring among "unselected" patients due to the absence of data regarding the benefit and the potential harms of false positive alarms, the risk of alarm fatigue, and cost. We suggest focused education for bedside clinicians in signs of clinical deterioration, and we also suggest that patient/family/care partners' concerns be included in decisions to obtain additional opinions and help (both conditional recommendations). We recommend hospital-wide deployment of a rapid response team or medical emergency team (RRT/MET) with explicit activation criteria (strong recommendation). 
We make no recommendation about RRT/MET professional composition or inclusion of palliative care members on the responding team but suggest that the skill set of responders should include eliciting patients' goals of care (conditional recommendation). Finally, quality improvement processes should be part of a rapid response system (GPS). CONCLUSIONS: The panel provided guidance to inform clinicians and administrators on effective processes to improve the care of patients at risk of developing critical illness outside the ICU.


Subject(s)
Clinical Deterioration , Critical Care , Humans , Critical Care/standards , Critical Illness/therapy , Intensive Care Units , Quality Improvement
11.
Vaccine ; 42(3): 677-688, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38114409

ABSTRACT

INTRODUCTION: Since the SARS-CoV-2 Omicron variant became dominant, assessing COVID-19 vaccine effectiveness (VE) against severe disease using hospitalization as an outcome became more challenging due to incidental infections detected via admission screening and variable admission criteria, resulting in a wide range of estimates. To address this, World Health Organization (WHO) guidance recommends the use of outcomes that are more specific to severe pneumonia, such as oxygen use and mechanical ventilation. METHODS: A case-control study was conducted in 24 hospitals in Japan for the Delta-dominant period (August-November 2021; "Delta") and the early Omicron (BA.1/BA.2)-dominant period (January-June 2022; "Omicron"). Detailed chart review/interviews were conducted in January-May 2023. VE was measured using various outcomes including disease requiring oxygen therapy, disease requiring invasive mechanical ventilation (IMV), death, an outcome restricted to "true" severe COVID-19 (where the oxygen requirement is due to COVID-19 rather than another condition), and progression from oxygen use to IMV or death among COVID-19 patients. RESULTS: The analysis included 2125 individuals with respiratory failure (1608 cases [75.7%]; 99.2% of vaccinees received mRNA vaccines). During Delta, 2 doses provided high protection for up to 6 months (oxygen requirement: 95.2% [95% CI: 88.7-98.0%] [restricted to "true" severe COVID-19: 95.5% {89.3-98.1%}]; IMV: 99.6% [97.3-99.9%]; fatal: 98.6% [92.3-99.7%]). During Omicron, 3 doses provided high protection for up to 6 months (oxygen requirement: 85.5% [68.8-93.3%] ["true" severe COVID-19: 88.1% {73.6-94.7%}]; IMV: 97.9% [85.9-99.7%]; fatal: 99.6% [95.2-99.97%]). There was a trend towards higher VE for more severe and specific outcomes. CONCLUSION: Multiple outcomes pointed towards high protection of 2 doses during Delta and 3 doses during Omicron.
These results demonstrate the importance of using severe and specific outcomes to accurately measure VE against severe COVID-19, as recommended in WHO guidance in settings of intense transmission as seen during Omicron.
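In a case-control design like this one, VE is estimated as one minus the odds ratio of vaccination among cases versus controls. A sketch with hypothetical counts (the study's actual models adjusted for covariates):

```python
def vaccine_effectiveness(vacc_cases, unvacc_cases, vacc_controls, unvacc_controls):
    """Case-control VE: 1 minus the odds ratio of vaccination
    in cases versus controls, expressed as a percentage."""
    odds_ratio = (vacc_cases * unvacc_controls) / (unvacc_cases * vacc_controls)
    return (1 - odds_ratio) * 100

# Hypothetical counts: 10 vaccinated vs 90 unvaccinated cases,
# 50 vaccinated vs 50 unvaccinated controls
print(round(vaccine_effectiveness(10, 90, 50, 50), 1))  # -> 88.9
```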


Subject(s)
COVID-19 Vaccines , COVID-19 , Humans , COVID-19/prevention & control , Oxygen/therapeutic use , Japan/epidemiology , Respiration, Artificial , Case-Control Studies , Vaccine Efficacy , SARS-CoV-2
14.
Crit Care Med ; 51(12): 1685-1696, 2023 12 01.
Article in English | MEDLINE | ID: mdl-37971720

ABSTRACT

OBJECTIVES: This study aimed to examine the association between ABCDEF bundles and long-term postintensive care syndrome (PICS)-related outcomes. DESIGN: Secondary analysis of the J-PICS study. SETTING: This study was simultaneously conducted in 14 centers and 16 ICUs in Japan between April 1, 2019, and September 30, 2019. PATIENTS: Adult ICU patients who were expected to be on a ventilator for at least 48 hours. INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Bundle compliance for the last 24 hours was recorded using a checklist at 8:00 AM. The bundle compliance rate was defined as the 3-day average of the number of bundles performed each day divided by the total number of bundles. The relationship between the bundle compliance rate and PICS prevalence (defined by the 36-item Short Form Physical Component Scale, Mental Component Scale, and Short Memory Questionnaire) was examined. A total of 191 patients were included in this study. Of these, 33 patients (17.3%) died in-hospital and 48 (25.1%) died within 6 months. Of the 96 patients with 6-month outcome data, 61 patients (63.5%) had PICS and 35 (36.5%) did not. The total bundle compliance rate was 69.8%; the rate was significantly lower in the 6-month mortality group (66.6% vs 71.6%, p = 0.031). Bundle compliance rates in patients with and without PICS were 71.3% and 69.9%, respectively (p = 0.61). After adjusting for confounding variables, bundle compliance rates were not significantly different in the context of PICS prevalence (p = 0.56). A strong negative correlation between the bundle compliance rate and PICS prevalence (r = -0.84, R2 = 0.71, p = 0.035) was observed in high-volume centers. CONCLUSIONS: The bundle compliance rate was not associated with PICS prevalence. However, 6-month mortality was lower with a higher bundle compliance rate. A trend toward a lower PICS prevalence was associated with higher bundle compliance in high-volume centers.
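The compliance metric defined above (a three-day average of elements performed over total elements) can be sketched directly. The six-element count reflects the A-F components of the bundle; the daily counts here are hypothetical:

```python
def bundle_compliance_rate(daily_counts, total_bundles=6):
    """Three-day average of (bundle elements performed / total elements),
    as a percentage. The ABCDEF bundle has six elements."""
    daily_rates = [performed / total_bundles for performed in daily_counts]
    return sum(daily_rates) / len(daily_rates) * 100

# e.g. 4, 5, and 4 of the 6 elements delivered on three consecutive days
print(round(bundle_compliance_rate([4, 5, 4]), 1))  # -> 72.2
```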


Subject(s)
Critical Illness , Intensive Care Units , Adult , Humans , Critical Illness/epidemiology , Critical Illness/therapy , Hospital Mortality , Ventilators, Mechanical
15.
Prog Rehabil Med ; 8: 20230020, 2023.
Article in English | MEDLINE | ID: mdl-37440788

ABSTRACT

Background: Many patients with coronavirus disease 2019 (COVID-19) develop malnutrition after a prolonged stay in the intensive care unit (ICU) with mechanical ventilation. Early enteral nutrition is recommended, but optimal nutrition management during post-extubation recovery remains challenging. Cases: The subjects were 12 patients with acute respiratory distress syndrome due to COVID-19 (9 men, 3 women; median age, 55.6 years). We reviewed patient characteristics, physical function, and nutrient intake during hospitalization from just after extubation to discharge. During this period, the median Functional Oral Intake Scale score improved from 4.5 (interquartile range [IQR] 3.3-5.3) to 7.0 (IQR 5.8-7.0), the median Medical Research Council (MRC) scale score improved from 45.0 (IQR 39.3-48.5) to 53.5 (IQR 47.5-59.3), and the median Barthel index improved from 7.5 (IQR 0-16.3) to 72.5 (IQR 42.5-95.0). In 3 patients, the MRC scale score remained below 48 before discharge, indicating prolonged ICU-acquired weakness. The median daily caloric intake during this phase increased from 6.9 kcal/kg per day (3.5-10.2 kcal/kg per day) to 24.8 kcal/kg per day (21.0-27.9 kcal/kg per day). About half of these patients showed caloric intake below 25 kcal/kg per day before discharge. Based on the Global Leadership Initiative on Malnutrition (GLIM) diagnostic scheme, 10 patients were diagnosed with malnutrition during hospitalization. Discussion: Physical function improved in more than half of the patients, but nutritional status did not recover. More studies of nutritional management are required to prevent malnutrition and to enhance functional recovery during the post-extubation rehabilitation phase.

16.
Acute Med Surg ; 10(1): e870, 2023.
Article in English | MEDLINE | ID: mdl-37416895

ABSTRACT

Aim: The rapid response system (RRS) was initially aimed at improving patient outcomes. Recently, some studies have suggested that RRS might facilitate do-not-attempt-resuscitation (DNAR) orders among patients, their families, and healthcare providers. This study aimed to examine the incidence of, and factors independently associated with, DNAR orders newly implemented after RRS activation among deteriorating patients. Methods: This observational study assessed patients who required RRS activation between 2012 and 2021 in Japan. We investigated patients' characteristics and the incidence of new DNAR orders after RRS activation. Furthermore, we used multivariable hierarchical logistic regression models to explore independent predictors of new DNAR orders. Results: We identified 7904 patients (median age, 72 years; 59% male) who required RRS activation at 29 facilities. Of the 7066 patients without pre-existing DNAR orders before RRS activation, 394 (5.6%) had new DNAR orders. Multivariable hierarchical logistic regression analyses revealed that new DNAR orders were associated with age category (adjusted odds ratio [aOR], 1.56; 95% confidence interval, 1.12-2.17 [65-74 years old vs. the 20-64 years old reference]; aOR, 2.56; 1.92-3.42 [75-89 years old]; and aOR, 6.58; 4.17-10.4 [≥90 years old]), malignancy (aOR, 1.82; 1.42-2.32), postoperative status (aOR, 0.45; 0.30-0.71), and National Early Warning Score 2 (aOR, 1.07; 1.02-1.12 [per 1 point]). Conclusion: New DNAR orders were issued for one in 18 patients after RRS activation. The factors associated with new DNAR orders were age, malignancy, postoperative status, and National Early Warning Score 2.

17.
Acute Med Surg ; 10(1): e851, 2023.
Article in English | MEDLINE | ID: mdl-37261374

ABSTRACT

Background: Clinical risk scores are widely used in emergency medicine, and some studies have evaluated their use in patients with coronavirus disease 2019 (COVID-19). However, no studies have evaluated their use in patients with the COVID-19 Delta variant. We aimed to study the performance of four different clinical scores (National Early Warning Score [NEWS], quick Sequential Organ Failure Assessment [qSOFA], Confusion, Respiratory rate, Blood pressure, and Age ≥65 [CRB-65], and Kanagawa score) in predicting the risk of severe disease (defined as the need for intubation and in-hospital mortality) in patients with the COVID-19 Delta variant. Methods: This was a retrospective cohort study of patients hospitalized with suspected severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) Delta variant infection between June 1 and December 31, 2021. The primary outcomes were the sensitivity and specificity of the aforementioned clinical risk scores at admission to predict severe disease. Areas under the receiver operating characteristic curves (AUROCs) were compared between the clinical risk scores, and new cut-off points were identified for all four scores. Results: A total of 249 adult patients were included, of whom 18 developed severe disease. A NEWS ≥7 at admission predicted severe disease with 72.2% sensitivity and 86.2% specificity. The NEWS (AUROC 0.88) was superior to both the qSOFA (AUROC 0.74) and the CRB-65 (AUROC 0.67), and there was no significant difference between the NEWS and Kanagawa score (AUROC 0.86). Conclusion: The NEWS at hospital admission predicted the severity of the COVID-19 Delta variant with high accuracy.
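Sensitivity and specificity at a score cutoff, as reported for NEWS ≥7, come from a simple 2×2 tabulation. A sketch with hypothetical NEWS values and severe-disease labels:

```python
def sens_spec_at_cutoff(scores, outcomes, cutoff):
    """Sensitivity and specificity of flagging score >= cutoff as severe.
    `outcomes` are 1 for severe disease, 0 otherwise."""
    tp = sum(s >= cutoff and y == 1 for s, y in zip(scores, outcomes))
    fn = sum(s < cutoff and y == 1 for s, y in zip(scores, outcomes))
    tn = sum(s < cutoff and y == 0 for s, y in zip(scores, outcomes))
    fp = sum(s >= cutoff and y == 0 for s, y in zip(scores, outcomes))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical admission NEWS values and severe-disease labels
news = [9, 8, 3, 8, 5, 2, 10, 6]
severe = [1, 1, 0, 0, 1, 0, 1, 0]
sens, spec = sens_spec_at_cutoff(news, severe, cutoff=7)
```

Raising the cutoff trades sensitivity for specificity, which is why the study also searched for new cut-off points for each score.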

18.
Front Med (Lausanne) ; 10: 1199750, 2023.
Article in English | MEDLINE | ID: mdl-37305119

ABSTRACT

Background: Airway obstruction is a relatively rare but critical condition that requires urgent intervention in the emergency department (ED). The present study aimed to investigate the association of airway obstruction with first-pass success and intubation-related adverse events in the ED. Methods: We analyzed data from two prospective multicenter observational studies of ED airway management. We included adults (aged ≥18 years) who underwent tracheal intubation for non-trauma indications from 2012 through 2021 (a 113-month period). Outcome measures were first-pass success and intubation-related adverse events. We constructed a multivariable logistic regression model adjusting for age, sex, modified LEMON score (without airway obstruction), intubation methods, intubation devices, bougie use, intubator's specialty, and ED visit year, accounting for patient clustering within the ED. Results: Of 7,349 eligible patients, 272 (4%) underwent tracheal intubation for airway obstruction. Overall, 74% of patients had first-pass success and 16% had intubation-related adverse events. The airway obstruction group had a lower first-pass success rate (63% vs. 74%; unadjusted odds ratio [OR], 0.63; 95% CI, 0.49-0.80) compared with the non-airway obstruction group. This association remained significant in the multivariable analysis (adjusted OR, 0.60; 95% CI, 0.46-0.80). The airway obstruction group also had a significantly higher risk of adverse events (28% vs. 16%; unadjusted OR, 1.93; 95% CI, 1.48-2.56; adjusted OR, 1.70; 95% CI, 1.27-2.29). In the sensitivity analysis using multiple imputation, the results remained consistent with the main results: the airway obstruction group had a significantly lower first-pass success rate (adjusted OR, 0.60; 95% CI, 0.48-0.76). Conclusion: Based on these multicenter prospective data, airway obstruction was associated with a significantly lower first-pass success rate and a higher intubation-related adverse event rate in the ED.

19.
Medicine (Baltimore) ; 102(16): e33368, 2023 Apr 21.
Article in English | MEDLINE | ID: mdl-37083800

ABSTRACT

Although anti-tumor necrosis factor-α monoclonal antibody biological preparation (BP) agents are widely used as an established treatment for refractory ulcerative colitis (UC), whether leukocytapheresis/granulocytapheresis (L/G-CAP) has a similarly beneficial impact on disease activity remains undetermined. Furthermore, the costs of treatment with these two modalities have not been compared. We retrospectively evaluated whether L/G-CAP offered sustained beneficial effects over a 2-year period. Patients who had moderately to severely active UC (Rachmilewitz clinical activity index [CAI] ≥ 5) and were treated with a series (10 sessions) of L/G-CAP (n = 19) or with BP (n = 7) as an add-on therapy to conventional medications were followed. Furthermore, the cost-effectiveness of treatment with L/G-CAP and BP was assessed over 12 months. At baseline, the L/G-CAP and BP groups manifested similar disease activity (CAI, L/G-CAP: 7.0 [6.0-10.0]; BP: 10.0 [6.0-10.0]; P = .207). Both treatments suppressed disease activity, with a CAI of 1 or less attained by day 180. When the L/G-CAP group was dichotomized into L/G-CAP-high and L/G-CAP-low groups based on CAI values (≥3 or <3) on day 365, CAI was gradually elevated in the L/G-CAP-high group but remained suppressed in the L/G-CAP-low group without additional apheresis for 2 years. Anemia was corrected more rapidly, and hemoglobin levels were higher, in the BP group. The cost of treatment with L/G-CAP over 12 months was 76% of that with BP (1.79 [1.73-1.92] vs 2.35 [2.29-3.19] million yen, P = .028). L/G-CAP is as effective as BP in a substantial number of patients over 2 years. Treatment costs favor L/G-CAP, although BP may be preferable for correcting anemia. Thus, L/G-CAP can effectively manage disease activity without additional sessions for 2 years, although further therapeutic modalities might be required in the subset of patients with a high CAI on day 365.
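The 76% cost figure above is simply the ratio of the two reported median 12-month costs. A minimal sketch; the per-patient cost lists are hypothetical values chosen only so that their medians match the reported 1.79 and 2.35 million yen:

```python
from statistics import median

# Hypothetical per-patient 12-month costs (million yen); only the
# medians are taken from the study (1.79 for L/G-CAP, 2.35 for BP).
lg_cap_costs = [1.73, 1.75, 1.79, 1.90, 1.92]
bp_costs = [2.29, 2.31, 2.35, 2.90, 3.19]

ratio = median(lg_cap_costs) / median(bp_costs)
print(f"L/G-CAP cost is {ratio:.0%} of BP cost")
```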


Subject(s)
Colitis, Ulcerative , Humans , Colitis, Ulcerative/drug therapy , Leukapheresis , Retrospective Studies , Tumor Necrosis Factor-alpha/therapeutic use , Treatment Outcome , Antibodies, Monoclonal/therapeutic use
20.
BMC Nephrol ; 24(1): 68, 2023 03 22.
Article in English | MEDLINE | ID: mdl-36949416

ABSTRACT

BACKGROUND: Hypertensive emergency is a critical condition that causes multifaceted sequelae, including end-stage kidney disease and cardiovascular disease. Although the renin-angiotensin-aldosterone (RAA) system is strongly activated in this disease, few reports have attempted to characterize the effect of early use of RAA system inhibitors (RASi) on the temporal course of kidney function. METHODS: This retrospective cohort study was conducted to clarify whether early use of RASi during hospitalization offered more favorable benefits on short-term renal function and long-term renal outcomes in patients with hypertensive emergencies. We enrolled a total of 49 patients who visited our medical center with acute severe hypertension and multiple organ dysfunction between April 2012 and August 2020. Upon admission, the patients were treated with intravenous followed by oral antihypertensive drugs, including RASi and calcium channel blockers (CCB). Kidney function as well as other laboratory and clinical parameters were compared between the RASi-treated and CCB-treated groups over 2 years. RESULTS: Antihypertensive treatment effectively reduced blood pressure from 222 ± 28/142 ± 21 to 141 ± 18/87 ± 14 mmHg at 2 weeks, and eGFR was gradually restored from 33.2 ± 23.3 to 40.4 ± 22.5 mL/min/1.73 m2 at 1 year. The renal effect of antihypertensive drugs was particularly conspicuous when RASi was started in combination with other conventional antihypertensive drugs early in the hospitalization (day 2 [IQR: 1-5.5]), even in patients with moderately to severely diminished eGFR (<30 mL/min/1.73 m2) on admission. In contrast, CCB only modestly restored eGFR during the observation period. Furthermore, renal survival probabilities progressively deteriorated in patients who had reduced eGFR (<15 mL/min/1.73 m2) or massive proteinuria (urine protein/creatinine ≥ 3.5 g/gCr) on admission. Early use of RASi was associated with a favorable 2-year renal survival probability (0.90 [95% CI: 0.77-1.0] vs. 0.63 [95% CI: 0.34-0.92] for RASi (+) and RASi (-), respectively; p = 0.036), whereas no such difference in renal survival was noted for CCB. CONCLUSIONS: Early use of RASi contributes to renal functional recovery from an acute reduction in eGFR among patients with hypertensive emergencies. Furthermore, RASi offers a more favorable effect on 2-year renal survival compared with CCB.
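The 2-year renal survival probabilities above are Kaplan-Meier (product-limit) estimates. A minimal sketch of the estimator on a hypothetical mini-cohort (the study's individual-level event times are not published, so the data here are purely illustrative):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times  : follow-up in months
    events : 1 = renal endpoint reached, 0 = censored
    Returns {event_time: S(t)} at each time with at least one event.
    """
    data = sorted(zip(times, events))
    n = len(data)
    at_risk, s, curve, i = n, 1.0, {}, 0
    while i < n:
        t = data[i][0]
        deaths = removed = 0
        # group all observations sharing this follow-up time
        while i < n and data[i][0] == t:
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths:
            s *= 1 - deaths / at_risk  # multiply in this step's survival
            curve[t] = s
        at_risk -= removed
    return curve

# Hypothetical cohort of 8: renal events at 6 and 12 months, rest censored.
times = [6, 12, 12, 18, 24, 24, 24, 24]
events = [1, 0, 1, 0, 0, 0, 0, 0]
curve = kaplan_meier(times, events)
print(curve)  # survival drops at each event time
```

Group comparisons like the reported p = 0.036 would additionally use a log-rank test, which is not reproduced here.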


Subject(s)
Antihypertensive Agents , Hypertension , Humans , Antihypertensive Agents/pharmacology , Renin , Angiotensin-Converting Enzyme Inhibitors/adverse effects , Angiotensins/pharmacology , Angiotensins/therapeutic use , Retrospective Studies , Emergencies , Kidney , Renin-Angiotensin System , Hypertension/complications