ABSTRACT
Background: This study aimed to evaluate whether the effect of tachycardia varies according to the degree of tissue perfusion in septic shock. Methods: Patients with septic shock admitted to the intensive care unit were categorized into tachycardia (heart rate > 100 beats/min) and non-tachycardia (≤ 100 beats/min) groups. The association of tachycardia with hospital mortality was evaluated in subgroups with low and high lactate levels, which were identified through a subpopulation treatment effect pattern plot analysis. Results: In the overall population, hospital mortality did not differ between the two groups (44.6% vs. 41.8%, P = 0.441). However, tachycardia was associated with reduced hospital mortality in patients with a lactate level ≥ 5.3 mmol/L (48.7% vs. 60.3%, P = 0.030; adjusted odds ratio [OR], 0.59; 95% confidence interval [CI], 0.35–0.99; P = 0.045) but not in patients with a lactate level < 5.3 mmol/L (36.5% vs. 29.7%, P = 0.156; adjusted OR, 1.39; 95% CI, 0.82–2.35; P = 0.227). Conclusion: In patients with septic shock, the effect of tachycardia on hospital mortality differed by serum lactate level. Tachycardia was associated with better survival in patients with markedly elevated lactate levels.
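The subgroup comparisons above hinge on odds ratios with 95% confidence intervals. As a minimal sketch of how an unadjusted OR and its Wald-type (Woolf) interval are computed from a 2×2 table (the counts below are hypothetical, not the study's data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and Wald 95% CI for a 2x2 table:
    a = deaths in group 1, b = survivors in group 1,
    c = deaths in group 2, d = survivors in group 2."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # Woolf's SE of ln(OR)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical 2x2 counts (not taken from the study)
or_, lo, hi = odds_ratio_ci(55, 58, 44, 29)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Note that the study's reported ORs are adjusted (from a multivariable model), so they would not match an unadjusted 2×2 calculation; the sketch only illustrates how the confidence interval is constructed.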
ABSTRACT
Squamous cell carcinoma in situ (SCCIS) is a precancerous lesion that usually appears as a well-demarcated erythematous scaly plaque, histopathologically characterized by atypical full-thickness disarrangement and pleomorphic atypical keratinocytes. Extramammary Paget disease (EMPD) is an intraepidermal tumor that usually affects skin in the genital region, histopathologically identified by large cells with pale and vacuolated cytoplasm, called Paget cells, in the epidermis. An 84-year-old female presented with an asymptomatic solitary scaly erythematous plaque on her back first noticed 10 years prior. A keratotic mass was noted immediately adjacent to the plaque. A biopsy was performed; one specimen showed atypical full-thickness disarrangement and pleomorphic keratinocytes with negative CK7 staining in the epidermis, and another specimen showed Paget cells above the basal layer with positive CK7 staining in the epidermis. Based on these findings, the patient was diagnosed with SCCIS coexisting with EMPD. The patient underwent wide surgical excision of the lesion with reconstruction, and is being monitored without recurrence.
ABSTRACT
Background: Coronavirus disease 2019 (COVID-19) is an ongoing global public health threat, and different variants of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) have been identified. This study aimed to analyse the factors associated with negative conversion of polymerase chain reaction (PCR) tests and with prognosis in critically ill patients, according to the SARS-CoV-2 variant. Methods: This study retrospectively analysed 259 critically ill patients with COVID-19 who were admitted to the intensive care unit of a tertiary medical center between January 2020 and May 2022. The Charlson comorbidity index (CCI) was used to evaluate comorbidity, and a negative PCR test result within 2 weeks was used to define negative PCR conversion. The cases were divided into three variant groups according to the documented SARS-CoV-2 variant at the time of diagnosis: non-Delta (January 20, 2020–July 6, 2021), Delta (July 7, 2021–January 1, 2022), and Omicron (January 30, 2022–April 24, 2022). Results: The mean age of the 259 patients was 67.1 years, and 93 (35.9%) were female. Fifty (19.3%) patients were smokers, and 50 (19.3%) were vaccinated. The CCI (hazard ratio [HR], 1.555; p<0.001), vaccination (HR, 0.492; p=0.033), and the Delta variant (HR, 2.469; p=0.002) were significant factors for in-hospital mortality. The Delta variant (odds ratio, 0.288; p=0.003) was associated with a lower rate of negative PCR conversion, whereas vaccination (p=0.163) and remdesivir treatment (p=0.124) were not. Conclusion: The Delta variant of SARS-CoV-2 is associated with lower survival and a lower rate of negative PCR conversion. Contrary to expectations, vaccination and remdesivir may not affect negative PCR conversion in critically ill patients with COVID-19.
ABSTRACT
Background: Administration of adequate antibiotics is crucial for better outcomes in sepsis. Because no uniform tool can accurately assess the risk of multidrug-resistant (MDR) pathogens, a local antibiogram is necessary. We aimed to describe the antibiogram of MDR bacteria based on the location of sepsis onset in South Korea. Methods: We performed a prospective observational study of adult patients diagnosed with sepsis according to the Sepsis-3 criteria at 19 institutions (13 tertiary referral and 6 university-affiliated general hospitals) in South Korea. Patients were divided into four groups based on the location of sepsis onset: community, nursing home, long-term-care hospital, and hospital. Along with the antibiogram, risk factors for MDR bacteria and the drug-bug match of empirical antibiotics were analyzed. Results: MDR bacteria were detected in 1,596 (22.7%) of 7,024 patients, with gram-negative predominance. MDR gram-negative bacteria were more commonly detected in long-term-care hospital-acquired (30.4%) and nursing home-acquired (26.3%) sepsis, whereas MDR gram-positive bacteria were more prevalent in hospital-acquired (10.9%) sepsis. These findings were consistent regardless of the location and tier of hospitals throughout South Korea. Patients with long-term-care hospital-acquired sepsis had the highest risk of MDR pathogens, which was even higher than that of patients with hospital-acquired sepsis (adjusted odds ratio, 1.42; 95% confidence interval, 1.15–1.75) after adjustment for risk factors. The drug-bug match rate was lowest in patients with long-term-care hospital-acquired sepsis (66.8%). Conclusion: Gram-negative MDR bacteria were more common in nursing home- and long-term-care hospital-acquired sepsis, whereas gram-positive MDR bacteria were more common in hospital-acquired sepsis in South Korea. Patients with long-term-care hospital-acquired sepsis had the highest risk of MDR bacteria but the lowest drug-bug match of initial antibiotics. We suggest that initial antibiotics be carefully selected according to the location of onset in each patient.
ABSTRACT
Acute kidney injury (AKI) is a common complication in critically ill children. However, baseline serum creatinine values are often unavailable, which affects AKI diagnosis and staging. We evaluated several approaches for estimating baseline creatinine values in these patients. Methods: This single-center retrospective study enrolled pediatric patients with documented serum creatinine measurements within 3 months before admission and more than two serum creatinine measurements within 7 days after admission to the pediatric intensive care unit of a tertiary care children’s hospital between January 2016 and April 2020. Four approaches for estimating the baseline serum creatinine value were compared: 1) back-calculation using age-adjusted normal reference glomerular filtration rates, 2) age-adjusted normal reference serum creatinine values, 3) minimum values measured within 7 days after admission, and 4) initial values upon admission. Results: The approach using minimum values showed the best agreement with the measured baseline value, with the largest intraclass correlation coefficient (0.623), smallest bias (−0.04), and narrowest limits of agreement (interval width, 1.032). For AKI diagnosis and staging, the minimum values were 80.8% and 76.1% accurate, respectively. The other estimated baseline values underestimated AKI and showed poor agreement with baseline values before admission, with a misclassification rate of up to 42% (p < 0.001). Conclusion: Minimum serum creatinine values measured within 7 days after hospital admission showed the best agreement with creatinine measured within 3 months before admission, suggesting they can serve as a baseline when baseline data are unavailable. Further large-scale studies are required to accurately diagnose AKI in critically ill children.
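The bias and limits-of-agreement figures above follow the Bland-Altman approach: take the paired differences, then report their mean and mean ± 1.96 SD. A minimal sketch with made-up creatinine pairs (mg/dL), not the study's data:

```python
import statistics

def bland_altman(measured, estimated):
    """Bland-Altman bias and 95% limits of agreement for paired values."""
    diffs = [e - m for m, e in zip(measured, estimated)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation of differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired baselines: pre-admission vs. in-hospital minimum
pre_admission = [0.35, 0.48, 0.60, 0.52, 0.41]
minimum_after = [0.33, 0.50, 0.55, 0.49, 0.44]
bias, (lo, hi) = bland_altman(pre_admission, minimum_after)
print(f"bias {bias:.3f}, limits of agreement ({lo:.3f}, {hi:.3f})")
```

The width of that interval (hi − lo) corresponds to the "limits of agreement interval" reported above; the intraclass correlation coefficient is a separate calculation not shown here.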
ABSTRACT
Background: The quick sequential organ failure assessment (qSOFA) score has been suggested for screening general-ward patients at high risk of clinical deterioration, and it can be regarded as a simple general early warning score. However, whether introducing qSOFA benefits hospitals that already use the Modified Early Warning Score (MEWS) has not been established in unselected admissions. We sought to compare qSOFA with MEWS for predicting clinical deterioration in general ward patients regardless of suspected infection. Methods: The predictive performance of qSOFA and MEWS for in-hospital cardiac arrest (IHCA) or unexpected intensive care unit (ICU) transfer was compared using area under the receiver operating characteristic curve (AUC) analysis, based on databases of vital signs collected from consecutive hospitalized adult patients over 12 months in five participating hospitals in Korea. Results: Of 173,057 hospitalized patients included for analysis, 668 (0.39%) experienced the composite outcome. Discrimination for the composite outcome was higher for MEWS (AUC, 0.777; 95% confidence interval [CI], 0.770–0.781) than for qSOFA (AUC, 0.684; 95% CI, 0.676–0.686; P < 0.001). In addition, MEWS was better than qSOFA for predicting IHCA (AUC, 0.792; 95% CI, 0.781–0.795 vs. AUC, 0.640; 95% CI, 0.625–0.645; P < 0.001) and unexpected ICU transfer (AUC, 0.767; 95% CI, 0.760–0.773 vs. AUC, 0.716; 95% CI, 0.707–0.718; P < 0.001). Using MEWS at a cutoff of ≥ 5 would correctly reclassify 3.7% of patients from qSOFA score ≥ 2. Most patients met MEWS ≥ 5 criteria 13 hours before the composite outcome, compared with 11 hours for qSOFA score ≥ 2. Conclusion: MEWS is more accurate than the qSOFA score for predicting IHCA or unexpected ICU transfer in patients outside the ICU. Our study suggests that qSOFA should not replace MEWS for identifying patients in the general wards at risk of poor outcome.
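For a single score, the AUC compared above equals the Mann-Whitney probability that a patient who deteriorates receives a higher score than one who does not, with ties counted as half. A minimal sketch of that rank-based AUC, on toy labels and scores rather than the study data:

```python
def auc(labels, scores):
    """AUC as the Mann-Whitney concordance probability.
    labels: 1 = experienced the outcome, 0 = did not."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # Fraction of (positive, negative) pairs where the positive scores higher,
    # counting tied scores as half-concordant
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: early-warning-style integer scores for 6 patients
labels = [1, 1, 0, 0, 0, 0]
scores = [6, 4, 4, 2, 1, 1]
print(auc(labels, scores))  # prints 0.9375
```

Integer scores such as MEWS and qSOFA produce many ties, which is why the half-credit tie handling matters for the AUCs reported above.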
ABSTRACT
Background/Aims: Pirfenidone slows the progression of idiopathic pulmonary fibrosis (IPF). We investigated its efficacy and safety by dose and disease severity in real-world patients with IPF. Methods: This multicenter retrospective cohort study investigated 338 patients treated with pirfenidone between July 2012 and March 2018. Demographics, pulmonary function, mortality, and pirfenidone-related adverse events were investigated. Efficacy was analyzed according to pirfenidone dose and disease severity using linear mixed-effects models to assess the annual decline rates of forced vital capacity (FVC) and diffusing capacity of the lungs for carbon monoxide (DLCO). Results: The mean %FVCpredicted and %DLCOpredicted values were 72.6% ± 13.1% and 61.4% ± 17.9%, respectively. The mean duration of pirfenidone treatment was 16.1 ± 9.0 months. In the standard dose (1,800 mg/day) group, the mean annual change in %FVCpredicted was −6.56% (95% confidence interval [CI], −9.26 to −3.87) before treatment, but −4.43% (95% CI, −5.87 to −3.00) after treatment with pirfenidone. In the non-standard lower dose group, the mean annual change in %FVCpredicted was −4.96% (95% CI, −6.82 to −3.09) before treatment, but −1.79% (95% CI, −2.75 to −0.83) after treatment with pirfenidone. The FVC decline rate was significantly reduced regardless of the Gender-Age-Physiology (GAP) stage. Adverse events and mortality were similar across dose groups; however, they were more frequent in GAP stages II–III than in stage I. Conclusions: The effect of pirfenidone in reducing disease progression of IPF persisted even at a consistently lower dose of pirfenidone.
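The annual %FVC decline rates above are slopes of lung function over time. The study estimated them with linear mixed-effects models across patients, but an ordinary least-squares slope on one patient's serial measurements illustrates the quantity being estimated. A sketch with hypothetical values:

```python
def annual_slope(times_years, values):
    """Ordinary least-squares slope: change in value per year."""
    n = len(times_years)
    mean_t = sum(times_years) / n
    mean_v = sum(values) / n
    num = sum((t - mean_t) * (v - mean_v) for t, v in zip(times_years, values))
    den = sum((t - mean_t) ** 2 for t in times_years)
    return num / den

# Hypothetical %FVC-predicted measured every 6 months over 2 years
times = [0.0, 0.5, 1.0, 1.5, 2.0]
fvc_pct = [74.0, 72.1, 69.8, 68.0, 65.9]
print(f"{annual_slope(times, fvc_pct):.2f} % per year")
```

A mixed-effects model, unlike this per-patient slope, pools information across patients and accounts for within-patient correlation, which is why it is preferred for the cohort-level estimates reported above.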
ABSTRACT
Background/Aims: Few studies describe the contemporary status of mechanical ventilation in Korea. We investigated changes in the management and outcomes of mechanical ventilation in Korea. Methods: International, prospective observational cohort studies have been conducted every 6 years since 1998. Korean intensive care units (ICUs) participated in the 2010 and 2016 cohorts. We compared the 2016 and 2010 Korean data. Results: Two hundred twenty-six patients from 18 ICUs and 275 patients from 12 ICUs were enrolled in 2016 and 2010, respectively. In 2016 compared to 2010, the use of non-invasive ventilation outside the ICU increased (10.2% vs. 2.5%, p = 0.001). Pressure-control ventilation was the most common mode in both cohorts. Initial tidal volume (7.1 mL/kg vs. 7.4 mL/kg, p = 0.372) and positive end-expiratory pressure (6 cmH2O vs. 6 cmH2O, p = 0.141) were similar, but peak pressure (22 cmH2O vs. 24 cmH2O, p = 0.011) was lower in 2016. More patients received sedatives (70.7% vs. 57.0%, p = 0.002) and analgesics (86.5% vs. 51.1%, p < 0.001) in 2016. Awakening was more frequently attempted in 2016 (48.4% vs. 31.0%, p = 0.002). The accidental extubation rate decreased to one tenth of its 2010 value (1.1% vs. 10.2%, p < 0.001). ICU mortality did not change (31.4% vs. 35.6%, p = 0.343), but ICU length of stay showed a decreasing trend (9 days vs. 10 days, p = 0.054) in 2016. Conclusions: There were temporal changes in the care of patients on mechanical ventilation, including better control of pain and agitation and more active attempts at awakening.
ABSTRACT
Rapid response systems (RRSs) have been introduced to intervene in non-code medical emergencies and now operate widely around the world. An RRS has four components: an afferent limb, an efferent limb, quality improvement, and administration. A proper triggering system and a hospital culture that embraces the RRS on the afferent limb, together with experienced primary responders and dedicated physicians on the efferent limb, are key to successful implementation. After initial implementation, quality improvement through objective outcome measures and self-evaluation is crucial; when this process is performed well, it leads to better outcomes. Furthermore, better outcomes attract more investment, which is essential for effective development of the system. An RRS is successfully maintained when these four components are closely interconnected.
ABSTRACT
Background: A rapid response system (RRS) contributes to the safety of hospitalized patients. Clinical deterioration may occur in the general ward (GW) or in non-GW locations such as radiology or dialysis units. However, there are few studies regarding RRS activation in non-GW locations. This study aimed to compare the clinical characteristics and outcomes of patients with RRS activation in non-GW locations and in the GW. Methods: From January 2016 to December 2017, all patients requiring RRS activation in nine South Korean hospitals were retrospectively enrolled and classified according to the RRS activation location: GW vs. non-GW activations. Results: In total, 12,793 patients were enrolled; 222 (1.7%) were non-GW RRS activations. There were more instances of shock (11.6% vs. 18.5%) and cardiac arrest (2.7% vs. 22.5%) among non-GW RRS activation patients. These patients also had lower oxygen saturation (92.6% ± 8.6% vs. 88.7% ± 14.3%, P < 0.001) and a higher National Early Warning Score 2 (7.5 ± 3.4 vs. 8.9 ± 3.8, P < 0.001) than GW RRS activation patients. Although non-GW RRS activation patients more frequently underwent intubation (odds ratio [OR], 3.135; P < 0.001), advanced cardiovascular life support (OR, 3.912; P < 0.001), and intensive care unit transfer (OR, 2.502; P < 0.001), their hospital mortality (hazard ratio, 0.630; P = 0.013) was lower than that of GW RRS activation patients in multivariate analysis. Conclusion: Considering that there were more critically ill but recoverable cases in non-GW locations, active RRS involvement should be required in such locations.
ABSTRACT
Disseminated adenovirus infection (d-ADV) after hematopoietic cell transplant (HCT) is often fatal, with limited treatment options. Brincidofovir (BCV), a lipid ester of cidofovir, was developed for this indication. We report four pediatric HCT recipients with d-ADV treated successfully with BCV.
ABSTRACT
Background/Aims: Few epidemiologic studies exist on patients with pulmonary disorders admitted to the intensive care unit. We investigated the characteristics and clinical outcomes of patients with severe pulmonary disorders. Methods: The sample cohort database of the National Health Insurance Sharing Service from 2006 to 2015 was used. Critically ill patients were operationally defined as adults who were either admitted to an intensive care unit for at least 3 days or died within the first 2 days in the unit. The pulmonary disorder group comprised critically ill patients with respiratory disease as the main diagnosis. Results: Among the 997,173 patients, 12,983 (1.3%) in 383 intensive care units were categorized as critically ill. Patients in the pulmonary disorder group tended to have more comorbidities or disabilities. The length of hospital stay and duration of mechanical ventilation were longer in the pulmonary disorder group. Overall mortality and re-admission rates were higher in the pulmonary disorder group, with adjusted incidence rate ratios of 1.22 (95% confidence interval, 1.18 to 1.27) and 1.26 (95% confidence interval, 1.17 to 1.36), respectively. After adjustment by Cox regression, pulmonary disorder was an independent risk factor for in-hospital mortality. Conclusions: In critically ill patients with pulmonary disorders, healthcare resource use was higher, and clinical outcomes were significantly worse than in the non-pulmonary disorder group.
ABSTRACT
OBJECTIVE: The aim of this study was to identify possible prognostic factors in patients with uterine leiomyosarcoma (LMS). METHODS: This study retrospectively investigated 50 patients with uterine LMS treated at Samsung Medical Center between 2001 and 2017. To analyze the prognostic significance of factors for recurrence-free survival (RFS), overall survival (OS), and survival after recurrence, the log-rank test and the Cox proportional hazards model were used for univariate and multivariate analyses. RESULTS: Of the 50 patients, 30 (60.0%) experienced recurrence and 16 (32.0%) died during a median follow-up period of 21 (range, 3–99) months. Multivariate analysis revealed that older age, absence of residual tumor after surgery, lower mitotic count, and a history of adjuvant radiotherapy at first treatment were significantly associated with better RFS. Presence of residual tumor after surgery and severe nuclear atypia were associated with poor OS. In the analysis of survival after recurrence, hematogenous recurrence, severe nuclear atypia, and presence of residual tumor at primary surgery were significantly associated with worse prognosis. Notably, residual tumor status at primary surgery was associated with RFS, OS, and survival after recurrence. CONCLUSION: We identified possible prognostic factors for RFS, OS, and survival after recurrence in patients with LMS. These results may provide useful information for patients with LMS.
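The RFS and OS endpoints above are conventionally summarized with Kaplan-Meier product-limit estimates before applying the log-rank test and Cox model. A minimal sketch of the Kaplan-Meier estimator on toy follow-up data (the months and event indicators below are invented, not the study's):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.
    times: follow-up times; events: 1 = event occurred, 0 = censored.
    Returns (time, survival probability) at each event time."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    survival, curve = 1.0, []
    for t in sorted(set(times)):
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        leaving = sum(1 for tt, _ in data if tt == t)  # events + censorings
        if deaths:
            survival *= 1 - deaths / at_risk
            curve.append((t, survival))
        at_risk -= leaving
    return curve

# Toy cohort: follow-up in months, 1 = recurrence, 0 = censored
times = [3, 8, 12, 12, 21, 30]
events = [1, 0, 1, 1, 0, 1]
print(kaplan_meier(times, events))
```

Censored patients contribute to the at-risk denominator until they drop out, which is the feature that distinguishes this estimator from a naive survival fraction.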
Subject(s)
Humans , Follow-Up Studies , Leiomyosarcoma , Multivariate Analysis , Neoplasm, Residual , Prognosis , Proportional Hazards Models , Radiotherapy, Adjuvant , Recurrence , Retrospective Studies , Uterine Neoplasms
ABSTRACT
OBJECTIVE: To investigate the perinatal outcomes of twin pregnancies according to maternal age. METHODS: This is a retrospective cohort study of twin pregnancies delivered at ≥24 weeks' gestation at a tertiary academic hospital from 1995 to 2016. Subjects were categorized into 5 groups according to maternal age: <25, 25–29, 30–34, 35–39, and ≥40 years. Maternal and neonatal outcomes of each maternal age group were analyzed using the Jonckheere-Terpstra test and the linear-by-linear association test. RESULTS: A total of 1,936 twin pregnant women were included, of whom 47 (2.4%), 470 (24.3%), 948 (49.0%), 417 (21.5%), and 54 (2.7%) were aged <25, 25–29, 30–34, 35–39, and ≥40 years, respectively. Higher maternal age was significantly associated with a higher rate of dichorionic twins and higher risks of gestational diabetes and placenta previa. However, rates of preterm labor, preterm premature rupture of membranes, cervical incompetence, preterm delivery, preeclampsia, placental abruption, and cesarean section were not associated with maternal age. Birth weight increased and the rate of admission to the neonatal intensive care unit (NICU) decreased with older maternal age, but other neonatal outcomes did not change with age. Maternal age remained significantly associated with a lower rate of NICU admission after controlling for potential confounding factors in multivariable analysis. CONCLUSION: Advanced maternal age in twin pregnancies was associated with increased risks of gestational diabetes and placenta previa, higher birth weight, and a lower rate of NICU admission. However, other outcomes were not significantly associated with maternal age.
Subject(s)
Female , Humans , Infant, Newborn , Pregnancy , Birth Weight , Cesarean Section , Cohort Studies , Diabetes, Gestational , Intensive Care, Neonatal , Maternal Age , Membranes , Obstetric Labor, Premature , Placenta , Placenta Previa , Pre-Eclampsia , Pregnancy, Twin , Pregnant Women , Retrospective Studies , Rupture , Twins
ABSTRACT
PURPOSE: Lung cancers presenting as subsolid nodules commonly have a peripheral location, making the cancer-pleura relationship noteworthy. We aimed to evaluate the effect of pleural attachment and/or indentation on visceral pleural invasion (VPI) and recurrence-free survival. MATERIALS AND METHODS: Patients who underwent curative resection of lung cancer presenting as subsolid nodules from April 2007 to January 2016 were retrospectively evaluated. They were divided into four groups according to their relationship with the pleura. Clinical, radiographical, and pathological findings were analyzed. RESULTS: Among 404 patients with malignant subsolid nodules, 120 (29.7%) had neither pleural attachment nor indentation, 26 (6.4%) had attachment only, 117 (29.0%) had indentation only, and 141 (34.9%) had both. VPI was observed in the nodules of 36 patients (8.9%) but was absent in nonsolid nodules and in those without pleural attachment and/or indentation. Compared to subsolid nodules with concurrent pleural attachment and indentation, those with attachment only (odds ratio, 0.12; 95% confidence interval [CI], 0.02 to 0.98) and indentation only (odds ratio, 0.10; 95% CI, 0.03 to 0.31) showed lower odds of VPI. On subgroup analysis, the size of the solid portion was associated with VPI among those with pleural attachment and indentation (p=0.021). Such high-risk features for VPI were associated with earlier lung cancer recurrence (adjusted hazard ratio, 3.31; 95% CI, 1.58 to 6.91). CONCLUSION: Concurrent pleural attachment and indentation are risk factors for VPI, and the odds increase with a larger solid portion in subsolid nodules. Considering the risk of recurrence, early surgical resection could be encouraged in these patients.
Subject(s)
Humans , Carcinoma, Non-Small-Cell Lung , Lung Neoplasms , Lung , Neoplasm Invasiveness , Pleura , Prognosis , Recurrence , Retrospective Studies , Risk Factors
ABSTRACT
PURPOSE: Enteral nutrition is recommended in critically ill patients. However, recommendations for nutritional support in critically ill patients in the prone position are limited and often controversial. Therefore, this study evaluated the clinical outcomes of nutritional support in critically ill patients in the prone position. METHODS: A retrospective review of electronic medical records was conducted, including adult patients who were placed in the prone position in the medical intensive care unit (ICU) of Seoul National University Bundang Hospital from May 1, 2015 to June 30, 2017. Patient characteristics, nutritional support status while in the prone position, ICU and in-hospital mortality, ICU length of stay, mechanical ventilation days, and complications such as ventilator-associated pneumonia (VAP) and vomiting were collected. RESULTS: In total, 100 patients were included. Of these, 12 received both enteral and parenteral nutrition and 88 received parenteral nutrition only. The groups were similar in terms of age, sex, number of comorbidities, weight, PaO₂/FiO₂, hours in the prone position, Simplified Acute Physiology Score II (SAPS II), Acute Physiologic and Chronic Health Evaluation II (APACHE II) score, and Sequential Organ Failure Assessment (SOFA) score. No differences were observed in ICU mortality (75.0% vs. 46.6%; P=0.065), hospital mortality (83.3% vs. 58.0%; P=0.081), ICU length of stay (22.2±14.6 vs. 18.2±21.2 days; P=0.128), or mechanical ventilation days (19.3±14.8 vs. 14.5±19.1; P=0.098). In addition, there were no differences in possible complications of the prone position, such as VAP (8.3% vs. 4.5%; P=0.480) and vomiting (8.3% vs. 1.1%; P=0.227). CONCLUSION: No significant differences in clinical outcomes were observed. Further studies are needed to determine the optimal approach to nutritional support in the prone position.
Subject(s)
Adult , Humans , Comorbidity , Critical Illness , Electronic Health Records , Enteral Nutrition , Hand , Hospital Mortality , Hospitalization , Intensive Care Units , Length of Stay , Mortality , Nutritional Support , Parenteral Nutrition , Physiology , Pneumonia, Ventilator-Associated , Prone Position , Respiration, Artificial , Retrospective Studies , Seoul , Vomiting
ABSTRACT
BACKGROUND/AIMS: Diffuse alveolar damage (DAD) is the histopathologic hallmark of acute respiratory distress syndrome (ARDS). However, several non-DAD conditions mimic ARDS. The purpose of this study was to investigate the histopathologic heterogeneity of ARDS revealed by surgical lung biopsy and its clinical relevance. METHODS: We retrospectively analyzed 84 patients with ARDS who met the criteria of the Berlin definition and underwent surgical lung biopsy between January 2004 and December 2013 in three academic hospitals in Korea. We evaluated their histopathologic findings and compared clinical outcomes. Additionally, the impact of surgical lung biopsy on therapeutic alterations was examined. RESULTS: The histopathologic findings were highly heterogeneous. Of the 84 patients undergoing surgical lung biopsy, DAD was observed in 31 (36.9%), while 53 (63.1%) did not have DAD. Among the non-DAD patients, diffuse interstitial lung disease and infection were the most frequent histopathologic findings, in 19 and 17 patients, respectively. Although the mortality rate was slightly higher in DAD (71.0%) than in non-DAD (62.3%), the difference was not significant. Overall, the biopsy results led to treatment alterations in 40 patients (47.6%). Patients with non-DAD were more likely to have their treatment altered than those with DAD (58.5% vs. 29.0%), but there were no significant improvements in the mortality rate. CONCLUSIONS: The histopathologic findings of ARDS were highly heterogeneous, and classic DAD was observed in one-third of the patients who underwent surgical lung biopsy. Although therapeutic alterations were more common in patients with non-DAD ARDS, there were no significant improvements in the mortality rate.
Subject(s)
Humans , Acute Lung Injury , Berlin , Biopsy , Korea , Lung Diseases, Interstitial , Lung , Mortality , Pathology , Population Characteristics , Respiratory Distress Syndrome , Retrospective Studies
ABSTRACT
BACKGROUND: A number of questionnaires designed to assess family members' experiences and needs regarding intensive care unit (ICU) care have been developed and validated in North America. The Family Satisfaction in the Intensive Care Unit-24 (FS-ICU-24) questionnaire is one of the most widely used of these instruments. This study aimed to translate the FS-ICU-24 questionnaire into Korean and validate the Korean version. METHODS: The study was conducted in the medical, surgical, and emergency ICUs of three tertiary hospitals. Relatives of all patients hospitalized for at least 48 hours were enrolled as study participants. The validation process included measurement of construct validity, internal consistency, and interrater reliability. The questionnaire consists of 24 items divided between two subscales: satisfaction with care (14 items) and satisfaction with decision making (10 items). RESULTS: In total, 200 family members of 176 patients from the three hospitals completed the FS-ICU-24 questionnaire. Construct validity was supported by a strong correlation with a visual analog scale (Spearman's r = 0.84, p < 0.001). Cronbach's α was 0.83 for the satisfaction with care subscale and 0.80 for the satisfaction with decision making subscale. The mean (± standard deviation) total FS-ICU-24 score was 75.44 ± 17.70; participants were most satisfied with consideration of their needs (82.13 ± 21.03) and least satisfied with the atmosphere in the ICU waiting room (35.38 ± 34.84). CONCLUSIONS: The Korean version of the FS-ICU-24 questionnaire demonstrated good validity and could be a useful instrument for measuring family members' satisfaction with ICU care.
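The internal-consistency figures above are Cronbach's α values: the ratio of item-level variance to total-score variance, rescaled by the number of items. A minimal sketch of that calculation on a toy item-by-respondent matrix (the scores are invented, not the study's responses):

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a scale.
    items: one list per item, each holding the respondents' scores
    in the same respondent order."""
    k = len(items)
    sum_item_var = sum(statistics.variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - sum_item_var / statistics.variance(totals))

# Toy 3-item scale answered by 5 respondents (invented Likert-style scores)
items = [
    [4, 3, 5, 2, 4],
    [5, 3, 4, 2, 5],
    [4, 2, 5, 3, 4],
]
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Values around 0.8, as reported for both FS-ICU-24 subscales, are conventionally read as good internal consistency.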