Results 1 - 20 of 53
1.
J Surg Res ; 301: 163-171, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38936245

ABSTRACT

INTRODUCTION: Many patients suffering from isolated severe traumatic brain injury (sTBI) receive blood transfusion on hospital arrival due to hypotension. We hypothesized that increasing blood transfusion in isolated sTBI patients would be associated with increased mortality. METHODS: We performed a review of the Trauma Quality Improvement Program (TQIP) database (2017-2019) and a single-center database (2013-2021), filtering for patients with isolated sTBI (Abbreviated Injury Scale head ≥3 and all other areas ≤2). Age, initial Glasgow Coma Score (GCS), Injury Severity Score (ISS), initial systolic blood pressure (SBP), mechanism (blunt/penetrating), packed red blood cell (pRBC) and fresh frozen plasma (FFP) transfusion volume (units) within the first 4 h, FFP/pRBC ratio (4 h), and in-hospital mortality were obtained from the TQIP Public User Files. RESULTS: In the TQIP database, 9257 patients had isolated sTBI and received pRBC transfusion within the first 4 h. The mortality rate within this group was 47.3%. The increase in mortality associated with the first unit of pRBCs was 20%, increasing approximately 4% per unit transfused to a maximum mortality of 74% for 11 or more units. When adjusted for age, initial GCS, ISS, initial SBP, and mechanism, pRBC volume (1.09 [1.08-1.10]), FFP volume (1.08 [1.07-1.09]), and FFP/pRBC ratio (1.18 [1.08-1.28]) were associated with in-hospital mortality. Our single-center study yielded 138 patients with isolated sTBI who received pRBC transfusion. These patients experienced a 60.1% in-hospital mortality rate. Logistic regression corrected for age, initial GCS, ISS, initial SBP, and mechanism demonstrated no significant association between pRBC transfusion volume (1.14 [0.81-1.61]), FFP transfusion volume (1.29 [0.91-1.82]), or FFP/pRBC ratio (6.42 [0.25-164.89]) and in-hospital mortality.
CONCLUSIONS: Patients with isolated sTBI have higher mortality with increasing volumes of pRBC or FFP transfused within the first 4 h of arrival.
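The adjusted odds ratios above are per transfused unit, so under the logistic model the mortality odds scale multiplicatively with transfusion volume. A minimal sketch of that arithmetic (the function name is ours; 1.09 is the TQIP per-unit odds ratio for pRBC volume quoted above):

```python
def adjusted_odds_multiplier(or_per_unit: float, units: int) -> float:
    """Under a logistic model with a per-unit odds ratio, each
    additional transfused unit multiplies the mortality odds by
    that ratio, so n units multiply the odds by OR**n."""
    return or_per_unit ** units

# With the quoted per-unit OR of 1.09 for pRBC volume,
# 8 units multiply the adjusted mortality odds roughly twofold.
mult = adjusted_odds_multiplier(1.09, 8)  # 1.09**8 ≈ 1.99
```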

2.
Mil Med ; 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38836595

ABSTRACT

INTRODUCTION: During high-fidelity simulations in the Critical Care Air Transport (CCAT) Advanced course, we identified a high frequency of insulin medication errors and sought strategies to reduce them using a human factors approach. MATERIALS AND METHODS: Of 169 eligible CCAT simulations, 22 were randomly selected for retrospective audio-video review to establish a baseline frequency of insulin medication errors. Using the Human Factors Analysis Classification System, dosing errors, defined as a physician ordering an inappropriate dose, were categorized as decision-based; administration errors, defined as a clinician preparing and administering a dose different from that ordered, were categorized as skill-based. Next, 3 a priori interventions were developed to decrease the frequency of insulin medication errors, and these were grouped into 2 study arms. Arm 1 included a didactic session reviewing a sliding-scale insulin (SSI) dosing protocol and a hands-on exercise requiring all CCAT teams to practice preparing 10 units of insulin including a 2-person check. Arm 2 contained the arm 1 interventions and added an SSI cognitive aid available to students during simulation. Frequency and type of insulin medication errors were collected for both arms, with 93 simulations in arm 1 (January-August 2021) and 139 in arm 2 (August 2021-July 2022). The frequency of decision-based and skill-based errors was compared across control and intervention arms. RESULTS: Baseline insulin medication error rates were as follows: decision-based error occurred in 6/22 (27.3%) simulations and skill-based error occurred in 6/22 (27.3%). Five of the 6 skill-based errors resulted in administration of a 10-fold higher dose than ordered. The post-intervention decision-based error rates were 9/93 (9.7%) and 3/139 (2.2%), respectively, for arms 1 and 2. Compared to baseline error rates, both arm 1 (P = .04) and arm 2 (P < .001) had a significantly lower rate of decision-based errors.
Additionally, arm 2 had a significantly lower decision-based error rate compared to arm 1 (P = .015). For skill-based preparation errors, 1/93 (1.1%) occurred in arm 1 and 4/139 (2.9%) occurred in arm 2. Compared to baseline, this represents a significant decrease in skill-based error in both arm 1 (P < .001) and arm 2 (P < .001). There were no significant differences in skill-based error between arms 1 and 2. CONCLUSIONS: This study demonstrates the value of descriptive error analysis during high-fidelity simulation using audio-video review and effective risk mitigation using training and cognitive aids to reduce medication errors in CCAT. As demonstrated by post-intervention observations, a human factors approach successfully reduced decision-based error by using didactic training and cognitive aids and reduced skill-based error using hands-on training. We recommend the development of a Clinical Practice Guideline including an SSI protocol, guidelines for a 2-person check, and a cognitive aid for implementation with deployed CCAT teams. Furthermore, hands-on training for insulin preparation and administration should be incorporated into home station sustainment training to reduce medication errors in the operational environment.
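Most skill-based errors above were 10-fold overdoses, the classic insulin concentration mistake, which is exactly what a scripted 2-person check is designed to intercept. A minimal sketch of such a check (the function name and exact-match tolerance are illustrative assumptions, not the study's protocol):

```python
def two_person_check(ordered_units: float, prepared_units: float) -> bool:
    """Return True only when the independently verified prepared dose
    matches the ordered dose; any mismatch (including a 10-fold
    concentration error) fails the check."""
    if ordered_units <= 0 or prepared_units <= 0:
        raise ValueError("insulin doses must be positive")
    return abs(prepared_units - ordered_units) < 1e-9

# A correct preparation passes; a 10-fold overdose is caught:
assert two_person_check(10, 10)
assert not two_person_check(10, 100)
```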

3.
J Am Coll Surg ; 2024 May 21.
Article in English | MEDLINE | ID: mdl-38770953

ABSTRACT

BACKGROUND: Traumatic brain injury (TBI)-related morbidity is caused largely by secondary injury resulting from hypoxia, excessive sympathetic drive, and uncontrolled inflammation. Aeromedical evacuation (AE) is utilized by the military for transport of wounded soldiers to higher levels of care. We hypothesized that the hypobaric, hypoxic conditions of AE may exacerbate uncontrolled inflammation following TBI that could contribute to more severe TBI-related secondary injury. STUDY DESIGN: Thirty-six female pigs were used to test TBI vs. TBI sham, hypoxia vs. normoxia, and hypobaria vs. ground conditions. TBI was induced by controlled cortical impact, hypobaric conditions of 12,000 feet were established in an altitude chamber, and hypoxic exposure was titrated to 85% SpO2 while at altitude. Serum cytokines, UCH-L1, and TBI biomarkers were analyzed via ELISA. Gross analysis and staining of cortex and hippocampus tissue were completed for glial fibrillary acidic protein (GFAP) and phosphorylated tau (p-tau). RESULTS: Serum IL-1β, IL-6, and TNFα were significantly elevated following TBI in pigs exposed to altitude-induced hypobaria/hypoxia, as well as hypobaria alone, compared to ground level/normoxia. No difference in TBI biomarkers following TBI or hypobaric, hypoxic exposure was noted. No difference in brain tissue GFAP or p-tau was noted when comparing the most divergent conditions, sham TBI+ground/normoxia versus TBI+hypobaria/hypoxia. CONCLUSION: The hypobaric environment of AE induces systemic inflammation following TBI. Severe inflammation may play a role in exacerbating secondary injury associated with TBI and contribute to worse neurocognitive outcomes. Measures should be taken to minimize barometric and oxygenation changes during AE following TBI.

4.
Sensors (Basel) ; 24(5)2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38475001

ABSTRACT

Wearable devices in sports have been used at the professional and higher collegiate levels, but little research has been conducted at lower collegiate division levels. The objective of this retrospective study was to gather big data using the Catapult wearable technology, develop an algorithm for musculoskeletal modeling, and longitudinally determine the workloads of male college soccer (football) athletes at the Division III (DIII) level over the course of a 12-week season. The results showed that over the course of a season, (1) the average match workload (432 ± 47.7) was 1.5× greater than the average training workload (252.9 ± 23.3) for all positions, (2) the forward position showed the lowest workloads throughout the season, and (3) the highest mean workload was in week 8 (370.1 ± 177.2), while the lowest was in week 4 (219.1 ± 26.4). These results provide the impetus for integrating data gathered from wearable devices into data management systems for optimizing performance and health.


Subject(s)
Soccer , Wearable Electronic Devices , Humans , Male , Retrospective Studies , Universities , Athletes , Biomarkers
5.
J Surg Res ; 295: 631-640, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38101109

ABSTRACT

INTRODUCTION: Dynamic preload assessment measures including pulse pressure variation (PPV), stroke volume variation (SVV), pleth variability index (PVI), and hypotension prediction index (HPI) have been utilized clinically to guide fluid management decisions in critically ill patients. These values aid in the balance of correcting hypotension while avoiding over-resuscitation leading to respiratory failure and increased mortality. However, these measures have not been previously validated at altitude or in those with temporary abdominal closure (TAC). METHODS: Forty-eight female swine (39 ± 2 kg) were separated into eight groups (n = 6) including all combinations of flight versus ground, hemorrhage versus no hemorrhage, and TAC versus no TAC. Flight animals underwent simulated aeromedical evacuation via an altitude chamber at 8000 ft. Hemorrhagic shock was induced via stepwise hemorrhage removing 10% blood volume in 15-min increments to a total blood loss of 40% or a mean arterial pressure of 35 mmHg. Animals were then stepwise transfused with citrated shed blood with 10% volume every 15 min back to full blood volume. PPV, SVV, PVI, and HPI were monitored every 15 min throughout the simulated aeromedical evacuation or ground control. Blood samples were collected and analyzed for serum levels of IL-1β, IL-6, IL-8, and TNF-α. RESULTS: Hemorrhage groups demonstrated significant increases in PPV, SVV, PVI, and HPI at each step compared to nonhemorrhage groups. Flight increased PPV (P = 0.004) and SVV (P = 0.003) in hemorrhaged animals. TAC at ground level increased PPV (P < 0.0001), SVV (P = 0.0003), and PVI (P < 0.0001). When TAC was present during flight, PPV (P = 0.004), SVV (P = 0.003), and PVI (P < 0.0001) values were decreased, suggesting a dependent effect between altitude and TAC. There were no significant differences in serum IL-1β, IL-6, IL-8, or TNF-α concentrations between injury groups.
CONCLUSIONS: Based on our study, PPV and SVV are increased during flight and in the presence of TAC. Pleth variability index is slightly increased with TAC at ground level. Hypotension prediction index demonstrated no significant changes regardless of altitude or TAC status; however, this measure was less reliable once the resuscitation phase was initiated. Pleth variability index may be the most useful predictor of preload during aeromedical evacuation, as it is a noninvasive modality.
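For context, PPV and SVV are both percent swings over a respiratory cycle; the usual textbook formulation, 100 × (max − min) / mean of max and min, is sketched below (this is the generic formula, not the proprietary algorithm of any particular monitor):

```python
def percent_variation(values):
    """Percent variation over one respiratory cycle:
    100 * (max - min) / ((max + min) / 2).
    Applied to beat-to-beat pulse pressures this yields PPV;
    applied to stroke volumes it yields SVV."""
    hi, lo = max(values), min(values)
    return 100.0 * (hi - lo) / ((hi + lo) / 2.0)

# Illustrative pulse pressures (mmHg) across one breath:
ppv = percent_variation([38, 41, 45, 48, 44, 40])  # ≈ 23.3%
```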


Subject(s)
Hemodynamics , Hypotension , Humans , Female , Animals , Swine , Stroke Volume , Altitude , Tumor Necrosis Factor-alpha , Interleukin-6 , Interleukin-8 , Blood Pressure , Hemorrhage/diagnosis , Hemorrhage/etiology , Hemorrhage/therapy , Fluid Therapy
6.
J Surg Res ; 291: 691-699, 2023 11.
Article in English | MEDLINE | ID: mdl-37562231

ABSTRACT

INTRODUCTION: Seven key inflammatory biomarkers were recently found to be associated with the risk of mortality in a multicenter study of massively transfused patients. The aim of this prospective single-center study was to determine which of these early inflammatory markers could predict 30-d mortality among all critically injured trauma patients. METHODS: Serum samples were collected at 6, 24, and 72 h from 238 consecutive patients admitted to the intensive care unit following traumatic injury. Inflammatory markers syndecan-1, eotaxin, IL-1ra, IL-6, IL-8, IL-10, IP-10, and MCP-1 were analyzed via multiplex enzyme-linked immunosorbent assay. Subgroup analysis was performed for patients undergoing massive transfusion (≥5 units of red blood cells), submassive transfusion (1-4 units), or no transfusion during the first 4 h postinjury. The primary outcome of 30-d survival was modeled as a function of each biomarker and confounders using repeated measures logistic regression. RESULTS: Patients had a median age of 51.3 y [33.7, 70.2] and a median Injury Severity Score of 22 [14, 33]; 70.6% were male and 17.4% had penetrating trauma. IL-1ra, IL-8, IL-10, and MCP-1 were significantly increased during the first 72 h in nonsurvivors (n = 31). Elevated IL-1ra, IL-8, IL-10, and MCP-1 at 6 h postinjury were associated with 30-d mortality. By contrast, serum syndecan-1 and eotaxin levels were not associated with mortality at any time point. IL-8 and lactate were increased at 6 h in 30-d nonsurvivors for patients receiving submassive transfusion (n = 78). CONCLUSIONS: Early evaluations of IL-1ra, IL-8, IL-10, and IP-10 within 6 h of injury are useful predictors of 30-d mortality. Subgroup analysis suggests that transfusion status does not significantly affect early inflammatory markers. LEVEL OF EVIDENCE: Level III, prognostic/epidemiological.


Subject(s)
Interleukin 1 Receptor Antagonist Protein , Wounds and Injuries , Humans , Male , Female , Interleukin-10 , Syndecan-1 , Prospective Studies , Interleukin-8 , Chemokine CXCL10 , Biomarkers , Wounds and Injuries/complications , Wounds and Injuries/diagnosis , Wounds and Injuries/therapy
7.
Mil Med ; 2023 Jul 25.
Article in English | MEDLINE | ID: mdl-37489875

ABSTRACT

INTRODUCTION: Inappropriate fluid management during patient transport may lead to casualty morbidity. Percent systolic pressure variation (%SPV) is one of several technologies that perform a dynamic assessment of fluid responsiveness (FT-DYN). Trained anesthesia providers can visually estimate and use %SPV to limit the incidence of erroneous volume management decisions to 1-4%. However, the accuracy of visually estimated %SPV by other specialties is unknown. The aim of this article is to determine the accuracy of estimated %SPV and the incidence of erroneous volume management decisions for Critical Care Air Transport (CCAT) team members before and after training to visually estimate and utilize %SPV. MATERIAL AND METHODS: In one sitting, CCAT team providers received didactics defining %SPV and indicators of fluid responsiveness and treatment with %SPV ≤7 and ≥14.5 defining a fluid nonresponsive and responsive patient, respectively; they were then shown ten 45-second training arterial waveforms on a simulated Propaq M portable monitor's screen. Study subjects were asked to visually estimate %SPV for each arterial waveform and queried whether they would treat with a fluid bolus. After each training simulation, they were told the true %SPV. Seven days post-training, the subjects were shown a different set of ten 45-second testing simulations and asked to estimate %SPV and choose to treat, or not. Nonparametric limits of agreement for differences between true and estimated %SPV were analyzed using Bland-Altman graphs. In addition, three errors were defined: (1) %SPV visual estimate errors that would label a volume responsive patient as nonresponsive, or vice versa; (2) incorrect treatment decisions based on estimated %SPV (algorithm application errors); and (3) incorrect treatment decisions based on true %SPV (clinically significant treatment errors). For the training and testing simulations, these error rates were compared between, and within, provider groups. 
RESULTS: Sixty-one physicians (MDs), 64 registered nurses (RNs), and 53 respiratory technicians (RTs) participated in the study. For testing simulations, the incidence and 95% CI for %SPV estimate errors with sufficient magnitude to result in a treatment error were 1.4% (0.5%, 3.2%), 1.6% (0.6%, 3.4%), and 4.1% (2.2%, 6.9%) for MDs, RNs, and RTs, respectively. However, clinically significant treatment errors were statistically more common for all provider types, occurring at a rate of 7%, 10%, and 23% (all P < .05). Finally, students did not show clinically relevant reductions in their errors between training and testing simulations. CONCLUSIONS: Although most practitioners correctly visually estimated %SPV and all students completed the training in interpreting and applying %SPV, all groups persisted in making clinically significant treatment errors with moderate to high frequency. This suggests that the treatment errors were more often driven by misapplying FT-DYN algorithms rather than by inaccurate visual estimation of %SPV. Furthermore, these errors were not responsive to training, suggesting that a decision-making cognitive aid may improve CCAT teams' ability to apply FT-DYN technologies.
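The course's treatment rule maps directly onto a simple decision aid of the kind the authors recommend: %SPV ≤ 7 marks a fluid nonresponsive patient and ≥ 14.5 a responsive one, with the interval between left indeterminate. A sketch (the function name and the "indeterminate" label are ours; the thresholds are from the abstract):

```python
def spv_decision(spv_percent: float) -> str:
    """Classify fluid responsiveness from an estimated %SPV using
    the thresholds taught in the CCAT Advanced course."""
    if spv_percent <= 7.0:
        return "nonresponsive"   # do not give a fluid bolus
    if spv_percent >= 14.5:
        return "responsive"      # a fluid bolus is indicated
    return "indeterminate"       # gray zone: weigh other clinical data

assert spv_decision(5.0) == "nonresponsive"
assert spv_decision(15.0) == "responsive"
assert spv_decision(10.0) == "indeterminate"
```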

8.
Scand J Public Health ; 50(6): 782-786, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35350944

ABSTRACT

AIM: Reductions in the case fatality rate of COVID-19 in the unvaccinated have been credited to improvements in medical care and accumulating COVID-specific knowledge. Here I test whether either of these factors predicts reductions in the case fatality rate, and whether observed reductions are better explained by improved ascertainment of mild cases. METHODS: Using weighted log-log regression, I compute the association between changes in the case fatality rate and test density between 3 July 2020 and 5 January 2021 in 162 countries; and check whether case fatality rate change is associated with either per capita medical spending (proxy for critical care access) or timing of the pandemic (proxy for COVID-specific knowledge). RESULTS: The median test density increased from 175 tests per thousand population to 1200, while the median case fatality rate dropped from 4.1% to 2.0%. While the case fatality rate was higher at both timepoints in Europe/North America than Africa/Asia, its association with test density was similar across countries. For each doubling in test density, the mean case fatality rate decreased by 18% (P<0.0001), with a median (interquartile range) country-level decline of 20% (5-30) per doubling of test density. The rate of change of the case fatality rate was not associated with either medical care access or COVID-specific knowledge (all P>0.10). CONCLUSIONS: Declines in the case fatality rate were adequately explained by improved testing, with no effect of either medical knowledge or improvements in care. The true lethality of COVID-19 may not have changed much at the population level. Prevention should remain a priority.


Subject(s)
COVID-19 , Europe/epidemiology , Humans , Pandemics , Patient Care , SARS-CoV-2
9.
J Clin Epidemiol ; 142: 54-59, 2022 02.
Article in English | MEDLINE | ID: mdl-34715312

ABSTRACT

OBJECTIVE: Calculations of the disease burden of COVID-19, used to allocate scarce resources, have historically considered only mortality. However, survivors often develop postinfectious 'long-COVID' similar to chronic fatigue syndrome, physical sequelae such as heart damage, or both. This paper quantifies the relative contributions of acute case fatality, delayed case fatality, and disability to total morbidity per COVID-19 case. STUDY DESIGN AND SETTING: Healthy life years lost per COVID-19 case were computed as the sum of (incidence × disability weight × duration) for death and long-COVID by sex and 10-year age category in three plausible scenarios. RESULTS: In all models, acute mortality was only a small share of total morbidity. For lifelong moderate symptoms, healthy years lost per COVID-19 case ranged from 0.92 (male in his 30s) to 5.71 (girl under 10) and were 3.5 and 3.6 for the oldest females and males. At higher symptom severities, young people and females bore larger shares of morbidity; if survivors' later mortality increased, morbidity increased most in young people of both sexes. CONCLUSIONS: Under most conditions, most COVID-19 morbidity was in survivors. Future research should investigate the incidence, risk factors, and clinical course of long-COVID to elucidate total disease burden, and decision-makers should allocate scarce resources to minimize total morbidity.
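The study design reduces to one formula: healthy life years lost per case is the sum over outcomes of incidence × disability weight × duration. A sketch with purely illustrative inputs (the tuples below are not values from the paper):

```python
def healthy_years_lost(outcomes):
    """Sum incidence * disability_weight * duration_years over all
    outcomes; for death, the disability weight is 1.0 and the
    duration is remaining life expectancy."""
    return sum(inc * dw * dur for inc, dw, dur in outcomes)

# Illustrative inputs only: (incidence per case, disability weight, years)
example = [
    (0.01, 1.00, 40.0),  # acute death: 1% fatality, 40 life years lost
    (0.10, 0.20, 30.0),  # lifelong moderate long-COVID in 10% of cases
]
yll = healthy_years_lost(example)  # 0.4 + 0.6 = 1.0 healthy years per case
```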


Subject(s)
COVID-19/complications , Disability-Adjusted Life Years/trends , Adolescent , Adult , Age Factors , Aged , Aged, 80 and over , COVID-19/economics , COVID-19/epidemiology , Child , Child, Preschool , Cost of Illness , Female , Humans , Infant , Infant, Newborn , Male , Middle Aged , Patient Acuity , Sex Characteristics , Young Adult , Post-Acute COVID-19 Syndrome
10.
Ann Epidemiol ; 62: 13-18, 2021 10.
Article in English | MEDLINE | ID: mdl-34052437

ABSTRACT

BACKGROUND: Resistance training is cardioprotective independent of total activity in experimental research and is prescribed to clinical populations, but is often largely neglected at population scale. Here we determine whether these benefits are relevant to general practice. METHODS: A total of 6947 Americans over 20 years old (51% male) from NHANES 2003-2006 reported resistance training and objectively tracked 1-week total activity. Activity measures were modeled as five-level predictors of objectively measured binary heart-disease risks (hypertension, dyslipidemia, overweight, and diabetes) corrected for age, ethnicity, gender, and smoking. Significance was defined as P for trend less than .10 that the lowest activity category differed from the average of all others. If both activity measures predicted the same risk, mutually corrected models were run. RESULTS: Average total activity was 20 minutes/day (SD 24). About 30% of subjects had resistance trained in the past month, reporting up to 7 sessions/day. Prevalences of hypertension, dyslipidemia, overweight, and diabetes were 32%, 46%, 68%, and 7.2%, respectively. All significant associations for resistance training (but not total activity) exhibited a threshold in the dose-response curve, with comparable benefits from any dose above "none." Resistance trainers had significantly lower odds of hypertension (ORs, 0.55-0.85), overweight (ORs, 0.55-0.74), and diabetes (ORs, 0.51-0.80), but not dyslipidemia (ORs, 0.55-0.74). For total activity there was no significant trend in risk of either hypertension or dyslipidemia, but there were for overweight (ORs for each quintile above the lowest 1.04, 0.89, 0.78, and 0.49) and diabetes (ORs, 0.83, 0.68, 0.50, and 0.23; all P for trend <.01). Associations of resistance training with diabetes and obesity attenuated only slightly after correction for total activity, and vice versa.
CONCLUSIONS: Cardioprotective associations of resistance training were comparable to those of total activity and clinically relevant at low doses. Largest benefits accrued to those who combined any dose of resistance training with high total activity.


Subject(s)
Dyslipidemias , Resistance Training , Adult , Dyslipidemias/epidemiology , Female , Humans , Male , Nutrition Surveys , Obesity/epidemiology , Obesity/prevention & control , Overweight/epidemiology , Young Adult
11.
J Epidemiol Glob Health ; 11(2): 143-145, 2021 06.
Article in English | MEDLINE | ID: mdl-33876593

ABSTRACT

Case fatality rate (CFR) is used to calculate mortality burden of COVID-19 under different scenarios, thus informing risk-benefit balance of interventions both pharmaceutical and nonpharmaceutical. However, observed CFR is driven by testing: as more low-risk cases are identified, observed CFR will decline. This report quantifies test bias by modeling observed CFR as log-log-linear function of test density (tests per population) in 163 countries. CFR declined almost 20% (e.g. from 5% to 4%) for each doubling of test density (p < 0.0001); this association did not vary by continent (interaction p > 0.10) although at any given test density CFR was higher in Europe or North America than in Asia or Africa. This effect of test density on observed CFR is adequate to hide all but the largest true differences in case survivorship. Published estimates of CFR should specify test density, and comparisons should correct for it such as by applying the provided model.
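The log-log-linear model implies a constant elasticity: a roughly 20% CFR decline per doubling of test density corresponds to CFR ∝ density^b with b = log2(0.80) ≈ -0.32. A sketch of that projection (the 20%-per-doubling figure is from the report; the baseline CFR and densities are illustrative):

```python
import math

def projected_cfr(cfr0, density0, density1, decline_per_doubling=0.20):
    """Project observed CFR under the log-log model: each doubling of
    test density multiplies CFR by (1 - decline_per_doubling)."""
    b = math.log(1.0 - decline_per_doubling, 2)  # elasticity, about -0.32
    return cfr0 * (density1 / density0) ** b

# Illustrative: two doublings of test density (175 -> 700 per thousand)
cfr = projected_cfr(0.041, 175, 700)  # 0.041 * 0.8**2 = 0.02624
```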


Subject(s)
COVID-19/mortality , Africa/epidemiology , Asia/epidemiology , Bias , Europe/epidemiology , Humans , North America/epidemiology , SARS-CoV-2
12.
Rev Esp Anestesiol Reanim ; 67(2): 63-67, 2020 Feb.
Article in Spanish | IBECS | ID: ibc-197455

ABSTRACT



INTRODUCTION: Vitrectomy surgery is a common procedure for the treatment of several types of ophthalmologic conditions. It can be performed under regional anaesthesia with peribulbar block (PB) or general anaesthesia (GA). There are no evidence-based recommendations on the optimal anaesthesia strategy for this procedure. The aim of this study was to compare the advantages of PB and GA for vitrectomy surgery. MATERIALS AND METHODS: A prospective observational study was conducted on adults submitted for mechanical vitrectomy between January 2017 and December 2017. Demographic and perioperative data were collected, namely ASA physical status, mean arterial pressure, heart rate, postoperative opioid consumption, postoperative nausea and vomiting, times of induction, surgery, recovery, and hospital stay, and costs considering medication and material needed. Statistical analysis was performed using SPSS V.25, with chi-square, Fisher and Mann-Whitney U tests, according to the type of variables analysed. RESULTS AND DISCUSSION: We included 179 patients submitted for mechanical vitrectomy: 91 (51%) with PB and 88 (49%) under GA. Patients submitted to PB were older (69.0 vs. 64.5 years, p=.006) and presented with higher ASA physical status (p=.001). For haemodynamic outcomes, patients submitted to PB presented with less variation of mean arterial pressure (-3.0 vs. -13.5 mmHg, p<.001) and no significant differences in heart rate (-2.0 vs. -3.0 bpm, p=.825). In the postoperative period, the PB group presented with a decreased need for postoperative analgesia (0.0 vs. 5.0, p=.026) and a lower incidence of nausea and vomiting (1.0 vs. 12.0, p=.001). Times related to anaesthesia and surgery were better in the PB group, with shorter induction time (10.0 vs. 11.0 min, p<.001), surgery time (56.5 vs. 62.0 min, p=.001), recovery time (10.0 vs. 75.5 min, p<.001), and hospital stay (2.0 vs. 3.0 days, p<.001). When analysing costs, PB was less expensive than GA (4.65 vs. 12.09 euros, p=.021). CONCLUSION: PB is a reliable and safe alternative to GA for patients undergoing mechanical vitrectomy, permitting good anaesthesia and akinesia conditions during surgery, better haemodynamic stability, and fewer postoperative complications, especially in older patients and those with more comorbidities.


Subject(s)
Humans , Adolescent , Young Adult , Adult , Middle Aged , Aged , Aged, 80 and over , Vitrectomy , Retinal Diseases/surgery , Anesthesia, Conduction/methods , Anesthesia, General/methods , Analgesia , Postoperative Nausea and Vomiting , Postoperative Period , Prospective Studies , Length of Stay , Vitrectomy/economics , Analgesics, Non-Narcotic/therapeutic use , Heart Rate/drug effects
13.
Rev Esp Anestesiol Reanim (Engl Ed) ; 67(2): 63-67, 2020 Feb.
Article in English, Spanish | MEDLINE | ID: mdl-31955889

ABSTRACT

INTRODUCTION: Vitrectomy surgery is a common procedure for the treatment of several types of ophthalmologic conditions. It can be performed under regional anaesthesia with peribulbar block (PB) or general anaesthesia (GA). There are no evidence-based recommendations on the optimal anaesthesia strategy for this procedure. The aim of this study was to compare the advantages of PB and GA for vitrectomy surgery. MATERIALS AND METHODS: A prospective observational study was conducted on adults submitted for mechanical vitrectomy between January 2017 and December 2017. Demographic and perioperative data were collected, namely ASA physical status, mean arterial pressure, heart rate, postoperative opioid consumption, postoperative nausea and vomiting, times of induction, surgery, recovery, and hospital stay, and costs considering medication and material needed. Statistical analysis was performed using SPSS v.25, with chi-square, Fisher and Mann-Whitney U tests, according to the type of variables analysed. RESULTS AND DISCUSSION: We included 179 patients submitted for mechanical vitrectomy: 91 (51%) with PB and 88 (49%) under GA. Patients submitted to PB were older (69.0 vs. 64.5 years, p=.006) and presented with higher ASA physical status (p=.001). For haemodynamic outcomes, patients submitted to PB presented with less variation of mean arterial pressure (-3.0 vs. -13.5 mmHg, p<.001) and no significant differences in heart rate (-2.0 vs. -3.0 bpm, p=.825). In the postoperative period, the PB group presented with a decreased need for postoperative analgesia (0.0 vs. 5.0, p=.026) and a lower incidence of nausea and vomiting (1.0 vs. 12.0, p=.001). Times related to anaesthesia and surgery were better in the PB group, with shorter induction time (10.0 vs. 11.0 min, p<.001), surgery time (56.5 vs. 62.0 min, p=.001), recovery time (10.0 vs. 75.5 min, p<.001), and hospital stay (2.0 vs. 3.0 days, p<.001). When analysing costs, PB was less expensive than GA (4.65 vs. 12.09 euros, p=.021). CONCLUSION: PB is a reliable and safe alternative to GA for patients undergoing mechanical vitrectomy, permitting good anaesthesia and akinesia conditions during surgery, better haemodynamic stability, and fewer postoperative complications, especially in older patients and those with more comorbidities.


Subject(s)
Anesthesia, General , Nerve Block/methods , Vitrectomy/methods , Age Factors , Aged , Anesthesia Recovery Period , Anesthesia, General/economics , Anesthesia, General/statistics & numerical data , Humans , Length of Stay , Middle Aged , Nerve Block/economics , Nerve Block/statistics & numerical data , Operative Time , Prospective Studies , Vitrectomy/statistics & numerical data
14.
PLoS Negl Trop Dis ; 14(1): e0007940, 2020 01.
Article in English | MEDLINE | ID: mdl-31961893

ABSTRACT

Bats can harbor zoonotic pathogens, but their status as reservoir hosts for Leptospira bacteria is unclear. During 2015-2017, kidneys from 47 of 173 bats captured in Grenada, West Indies, tested PCR-positive for Leptospira. Sequence analysis of the Leptospira rpoB gene from 31 of the positive samples showed 87-91% similarity to known Leptospira species. Pairwise and phylogenetic analysis of sequences indicate that bats from Grenada harbor as many as eight undescribed Leptospira genotypes that are most similar to known pathogenic Leptospira, including known zoonotic serovars. Warthin-Starry staining revealed leptospiral organisms colonizing the renal tubules in 70% of the PCR-positive bats examined. Mild inflammatory lesions in liver and kidney observed in some bats were not significantly correlated with renal Leptospira PCR-positivity. Our findings suggest that Grenada bats are asymptomatically infected with novel and diverse Leptospira genotypes phylogenetically related to known pathogenic strains, supporting the hypothesis that bats may be reservoirs for zoonotic Leptospira.
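The "87-91% similarity" reported for the rpoB sequences is pairwise percent identity over aligned sequences. A toy version of that computation, with made-up sequences (real rpoB comparisons run over aligned fragments of a few hundred base pairs):

```python
# Toy percent-identity computation between two already-aligned sequences.
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Identity over an equal-length (aligned) pair, case-insensitive."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a.upper() == b.upper() for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

print(percent_identity("ACGTACGTAC", "ACGTACGAAC"))  # 90.0
```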


Subject(s)
Chiroptera/microbiology , Disease Reservoirs/microbiology , Leptospira/classification , Leptospirosis/veterinary , Animals , Disease Reservoirs/veterinary , Grenada , Kidney/microbiology , Kidney/pathology , Leptospira/genetics , Leptospira/isolation & purification , Leptospirosis/microbiology , Leptospirosis/pathology , Liver/microbiology , Liver/pathology , Phylogeny
16.
Eur J Prev Cardiol ; 26(5): 492-501, 2019 03.
Article in English | MEDLINE | ID: mdl-30501371

ABSTRACT

BACKGROUND: The respiratory benefits of muscle strength are well known in heart-healthy populations, but recommendations and research often focus instead on aerobic fitness (peak oxygen uptake) or total activity. The independent benefits of strength thus may be underestimated, especially in congenital heart disease, where the perceived dangers of certain types of exercise may outweigh the perceived benefits. To assess whether it is plausible that the pulmonary benefits of strength in heart-healthy populations also apply in congenital heart disease, we simultaneously correlated these patients' lung function with fitness, strength, and cardiac diagnosis. METHODS: Lung function (forced expiratory volume in one second, percentage predicted (FEV1%pred)) was modeled as a function of handgrip strength, congenital heart disease diagnosis, peak oxygen uptake, and the interactions of handgrip with sex and diagnosis in 538 Germans (58% male, ages 6-82 years), in linear models corrected for age, sex, height, and weight. Congenital heart disease diagnoses were: complex cyanotic; Fallot/truncus arteriosus communis (common arterial trunk) (TAC); shunts; transposition of the great arteries (TGA); left heart; and other/none. RESULTS: Each kg of handgrip was associated with 0.74% higher FEV1%pred (p < 0.001), and handgrip explained almost 10% of the variance in FEV1%pred. While some groups had higher FEV1%pred than others (p for global null < 0.0001), all experienced similar associations with strength (p for interaction with handgrip > 0.10 for both sex and diagnosis). Correction for peak oxygen uptake eliminated the association with congenital heart disease, but not with handgrip. CONCLUSION: Strength was associated with better lung function at all ages, even after correction for peak oxygen uptake, regardless of sex and congenital heart disease. This suggests that strength may be at least as important for lung function as aerobic fitness.
Heart-safe strength training may improve pulmonary function in congenital heart disease.
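The covariate-adjusted linear model described here can be sketched with ordinary least squares. The data below are simulated, with the ~0.74 %pred-per-kg handgrip effect built into the simulation to mirror the reported effect size (it is not re-derived from the study), and only a subset of the paper's covariates is included.

```python
# Sketch: FEV1%pred regressed on handgrip strength with covariate adjustment.
import numpy as np

rng = np.random.default_rng(0)
n = 538  # same sample size as the study, data entirely simulated
age = rng.uniform(6, 82, n)
height = rng.normal(160, 15, n)
handgrip = rng.normal(30, 10, n)
# Simulated outcome: baked-in handgrip effect of 0.74 %pred per kg.
fev1_pred = 100 + 0.74 * (handgrip - 30) - 0.05 * (age - 40) + rng.normal(0, 5, n)

# Design matrix: intercept, covariates, then handgrip (last column).
X = np.column_stack([np.ones(n), age, height, handgrip])
beta, *_ = np.linalg.lstsq(X, fev1_pred, rcond=None)
print(f"estimated handgrip effect: {beta[-1]:.2f} %pred per kg")
```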


Subject(s)
Exercise Tolerance , Hand Strength , Heart Defects, Congenital/physiopathology , Lung/physiopathology , Muscle Contraction , Muscle, Skeletal/physiopathology , Oxygen Consumption , Physical Fitness , Adolescent , Adult , Aged , Aged, 80 and over , Cardiac Rehabilitation , Child , Exercise , Exercise Test , Female , Forced Expiratory Volume , Health Status , Heart Defects, Congenital/diagnosis , Heart Defects, Congenital/metabolism , Heart Defects, Congenital/rehabilitation , Humans , Male , Middle Aged , Muscle, Skeletal/metabolism , Resistance Training , Spirometry , Young Adult
17.
Sci Rep ; 8(1): 15055, 2018 10 10.
Article in English | MEDLINE | ID: mdl-30305651

ABSTRACT

Accelerometers objectively monitor physical activity, and ongoing research suggests they can also detect patterns of body movement. However, different types of signal (uniaxial, captured by older studies, vs. the newer triaxial) and/or device (the validated Actigraph used by older studies, vs. others) may make results from different time periods incomparable. Standardization is desirable. We establish whether uniaxial signals adequately monitor routine activity, and whether triaxial accelerometry can detect sport-specific variations in movement pattern. 1402 adolescents wore triaxial Actigraphs (GT3X) for one week and recorded their sport participation in a diary. Uni- and triaxial counts per minute were compared across the week and across more than 30 different sports. Over the whole recording period, 95% of the variance in triaxial counts was explained by the vertical axis (5th percentile for R2, 91%). Sport made up a small fraction of the daily routine, but differences were visible: even when total acceleration was comparable, little of it was vertical in horizontal movements such as ice skating (uniaxial counts 41% of triaxial), compared to complex movements (taekwondo, 55%) or ambulation (soccer, 69%). Triaxial accelerometry captured differences in movement pattern between sports, but so little time was spent in sport that, across the whole day, uni- and triaxial signals correlated closely. This indicates that, with certain limitations, uniaxial accelerometric measures of routine activity from older studies can feasibly be compared to triaxial measures from newer studies. Comparison of new studies based on raw accelerations to older studies based on proprietary devices and measures (epochs, counts) will require additional efforts, which are not addressed in this paper.
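The uni- vs. triaxial comparison can be illustrated with a simulated signal: the uniaxial measure keeps only the vertical axis, the triaxial measure is the three-axis vector magnitude, and one asks how much triaxial variance the vertical axis alone explains. The signal below is synthetic; real Actigraph "counts" additionally pass through the device's proprietary filtering and epoch summation.

```python
# Sketch: variance in a triaxial magnitude signal explained by the vertical axis.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 60, 0.01)  # one minute at 100 Hz (synthetic)
vertical = 1.0 * np.sin(2 * np.pi * 2 * t)       # dominant axis, e.g. walking
horiz_x = 0.3 * np.sin(2 * np.pi * 2 * t + 0.5)  # smaller horizontal component
horiz_y = 0.2 * rng.normal(size=t.size)          # horizontal noise

uniaxial = np.abs(vertical)                                 # vertical-only
triaxial = np.sqrt(vertical**2 + horiz_x**2 + horiz_y**2)   # vector magnitude

# R^2 between the two signals, a per-sample proxy for the per-minute
# counts comparison in the paper:
r2 = np.corrcoef(uniaxial, triaxial)[0, 1] ** 2
print(f"vertical axis explains ~{100 * r2:.0f}% of triaxial variance")
```

For a mostly horizontal movement (large `horiz_x`, small `vertical`), the uniaxial signal captures a much smaller share, mirroring the ice-skating result.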


Subject(s)
Accelerometry/methods , Sports , Adolescent , Female , Humans , Male
18.
Intensive Care Med ; 44(7): 1039-1049, 2018 07.
Article in English | MEDLINE | ID: mdl-29808345

ABSTRACT

PURPOSE: Whether the quality of the ethical climate in the intensive care unit (ICU) improves the identification of patients receiving excessive care and affects patient outcomes is unknown. METHODS: In this prospective observational study, perceptions of excessive care (PECs) by clinicians working in 68 ICUs in Europe and the USA were collected daily during a 28-day period. The quality of the ethical climate in the ICUs was assessed via a validated questionnaire. We compared the combined endpoint (death, not at home, or poor quality of life at 1 year) of patients with PECs and the time from PECs until written treatment-limitation decisions (TLDs) and death across the four climates defined via cluster analysis. RESULTS: Of the 4747 eligible clinicians, 2992 (63%) evaluated the ethical climate in their ICU. Of the 321 and 623 patients not admitted for monitoring only in ICUs with a good (n = 12, 18%) and poor (n = 24, 35%) climate, 36 (11%) and 74 (12%), respectively, were identified with PECs by at least two clinicians. Of the 35 and 71 identified patients with an available combined endpoint, 100% (95% CI 90.0-100) and 85.9% (75.4-92.0) (P = 0.02) attained that endpoint. The risk of death (HR 1.88, 95% CI 1.20-2.92) or of receiving a written TLD (HR 2.32, CI 1.11-4.85) in patients with PECs by at least two clinicians was higher in ICUs with a good climate than in those with a poor one. The differences between ICUs with an average climate, with (n = 12, 18%) or without (n = 20, 29%) nursing involvement at the end of life, and ICUs with a poor climate were less obvious but still in favour of the former. CONCLUSION: Enhancing the quality of the ethical climate in the ICU may improve both the identification of patients receiving excessive care and the decision-making process at the end of life.


Subject(s)
Intensive Care Units , Organizational Culture , Quality of Life , Unnecessary Procedures , Age Factors , Europe , Humans , Intensive Care Units/ethics , Prospective Studies
19.
Matern Child Nutr ; 14(1)2018 01.
Article in English | MEDLINE | ID: mdl-28782300

ABSTRACT

Maternal capabilities (qualities of mothers that enable them to leverage skills and resources into child health) hold potential influence over mothers' adoption of child caring practices, including infant and young child feeding. We developed a survey (n = 195) that assessed the associations of 4 dimensions of maternal capabilities (social support, psychological health, decision making, and empowerment) with mothers' infant and young child feeding practices and children's nutritional status in Uganda. Maternal responses were converted to categorical subscales and an overall index. Scale reliability coefficients were moderate to strong (α range = 0.49 to 0.80). Mothers with higher social support scores were more likely to feed children according to the minimum meal frequency (odds ratio [OR] [95% confidence interval (CI)] = 1.38 [1.10, 1.73]), dietary diversity (OR [95% CI] = 1.56 [1.15, 2.11]), iron-rich foods (OR [95% CI] = 1.47 [1.14, 1.89]), and minimally acceptable diet (OR [95% CI] = 1.55 [1.10, 2.21]) indicators. Empowerment was associated with a greater likelihood of feeding a minimally diverse and acceptable diet. The maternal capabilities index was significantly associated with feeding the minimum number of times per day (OR [95% CI] = 1.29 [1.03, 1.63]), dietary diversity (OR [95% CI] = 1.44 [1.06, 1.94]), and minimally acceptable diet (OR [95% CI] = 1.43 [1.01, 2.01]). Mothers with higher psychological satisfaction were more likely to have a stunted child (OR [95% CI] = 1.31 [1.06, 1.63]). No other associations between the capabilities scales and child growth were significant. Strengthening social support for mothers and expanding overall maternal capabilities hold potential for addressing important underlying determinants of child feeding in the Ugandan context.
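The OR [95% CI] figures reported here come from logistic regression, where OR = exp(β) and the CI is exp(β ± 1.96·SE). The snippet below back-derives β and SE from one reported interval (OR 1.38, CI 1.10-1.73) purely to illustrate the arithmetic; it is not the study's model output.

```python
# How a reported OR and 95% CI relate to a logistic-regression coefficient.
import math

beta = math.log(1.38)  # log-odds per unit increase in social-support score
# Back out the standard error from the CI width on the log scale:
se = (math.log(1.73) - math.log(1.10)) / (2 * 1.96)

or_point = math.exp(beta)
ci_low = math.exp(beta - 1.96 * se)
ci_high = math.exp(beta + 1.96 * se)
print(f"OR {or_point:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```

A quick sanity check on any reported interval: the point estimate should sit at the geometric mean of the CI bounds, since the interval is symmetric on the log scale.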


Subject(s)
Diet, Healthy , Feeding Methods , Infant Nutritional Physiological Phenomena , Models, Psychological , Mothers , Parenting , Social Support , Child Development , Cross-Sectional Studies , Decision Making , Diet, Healthy/ethnology , Diet, Healthy/psychology , Feeding Methods/adverse effects , Feeding Methods/psychology , Female , Freedom , Humans , Infant , Infant Nutritional Physiological Phenomena/ethnology , Infant, Newborn , Male , Mothers/psychology , Nutrition Surveys , Nutritional Status/ethnology , Parenting/psychology , Patient Compliance/ethnology , Patient Compliance/psychology , Personal Satisfaction , Power, Psychological , Self Concept , Uganda
20.
Intensive Care Med ; 44(1): 22-37, 2018 01.
Article in English | MEDLINE | ID: mdl-29218379

ABSTRACT

INTRODUCTION: While prone positioning (PP) has been shown to improve patient survival in moderate to severe acute respiratory distress syndrome (ARDS) patients, the rate of application of PP in clinical practice still appears low. AIM: This study aimed to determine the prevalence of use of PP in ARDS patients (primary endpoint), the physiological effects of PP, and the reasons for not using it (secondary endpoints). METHODS: The APRONET study was a prospective international 1-day prevalence study performed four times: in April, July, and October 2016 and in January 2017. On each study day, investigators in each ICU had to screen every patient. For patients with ARDS, use of PP, gas exchange, ventilator settings, and plateau pressure (Pplat) were recorded before and at the end of the PP session. Complications of PP and reasons for not using PP were also documented. Values are presented as median (1st-3rd quartiles). RESULTS: Over the study period, 6723 patients were screened in 141 ICUs from 20 countries (77% of the ICUs were European), of whom 735 had ARDS and were analyzed. Overall, 101 ARDS patients had at least one session of PP (13.7%), with no differences among the 4 study days. The rate of PP use was 5.9% (11/187), 10.3% (41/399) and 32.9% (49/149) in mild, moderate and severe ARDS, respectively (P = 0.0001). The duration of the first PP session was 18 (16-23) hours. Measured with the patient in the supine position before and at the end of the first PP session, PaO2/FIO2 increased from 101 (76-136) to 171 (118-220) mmHg (P = 0.0001), driving pressure decreased from 14 [11-17] to 13 [10-16] cmH2O (P = 0.001), and Pplat decreased from 26 [23-29] to 25 [23-28] cmH2O (P = 0.04). The most prevalent reason for not using PP (64.3%) was that hypoxemia was not considered sufficiently severe.
Complications were reported in 12 patients (11.9%) in whom PP was used (pressure sores in five, hypoxemia in two, endotracheal tube-related in two, ocular in two, and a transient increase in intracranial pressure in one). CONCLUSIONS: This prospective international prevalence study found that PP was used in 32.9% of patients with severe ARDS and was associated with low complication rates, a significant increase in oxygenation, and a significant decrease in driving pressure.
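The two ventilation numbers the study tracked follow directly from their definitions: the P/F ratio is PaO2 divided by FiO2 (as a fraction), and driving pressure is plateau pressure minus PEEP. The example values below are chosen to be in line with the reported medians, not taken from individual patients.

```python
# The study's two tracked ventilation indices, from their definitions.
def pf_ratio(pao2_mmhg: float, fio2_fraction: float) -> float:
    """PaO2/FiO2 in mmHg; FiO2 given as a fraction (0.21-1.0)."""
    return pao2_mmhg / fio2_fraction

def driving_pressure(pplat_cmh2o: float, peep_cmh2o: float) -> float:
    """Plateau pressure minus PEEP, in cmH2O."""
    return pplat_cmh2o - peep_cmh2o

# Synthetic pre-proning values, roughly matching the reported medians:
print(pf_ratio(61, 0.6))         # ~101.7 mmHg, moderate/severe ARDS range
print(driving_pressure(26, 12))  # 14 cmH2O
```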


Subject(s)
Positive-Pressure Respiration , Prone Position , Respiratory Distress Syndrome , Aged , Cross-Sectional Studies , Female , Humans , Male , Middle Aged , Prospective Studies , Respiratory Distress Syndrome/therapy