1.
J Neurotrauma ; 41(13-14): 1494-1508, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38204190

ABSTRACT

Traumatic brain injury (TBI) is a leading cause of death and disability worldwide. Disparities exist, however, in the populations that sustain TBIs, with a greater burden and poorer outcomes among communities of color and those of lower socioeconomic status. To combat health inequities such as these, institutions have begun to target social determinants of health (SDoH), the environmental factors that affect health outcomes and risks. SDoH may play a role in the risk of sustaining a TBI and provide modifiable targets for action to reduce that risk, especially in high-risk communities. In this study, we describe the existing literature on SDoH and their association with sustaining a TBI. We performed a scoping review with a comprehensive search of the Ovid MEDLINE/Embase databases. To summarize the literature, this review adapts the conceptual framework of the World Health Organization's Commission on Social Determinants of Health. Fifty-nine full-text articles, including five focusing on low- and middle-income countries, met our study criteria. Results of the scoping review indicate that several structural determinants of health were associated with TBI risk. Lower educational attainment and income levels were associated with higher odds of TBI. In addition, multiple studies identified minority populations as having higher odds of TBI than their White counterparts. Literature on intermediate determinants of health examined in this review describes associations between sustaining a TBI and rurality, work environment, medical conditions, medication/substance use, and adversity. Recommended exploration into lesser-researched SDoH is discussed, and expansion of this review to other aspects of the TBI continuum is warranted.


Subject(s)
Brain Injuries, Traumatic , Social Determinants of Health , Humans , Brain Injuries, Traumatic/epidemiology , Socioeconomic Factors , Health Status Disparities
3.
J Surg Res ; 250: 179-187, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32070837

ABSTRACT

BACKGROUND: We sought to understand differences in surgical practice, compensation, personal life, and health and wellness between male and female trauma surgeons. METHODS: An electronic survey study of members of The Eastern Association for the Surgery of Trauma was carried out. Using univariate and bivariate analyses, we compared the differences in surgical practice, compensation, family life, and health status among female and male trauma surgeons and used chi-squared tests for categorical variables. Analyses were performed using SPSS (Version 25, IBM). RESULTS: The overall response rate was 37.4%. Women reported working more than 80 h a week more commonly (30% versus 23%; P < 0.001), yet reported lower incomes, with 57% of female surgeons reporting before-tax incomes of $300,000 or higher, compared with 83% of male surgeons (P < 0.001). These differences persisted when adjusting for academic versus nonacademic practices. Gender-based salary disparity remained significant when adjusting for the age of the respondent. Divorce rates and never married status were significantly higher for women (9% versus 4%; P < 0.001 and 19% versus 4%; P < 0.001, respectively). Women surgeons also report higher rates of not having children compared with male surgeons (48% versus 13%; P < 0.001). There were no major age-adjusted health status differences reported between male and female surgeons. CONCLUSIONS: This study highlights contemporary disparities in salaries, practice, and family life between male and female trauma surgeons. Overall, trauma surgeons do not report gender-based differences in health and wellness metrics but have ongoing disparity in compensation and family life.
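The gender comparisons above rest on chi-squared tests of categorical survey responses. A minimal sketch of that test follows; the 2x2 counts are illustrative figures loosely based on the reported income percentages, not the survey's actual data.

```python
# Illustrative chi-squared test on a 2x2 table of categorical survey responses.
# Counts are hypothetical, loosely based on the reported income percentages.
from scipy.stats import chi2_contingency

# Rows: female, male respondents; columns: before-tax income >= $300,000, < $300,000.
table = [[57, 43],
         [83, 17]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```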


Subject(s)
Physicians, Women/statistics & numerical data , Salaries and Fringe Benefits/statistics & numerical data , Sexism/statistics & numerical data , Surgeons/statistics & numerical data , Adult , Age Factors , Female , Health Status , Health Surveys/statistics & numerical data , Humans , Male , Marital Status/statistics & numerical data , Middle Aged , Physicians, Women/economics , Practice Patterns, Physicians'/statistics & numerical data , Societies, Medical/statistics & numerical data , Surgeons/economics , United States , Wounds and Injuries/surgery
4.
Am J Emerg Med ; 37(1): 61-66, 2019 Jan.
Article in English | MEDLINE | ID: mdl-29724580

ABSTRACT

OBJECTIVE: We sought to develop a practical Bedside Score for the diagnosis of cholecystitis and test its accuracy against the Tokyo Guidelines (TG13). METHODS: We conducted a retrospective study of 438 patients undergoing urban, academic Emergency Department (ED) evaluation of right upper quadrant (RUQ) pain. Symptoms, physical signs, ultrasound signs, and laboratory values were scoring system candidates. A random split-sample approach was used to develop and validate a new clinical score. Multivariable regression analysis using development data was conducted to identify predictors of cholecystitis. Cutoff values were chosen to ensure positive/negative predictive values (PPV, NPV) of at least 0.95. The score was externally validated in 80 patients undergoing RUQ pain evaluation at a different hospital. RESULTS: 230 patients (53%) had cholecystitis. Five variables predicted cholecystitis and were included in the score: gallstones, gallbladder wall thickening, clinical or ultrasonographic Murphy's sign, RUQ tenderness, and post-prandial symptoms. A clinical prediction score was developed. When dichotomized at 4, overall accuracy for acute cholecystitis was 90% for the development cohort and 82% and 86% for the internal and external validation cohorts; TG13 accuracy was 62%-79%. CONCLUSIONS: A clinical prediction score for cholecystitis demonstrates accuracy equivalent to TG13. Use of this score may streamline work-up by decreasing the need for comprehensive ultrasound evaluation and C-reactive protein (CRP) measurement and may shorten ED length of stay.
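The abstract reports the five score components and the cutoff of 4 but not the individual point weights, so the sketch below assumes one point per positive finding purely for illustration; the function name and equal weighting are ours, not the published score.

```python
# Sketch of applying a five-variable bedside score. Equal 1-point weights are an
# assumption for demonstration; the published weights are not given in the abstract.
def bedside_cholecystitis_score(gallstones: bool,
                                gb_wall_thickening: bool,
                                murphys_sign: bool,
                                ruq_tenderness: bool,
                                postprandial_symptoms: bool) -> int:
    """Count positive findings, one assumed point each."""
    return sum([gallstones, gb_wall_thickening, murphys_sign,
                ruq_tenderness, postprandial_symptoms])

score = bedside_cholecystitis_score(True, True, True, False, True)
print(score, "suggests acute cholecystitis" if score >= 4 else "below the cutoff of 4")
```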


Subject(s)
Cholecystitis, Acute/diagnosis , Diagnostic Techniques, Digestive System , Emergency Service, Hospital , Gallstones/diagnosis , Point-of-Care Systems , Adult , Cholecystitis, Acute/etiology , Diagnosis, Differential , Diagnostic Techniques, Digestive System/standards , Female , Gallstones/complications , Humans , Male , Middle Aged , Practice Guidelines as Topic , Predictive Value of Tests , Reproducibility of Results , Retrospective Studies , Sensitivity and Specificity , Tokyo
5.
Diabetes Care ; 40(8): 1082-1089, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28611053

ABSTRACT

OBJECTIVE: In this study, we aimed to explore the mechanism by which the TCF7L2 rs7903146 risk allele confers susceptibility to impaired glucose tolerance (IGT) or type 2 diabetes (T2D) in obese adolescents. RESEARCH DESIGN AND METHODS: The rs7903146 variant in the TCF7L2 gene was genotyped in a multiethnic cohort of 955 youths. All subjects underwent an oral glucose tolerance test with the use of the Oral Minimal Model to assess insulin secretion, and 33 subjects underwent a hyperinsulinemic-euglycemic clamp. In 307 subjects, a follow-up oral glucose tolerance test was repeated after 3.11 ± 2.36 years. RESULTS: The TCF7L2 rs7903146 risk allele was associated with higher 2-h glucose levels in Caucasians (P = 0.006) and African Americans (P = 0.009), and a trend was also seen in Hispanics (P = 0.072). In addition, the T allele was associated with decreased β-cell responsivity and IGT (P < 0.05). Suppression of endogenous hepatic glucose production was lower in subjects with the risk variant (P = 0.006). Finally, the odds of showing IGT/T2D at follow-up were higher in subjects carrying the minor allele (odds ratio 2.224; 95% CI 1.370-3.612; P = 0.0012). CONCLUSIONS: The rs7903146 variant in the TCF7L2 gene increases the risk of IGT/T2D in obese adolescents by impairing β-cell function, and hepatic insulin sensitivity predicts the development of IGT/T2D over time.


Subject(s)
Diabetes Mellitus, Type 2/genetics , Pediatric Obesity/genetics , Prediabetic State/genetics , Transcription Factor 7-Like 2 Protein/genetics , Adolescent , Alleles , Body Mass Index , Child , Cohort Studies , Diabetes Mellitus, Type 2/blood , Female , Follow-Up Studies , Genetic Predisposition to Disease , Glucose Intolerance/blood , Glucose Intolerance/genetics , Glucose Tolerance Test , Humans , Insulin/blood , Insulin/metabolism , Insulin Resistance , Insulin Secretion , Insulin-Secreting Cells/metabolism , Liver/metabolism , Longitudinal Studies , Male , Pediatric Obesity/blood , Prediabetic State/blood , Racial Groups/genetics , Risk Factors
6.
Nutr Clin Pract ; 32(2): 175-181, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28107096

ABSTRACT

BACKGROUND: Macronutrient deficiency in critical illness is associated with worse outcomes. We hypothesized that an aggressive enteral nutrition (EN) protocol would result in higher macronutrient delivery and fewer late infections. METHODS: We enrolled adult surgical intensive care unit (ICU) patients receiving >72 hours of EN from July 2012 to June 2014. Our intervention consisted of increasing protein prescription (2.0-2.5 vs 1.5-2.0 g/kg/d) and compensatory feeds for EN interruption. We compared the intervention group with historical controls. To test the association of the aggressive EN protocol with the risk of late infections (defined as occurring >96 hours after ICU admission), we performed a Poisson regression analysis, while controlling for age, sex, body mass index (BMI), Acute Physiology and Chronic Health Evaluation II (APACHE II) score, and exposure to gastrointestinal surgery. RESULTS: The study cohort comprised 213 patients, who were divided into the intervention group (n = 119) and the historical control group (n = 94). There was no difference in age, sex, BMI, admission category, or Injury Severity Score between the groups. Mean APACHE II score was higher in the intervention group (17 ± 8 vs 14 ± 6, P = .002). The intervention group received more calories (19 ± 5 vs 17 ± 6 kcal/kg/d, P = .005) and protein (1.2 ± 0.4 vs 0.8 ± 0.3 g/kg/d, P < .001), had a higher percentage of prescribed calories (77% vs 68%, P < .001) and protein (93% vs 64%, P < .001), and accumulated a lower overall protein deficit (123 ± 282 vs 297 ± 233 g, P < .001). On logistic regression, the intervention group had fewer late infections (adjusted odds ratio, 0.34; 95% confidence interval, 0.14-0.83). CONCLUSIONS: In surgical ICU patients, implementation of an aggressive EN protocol resulted in greater macronutrient delivery and fewer late infections.
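As a concrete illustration of the adjusted analysis described in the methods, the sketch below fits a Poisson model of late-infection counts on a small hypothetical dataset; the dataframe, column names, and values are invented, and only a subset of the stated covariates is shown for brevity.

```python
# Hypothetical data and a Poisson model of late-infection counts; age, sex, and
# GI-surgery exposure would enter the formula the same way as the covariates shown.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "late_infections": [0, 1, 0, 2, 0, 1, 0, 0, 1, 0],
    "intervention":    [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    "apache2":         [17, 14, 19, 12, 16, 15, 18, 13, 20, 11],
    "bmi":             [27, 31, 24, 29, 26, 33, 22, 28, 30, 25],
})

fit = smf.glm("late_infections ~ intervention + apache2 + bmi",
              data=df, family=sm.families.Poisson()).fit()
print(fit.summary())
```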


Subject(s)
Cross Infection/prevention & control , Enteral Nutrition , Malnutrition/therapy , APACHE , Adult , Aged , Body Mass Index , Critical Illness/therapy , Dietary Proteins/administration & dosage , Female , Follow-Up Studies , Humans , Intensive Care Units , Length of Stay , Logistic Models , Male , Middle Aged , Nutritional Status , Prospective Studies , Treatment Outcome
7.
Nutr Clin Pract ; 31(1): 86-90, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26673198

ABSTRACT

BACKGROUND: Calorie/protein deficit in the surgical intensive care unit (SICU) is associated with worse clinical outcomes. It is customary to initiate enteral nutrition (EN) at a low rate and increase to goal (RAMP-UP). Increasing evidence suggests that RAMP-UP may contribute to iatrogenic malnutrition. We sought to determine what proportion of total SICU calorie/protein deficit is attributable to RAMP-UP. MATERIALS AND METHODS: This is a retrospective study of a prospectively collected registry of adult patients (N = 109) receiving at least 72 hours of EN in the SICU according to the RAMP-UP protocol (July 2012-June 2014). Subjects receiving only trophic feeds or with interrupted EN during RAMP-UP were excluded. Deficits were defined as the amount of prescribed calories/protein minus the actual amount received. RAMP-UP deficit was defined as the deficit between EN initiation and arrival at goal rate. Data included demographics, nutritional prescription/delivery, and outcomes. RESULTS: EN was started at a median of 34.0 hours (interquartile range [IQR], 16.5-53.5) after ICU admission, with a mean duration of 8.7 ± 4.3 days. The median total caloric deficit was 2185 kcal (249-4730), with 900 kcal (551-1562) attributable to RAMP-UP (41%). The protein deficit was 98.5 g (27.5-250.4), with 51.9 g (20.6-83.3) caused by RAMP-UP (53%). CONCLUSIONS: In SICU patients initiating EN, the RAMP-UP period accounted for 41% and 53% of the overall caloric and protein deficits, respectively. Starting EN immediately at goal rate may eliminate a significant proportion of macronutrient deficit in the SICU.
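The deficit definitions above are simple arithmetic: deficit equals prescribed minus delivered, with the RAMP-UP share accrued between EN initiation and first arrival at the goal rate. A worked sketch with hypothetical daily values:

```python
# Hypothetical daily delivery against a fixed prescription; goal rate is first
# reached on day 4, and a later interruption adds deficit outside RAMP-UP.
prescribed_kcal_per_day = 1800
delivered_by_day = [600, 1100, 1500, 1800, 1200, 1800, 1800]

goal_day = next(i for i, kcal in enumerate(delivered_by_day)
                if kcal >= prescribed_kcal_per_day)
total_deficit = sum(prescribed_kcal_per_day - kcal for kcal in delivered_by_day)
rampup_deficit = sum(prescribed_kcal_per_day - kcal
                     for kcal in delivered_by_day[:goal_day])

print(f"total deficit: {total_deficit} kcal; "
      f"RAMP-UP share: {100 * rampup_deficit / total_deficit:.0f}%")
```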


Subject(s)
Critical Care/statistics & numerical data , Enteral Nutrition/adverse effects , Intensive Care Units/statistics & numerical data , Malnutrition/etiology , Nutritional Status , Aged , Critical Care/methods , Dietary Proteins/administration & dosage , Dietary Proteins/analysis , Energy Intake , Enteral Nutrition/methods , Female , Humans , Male , Malnutrition/epidemiology , Middle Aged , Prospective Studies , Registries , Retrospective Studies , Time Factors
8.
JPEN J Parenter Enteral Nutr ; 40(1): 37-44, 2016 Jan.
Article in English | MEDLINE | ID: mdl-25926426

ABSTRACT

BACKGROUND: Macronutrient deficit in the surgical intensive care unit (ICU) is associated with worse in-hospital outcomes. We hypothesized that increased caloric and protein deficit is also associated with a lower likelihood of discharge to home vs transfer to a rehabilitation or skilled nursing facility. MATERIALS AND METHODS: Adult surgical ICU patients receiving >72 hours of enteral nutrition (EN) between March 2012 and May 2014 were included. Patients with absolute contraindications to EN, <72-hour ICU stay, moribund state, EN prior to surgical ICU admission, or previous ICU admission within the same hospital stay were excluded. Subjects were dichotomized by cumulative caloric (<6000 vs ≥6000 kcal) and protein deficit (<300 vs ≥300 g). Baseline characteristics and outcomes were compared using Wilcoxon rank and χ² tests. To test the association of macronutrient deficit with discharge destination (home vs other), we performed a logistic regression analysis, controlling for plausible confounders. RESULTS: In total, 213 individuals were included. Nineteen percent in the low-caloric deficit group were discharged home compared with 6% in the high-caloric deficit group (P = .02). Age, body mass index (BMI), Acute Physiology and Chronic Health Evaluation II (APACHE II), and initiation of EN were not significantly different between groups. On logistic regression, adjusting for BMI and APACHE II score, the high-caloric and protein-deficit groups were less likely to be discharged home (odds ratio [OR], 0.28; 95% confidence interval [CI], 0.08-0.96; P = .04 and OR, 0.29; 95% CI, 0.0-0.89; P = .03, respectively). CONCLUSIONS: In surgical ICU patients, inadequate macronutrient delivery is associated with lower rates of discharge to home. Improved nutrition delivery may lead to better clinical outcomes after critical illness.
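A hedged sketch of the discharge-destination analysis follows: a logistic model of discharge home against a high-deficit indicator, adjusting for BMI and APACHE II. The simulated data and column names are hypothetical and only illustrate how the odds ratios and confidence intervals are obtained.

```python
# Simulated cohort; a high cumulative caloric deficit lowers the odds of
# discharge home by construction, so the fitted OR should fall below 1.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "high_deficit": rng.integers(0, 2, n),
    "bmi": rng.normal(28, 5, n),
    "apache2": rng.normal(15, 6, n),
})
linpred = -0.5 - 1.2 * df["high_deficit"] - 0.05 * (df["apache2"] - 15)
df["discharged_home"] = (rng.random(n) < 1 / (1 + np.exp(-linpred))).astype(int)

fit = smf.logit("discharged_home ~ high_deficit + bmi + apache2", data=df).fit(disp=False)
print(np.exp(fit.params))      # odds ratios
print(np.exp(fit.conf_int()))  # 95% confidence intervals on the OR scale
```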


Subject(s)
Dietary Proteins/administration & dosage , Energy Intake , Nutrition Therapy , Patient Discharge , Protein-Energy Malnutrition/prevention & control , APACHE , Aged , Body Mass Index , Critical Illness , Enteral Nutrition , Female , Follow-Up Studies , Hospitalization , Humans , Intensive Care Units , Length of Stay , Logistic Models , Male , Middle Aged , Nutritional Status , Prospective Studies , Treatment Outcome
9.
J Trauma Acute Care Surg ; 79(5): 812-6, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26496106

ABSTRACT

BACKGROUND: Gangrenous cholecystitis (GC) is difficult to diagnose preoperatively in the patient with suspected acute cholecystitis. We sought to characterize preoperative risk factors and post-operative complications. METHODS: Pathology reports of all patients undergoing cholecystectomy for suspected acute cholecystitis from June 2010 to January 2014 and admitted through the emergency department were examined. Patients with GC were compared with those with acute/chronic cholecystitis (AC/CC). Data collected included demographics, preoperative signs and symptoms, radiologic studies, operative details, and clinical outcomes. RESULTS: Thirty-eight cases of GC were identified and compared with 171 cases of AC/CC. Compared with AC/CC, GC patients were more likely to be older (57 years vs. 41 years, p < 0.001), of male sex (63% vs. 31%, p < 0.001), hypertensive (47% vs. 22%, p = 0.002), hyperlipidemic (29% vs. 14%, p = 0.026), and diabetic (24% vs. 8%, p = 0.006). GC patients were more likely to have a fever (29% vs. 12%, p = 0.007) and less likely to have nausea/vomiting (61% vs. 80%, p = 0.019) or an impacted gallstone on ultrasound (US) (8% vs. 26%, p = 0.017). Otherwise, there was no significant difference in clinical or US findings. Among GC patients, US findings were absent (8%, n = 3) or minimal (42%, n = 16). Median time from emergency department registration to US (3.3 hours vs. 2.8 hours, p = 0.28) was similar, but US to operation was longer (41.2 hours vs. 18.4 hours, p < 0.001), conversion to open cholecystectomy was more common (37% vs. 10%, p < 0.001), and hospital stay was longer (median, 4 days vs. 2 days, p < 0.0001). Delay in surgical consultation occurred in 16% of GC patients compared with 1% of AC patients (p < 0.001). CONCLUSION: Demographic features may be predictive of GC. Absent or minimal US signs occur in 50%, and delay in surgical consultation is common. Postoperative morbidity is greater for patients with GC compared with those with AC/CC. LEVEL OF EVIDENCE: Epidemiologic study, level III; therapeutic study, level IV.


Subject(s)
Cholecystitis, Acute/surgery , Cholecystitis/surgery , Delayed Diagnosis/mortality , Postoperative Complications/mortality , Referral and Consultation , Adult , Cholecystectomy/methods , Cholecystectomy/mortality , Cholecystitis/diagnostic imaging , Cholecystitis/mortality , Cholecystitis/pathology , Cholecystitis, Acute/diagnostic imaging , Cholecystitis, Acute/mortality , Cholecystitis, Acute/pathology , Cohort Studies , Female , Gangrene/mortality , Gangrene/pathology , Gangrene/surgery , Humans , Male , Middle Aged , Postoperative Complications/physiopathology , Preoperative Care/methods , Prognosis , Retrospective Studies , Risk Assessment , Severity of Illness Index , Survival Rate , Tomography, X-Ray Computed/methods , Treatment Outcome , Ultrasonography, Doppler/methods
10.
Surg Infect (Larchmt) ; 16(5): 509-12, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26375322

ABSTRACT

BACKGROUND: Ultrasound (US) is the first-line diagnostic study for evaluating gallstone disease and is considered the test of choice for diagnosing acute cholecystitis (AC). However, computed tomography (CT) is used widely for the evaluation of abdominal pain and is often obtained as a first abdominal imaging test, particularly in cases in which typical clinical signs of AC are absent or other possible diagnoses are being considered. We hypothesized that CT is more sensitive than US for diagnosing AC. METHODS: A prospective registry of all urgent cholecystectomies performed by our acute care surgery service between June 2008 and January 2014 was searched for cases of AC. The final diagnosis was based on operative findings and pathology. Patients were classified into two groups according to pre-operative radiographic work-up: US only or CT and US. The US group was compared with the CT and US group with respect to clinical and demographic characteristics. For patients undergoing both tests the sensitivity of the two tests was compared. RESULTS: One hundred one patients with AC underwent both US and CT. Computed tomography was more sensitive than US for the diagnosis of AC (92% versus 79%, p=0.015). Ultrasound was more sensitive than CT for identification of cholelithiasis (87% versus 60%, p<0.01). Patients undergoing both tests prior to surgery were more likely to be older, male, have medical comorbidities, and lack typical clinical signs of AC. CONCLUSIONS: Computed tomography is more sensitive than US for the diagnosis of AC and is most often used in patients without typical clinical signs of AC.
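Because every patient in this comparison underwent both tests, the sensitivities are compared on paired data. The sketch below uses a hypothetical 2x2 agreement table whose margins reproduce the reported 92% and 79% sensitivities (the discordant split itself is an assumption) and applies McNemar's test.

```python
# Hypothetical paired results among 101 confirmed acute cholecystitis cases.
# Rows: CT positive / CT negative; columns: US positive / US negative.
from statsmodels.stats.contingency_tables import mcnemar

table = [[75, 18],
         [5, 3]]

n = sum(sum(row) for row in table)
ct_sensitivity = sum(table[0]) / n                    # CT detected 93/101 ≈ 0.92
us_sensitivity = (table[0][0] + table[1][0]) / n      # US detected 80/101 ≈ 0.79
result = mcnemar(table, exact=True)                   # exact test on the 18 vs 5 discordant pairs
print(f"CT {ct_sensitivity:.2f}, US {us_sensitivity:.2f}, McNemar p = {result.pvalue:.3f}")
```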


Subject(s)
Cholecystitis, Acute/diagnosis , Diagnostic Tests, Routine/methods , Gallbladder/diagnostic imaging , Radiography, Abdominal/methods , Tomography, X-Ray Computed/methods , Ultrasonography/methods , Adult , Female , Humans , Male , Middle Aged , Prospective Studies , Sensitivity and Specificity
11.
J Surg Res ; 198(2): 346-50, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26187187

ABSTRACT

BACKGROUND: Enteral nutrition (EN) delivery in the surgical intensive care unit (ICU) is often suboptimal because it is commonly interrupted for procedures. We hypothesized that continuing perioperative nutrition or providing compensatory nutrition would improve caloric delivery without increasing morbidity. MATERIALS AND METHODS: We enrolled 10 adult surgical ICU patients receiving EN who were scheduled for elective bedside percutaneous tracheostomy. In these patients (fed group), either perioperative EN was maintained or compensatory nutrition was provided. We compared the amount of calories delivered, caloric deficits, and the rate of complications of these patients with those of 22 contemporary controls undergoing tracheostomy while adhering to the traditional American Society of Anesthesiologists nil per os guidelines (unfed group). We defined caloric deficit as the difference between prescribed calories and actual delivered calories. RESULTS: There was no difference in demographic characteristics between the two groups. On the day of the procedure, the fed group had higher median delivered calories (1706 kcal; interquartile range [IQR], 1481-2009 versus 588 kcal; IQR, 353-943; P < 0.0001) and received a higher percentage of prescribed calories (92%; IQR, 82%-97% versus 34%; IQR, 24%-51%; P < 0.0001). Median caloric deficit on the day of the procedure was significantly lower in the fed group (175 kcal; IQR, 49-340 versus 1133 kcal; IQR, 660-1365; P < 0.0001). There were no differences in total overall ICU complications per patient, gastrointestinal complications on the day of the procedure, or total infectious complications per patient between the two groups. CONCLUSIONS: In our pilot study, perioperative and compensatory nutrition resulted in higher caloric delivery and was not associated with increased morbidity.


Subject(s)
Critical Care/methods , Enteral Nutrition/statistics & numerical data , Perioperative Care/methods , Tracheostomy , Aged , Critical Care/statistics & numerical data , Female , Humans , Intensive Care Units/statistics & numerical data , Male , Middle Aged , Pilot Projects
12.
J Trauma Acute Care Surg ; 78(2): 265-70; discussion 270-1, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25757110

ABSTRACT

BACKGROUND: Graduated driving licensing (GDL) programs phase in driving privileges for teenagers. We aimed to evaluate the effect of the 2007 GDL law on the incidence of total motor vehicle crashes (tMVCs) and fatal motor vehicle crashes (fMVCs) among teenagers in Massachusetts. METHODS: The Fatality Analysis Reporting System, the Missouri Census Data Center, and the Massachusetts Department of Transportation databases were used to compare the incidence of tMVCs and fMVCs before (2002-2006) and after (2007-2011) enactment of the law. The following three driver age groups were studied: 16 years to 17 years (evaluating the law effect), 18 years to 20 years (evaluating the sustainability of the effect), and 25 years to 29 years (control group). As a sensitivity analysis, we compared the incidence rates per population and per licenses issued. RESULTS: tMVCs decreased following the law for all three age groups (16-17 years, from 7.6 to 4.8 per 1,000 people, p < 0.0001; 18-20 years, from 8.5 to 6.4 per 1,000 people, p < 0.0001; 25-29 years, from 6.2 to 5.2 per 1,000 people, p < 0.0001), but the percentage decrease in tMVC rates was smaller in the control group (37%, 25%, and 15%, respectively; both p's < 0.0001). The rates of fMVC also decreased in the age groups of 16 years to 17 years (from 14.0 to 8.6 per 100,000 people, p = 0.0006), 18 years to 20 years (from 21.2 to 13.7 per 100,000 people, p < 0.0001), and 25 years to 29 years (from 14.4 to 11.0 per 100,000 people, p < 0.0001). All of these results were confirmed in the sensitivity analyses. CONCLUSION: The 2007 Massachusetts GDL law was associated with a decreased incidence of tMVCs and fMVCs among teenagers, and the effect was sustained. This study provides further support to develop, implement, enforce, and maintain GDL programs aimed at preventing MVCs and their related mortality in the young novice driver population. LEVEL OF EVIDENCE: Epidemiologic/prognostic study, level III.
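The incidence rates above are crashes per population (or per licenses issued, in the sensitivity analysis). A minimal sketch of that arithmetic, with the pre/post crash counts and population chosen to reproduce the reported rates for 16- to 17-year-old drivers and purely illustrative otherwise:

```python
# Crash counts and population are illustrative; the rates reproduce the reported
# 7.6 -> 4.8 per 1,000 change for 16- to 17-year-old drivers.
def rate_per(events: int, denominator: int, per: int = 1_000) -> float:
    return events / denominator * per

crashes_before, crashes_after = 7_600, 4_800
population = 1_000_000

rate_before = rate_per(crashes_before, population)
rate_after = rate_per(crashes_after, population)
pct_decrease = 100 * (rate_before - rate_after) / rate_before
print(f"{rate_before:.1f} -> {rate_after:.1f} per 1,000 people ({pct_decrease:.0f}% decrease)")
```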


Subject(s)
Accidents, Traffic/prevention & control , Automobile Driver Examination/legislation & jurisprudence , Automobile Driving/legislation & jurisprudence , Licensure/legislation & jurisprudence , Accidents, Traffic/statistics & numerical data , Adolescent , Female , Humans , Male , Massachusetts/epidemiology , Young Adult
13.
Am Surg ; 81(3): 282-8, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25760205

ABSTRACT

Existing data on the position of the base of the appendix with advancing gestational age are inadequate; the proper location for an appendectomy incision during pregnancy is therefore unclear. This study investigated the location of the appendix during pregnancy, relative to McBurney's point, to determine the optimal incision site in pregnant patients with appendicitis. Magnetic resonance images (MRIs) were reviewed independently by two fellowship-trained abdominal MRI radiologists blinded to the imaging report. The distance of the appendix from anatomic landmarks was measured in a total of 114 pregnant women with an abdominal or pelvic MRI who were admitted between 2001 and 2011 at a Level I trauma center. Patients with a history of appendectomy were excluded. The distance from the base of the appendix to McBurney's point changed by only 1.2 cm over the course of gestation, which did not amount to a clinically or statistically significant change in position. Our data provide evidence that there is minimal upward or lateral displacement of the appendix during pregnancy, and therefore its distance from McBurney's point remains essentially unchanged. These findings justify the use of McBurney's incision for appendectomy during pregnancy regardless of trimester.


Subject(s)
Appendectomy/methods , Appendix/pathology , Appendix/surgery , Pregnancy Complications/pathology , Pregnancy Complications/surgery , Adolescent , Adult , Body Weights and Measures , Female , Humans , Magnetic Resonance Imaging , Pregnancy , Pregnancy Trimesters , Retrospective Studies , Young Adult
14.
J Emerg Trauma Shock ; 8(1): 34-8, 2015.
Article in English | MEDLINE | ID: mdl-25709251

ABSTRACT

BACKGROUND: Excessive crystalloid administration is common and associated with negative outcomes in critically ill trauma patients. Continuous furosemide infusion (CFI) to remove excessive fluid has not been previously described in this population. We hypothesized that a goal-directed CFI is more effective for fluid removal than intermittent bolus injection (IBI) diuresis without excess incidence of hypokalemia or renal failure. MATERIALS AND METHODS: CFI cases were prospectively enrolled between November 2011 and August 2012, and matched to historic IBI controls by age, gender, Injury Severity Score (ISS), and net fluid balance (NFB) at diuresis initiation. Paired and unpaired analyses were performed to compare groups. The primary endpoints were net fluid balance, potassium and creatinine levels. Secondary endpoints included intensive care unit (ICU) and hospital length of stay (LOS), ventilator-free days (VFD), and mortality. RESULTS: 55 patients were included, with 19 cases and 36 matched controls. Mean age was 54 years, mean ISS was 32.7, and mean initial NFB was +7.7 L. After one day of diuresis with CFI vs. IBI, net 24 h fluid balance was negative (-0.55 L vs. +0.43 L, P = 0.026) only for the CFI group, and there was no difference in potassium and creatinine levels. Cumulative furosemide dose (59.4 mg vs. 25.4 mg, P < 0.001) and urine output (4.2 L vs. 2.8 L, P < 0.001) were also significantly higher with CFI vs. IBI. There were no statistically significant differences in ICU LOS, hospital LOS, VFD, or mortality. CONCLUSIONS: Compared to IBI, goal-directed diuresis by CFI is more successful in achieving a net negative fluid balance in patients with fluid overload, with no detrimental side effects on renal function or patient outcome.
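A small sketch of the paired and unpaired comparisons mentioned in the methods, using hypothetical 24-hour net fluid balance values for matched CFI/IBI pairs rather than the study's measurements:

```python
# Hypothetical 24-hour net fluid balance (litres) for matched CFI/IBI pairs.
from scipy.stats import ttest_rel, ttest_ind

cfi_nfb = [-0.8, -0.3, -1.1, 0.2, -0.6, -0.4, -0.9, -0.2]
ibi_nfb = [0.6, 0.3, -0.1, 0.9, 0.4, 0.2, 0.7, 0.5]

stat_paired, p_paired = ttest_rel(cfi_nfb, ibi_nfb)      # matched-pair comparison
stat_unpaired, p_unpaired = ttest_ind(cfi_nfb, ibi_nfb)  # unpaired group comparison
print(f"paired p = {p_paired:.4f}, unpaired p = {p_unpaired:.4f}")
```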

15.
J Trauma Acute Care Surg ; 76(2): 424-30, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24458048

ABSTRACT

BACKGROUND: Of the patients with a Clostridium difficile infection, 2% to 8% will progress to fulminant C. difficile colitis (fCDC), which carries high morbidity and mortality. No system exists to rapidly identify patients at risk for developing fCDC and possibly in need of surgical intervention. Our aim was to design a simple and accurate risk scoring system (RSS) for daily clinical practice. METHODS: We prospectively enrolled all patients diagnosed with a C. difficile infection and compared patients with and without fCDC. An expert panel, combined with data derived from previous studies, identified four risk factors, and a multivariable logistic regression model was performed to determine their effect in predicting fCDC. The RSS was created based on the predictive power of each factor, and calibration, discrimination, and test characteristics were subsequently determined. In addition, the RSS was compared with a previously proposed severity scoring system. RESULTS: A total of 746 patients diagnosed with C. difficile infection were enrolled between November 2010 and October 2012. Based on the log (odds ratio) of each risk factor, age greater than 70 years was assigned 2 points, white blood cell count equal to or greater than 20,000/µL or equal to or less than 2,000/µL was assigned 1 point, cardiorespiratory failure was assigned 7 points, and diffuse abdominal tenderness on physical examination was assigned 6 points. With the use of this system, the discriminatory value of the RSS (c statistic) was 0.98 (95% confidence interval, 0.96-1). The Hosmer-Lemeshow goodness-of-fit test showed a p value of 0.78, and the Brier score was 0.019. A value of 6 points was determined to be the threshold for reliably dividing low-risk (<6) from high-risk (≥6) patients. CONCLUSION: The RSS is a valid and reliable bedside tool to identify patients who are at risk for developing fCDC. External validation is needed before widespread implementation. LEVEL OF EVIDENCE: Prognostic study, level II.
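Because the abstract reports the point weights and the 6-point threshold explicitly, the scoring rule can be written down directly; the function and argument names below are ours, not the authors'.

```python
# Point weights and threshold as reported in the abstract; names are ours.
def fcdc_risk_score(age_years: float, wbc_per_ul: float,
                    cardiorespiratory_failure: bool,
                    diffuse_abdominal_tenderness: bool) -> int:
    score = 0
    if age_years > 70:
        score += 2
    if wbc_per_ul >= 20_000 or wbc_per_ul <= 2_000:
        score += 1
    if cardiorespiratory_failure:
        score += 7
    if diffuse_abdominal_tenderness:
        score += 6
    return score

score = fcdc_risk_score(age_years=76, wbc_per_ul=24_000,
                        cardiorespiratory_failure=False,
                        diffuse_abdominal_tenderness=True)
print(score, "high risk" if score >= 6 else "low risk")   # 9 -> high risk
```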


Subject(s)
Clostridioides difficile/isolation & purification , Enterocolitis, Pseudomembranous/diagnosis , Enterocolitis, Pseudomembranous/epidemiology , Risk Assessment/methods , Severity of Illness Index , Adult , Age Distribution , Analysis of Variance , Anti-Bacterial Agents/therapeutic use , Cohort Studies , Disease Progression , Enterocolitis, Pseudomembranous/drug therapy , Female , Humans , Incidence , Logistic Models , Male , Middle Aged , Multivariate Analysis , Predictive Value of Tests , Prognosis , Prospective Studies , Reproducibility of Results , Sex Distribution , Statistics, Nonparametric , Survival Analysis , Treatment Outcome
16.
J Trauma Acute Care Surg ; 75(3): 398-403, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23928742

ABSTRACT

BACKGROUND: Therapeutic angioembolization is a relatively new "rescue treatment" modality for gastrointestinal hemorrhage (GIH) for unstable patients who fail primary treatment approaches; however, the effectiveness of this treatment and the incidence of ischemic necrosis following embolization for acute GIH are poorly described. The purpose of this study was to evaluate the effectiveness and safety of "rescue" transcatheter superselective angioembolization (SSAE) for the treatment of hemodynamically unstable patients with GIH. METHODS: A 10-year retrospective review of all hemodynamically unstable patients (systolic blood pressure < 90 mm Hg and ongoing transfusion requirement) who underwent "rescue" SSAE for GIH after failed endoscopic management was performed. All patients with evidence of active contrast extravasation were included. Data were collected on demographics, comorbidities, clinical presentation, and type of intravascular angioembolic agent used. Outcomes included technical success (cessation of extravasation), clinical success (no rebleeding requiring intervention within 30 days), and incidence of ischemic complications. RESULTS: Ninety-eight patients underwent SSAE for GIH during the study period; 47 were excluded owing to lack of active contrast extravasation. Of the remaining 51 patients, 22 (43%) presented with a lower GIH and 29 (57%) with upper GIH. The majority underwent embolization with a permanent agent (71%), while the remaining patients received either a temporary agent (16%) or a combination (14%). The overall technical and clinical success rates were 98% and 71%, respectively. Of the 14 patients with technical success but clinical failure (rebleeding within 30 days) and the 1 patient with technical failure, 4 were managed successfully with reembolization, while 2 underwent successful endoscopic therapy, and 9 had surgical resections. Only one patient had an ischemic complication (small bowel necrosis) requiring resection. CONCLUSION: SSAE, with reembolization if necessary, is an effective rescue treatment modality for hemodynamically unstable patients with active GIH. Of the patients, 20% will fail SSAE and require additional intervention. Ischemic complications are extremely rare. LEVEL OF EVIDENCE: Therapeutic study, level IV.


Subject(s)
Embolization, Therapeutic , Gastrointestinal Hemorrhage/therapy , Embolization, Therapeutic/adverse effects , Female , Gastrointestinal Hemorrhage/mortality , Gastrointestinal Hemorrhage/physiopathology , Gastrointestinal Tract/blood supply , Hemodynamics/physiology , Humans , Ischemia/etiology , Male , Middle Aged , Patient Outcome Assessment , Retrospective Studies
17.
J Spec Oper Med ; 13(1): 29-33, 2013.
Article in English | MEDLINE | ID: mdl-23526319

ABSTRACT

INTRODUCTION: The usefulness of heart rate variability (HRV) and heart rate complexity (HRC) analysis as a potential triage tool has been limited by the inability to perform real-time analysis on a portable, handheld monitoring platform. Through a multidisciplinary effort of academia and industry, we report on the development of a rugged, handheld, and noninvasive device that provides HRV and HRC analysis in real time in critically ill patients. METHODS: After extensive re-engineering, real-time HRV and HRC analyses were incorporated into an existing, rugged, handheld monitoring platform. Following IRB approval, the prototype device was used to monitor 20 critically ill patients and 20 healthy controls to demonstrate real-world discriminatory potential. Patients were compared to healthy controls using a Student's t test as well as repeated measures analysis. Receiver operating characteristic (ROC) curves were generated for HRV and HRC. RESULTS: Critically ill patients had a mean APACHE-2 score of 15, and over 50% were mechanically ventilated and requiring vasopressor support. HRV and HRC were both lower in the critically ill patients compared to healthy controls (p < 0.0001) and remained so after repeated measures analysis. The area under the ROC curve for HRV and HRC was 0.95 and 0.93, respectively. CONCLUSIONS: This is the first demonstration of real-time, handheld HRV and HRC analysis. This prototype device successfully discriminates critically ill patients from healthy controls. This may open up possibilities for real-world use as a trauma triage tool, particularly on the battlefield.
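A hedged sketch of the ROC analysis described above: computing the area under the curve for a continuous index such as HRV against case/control status. The values below are synthetic and merely illustrate the computation; lower HRV is scored as "more ill" by negating the index.

```python
# Synthetic HRV values: critically ill patients drawn lower than healthy controls.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
labels = np.r_[np.ones(20), np.zeros(20)]                   # 1 = critically ill
hrv = np.r_[rng.normal(10, 3, 20), rng.normal(25, 5, 20)]   # ill group lower

auc = roc_auc_score(labels, -hrv)   # negate so a higher score means "more ill"
print(f"AUC = {auc:.2f}")
```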


Subject(s)
Critical Illness , Heart Rate , Entropy , Humans , Monitoring, Physiologic , Triage