1.
Ann Surg ; 274(6): e957-e965, 2021 12 01.
Article in English | MEDLINE | ID: mdl-31714315

ABSTRACT

OBJECTIVE: To determine whether persistent opioid use after injury is associated with subsequent long-term development of clinically recognized opioid abuse. SUMMARY BACKGROUND DATA: Opioid abuse is an epidemic in the United States and trauma can initiate persistent use; however, it remains unclear whether persistent opioid use contributes to the subsequent development of opioid abuse. The care of combat casualties by the Departments of Defense and Veterans Affairs uniquely allows investigation of this long-term outcome. METHODS: This retrospective cohort study randomly selected 10,000 battle-injured United States military personnel. We excluded patients who died during initial hospitalization or within 180 days of discharge, had a preinjury opioid abuse diagnosis, or had missing data in a preselected variable. We defined persistent opioid use as filling an opioid prescription 3 to 6 months after discharge and recorded clinically recognized opioid abuse using relevant diagnosis codes. RESULTS: After exclusion, 9284 subjects were analyzed, 2167 (23.3%) of whom developed persistent opioid use. During a median follow-up time of 8 years, 631 (6.8%) patients developed clinically recognized opioid abuse with a median time to diagnosis of 3 years. Injury severity and discharge opioid prescription amount were associated with persistent opioid use after trauma. After adjusting for patient and injury-specific factors, persistent opioid use was associated with the long-term development of clinically recognized opioid abuse (adjusted hazard ratio, 2.39; 95% confidence interval, 1.99-2.86). CONCLUSIONS: Nearly a quarter of patients filled an opioid prescription 3 to 6 months after discharge, and this persistent use was associated with long-term development of opioid abuse.
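As a quick sanity check, the headline proportions can be reproduced from the counts reported in the abstract (a sketch using only figures given in the text; the adjusted hazard ratio itself requires the underlying patient-level data):

```python
# Counts reported in the abstract (no patient-level data involved).
analyzed = 9284          # subjects remaining after exclusions
persistent_use = 2167    # filled an opioid prescription 3-6 months post-discharge
abuse_diagnosed = 631    # later received a clinically recognized abuse diagnosis

persistent_rate = persistent_use / analyzed
abuse_rate = abuse_diagnosed / analyzed

print(f"{persistent_rate:.1%}")  # 23.3%, matching the reported figure
print(f"{abuse_rate:.1%}")       # 6.8%, matching the reported figure
```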


Subject(s)
Analgesics, Opioid/therapeutic use , Military Personnel , Opioid-Related Disorders/epidemiology , Wounds and Injuries/drug therapy , Adult , Female , Humans , Male , Retrospective Studies , Risk Factors , United States/epidemiology
2.
Endocr Pract ; 25(9): 935-942, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31170363

ABSTRACT

Objective: The clinical benefit of adding a glucagon-like peptide-1 receptor agonist (GLP-1RA) to basal-bolus or very high dose insulin regimens is unclear. This study investigated the effect of adding a GLP-1RA to a spectrum of insulin regimens (basal, basal-bolus, and U-500) on hemoglobin A1c (HbA1c), weight loss, and total daily insulin dose (TDD) over the course of 12 months. Methods: A retrospective chart review was conducted on 113 participants with type 2 diabetes mellitus using insulin therapy. Each participant's HbA1c, body weight, and TDD were recorded prior to initiation of GLP-1RA therapy and at the 3-, 6-, and 12-month time points while on combination therapy. Results: Across all participants, the HbA1c values decreased significantly from a baseline of 8.9 (74 mmol/mol) ± 0.14% to 8.2 (66 mmol/mol) ± 0.14% (P<.01) at 3 months, 8.0 (64 mmol/mol) ± 0.12% (P<.01) at 6 months, and 8.3 (67 mmol/mol) ± 0.14% (P<.01) at 12 months. There was no statistically significant decrease in weight or TDD with the addition of a GLP-1RA overall or in different insulin groups. However, there was a clinically significant decrease in weight over the study duration. Conclusion: The results of this study suggest that adding a GLP-1RA to various insulin regimens may help to achieve glycemic goals while avoiding the less desirable side effects of weight gain and escalating insulin doses. However, the expected weight loss and decrease in TDD may not be as sizable in the clinical setting. Abbreviations: DCOE = Diabetes Center of Excellence; DM = diabetes mellitus; GLP-1RA = glucagon-like peptide-1 receptor agonist; HbA1c = hemoglobin A1c; RCT = randomized controlled trial; TDD = total daily dose.
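The paired percent/mmol-per-mol values quoted in this abstract follow the standard NGSP-to-IFCC master equation; a minimal sketch (the equation is the published standard; rounding to whole mmol/mol is ours):

```python
def ngsp_to_ifcc(hba1c_percent: float) -> float:
    """Convert HbA1c from NGSP % to IFCC mmol/mol via the master equation."""
    return (hba1c_percent - 2.15) * 10.929

# The four mean HbA1c values reported in the abstract:
for pct in (8.9, 8.2, 8.0, 8.3):
    print(f"{pct}% -> {round(ngsp_to_ifcc(pct))} mmol/mol")
# 8.9 -> 74, 8.2 -> 66, 8.0 -> 64, 8.3 -> 67, matching the abstract
```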


Subject(s)
Diabetes Mellitus, Type 2 , Insulin/therapeutic use , Glucagon-Like Peptide-1 Receptor , Glycated Hemoglobin , Humans , Hypoglycemic Agents , Retrospective Studies
3.
Intern Emerg Med ; 13(8): 1239-1247, 2018 Dec.
Article in English | MEDLINE | ID: mdl-29502329

ABSTRACT

Emergency department (ED) providers have limited time to evaluate patients at risk for opioid misuse. A validated tool to assess the risk for aberrant opioid behavior may mitigate adverse sequelae associated with prescription opioid misuse. We sought to determine whether SOAPP-R, COMM, and provider gestalt were able to identify patients at risk for prescription opioid misuse as determined by pharmacy records at 12 months. We conducted a prospective observational study of adult patients in a high-volume US ED. Patients completed the SOAPP-R and COMM, and treating EM providers evaluated patients' opioid misuse risk. We performed variable-centered, person-centered, and hierarchical cluster analyses to determine whether provider gestalt, SOAPP-R, COMM, or a combination of these predicted higher misuse risk. The primary outcome was the number of opioid prescriptions at 12 months according to pharmacy records. For 169 patients (mean age 43 years, 51% female, 73% white), correlation analysis showed a strong relationship between SOAPP-R and COMM scores and the number of opioid prescriptions dispensed at 12 months. Provider scores estimating opioid misuse were not related to SOAPP-R and only weakly associated with COMM. In our adjusted regression models, provider gestalt and SOAPP-R uniquely predicted opioid prescriptions at 6 and 12 months. Using designated cutoff scores, only SOAPP-R detected a difference in the number of opioid prescriptions. Cluster analysis revealed that provider gestalt, SOAPP-R, and COMM scores jointly predicted opioid prescriptions. Provider gestalt and self-report instruments uniquely predicted the number of opioid prescriptions in ED patients. A combination of gestalt and self-assessment scores can be used to identify at-risk patients who would otherwise be missed by the cutoff scores for SOAPP-R and COMM.


Subject(s)
Emergency Service, Hospital/trends , Mass Screening/methods , Opioid-Related Disorders/diagnosis , Pain Measurement/standards , Adult , Analgesics, Opioid/adverse effects , Analgesics, Opioid/therapeutic use , Chi-Square Distribution , Emergency Service, Hospital/organization & administration , Female , Humans , Male , Mass Screening/statistics & numerical data , Middle Aged , Opioid-Related Disorders/epidemiology , Pain/drug therapy , Pain Measurement/methods , Prospective Studies , Risk Assessment/methods , Statistics, Nonparametric
4.
Am J Ther ; 24(2): e150-e156, 2017.
Article in English | MEDLINE | ID: mdl-26963723

ABSTRACT

Opioid misuse is a growing epidemic in both civilian and military communities. Five hundred prospective, anonymous surveys were collected in the emergency department waiting room of a military tertiary care hospital over 3 weeks. Demographic, medical, and military characteristics were investigated for association with opioid use. Univariate logistic models were used to characterize the probability of misuse in relation to the demographic, medical, and military-specific variables. Traumatic brain injury (TBI) and posttraumatic stress disorder were investigated within different age cohorts with adjustment for deployment. The self-disclosed opioid misuse rate was 31%. Subjects with TBI were less likely to misuse opioids. We found a trend among younger cohorts toward a higher likelihood of misusing opioids when diagnosed with TBI or posttraumatic stress disorder with a history of deployment in the past 5 years. The most common form of misuse was using a previously prescribed medication for a new pain. Traumatic brain injury and/or enrollment in post-deployment recovery programs may be protective against opioid misuse. Chronic opioid use among young soldiers may be viewed as a weakness, which could influence opioid misuse. Younger cohorts of active duty service members could be at higher risk for misuse. Efforts to enhance close monitoring of misuse should address these at-risk populations.
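For readers unfamiliar with the univariate logistic models mentioned here, a minimal sketch of how an intercept on the log-odds scale maps to the reported 31% misuse rate (the intercept is back-derived purely for illustration; the study's fitted coefficients are not given in the abstract):

```python
import math

def logistic(x: float) -> float:
    """Standard logistic (inverse-logit) function used in logistic regression."""
    return 1.0 / (1.0 + math.exp(-x))

# Intercept chosen so the baseline probability equals the reported 31%
# misuse rate; illustrative only, not a fitted coefficient from the study.
b0 = math.log(0.31 / 0.69)
print(round(logistic(b0), 2))  # 0.31
```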


Subject(s)
Analgesics, Opioid/therapeutic use , Brain Injuries, Traumatic/epidemiology , Military Personnel/statistics & numerical data , Opioid-Related Disorders/epidemiology , Pain/drug therapy , Prescription Drug Misuse/statistics & numerical data , Stress Disorders, Post-Traumatic/epidemiology , War-Related Injuries/epidemiology , Adult , Age Factors , Cohort Studies , Female , Humans , Logistic Models , Male , Middle Aged , Prospective Studies , Risk Factors , Self Report , Surveys and Questionnaires , United States/epidemiology , Young Adult
5.
Ann Emerg Med ; 67(2): 196-205.e3, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26164644

ABSTRACT

STUDY OBJECTIVE: Diphenhydramine is a moderately lipophilic antihistamine with sodium channel blockade properties. It is consumed recreationally for mild hallucinogenic and hypnotic effects and causes dysrhythmias, seizures, and death with overdose. Intravenous lipid emulsion is a novel agent used to treat lipophilic drug overdose. Two case reports describe clinical improvement with intravenous lipid emulsion after diphenhydramine toxicity, but no prospective studies have been reported. Our objective is to determine whether intravenous lipid emulsion improved hypotension compared with sodium bicarbonate for severe diphenhydramine toxicity in a model of critically ill swine. METHODS: Twenty-four swine weighing 45 to 55 kg were infused with diphenhydramine at 1 mg/kg per minute until the mean arterial pressure reached 60% of baseline. Subjects were randomized to receive intravenous lipid emulsion (bolus of 7 mL/kg and then 0.25 mL/kg per minute) or sodium bicarbonate (2 mEq/kg plus an equal volume of normal saline solution). We measured pulse rate, systolic blood pressure, mean arterial pressure, cardiac output, QRS interval, and serum diphenhydramine level. Twelve animals per group provided a power of 0.8 and α of .05 to detect a 50% difference in mean arterial pressure. We assessed differences between groups with a repeated-measures linear model (MIXED) and Kaplan-Meier estimation methods. We compared systolic blood pressure, mean arterial pressure, and cardiac output with repeated measures ANOVA. RESULTS: Baseline weight, hemodynamic parameters, QRS interval, time to hypotension, and diphenhydramine dose required to achieve hypotension were similar between groups. 
After hypotension was reached, there was no overall difference between intravenous lipid emulsion and sodium bicarbonate groups for cardiac output or QRS intervals; however, there were transient differences in mean arterial pressure and systolic blood pressure, favoring intravenous lipid emulsion (difference: mean arterial pressure, sodium bicarbonate versus intravenous lipid emulsion -20.7 [95% confidence interval -31.6 to -9.8]; systolic blood pressure, sodium bicarbonate versus intravenous lipid emulsion -24.8 [95% confidence interval -37.6 to -12.1]). Time to death was similar. One intravenous lipid emulsion and 2 sodium bicarbonate pigs survived. End-of-study mean total serum diphenhydramine levels were similar. The mean lipid layer diphenhydramine level was 6.8 µg/mL (SD 3.1 µg/mL) and mean aqueous layer level 8.6 µg/mL (SD 5.5 µg/mL). CONCLUSION: In our study of diphenhydramine-induced hypotensive swine, we found no difference in hypotension, QRS widening, or diphenhydramine levels in aqueous layers between intravenous lipid emulsion and sodium bicarbonate.
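The weight-based doses in the methods translate into absolute amounts as follows; the 50 kg weight is an assumed example within the study's 45 to 55 kg range, and the per-kilogram figures come from the protocol described above:

```python
weight_kg = 50  # assumed example weight within the study's 45-55 kg range

ile_bolus_ml = 7 * weight_kg                # intravenous lipid emulsion bolus
ile_infusion_ml_per_min = 0.25 * weight_kg  # follow-on infusion rate
bicarb_meq = 2 * weight_kg                  # sodium bicarbonate dose

print(ile_bolus_ml, ile_infusion_ml_per_min, bicarb_meq)  # 350 12.5 100
```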


Subject(s)
Diphenhydramine/toxicity , Fat Emulsions, Intravenous/pharmacology , Hypotension/chemically induced , Hypotension/drug therapy , Animals , Disease Models, Animal , Female , Hemodynamics , Pilot Projects , Sodium Bicarbonate/pharmacology , Swine
7.
Mil Med ; 179(5): 462-70, 2014 May.
Article in English | MEDLINE | ID: mdl-24806489

ABSTRACT

Overweight and obesity prevalence has increased over the past 30 years. Few studies have looked at the enrolled Military Health System (MHS) population (2.2 million per year). This descriptive study examined trends in overweight and obesity in both children and adults from fiscal years 2009 to 2012 and compared them to the U.S. population. Prevalence in MHS children decreased over time for overweight (14.2-13.8%) and obesity (11.7-10.9%). Active duty adults showed an increase in overweight prevalence (52.7-53.4%) and a decrease in obesity prevalence (18.9-18.3%). For nonactive duty, both overweight and obesity prevalence remained relatively unchanged around 33%. For both children and adults, overweight and obesity prevalence increased with age, except for obesity in the nonactive duty ≥ 65 subgroup. When compared to the United States by gender and age, MHS children generally had a lower overweight and obesity prevalence, active duty adults had higher overweight and lower obesity prevalence, and nonactive duty adults had comparable overweight and obesity prevalence, except for obesity in both men in the 40 to 59 subgroup and women in the ≥ 60 subgroup. More research on the MHS population is needed to identify risk factors and modifiable health behaviors that could help combat obesity.


Subject(s)
Military Personnel/statistics & numerical data , Obesity/epidemiology , Adolescent , Adult , Aged , Body Mass Index , Child , Child, Preschool , Female , Humans , Male , Middle Aged , Overweight/epidemiology , Prevalence , United States/epidemiology , Young Adult
8.
J Med Toxicol ; 10(4): 364-8, 2014 Dec.
Article in English | MEDLINE | ID: mdl-24844460

ABSTRACT

Simulation-based teaching (SIM) is a common method for medical education. SIM exposes residents to uncommon scenarios that require critical, timely actions. SIM may be a valuable training method for critically ill poisoned patients whose diagnosis and treatment depend on key clinical findings. Our objective was to compare medical simulation (SIM) to traditional lecture-based instruction (LEC) for training emergency medicine (EM) residents in the acute management of critically ill poisoned patients. EM residents completed two pre-intervention questionnaires: (1) a 24-item multiple-choice test of four toxicological emergencies and (2) a questionnaire using a five-point Likert scale to rate the residents' comfort level in diagnosing and treating patients with specific toxicological emergencies. After completing the pre-intervention questionnaires, residents were randomized to SIM or LEC instruction. Two toxicologists and three EM physicians presented four toxicology topics to both groups in four 20-min sessions. One group was in the simulation center, and the other in a lecture hall. Each group then repeated the multiple-choice test and questionnaire immediately after instruction and again at 3 months after training. Answers were not discussed. The primary outcome was comparison of immediate mean post-intervention test scores and final scores 3 months later between SIM and LEC groups. Test score outcomes between groups were compared at each time point (pre-test, post-instruction, 3-month follow-up) using Wilcoxon rank sum test. Data were summarized by descriptive statistics. Continuous variables were characterized by means (SD) and tested using t tests or Wilcoxon rank sum. Categorical variables were summarized by frequencies (%) and compared between training groups with chi-square or Fisher's exact test. Thirty-two EM residents completed pre- and post-intervention tests and comfort questionnaires on the study day. 
Both groups had higher post-intervention mean test scores (p < 0.001), but the LEC group showed a greater improvement compared to the SIM group (5.6 [2.3] points vs. 3.6 [2.4], p = 0.02). At the 3-month follow-up, 24 (75%) tests and questionnaires were completed. There was no improvement in 3-month mean test scores in either group compared to immediate post-test scores. The SIM group had higher final mean test scores than the LEC group (16.6 [3.1] vs. 13.3 [2.2], p = 0.009). SIM and LEC groups reported similar diagnosis and treatment comfort level scores at baseline and improved equally after instruction. At 3 months, there was no difference between groups in comfort level scores for diagnosis or treatment. Lecture-based teaching was more effective than simulation-based instruction immediately after intervention. At 3 months, the SIM group showed greater retention than the LEC group. Resident comfort levels for diagnosis and treatment were similar regardless of the type of education.
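To put the raw scores in context, the assessment was a 24-item exam, so the final group means convert to percent-correct as follows (arithmetic on the reported means only):

```python
TOTAL_ITEMS = 24  # the pre/post assessment was a 24-item multiple-choice test

def percent_correct(mean_score: float) -> float:
    return 100.0 * mean_score / TOTAL_ITEMS

print(round(percent_correct(16.6), 1))  # SIM group at 3 months: 69.2
print(round(percent_correct(13.3), 1))  # LEC group at 3 months: 55.4
```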


Subject(s)
Education, Medical/methods , Emergency Medicine/education , Internship and Residency , Patient Simulation , Toxicology/education , Adult , Clinical Competence , Female , Humans , Male , Prospective Studies , Surveys and Questionnaires
9.
Ann Emerg Med ; 64(6): 612-9, 2014 Dec.
Article in English | MEDLINE | ID: mdl-24746273

ABSTRACT

STUDY OBJECTIVE: Hydroxocobalamin is a Food and Drug Administration-approved antidote for cyanide poisoning. Cobinamide is a potential antidote that contains 2 cyanide-binding sites. To our knowledge, no study has directly compared hydroxocobalamin with cobinamide in a severe, cyanide-toxic large-animal model. Our objective was to compare the time to return of spontaneous breathing in swine with acute cyanide-induced apnea treated with intravenous hydroxocobalamin, intravenous cobinamide, or saline solution (control). METHODS: Thirty-three swine (45 to 55 kg) were intubated, anesthetized, and instrumented (continuous mean arterial pressure and cardiac output monitoring). Anesthesia was adjusted to allow spontaneous breathing with FiO2 of 21% during the experiment. Cyanide was continuously infused intravenously until apnea occurred and lasted for 1 minute (time zero). Animals were then randomly assigned to receive intravenous hydroxocobalamin (65 mg/kg), cobinamide (12.5 mg/kg), or saline solution and monitored for 60 minutes. A sample size of 11 animals per group was selected to provide a power of 80% (α of .05; SD of 0.17 in mean time) to detect a 20% difference in time to spontaneous breathing. We assessed differences in time to death among groups using Kaplan-Meier estimation methods, and compared serum lactate, blood pH, cardiac output, mean arterial pressure, respiratory rate, and minute ventilation time curves with repeated-measures ANOVA. RESULTS: Baseline weights and vital signs were similar among groups. The time to apnea and cyanide dose required to achieve apnea were similar. At time zero, mean cyanide blood and lactate concentrations and reduction in mean arterial pressure from baseline were similar. In the saline solution group, 2 of 11 animals survived compared with 10 of 11 in the hydroxocobalamin and cobinamide groups (P<.001 between the 2 treated groups and the saline solution group).
Time to return of spontaneous breathing after antidote was similar between hydroxocobalamin and cobinamide (1 minute 48 seconds versus 1 minute 49 seconds, respectively). Blood cyanide concentrations became undetectable at the end of the study in both antidote-treated groups, and no statistically significant differences were detected between the 2 groups for mean arterial pressure, cardiac output, respiratory rate, lactate, or pH. CONCLUSION: Both hydroxocobalamin and cobinamide rescued severely cyanide-poisoned swine from apnea in the absence of assisted ventilation. The dose of cobinamide was one fifth that of hydroxocobalamin.
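The closing dose comparison and the survival fractions are easy to verify from the reported numbers (a sketch using only figures given in the abstract):

```python
hydroxocobalamin_mg_kg = 65.0
cobinamide_mg_kg = 12.5

dose_ratio = cobinamide_mg_kg / hydroxocobalamin_mg_kg
print(round(dose_ratio, 2))  # 0.19, i.e. roughly one fifth

# Survival fractions reported per group (n = 11 each)
survival = {"saline": 2 / 11, "hydroxocobalamin": 10 / 11, "cobinamide": 10 / 11}
print(f"{survival['saline']:.0%} vs {survival['cobinamide']:.0%}")  # 18% vs 91%
```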


Subject(s)
Antidotes/therapeutic use , Apnea/drug therapy , Cobamides/therapeutic use , Cyanides/poisoning , Hydroxocobalamin/therapeutic use , Animals , Apnea/chemically induced , Disease Models, Animal , Female , Hemodynamics/drug effects , Infusions, Intravenous , Poisoning/drug therapy , Poisoning/physiopathology , Random Allocation , Swine
10.
Med Sci Sports Exerc ; 46(10): 1951-9, 2014 Oct.
Article in English | MEDLINE | ID: mdl-24674973

ABSTRACT

PURPOSE: The purpose of this study was to compare body mass index (BMI) and abdominal circumference (AC) in discriminating individual musculoskeletal injury risk within a large population. We also sought to determine whether age or sex modulates the interaction between body habitus and injury risk. METHODS: We conducted a retrospective cohort study involving 67,904 US Air Force personnel from 2005 to 2011. Subjects were stratified by age, sex, BMI, adjusted BMI, and AC. New musculoskeletal injuries were recorded relative to body habitus and time elapsed from the start of study. RESULTS: Cox proportional hazards regression revealed increased HRs for musculoskeletal injury in those with high-risk AC (males, >39 inches; females, >36 inches) compared with those with low-risk AC (males, ≤35 inches; females, ≤32 inches) in all age categories (18-24 yr: HR = 1.567, 95% confidence interval (CI) = 1.327-1.849; 25-34 yr: HR = 2.089, 95% CI = 1.968-2.218; ≥35 yr: HR = 1.785, 95% CI = 1.651-1.929). HRs for obese individuals (BMI ≥30 kg·m(-2)) compared with normal-weight individuals (BMI <25 kg·m(-2)) were less elevated. Kaplan-Meier curves showed a dose-response relation in all age groups but most prominently in 25- to 34-yr-old participants. Time to injury was consistently lowest in 18- to 24-yr-old participants. Score chi-square values, indicating the comparative strength of each model for injury risk estimation in our cohort, were higher for AC than for BMI or adjusted BMI within all age groups. CONCLUSIONS: AC is a better predictor of musculoskeletal injury risk than BMI in a large military population. Although absolute injury risk is greatest in 18- to 24-yr-old participants, the effect of obesity on injury risk is greatest in 25- to 34-yr-old participants. There is a dose-response relation between obesity and musculoskeletal injury risk, an effect seen with both BMI and AC.
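For metric-unit readers, the abdominal-circumference cutoffs above convert as follows (exact inch-to-centimetre conversion; the grouping labels mirror the abstract):

```python
CM_PER_INCH = 2.54  # exact definition of the inch

ac_cutoffs_in = {
    "male_high_risk": 39, "female_high_risk": 36,  # high risk: above these
    "male_low_risk": 35, "female_low_risk": 32,    # low risk: at or below these
}
ac_cutoffs_cm = {k: round(v * CM_PER_INCH, 1) for k, v in ac_cutoffs_in.items()}
print(ac_cutoffs_cm["male_high_risk"])  # 99.1
```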


Subject(s)
Body Mass Index , Musculoskeletal System/injuries , Obesity, Abdominal/complications , Waist Circumference , Adolescent , Adult , Age Factors , Female , Humans , Male , Military Personnel , Retrospective Studies , Risk Assessment , Sex Factors , Time Factors , Young Adult
11.
Mil Med ; 178(9): 986-93, 2013 Sep.
Article in English | MEDLINE | ID: mdl-24005548

ABSTRACT

Evidence-based articles have demonstrated an increase in diabetes prevalence, but diabetes prevalence in the enrolled Military Health System population was previously understudied. Variability in diabetes prevalence rates calculated from 5 groups of algorithms was examined in the Military Health System population (3 million enrollees per year) from fiscal years 2006 to 2010. Time trend analysis and rate comparisons to the U.S. population were also performed. Increasing linear trends in diabetes prevalence from 2006 to 2010 were seen in all algorithms, though considerable rate variation was observed within each study year. Prevalence increased with age, except for a slight decrease in those ≥75 years. Overall diagnosed diabetes prevalence ranged from 7.26% to 11.22% in 2006 and from 8.29% to 13.55% in 2010. Prevalence among active duty members remained stable, but a significant upward trend was observed among nonactive duty members across study years. Age-standardized rates among nonactive duty females were higher than the U.S. population rates from 2006 to 2010. This study demonstrates prevalence rate variability because of differing case algorithms and shows evidence of a growing diabetes population in the Military Health System, specifically within the nonactive duty 45 years and older demographic groups. Further research of this population should focus on validation of case definitions.
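The growth over the study period can be expressed as relative changes in the bounds of the prevalence range (arithmetic on the figures reported in the abstract only):

```python
def relative_change(old: float, new: float) -> float:
    return (new - old) / old

# Bounds of the diagnosed-diabetes prevalence range across case algorithms
print(f"{relative_change(7.26, 8.29):.1%}")    # lowest-rate algorithm: +14.2%
print(f"{relative_change(11.22, 13.55):.1%}")  # highest-rate algorithm: +20.8%
```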


Subject(s)
Diabetes Mellitus/epidemiology , Health Benefit Plans, Employee/statistics & numerical data , Military Personnel/statistics & numerical data , Adolescent , Adult , Aged , Algorithms , Female , Humans , Male , Middle Aged , Prevalence , United States/epidemiology , Young Adult
12.
J Trauma ; 71(2 Suppl 3): S318-28, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21814099

ABSTRACT

BACKGROUND: Several recent military and civilian trauma studies demonstrate that improved outcomes are associated with early and increased use of plasma-based resuscitation strategies. However, outcomes associated with platelet transfusions are poorly characterized. We hypothesized that increased platelet:red blood cell (RBC) ratios would decrease hemorrhagic death and improve survival after massive transfusion (MT). METHODS: A transfusion database of patients transported from the scene to 22 Level I Trauma Centers over 12 months in 2005 to 2006 was reviewed. MT was defined as receiving ≥ 10 RBC units within 24 hours of admission. To mitigate survival bias, 25 patients who died within 60 minutes of arrival were excluded from analysis. Six random donor platelet units were considered equal to a single apheresis platelet unit. Admission and outcome data associated with the low (>1:20), medium (1:2), and high (1:1) platelet:RBC ratios were examined. These groups were based on the median value of the tertiles for the ratio of platelets:RBC units. RESULTS: Two thousand three hundred twelve patients received at least one unit of blood and 643 received an MT. Admission vital signs, INR, temperature, pH, Glasgow Coma Scale, Injury Severity Score, and age were similar between platelet ratio groups. The average admission platelet counts were lower in the patients who received the high platelet:RBC ratio versus the low ratio (192 vs. 216, p = 0.03). Patients who received MT were severely injured, with a mean (± standard deviation) Injury Severity Score of 33 ± 16 and received 22 ± 15 RBCs and 11 ± 14 platelets within 24 hours of injury. Increased platelet ratios were associated with improved survival at 24 hours and 30 days (p < 0.001 for both). Truncal hemorrhage as a cause of death was decreased (low: 67%, medium: 60%, high: 47%, p = 0.04).
Multiple organ failure mortality was increased (low: 7%, medium: 16%, high: 27%, p = 0.003), but overall 30-day survival was improved (low: 52%, medium: 57%, high: 70%) in the high ratio group (medium vs. high: p = 0.008; low vs. high: p = 0.007). CONCLUSION: Similar to recently published military data, transfusion of platelet:RBC ratios of 1:1 was associated with improved early and late survival, decreased hemorrhagic death and a concomitant increase in multiple organ failure-related mortality. Based on this large retrospective study, increased and early use of platelets may be justified, pending the results of prospective randomized transfusion data.
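The unit-equivalence rule in the methods (six random donor platelet units equal one apheresis unit) determines how a platelet:RBC ratio is computed; a sketch with an illustrative transfusion (the unit counts in the example are hypothetical, not drawn from the study data):

```python
RANDOM_DONOR_PER_APHERESIS = 6  # equivalence used in the study's methods

def platelet_rbc_ratio(random_donor_platelet_units: int, rbc_units: int) -> float:
    """Apheresis-equivalent platelet units per RBC unit."""
    apheresis_equiv = random_donor_platelet_units / RANDOM_DONOR_PER_APHERESIS
    return apheresis_equiv / rbc_units

# Hypothetical example: 66 random donor platelet units with 22 RBC units
print(platelet_rbc_ratio(66, 22))  # 0.5, i.e. the "medium" 1:2 ratio group
```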


Subject(s)
Blood Transfusion , Hemorrhage/blood , Hemorrhage/therapy , Wounds and Injuries/blood , Wounds and Injuries/mortality , Adult , Emergency Service, Hospital , Erythrocyte Count , Female , Hemorrhage/mortality , Humans , Male , Middle Aged , Platelet Count , Predictive Value of Tests , Retrospective Studies , Survival Rate , Treatment Outcome , Wounds and Injuries/therapy , Young Adult
13.
J Trauma ; 71(2 Suppl 3): S343-52, 2011 08.
Article in English | MEDLINE | ID: mdl-21814102

ABSTRACT

BACKGROUND: The effect of blood component ratios on the survival of patients with traumatic brain injury (TBI) has not been studied. METHODS: A database of patients transfused in the first 24 hours after admission for injury from 22 Level I trauma centers over an 18-month period was queried to find patients who (1) met different definitions of massive transfusion (5 units red blood cell [RBC] in 6 hours vs. 10 units RBC in 24 hours), (2) received high or low ratios of platelets or plasma to RBC units (<1:2 vs. ≥ 1:2), and (3) had severe TBI (head abbreviated injury score ≥ 3) (TBI+). RESULTS: Of 2,312 total patients, 850 patients were transfused with ≥ 5 RBC units in 6 hours and 807 could be classified into TBI+ (n = 281) or TBI- (n = 526). Six hundred forty-three patients were transfused with ≥ 10 RBC units in 24 hours with 622 classified into TBI+ (n = 220) and TBI- (n = 402). For both high-risk populations, a high ratio of platelets:RBCs (not plasma) was independently associated with improved 30-day survival for patients with TBI+ and a high ratio of plasma:RBCs (not platelets) was independently associated with improved 30-day survival in TBI- patients. CONCLUSIONS: High platelet ratio was associated with improved survival in TBI+ patients while a high plasma ratio was associated with improved survival in TBI- patients. Prospective studies of blood product ratios should include TBI in the analysis for determination of optimal use of ratios on outcome in injured patients.


Subject(s)
Blood Component Transfusion , Brain Injuries/mortality , Brain Injuries/therapy , Adult , Brain Injuries/blood , Erythrocyte Count , Female , Humans , Male , Middle Aged , Platelet Count , Retrospective Studies , Survival Rate , Trauma Centers , Treatment Outcome , Young Adult
14.
Tex Dent J ; 126(11): 1097-109, 2009 Nov.
Article in English | MEDLINE | ID: mdl-20041570

ABSTRACT

OBJECTIVES: To estimate the prevalence of erosive tooth wear in children aged 12-17 years in the southwest region of San Antonio, Texas, within Bexar County. METHODS: A convenience sample of 307 children aged 12-17 years was selected from two junior high schools. The population consisted predominantly of Hispanic Mexican Americans. The true prevalence of erosive tooth wear within the US is known from only one study, and then only for limited sectors of the population. The Tooth Wear Index, Screening for Oral Health using the Association of State and Territorial Dental Directors (ASTDD) criteria, and oral health and dietary assessment questionnaires were used as survey parameters. The questionnaire included data on detailed dietary habits relating primarily to the consumption of acidic beverages and foods. RESULTS: The overall prevalence of erosion within our convenience sample was 5.5 percent. All affected children showed erosive tooth wear low in severity and confined to the enamel with no exposed dentin. A chi-square test was performed to test for associations between the presence of erosion and consumption level of certain acidic foods at a significance level of 5 percent. Few significant and consistent associations were found between erosive tooth wear and consumption frequency categories of groups of acidic foods and beverages using a non-validated food intake questionnaire on purported risk foods. Soda drinks were associated with erosion; Mexican acidic foods were not. CONCLUSION: This study indicated a low prevalence and low severity of dental erosion in a convenience sample of children aged 12-17 years in southwest San Antonio, Texas. Issues of sampling and response bias preclude these findings from being generalized to other populations and regions, and the results should be viewed with caution. Because the local consumption of some purported risk foods appears to be increasing, this study provides a baseline for future assessments of erosive tooth wear in this population.
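As a rough check, the reported 5.5 percent prevalence in the 307-child sample corresponds to about 17 affected children (back-calculated for illustration; the exact count is not stated in the abstract):

```python
sample_size = 307
prevalence = 0.055  # overall prevalence of erosive tooth wear

affected = prevalence * sample_size
print(round(affected))  # ~17 children (back-calculated, not reported directly)
```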


Subject(s)
Feeding Behavior , Tooth Erosion/epidemiology , Acids , Adolescent , Carbonated Beverages/adverse effects , Chi-Square Distribution , Child , Female , Humans , Male , Mexican Americans/statistics & numerical data , Prevalence , Risk Factors , Severity of Illness Index , Surveys and Questionnaires , Texas/epidemiology , Tooth Erosion/ethnology , Urban Population