Results 1 - 20 of 39
1.
Front Microbiol ; 13: 840725, 2022.
Article in English | MEDLINE | ID: mdl-35432287

ABSTRACT

Veterinary drugs and chemical additives have long been widely used in livestock and poultry production to improve performance; however, problems such as drug residues in food are causing serious concern, and the use of functional plants and their extracts to improve production performance is becoming increasingly popular. This study aimed to evaluate the effect of Cistanche deserticola in sheep feed on rumen flora and to analyze the underlying causes, providing a theoretical basis for the future use of C. deserticola as a functional substance to improve sheep production performance. In a completely randomized design, 24 six-month-old male sheep were divided into four groups (six animals each) and fed a basic diet of alfalfa and tall fescue, with C. deserticola included at different levels (0, 2, 4, and 6%) as the experimental treatments. On the last day (Day 75), ruminal fluid was collected through a rumen tube to evaluate changes in rumen flora. The results showed that Prevotella_1, Lactobacillus, and Rikenellaceae_RC9_gut_group were the dominant genera in all samples, and that the relative abundances of Lactobacillus, Rikenellaceae_RC9_gut_group, Ruminococcaceae_NK4A214_group, Butyrivibrio_2, and Christensenellaceae_R-7_group differed significantly among the treatment groups. The polysaccharides in C. deserticola were the major factor driving the changes in rumen flora abundance, improving the rumen fermentation environment and regulating rumen flora structure. Hence, C. deserticola can be used to regulate rumen fermentation in grazing sheep to improve production efficiency.

3.
Animals (Basel) ; 10(4)2020 Apr 12.
Article in English | MEDLINE | ID: mdl-32290579

ABSTRACT

This study evaluated whether C. deserticola addition promotes digestion, nitrogen and energy use, and methane production in sheep fed fresh forage from alfalfa/tall fescue pastures. The feeding trial used four addition levels of C. deserticola powder on a basal diet of fresh alfalfa (Medicago sativa) and tall fescue (Festuca arundinacea). Addition levels of 4% and 6% improved average body weight gain (BWG) by 215.71 and 142.86 g/d, and feed conversion ratio (FCR) by 0.20 and 0.14, respectively. At the 2% addition level, digestibility of dry matter (DM), organic matter (OM), neutral detergent fiber (NDF), and ether extract (EE) was 62.25%, 65.18%, 58.75%, and 47.25%, respectively, greater than in the control group. C. deserticola addition improved energy utilization efficiency, while addition levels of 2% and 4% increased nitrogen intake and deposited nitrogen. Overall, C. deserticola has the potential to improve growth performance and digestion in sheep and is therefore suitable for use as a feed additive.

4.
Pathol Res Pract ; 216(2): 152767, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31812438

ABSTRACT

Conventional acinic cell carcinoma (CACC) represents a prototypical low-grade salivary malignancy. Rarely, acinic cell carcinoma (ACC) can demonstrate aggressive features (zones of necrosis, apoptosis, varying nuclear atypia) warranting classification as "ACC with high-grade transformation" (HGT-ACC) or "dedifferentiated" ACC. This study reports ten new cases of HGT-ACC. Recognizing high-grade transformation can be subtle, as can distinguishing discrete nodules of necrosis from cytology aspiration changes. We compared immunohistochemical (IHC) profiles, specifically β-catenin (bCAT) and cyclin D1 expression, which have been touted as potentially helpful in this context, and quantified nuclear morphology (primary axis, nuclear area, and perimeter) in HGT-ACC and CACC. Clinical outcome is known for eight HGT-ACC patients: three developed locoregional or distant metastases, and five remained disease-free. Nine of ten HGT-ACCs expressed strong, diffuse, membranous bCAT, whereas CACC demonstrated lower-intensity membranous bCAT expression. Strong, diffuse nuclear cyclin D1 was seen in five of ten HGT-ACCs, whereas no CACC demonstrated cyclin D1 with distribution greater than 50%. The quantified nuclear morphologic features of CACC and HGT-ACC demonstrated overlapping mean values, but maximum values for nuclear primary axis, area, and perimeter were greater for HGT-ACC than for CACC, corresponding to a subpopulation of larger tumor cells in HGT-ACC. The poor outcome associated with HGT-ACC justifies its recognition, which should alter surgical approach with respect to elective neck dissection or possible facial nerve sacrifice. With respect to ancillary IHC studies, strong, diffuse membranous bCAT expression, with or without strong nuclear cyclin D1 in ≥50% distribution or a Ki-67 index ≥25%, supports this diagnosis.


Subject(s)
Carcinoma, Acinar Cell/pathology; Cyclin D1/metabolism; Salivary Gland Neoplasms/pathology; beta Catenin/metabolism; Adult; Aged; Carcinoma, Acinar Cell/diagnosis; Carcinoma, Acinar Cell/metabolism; Cell Nucleus/metabolism; Cell Transformation, Neoplastic; Female; Humans; Immunohistochemistry; Ki-67 Antigen/metabolism; Male; Middle Aged; Prognosis; Salivary Gland Neoplasms/diagnosis; Salivary Gland Neoplasms/metabolism
5.
Endocr Pathol ; 29(1): 80-85, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29396810

ABSTRACT

Extranodal extension (ENE) is a prognostic indicator of aggressiveness for papillary thyroid cancer (PTC). The association between the size of metastatic nodes and the prevalence of ENE has not been previously explored; however, there is a common belief that small lymph nodes with metastatic disease do not significantly impact patient outcome. This study investigates the relationship between the prevalence of ENE and the size of a positive lymph node. Linear dimensions and malignant histological characteristics of 979 metastatic lymph nodes from 152 thyroid cancer patients were retrospectively analyzed. Data were analyzed using chi-square tests and multilevel logistic regression modeling. ENE was present in 144 of 979 lymph nodes; the sizes of the involved lymph nodes ranged from 0.9 to 44 mm. ENE was identified in 7.8% of lymph nodes measuring ≤5 mm, 18.9% of those between 6 and 10 mm, 23.1% between 11 and 15 mm, 25.0% between 16 and 20 mm, and 14.0% between 21 and 25 mm. The association between node size and ENE status was significant (odds ratio (OR) = 1.07, confidence interval (CI) = [1.04, 1.11]). The size of the metastatic focus directly correlated with ENE (OR = 1.07, 95% CI = [1.07, 1.14], p value < 0.001). Increasing lymph node size increases the likelihood of ENE for metastatic PTC. Importantly, small positive lymph nodes can also harbor ENE to a significant extent. Further studies are required to determine the clinical and prognostic significance of lymph node size and the presence of ENE.
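The per-millimeter odds ratio reported here comes from logistic regression of ENE status on node size. A minimal sketch of that kind of fit, using a hand-rolled Newton-Raphson solver on hypothetical (size, ENE) pairs rather than the study's data:

```python
import math

# Hypothetical (node size in mm, ENE present) pairs -- illustrative, not study data.
data = [(2, 0), (3, 0), (4, 0), (5, 1), (6, 0), (7, 0), (8, 1), (9, 0), (10, 1),
        (12, 0), (14, 1), (16, 1), (18, 0), (20, 1), (22, 1), (25, 1), (30, 1), (35, 1)]

def fit_logistic(data, iters=25):
    """Fit P(ENE) = sigmoid(b0 + b1*size) by Newton-Raphson."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        # Gradient and Hessian of the log-likelihood for a 2-parameter model.
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            w = p * (1.0 - p)
            g0 += y - p
            g1 += (y - p) * x
            h00 += w
            h01 += w * x
            h11 += w * x * x
        det = h00 * h11 - h01 * h01
        # Newton step: solve the 2x2 system H * delta = g.
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

b0, b1 = fit_logistic(data)
or_per_mm = math.exp(b1)  # multiplicative change in ENE odds per additional mm
```

The fitted `exp(b1)` is the quantity the abstract reports as OR = 1.07: the multiplicative change in the odds of ENE per 1-mm increase in node size (the study's model is multilevel and adjusted; this sketch is the unadjusted single-level analogue).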


Subject(s)
Carcinoma, Papillary/pathology; Lymphatic Metastasis/pathology; Thyroid Neoplasms/pathology; Adolescent; Adult; Aged; Aged, 80 and over; Child; Female; Humans; Male; Middle Aged; Prevalence; Retrospective Studies; Thyroid Cancer, Papillary; Young Adult
6.
Acad Emerg Med ; 25(1): 65-75, 2018 01.
Article in English | MEDLINE | ID: mdl-28940546

ABSTRACT

BACKGROUND: Emergency department (ED) acuity is the general level of patient illness, urgency for clinical intervention, and intensity of resource use in an ED environment. The relative strength of commonly used measures of ED acuity is not well understood. METHODS: We performed a retrospective cross-sectional analysis of ED-level data to evaluate the relative strength of association between commonly used proxy measures and a full-spectrum measure of ED acuity. Common measures included the percentage of patients with Emergency Severity Index (ESI) scores of 1 or 2, case mix index (CMI), academic status, annual ED volume, inpatient admission rate, percentage of Medicare patients, and patients seen per attending-hour. Our reference standard for acuity was the proportion of high-acuity charts (PHAC) coded and billed according to the Centers for Medicare and Medicaid Services' Ambulatory Payment Classification (APC) system. High-acuity charts included those coded APC 4, APC 5, or critical care. PHAC was represented as a fractional response variable. We examined the strength of associations between common acuity measures and PHAC using Spearman's rank correlation coefficients (rs) and regression models, including a quasi-binomial generalized linear model and linear regression. RESULTS: In our univariate analysis, the percentage of patients ESI 1 or 2, CMI, academic status, and annual ED volume had statistically significant associations with PHAC. None explained more than 16% of PHAC variation. In regression models including all common acuity measures, academic status was the only variable significantly associated with PHAC. CONCLUSION: The Emergency Severity Index had the strongest association with PHAC, followed by CMI and annual ED volume. Academic status captures variability beyond that explained by ESI, CMI, annual ED volume, percentage of Medicare patients, or patients per attending-hour. All measures combined explained only 42.6% of PHAC variation.
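The univariate associations above rest on Spearman's rank correlation, which is just Pearson correlation computed on ranks. A self-contained sketch on invented ED-level values (percentage of ESI 1-2 patients vs. PHAC), not the study's data:

```python
def ranks(values):
    """1-based average ranks, handling ties by averaging."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied positions
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rs = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical ED-level values: % of patients ESI 1 or 2 vs. PHAC.
esi12 = [10, 25, 18, 30, 12, 22]
phac = [0.15, 0.40, 0.22, 0.35, 0.18, 0.45]
rs = spearman(esi12, phac)
```

With no ties this reproduces the textbook formula rs = 1 - 6·Σd²/(n(n²-1)); here the rank vectors differ in two positions by 2, so rs = 1 - 48/210 = 27/35.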


Subject(s)
Emergency Service, Hospital/statistics & numerical data; Patient Acuity; Cross-Sectional Studies; Humans; Insurance, Health/statistics & numerical data; Retrospective Studies; Triage/statistics & numerical data; United States
7.
Anesth Analg ; 125(5): 1616-1626, 2017 11.
Article in English | MEDLINE | ID: mdl-28806206

ABSTRACT

BACKGROUND: Evaluation and treatment of chronic pain worldwide are limited by the lack of standardized assessment tools that incorporate consistent definitions of pain chronicity and specific queries of known social and psychological risk factors for chronic pain. The Vanderbilt Global Pain Survey (VGPS) was developed as a tool to address these concerns, specifically in the low- and middle-income countries where global burden is highest. METHODS: The VGPS was developed using standardized and cross-culturally validated metrics, including the Brief Pain Inventory and the World Health Organization Disability Assessment Scale, as well as the Pain Catastrophizing Scale and the Fibromyalgia Survey Questionnaire, along with queries about pain attitudes, to assess the prevalence of chronic pain and disability and their psychosocial and emotional associations. The VGPS was piloted in both Nepal and India over a 1-month period in 2014, allowing evaluation of the tool in two distinctly diverse cultures. RESULTS: Prevalence of chronic pain in Nepal and India was consistent with published data. The Nepali cohort displayed a pain point prevalence of 48%-50%, with some form of disability present on approximately one third of the past 30 days. Additionally, 11% of Nepalis recorded pain in 2 somatic sites, and 39% of those surveyed documented a history of a traumatic event. In the Indian cohort, pain point prevalence was approximately 24% to 41%, depending on question phrasing, and some form of disability was present on 6 of the last 30 days. Of the Indians surveyed, 11% reported pain in 2 somatic sites, with only 4% reporting a previous traumatic event. Overall, Nepal had significantly higher chronic pain prevalence, symptom severity, widespread pain, and self-reported previous traumatic events, yet lower reported pain severity.
CONCLUSIONS: Our findings confirm prevalent chronic pain, while revealing pertinent cultural differences and survey limitations that will inform future assessment strategies. Specific areas for improvement identified in this VGPS pilot study included survey translation methodology, redundancy of embedded metrics and cultural limitations in representative sampling and in detecting the prevalence of mental health illness, catastrophizing behavior, and previous traumatic events. International expert consensus is needed.


Subject(s)
Chronic Pain/epidemiology; Activities of Daily Living; Adult; Central Nervous System Sensitization; Chronic Pain/diagnosis; Chronic Pain/physiopathology; Chronic Pain/psychology; Cost of Illness; Cultural Characteristics; Disability Evaluation; Female; Health Knowledge, Attitudes, Practice; Health Status; Health Surveys; Humans; Illness Behavior; India/epidemiology; Male; Middle Aged; Nepal/epidemiology; Pain Measurement; Pain Perception; Pilot Projects; Prevalence; Young Adult
8.
Am J Cardiol ; 120(8): 1293-1297, 2017 Oct 15.
Article in English | MEDLINE | ID: mdl-28826895

ABSTRACT

Recent studies suggest that the use of preoperative β blockers in cardiac surgery may not improve mortality rates and may even contribute to negative clinical outcomes. We therefore assessed the role of β blockers in several outcomes after cardiac surgery (delirium, acute kidney injury [AKI], stroke, atrial fibrillation [AF], mortality, and hospital length of stay) in 4,076 patients who underwent elective coronary artery bypass grafting, coronary artery bypass grafting + valve, or valve cardiac surgery from November 1, 2009, to September 30, 2015, at Vanderbilt Medical Center. Clinical data from 2 prospectively collected datasets at our institution were reviewed: the Cardiac Surgery Perioperative Outcomes Database and the Society of Thoracic Surgeons Database. Preoperative β-blocker use was defined by Society of Thoracic Surgeons guidelines as receipt of a β blocker within 24 hours preceding surgery. Of the included patients, 2,648 (65.0%) were administered a β blocker within 24 hours before surgery. Adjusting for possible confounders, preoperative β-blocker use was associated with increased odds of AKI stage 2 (odds ratio 1.96, 95% confidence interval 1.19 to 3.24, p <0.01). There was no evidence that β-blocker use had an independent association with postoperative delirium, AKI stages 1 and 3, stroke, AF, mortality, or prolonged length of stay. A secondary propensity score analysis did not show a marginal association between β-blocker use and any outcome. In conclusion, we did not find significant evidence that preoperative β-blocker use was associated with postoperative delirium, AF, AKI, stroke, or mortality.


Subject(s)
Adrenergic beta-Antagonists/therapeutic use; Cardiac Surgical Procedures/methods; Heart Diseases/surgery; Postoperative Complications/prevention & control; Preoperative Care/methods; Female; Follow-Up Studies; Humans; Incidence; Male; Middle Aged; Postoperative Complications/epidemiology; Retrospective Studies; Survival Rate/trends; Tennessee/epidemiology; Time Factors; Treatment Outcome
9.
Can J Anaesth ; 64(11): 1129-1137, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28718100

ABSTRACT

PURPOSE: Cardiopulmonary bypass (CPB) induces a significant inflammatory response that may increase the risk for delirium. We hypothesized that exposure to CPB during coronary artery bypass grafting (CABG) surgery would correlate with an increased risk of delirium. METHODS: We reviewed clinical data from two databases at our medical centre: the Cardiac Surgery Perioperative Outcomes Database and the Society of Thoracic Surgeons Database. Patients undergoing elective CABG surgery (on-pump and off-pump) from November 1, 2009 to September 30, 2015 were included in the study. Delirium was defined as any positive postoperative Confusion Assessment Method for the Intensive Care Unit exam during the intensive care unit stay. We performed logistic regression to isolate the association between CPB exposure and delirium, adjusted for predetermined risk factors and potential confounders. RESULTS: During the study period, 2,280 patients underwent elective CABG surgery, with 384 patients (16.9%) exposed to CPB. Delirium was diagnosed in 451 patients (19.8%). Exposure to CPB showed a significant independent association with delirium. Patients exposed to CPB for 142 min (90th percentile of CPB duration) vs those exposed for 54 min (10th percentile) had an adjusted relative risk (RR) of delirium of 2.18 (95% confidence interval [CI], 1.39 to 3.07; P = 0.002) vs a RR of 1.51 (95% CI, 0.92 to 2.29; P = 0.10), respectively. CONCLUSIONS: The use and duration of cardiopulmonary bypass were associated with an increased risk of delirium in patients undergoing CABG surgery. TRIAL REGISTRATION: www.clinicaltrials.gov, NCT02548975. Registered 4 September 2015.


Subject(s)
Cardiopulmonary Bypass/adverse effects; Coronary Artery Bypass/adverse effects; Delirium/etiology; Postoperative Complications/epidemiology; Aged; Cardiopulmonary Bypass/methods; Cohort Studies; Coronary Artery Bypass/methods; Coronary Artery Bypass, Off-Pump/adverse effects; Coronary Artery Bypass, Off-Pump/methods; Databases, Factual; Delirium/epidemiology; Female; Humans; Intensive Care Units; Logistic Models; Male; Middle Aged; Risk Factors
10.
J Am Geriatr Soc ; 65(6): 1333-1338, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28263444

ABSTRACT

BACKGROUND: The natural course and clinical significance of delirium in the emergency department (ED) are unclear. OBJECTIVES: We sought to (1) describe the extent to which delirium in the ED persists into hospitalization (ED delirium duration) and (2) determine how ED delirium duration is associated with 6-month functional status and cognition. DESIGN: Prospective cohort study. SETTING: Tertiary care, academic medical center. PARTICIPANTS: ED patients ≥65 years old who were admitted to the hospital. MEASUREMENTS: The modified Brief Confusion Assessment Method was used to ascertain delirium in the ED and hospital. Premorbid and 6-month function were determined using the Older American Resources and Services Activities of Daily Living (OARS ADL) questionnaire, which ranges from 0 (completely dependent) to 28 (completely independent). Premorbid and 6-month cognition were determined using the short-form Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE), which ranges from 1 to 5 (severe dementia). Multiple linear regression was performed to determine whether ED delirium duration was associated with 6-month function and cognition, adjusted for baseline OARS ADL and IQCODE and other confounders. RESULTS: A total of 228 older ED patients were enrolled. Of the 105 patients who were delirious in the ED, 81 (77.1%) remained delirious into hospitalization. For every ED delirium duration day, the 6-month OARS ADL decreased by 0.63 points (95% CI: -1.01 to -0.24), indicating poorer function. For every ED delirium duration day, the 6-month IQCODE increased by 0.06 points (95% CI: 0.01-0.10), indicating poorer cognition. CONCLUSIONS: Delirium in the ED is not a transient event and frequently persists into hospitalization. Longer ED delirium duration is associated with incremental worsening of 6-month functional and cognitive outcomes.
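The per-day estimates above are regression slopes: each additional ED delirium day shifts the 6-month score by the coefficient. A minimal closed-form sketch of simple linear regression, with synthetic scores constructed to embed a -0.63 points/day slope (illustrative only, not the study's data, and without the study's covariate adjustment):

```python
def ols_fit(x, y):
    """Closed-form simple linear regression: y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Slope = covariance(x, y) / variance(x); intercept from the means.
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical: 6-month OARS ADL score vs. ED delirium duration (days),
# built noiselessly around the reported -0.63 points/day slope.
days = [0, 1, 2, 3, 4, 5]
oars = [24 - 0.63 * d for d in days]
a, b = ols_fit(days, oars)
```

On noiseless data the fit recovers the embedded slope exactly; with real data the coefficient is the per-day change holding covariates fixed, which is how the abstract's -0.63 and +0.06 figures are read.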


Subject(s)
Cognition; Delirium/complications; Emergency Service, Hospital/statistics & numerical data; Hospitalization/trends; Activities of Daily Living; Cognition Disorders; Humans; Length of Stay; Prognosis; Prospective Studies; Risk Factors
11.
Acad Emerg Med ; 24(6): 701-709, 2017 06.
Article in English | MEDLINE | ID: mdl-28261908

ABSTRACT

BACKGROUND: In the absence of acute coronary syndrome (ACS) guidelines directing the clinical implementation of emergency department (ED) screening and diagnosis, screening and diagnostic practice varies across ED facilities. This practice diversity may be warranted; understanding the variability may identify opportunities for more consistent practice. METHODS: This is a cross-sectional clinical practice epidemiology study, with the ED as the unit of analysis, characterizing variability in the ACS evaluation across 62 diverse EDs. We explored three domains of screening and diagnostic practice: 1) variability in the criteria used by EDs to identify patients for an early electrocardiogram (ECG) to diagnose ST-elevation myocardial infarction (STEMI), 2) nonuniform troponin biomarker and formalized pre-troponin risk stratification use for the diagnosis of non-ST-elevation myocardial infarction (NSTEMI), and 3) variation in the use of noninvasive testing (NIVT) to identify obstructive coronary artery disease or detect inducible ischemia. RESULTS: We found that 85% of EDs utilize a formal triage protocol to screen patients for an early ECG to diagnose STEMI. Of these, 17% use chest pain as the sole criterion. For the diagnosis of NSTEMI, 58% use intervals ≥4 hours for a second troponin and 34% routinely risk stratify before troponin testing. For the diagnosis of noninfarction ischemia, the median percentage of patients who have NIVT performed during their ED visit is 5%. The median percentage of patients referred for NIVT in hospital (observation or admission) is 61%. Coronary CT angiography is used in 66% of EDs. Exercise treadmill testing is the most frequently reported first-line NIVT (42%). CONCLUSION: Our results suggest highly variable ACS screening and diagnostic practice.


Subject(s)
Acute Coronary Syndrome/diagnosis; Emergency Service, Hospital/statistics & numerical data; Hospitalization/statistics & numerical data; Acute Coronary Syndrome/epidemiology; Aged; Biomarkers/blood; Chest Pain/diagnosis; Coronary Angiography/statistics & numerical data; Cross-Sectional Studies; Electrocardiography/statistics & numerical data; Exercise Test/statistics & numerical data; Female; Humans; Male; Myocardial Infarction/diagnosis; Troponin/blood
12.
J Am Heart Assoc ; 6(3)2017 Feb 23.
Article in English | MEDLINE | ID: mdl-28232323

ABSTRACT

BACKGROUND: Timely diagnosis of ST-segment elevation myocardial infarction (STEMI) in the emergency department (ED) is made solely by ECG. Obtaining this test within 10 minutes of ED arrival is critical to achieving the best outcomes. We investigated variability in the timely identification of STEMI across institutions and whether performance variation was associated with the ED characteristics, the comprehensiveness of screening criteria, and the STEMI screening processes. METHODS AND RESULTS: We examined STEMI screening performance in 7 EDs, with the missed case rate (MCR) as our primary end point. The MCR is the proportion of primarily screened ED patients diagnosed with STEMI who did not receive an ECG within 15 minutes of ED arrival. STEMI was defined by hospital discharge diagnosis. Relationships between the MCR and ED characteristics, screening criteria, and STEMI screening processes were assessed, along with differences in door-to-ECG times for captured versus missed patients. The overall MCR for all 7 EDs was 12.8%. The lowest and highest MCRs were 3.4% and 32.6%, respectively. The mean difference in door-to-ECG times for captured and missed patients was 31 minutes, with a range of 14 to 80 minutes of additional myocardial ischemia time for missed cases. The prevalence of primarily screened ED STEMIs was 0.09%. EDs with the greatest informedness (sensitivity+specificity-1) demonstrated superior performance across all other screening measures. CONCLUSIONS: The 29.2% difference in MCRs between the highest and lowest performing EDs demonstrates room for improving timely STEMI identification among primarily screened ED patients. The MCR and informedness can be used to compare screening across EDs and to understand variable performance.
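Both screening measures used above reduce to simple proportions over counts. A sketch with hypothetical counts for one ED (tp/fn are STEMI patients captured/missed by the early-ECG screen; tn/fp are non-STEMI patients correctly passed over/unnecessarily flagged):

```python
def informedness(tp, fn, tn, fp):
    """Youden's J = sensitivity + specificity - 1, the measure the study
    uses to compare screening performance across EDs."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity + specificity - 1

def missed_case_rate(missed, total_stemi):
    """MCR: proportion of screened STEMI patients without an ECG
    within 15 minutes of ED arrival."""
    return missed / total_stemi

# Hypothetical counts, not taken from the study.
j = informedness(tp=90, fn=10, tn=950, fp=50)       # 0.90 + 0.95 - 1 = 0.85
mcr = missed_case_rate(missed=10, total_stemi=100)  # 0.10
```

Informedness rewards a screen that is both sensitive (few missed STEMIs) and specific (few unnecessary early ECGs), which is why the abstract pairs it with the MCR rather than using sensitivity alone.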


Subject(s)
Early Diagnosis; Electrocardiography/methods; Emergency Service, Hospital/organization & administration; Quality Assurance, Health Care; ST Elevation Myocardial Infarction/diagnosis; Triage; Aged; Female; Humans; Male; Middle Aged; Prevalence; Prospective Studies; ST Elevation Myocardial Infarction/epidemiology; ST Elevation Myocardial Infarction/physiopathology; Time Factors; United States/epidemiology
13.
J Am Geriatr Soc ; 65(2): 313-322, 2017 Feb.
Article in English | MEDLINE | ID: mdl-28198565

ABSTRACT

OBJECTIVES: To determine the effect and cost-effectiveness of training nonnursing staff to provide feeding assistance for nutritionally at-risk nursing home (NH) residents. DESIGN: Randomized, controlled trial. SETTING: Five community NHs. PARTICIPANTS: Long-stay NH residents with an order for caloric supplementation (N = 122). INTERVENTION: Research staff provided an 8-hour training curriculum to nonnursing staff. Trained staff were assigned to between-meal supplement or snack delivery for the intervention group; the control group received usual care. MEASUREMENTS: Research staff used standardized observations and weighed-intake methods to measure frequency of between-meal delivery, staff assistance time, and resident caloric intake. RESULTS: Fifty staff (mean 10 per site) completed training. The intervention had a significant effect on between-meal caloric intake (F = 56.29, P < .001), with the intervention group consuming, on average, 163.33 (95% CI = 120.19-206.47) more calories per person per day than the usual-care control group. Intervention costs were $1.27 per person per day higher than usual care (P < .001). The incremental cost-effectiveness ratio for the intervention was 134 kcal per dollar. The increase in cost was due to the higher frequency and number of snack items given per person per day and the associated staff time to provide assistance. CONCLUSION: It is cost-effective to train nonnursing staff to provide caloric supplementation, and this practice has a positive effect on residents' between-meal intake.
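The incremental cost-effectiveness ratio here is the incremental effect divided by the incremental cost (or its reciprocal, cost per unit of effect). A sketch of the arithmetic using the rounded per-day figures quoted in the abstract; the reported 134 kcal per dollar presumably comes from the unrounded model estimates, so the rounded inputs land nearby rather than exactly:

```python
# Incremental effect and cost, taken from the abstract's rounded figures.
extra_kcal_per_day = 163.33  # intervention-minus-control caloric intake
extra_cost_per_day = 1.27    # dollars per person per day

# Effect per dollar (the orientation the abstract reports, ~134 kcal/$).
kcal_per_dollar = extra_kcal_per_day / extra_cost_per_day

# Conventional ICER orientation: incremental cost per unit of effect.
dollars_per_kcal = extra_cost_per_day / extra_kcal_per_day
```

With the rounded inputs the ratio comes out near 129 kcal per dollar; either orientation carries the same information, since one is the reciprocal of the other.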


Subject(s)
Diet Therapy; Frail Elderly; Inservice Training; Nursing Homes; Aged, 80 and over; Body Weight; Cost-Benefit Analysis; Energy Intake; Female; Humans; Male; Malnutrition/prevention & control; Program Evaluation
14.
Arch Pathol Lab Med ; 141(3): 423-430, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28055241

ABSTRACT

CONTEXT: Little is known regarding the reporting quality of meta-analyses in diagnostic pathology. OBJECTIVE: To compare the reporting quality of meta-analyses in diagnostic pathology and medicine and to examine factors associated with the reporting quality of diagnostic pathology meta-analyses. DESIGN: Meta-analyses were identified in 12 major diagnostic pathology journals without specifying years, and in 4 major medicine journals in 2006 and 2011, using PubMed. Reporting quality was evaluated using the 27-item checklist of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement published in 2009; a higher PRISMA score indicates higher reporting quality. RESULTS: Forty-one diagnostic pathology meta-analyses and 118 medicine meta-analyses were included. Overall, reporting quality of meta-analyses in diagnostic pathology was lower than that in medicine (median [interquartile range] = 22 [15, 25] versus 27 [23, 28], P < .001). Compared with medicine meta-analyses, diagnostic pathology meta-analyses were less likely to report 23 of the 27 items (85.2%) on the PRISMA checklist, but more likely to report the data items. Higher reporting quality of diagnostic pathology meta-analyses was associated with more recent publication years (later than 2009 versus 2009 or earlier, P = .002) and non-North American first authors (versus North American, P = .001), but not with the journal publisher's location (P = .11). Interestingly, reporting quality was not associated with adjusted citation ratio for meta-analyses in either diagnostic pathology or medicine (P = .40 and P = .09, respectively). CONCLUSIONS: Meta-analyses in diagnostic pathology had lower reporting quality than those in medicine. Reporting quality of diagnostic pathology meta-analyses is linked to publication year and first author's location, but not to journal publisher's location or the article's adjusted citation ratio. More research and education on meta-analysis methodology may improve the reporting quality of diagnostic pathology meta-analyses.
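The median [interquartile range] summaries above are straightforward to reproduce with Python's statistics module. A sketch on hypothetical PRISMA checklist totals (0-27), not the study's underlying scores:

```python
from statistics import quantiles

# Hypothetical PRISMA totals for two journal groups -- illustrative only.
pathology_scores = [15, 18, 22, 24, 25, 26, 27]
medicine_scores = [21, 23, 25, 27, 27, 28, 28]

def median_iqr(scores):
    """Return (median, (Q1, Q3)) using linear interpolation between points."""
    q1, q2, q3 = quantiles(scores, n=4, method="inclusive")
    return q2, (q1, q3)

path_med, path_iqr = median_iqr(pathology_scores)
med_med, med_iqr = median_iqr(medicine_scores)
```

This is the descriptive half of the comparison; the abstract's P < .001 would come from a rank-based test (e.g., Mann-Whitney U) on the two score distributions.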


Subject(s)
Meta-Analysis as Topic; Pathology; Humans
15.
Am J Geriatr Psychiatry ; 25(3): 233-242, 2017 Mar.
Article in English | MEDLINE | ID: mdl-27623552

ABSTRACT

OBJECTIVES: To determine how delirium subtyped by level of arousal at initial presentation affects 6-month mortality. DESIGN: This was a preplanned secondary analysis of two prospective cohort studies. SETTING: Academic tertiary care emergency department (ED). PARTICIPANTS: 1,084 ED patients who were 65 years old or older. MEASUREMENTS: At the time of enrollment, trained research personnel performed the Confusion Assessment Method for the Intensive Care Unit and the Richmond Agitation Sedation Score to determine delirium and level of arousal, respectively. Patients were categorized as having no delirium, delirium with normal arousal, delirium with decreased arousal, or delirium with increased arousal. Death was ascertained by medical record review and the Social Security Death Index. Cox proportional hazard regression was used to analyze the association between delirium arousal subtypes and 6-month mortality. RESULTS: Delirium with normal arousal was the only subtype that was significantly associated with increased 6-month mortality (hazard ratio [HR]: 3.1, 95% confidence interval [CI]: 1.3-7.4) compared with the no delirium group after adjusting for confounders. The HRs for delirium with decreased and increased arousal were 1.4 (95% CI: 0.9-2.1) and 1.3 (95% CI: 0.3-5.4), respectively. CONCLUSIONS: Delirious ED patients with normal arousal at initial presentation had a threefold increased hazard of death within 6 months compared with patients without delirium. There was a trend towards increased hazard of death in delirious ED patients with decreased arousal, but this relationship did not reach statistical significance. These data suggest that subtyping delirium by arousal may have prognostic value but requires confirmation with a larger study.


Subject(s)
Arousal/physiology; Delirium; Emergency Service, Hospital; Aged; Aged, 80 and over; Delirium/classification; Delirium/mortality; Delirium/physiopathology; Female; Humans; Male; Prospective Studies
16.
Clin J Am Soc Nephrol ; 11(12): 2177-2185, 2016 12 07.
Article in English | MEDLINE | ID: mdl-27827311

ABSTRACT

BACKGROUND AND OBJECTIVES: Diabetes is the leading cause of ESRD. Glucose control improves kidney outcomes. Most patients eventually require treatment intensification with second-line medications; however, the differential effects of those therapies on kidney function are unknown. DESIGN, SETTING, PARTICIPANTS & MEASUREMENTS: We studied a retrospective cohort of veterans on metformin monotherapy from 2001 to 2008 who added either insulin or sulfonylurea and were followed through September of 2011. We used propensity score matching 1:4 for those who intensified with insulin versus sulfonylurea, respectively. The primary composite outcome was persistent decline in eGFR≥35% from baseline (GFR event) or a diagnosis of ESRD. The secondary outcome was a GFR event, ESRD, or death. Outcome risks were compared using marginal structural models to account for time-varying covariates. The primary analysis required persistence with the intensified regimen. An effect modification of baseline eGFR and the intervention on both outcomes was evaluated. RESULTS: There were 1989 patients on metformin and insulin and 7956 patients on metformin and sulfonylurea. Median patient age was 60 years old (interquartile range, 54-67), median hemoglobin A1c was 8.1% (interquartile range, 7.1%-9.9%), and median creatinine was 1.0 mg/dl (interquartile range, 0.9-1.1). The rate of GFR event or ESRD (primary outcome) was 31 versus 26 per 1000 person-years for those who added insulin versus sulfonylureas, respectively (adjusted hazard ratio, 1.27; 95% confidence interval, 0.99 to 1.63). The rate of GFR event, ESRD, or death was 64 versus 49 per 1000 person-years, respectively (adjusted hazard ratio, 1.33; 95% confidence interval, 1.11 to 1.59). Tests for a therapy by baseline eGFR interaction for both the primary and secondary outcomes were not significant (P=0.39 and P=0.12, respectively). 
CONCLUSIONS: Among patients who intensified metformin monotherapy, the addition of insulin compared with a sulfonylurea was not associated with a higher rate of kidney outcomes but was associated with a higher rate of the composite outcome that included death. These risks were not modified by baseline eGFR.
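The primary endpoint above combines a persistent eGFR decline of ≥35% from baseline with incident ESRD, and event rates are reported per 1000 person-years. A minimal sketch of those two calculations (function names and values are illustrative, not taken from the paper's code):

```python
def is_gfr_event(baseline_egfr, current_egfr, threshold=0.35):
    """True when eGFR has fallen by at least `threshold` (here 35%) from baseline."""
    return (baseline_egfr - current_egfr) / baseline_egfr >= threshold

def rate_per_1000_person_years(n_events, person_years):
    """Crude incidence rate per 1000 person-years of follow-up."""
    return 1000.0 * n_events / person_years
```

For example, a fall from an eGFR of 60 to 39 ml/min/1.73 m² is exactly a 35% decline and would count as a GFR event, while a fall to 45 would not.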


Subject(s)
Diabetes Mellitus, Type 2/drug therapy , Hypoglycemic Agents/therapeutic use , Insulin/therapeutic use , Kidney Failure, Chronic/epidemiology , Kidney Failure, Chronic/prevention & control , Metformin/therapeutic use , Sulfonylurea Compounds/therapeutic use , Aged , Comparative Effectiveness Research , Creatinine/blood , Diabetes Mellitus, Type 2/blood , Drug Therapy, Combination , Female , Glomerular Filtration Rate , Glycated Hemoglobin/metabolism , Humans , Incidence , Kidney Failure, Chronic/mortality , Kidney Failure, Chronic/physiopathology , Male , Middle Aged , Retrospective Studies , Tennessee/epidemiology
17.
PLoS One ; 11(7): e0159624, 2016.
Article in English | MEDLINE | ID: mdl-27438007

ABSTRACT

INTRODUCTION: Soft tissue calcification, including both dystrophic calcification and heterotopic ossification, may occur following injury. These lesions have variable fates as they are either resorbed or persist. Persistent soft tissue calcification may result in chronic inflammation and/or loss of function of that soft tissue. The molecular mechanisms that result in the development and maturation of calcifications are uncertain. As a result, directed therapies that prevent or resorb soft tissue calcifications remain largely unsuccessful. Animal models of post-traumatic soft tissue calcification that allow for cost-effective, serial analysis of an individual animal over time are necessary to derive and test novel therapies. We have determined that a cardiotoxin-induced injury of the muscles in the posterior compartment of the lower extremity represents a useful model in which soft tissue calcification develops remote from adjacent bones, thereby allowing for serial analysis by plain radiography. The purpose of the study was to design and validate a method for quantifying soft tissue calcifications in mice longitudinally using plain radiographic techniques and an ordinal scoring system. METHODS: Muscle injury was induced by injecting cardiotoxin into the posterior compartment of the lower extremity in mice susceptible to developing soft tissue calcification. Seven days following injury, radiographs were obtained under anesthesia. Multiple researchers applied methods designed to standardize post-image processing of digital radiographs (N = 4) and quantify soft tissue calcification (N = 6) in these images using an ordinal scoring system. Inter- and intra-observer agreement for both post-image processing and the scoring system used was assessed using weighted kappa statistics. Soft tissue calcification quantifications by the ordinal scale were compared to mineral volume measurements (threshold 450.7mgHA/cm3) determined by µCT. 
Finally, sample-size calculations necessary to discriminate between a 25%, 50%, 75%, and 100% difference in STiCSS score 7 days following burn/CTX-induced muscle injury were determined. RESULTS: Precision analysis demonstrated substantial to good agreement for both post-image processing (κ = 0.73 to 0.90) and scoring (κ = 0.88 to 0.93), with low inter- and intra-observer variability. Additionally, there was a strong correlation in quantification of soft tissue calcification between the ordinal system and mineral volume quantification by µCT (Spearman r = 0.83 to 0.89). The ordinal scoring system reliably quantified soft tissue calcification in a burn/CTX-induced soft tissue calcification model compared to non-injured controls (Mann-Whitney rank test: P = 0.0002, ***). Sample size calculations revealed that 6 mice per group would be required to detect a 50% difference in STiCSS score with a power of 0.8. Finally, the STiCSS was demonstrated to reliably quantify soft tissue calcification [dystrophic calcification and heterotopic ossification] by radiographic analysis, independent of the histopathological state of the mineralization. CONCLUSIONS: Radiographic analysis can discriminate muscle injury-induced soft tissue calcification from adjacent bone and follow its clinical course over time without requiring the sacrifice of the animal. While the STiCSS cannot identify the specific type of soft tissue calcification present, it is still a useful and valid method by which to quantify the degree of soft tissue calcification. This methodology allows for longitudinal measurements of soft tissue calcification in a single animal, which is relatively less expensive, less time-consuming, and exposes the animal to less radiation than in vivo µCT. Therefore, this high-throughput, longitudinal analytic method for quantifying soft tissue calcification is a viable alternative for the study of soft tissue calcification.
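Inter- and intra-observer agreement for the ordinal STiCSS scores is summarized above with weighted kappa statistics. A pure-Python sketch of Cohen's kappa with linear disagreement weights for two raters scoring the same images on an ordinal scale (the function and the example ratings are illustrative, not the study's code):

```python
def linear_weighted_kappa(rater1, rater2, n_categories):
    """Cohen's kappa with linear weights for ordinal scores 0..n_categories-1."""
    n = len(rater1)
    # Observed joint proportions over all score pairs.
    obs = [[0.0] * n_categories for _ in range(n_categories)]
    for a, b in zip(rater1, rater2):
        obs[a][b] += 1.0 / n
    # Marginal proportions for each rater.
    p1 = [sum(obs[i][j] for j in range(n_categories)) for i in range(n_categories)]
    p2 = [sum(obs[i][j] for i in range(n_categories)) for j in range(n_categories)]
    # Linear disagreement weight: 0 on the diagonal, 1 at maximal disagreement.
    def w(i, j):
        return abs(i - j) / (n_categories - 1)
    d_obs = sum(w(i, j) * obs[i][j]
                for i in range(n_categories) for j in range(n_categories))
    d_exp = sum(w(i, j) * p1[i] * p2[j]
                for i in range(n_categories) for j in range(n_categories))
    return 1.0 - d_obs / d_exp
```

Perfect agreement yields κ = 1, and agreement no better than chance yields κ ≈ 0, matching the conventional interpretation of the κ = 0.73 to 0.93 range reported above.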


Subject(s)
Muscle, Skeletal/diagnostic imaging , Muscular Diseases/diagnostic imaging , Myositis Ossificans/diagnostic imaging , Ossification, Heterotopic/diagnostic imaging , Animals , Bone and Bones , Calcinosis , Humans , Image Processing, Computer-Assisted , Mice , Muscle, Skeletal/physiopathology , Muscular Diseases/physiopathology , Myositis Ossificans/physiopathology , Ossification, Heterotopic/physiopathology
18.
BMC Endocr Disord ; 16(1): 32, 2016 Jun 02.
Article in English | MEDLINE | ID: mdl-27255309

ABSTRACT

BACKGROUND: To describe common type 2 diabetes treatment intensification regimens, patients' characteristics and changes in glycated hemoglobin (HbA1c) and body mass index (BMI). METHODS: We constructed a national retrospective cohort of veterans initially treated for diabetes with either metformin or sulfonylurea from 2001 through 2008, using Veterans Health Administration (VHA) and Medicare data. Patients were followed through September 2011 to identify common diabetes treatment intensification regimens. We evaluated changes in HbA1c and BMI post-intensification for metformin-based regimens. RESULTS: We identified 323,857 veterans who initiated diabetes treatment. Of these, 55 % initiated metformin, 43 % sulfonylurea and 2 % other regimens. Fifty percent (N = 89,057) of metformin initiators remained on metformin monotherapy over a median follow-up of 58 months (interquartile range [IQR] 35, 74). Among 80,725 patients who intensified metformin monotherapy, the four most common regimens were addition of sulfonylurea (79 %), thiazolidinedione [TZD] (6 %), or insulin (8 %), and switch to insulin monotherapy (2 %). Across these regimens, median HbA1c values declined from a range of 7.0-7.8 % (53-62 mmol/mol) at intensification to 6.6-7.0 % (49-53 mmol/mol) at 1 year, and remained stable up to 3 years afterwards. Median BMI ranged between 30.5 and 32 kg/m² at intensification and increased very modestly in those who intensified with oral regimens, but by 1-2 kg/m² over 3 years among those who intensified with insulin-based regimens. CONCLUSIONS: By 1 year post-intensification of metformin monotherapy, HbA1c declined in all four common intensification regimens, and remained close to 7 % in subsequent follow-up. BMI increased substantially for those on insulin-based regimens.
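The abstract reports HbA1c on both the NGSP percent scale and the IFCC mmol/mol scale. The two are related by the standard NGSP-IFCC master equation, which can be used to check the paired values quoted above:

```python
def ngsp_to_ifcc(hba1c_percent):
    """Convert HbA1c from NGSP % to IFCC mmol/mol via the master equation."""
    return 10.929 * (hba1c_percent - 2.15)
```

For instance, `round(ngsp_to_ifcc(7.0))` gives 53 and `round(ngsp_to_ifcc(7.8))` gives 62, matching the 7.0-7.8 % (53-62 mmol/mol) range in the abstract.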


Subject(s)
Diabetes Mellitus, Type 2/drug therapy , Hypoglycemic Agents/therapeutic use , Insulin/therapeutic use , Metformin/therapeutic use , Sulfonylurea Compounds/therapeutic use , Thiazolidinediones/therapeutic use , Body Mass Index , Cohort Studies , Glycated Hemoglobin/metabolism , Humans , Hypoglycemic Agents/administration & dosage , Insulin/administration & dosage , Metformin/administration & dosage , Practice Guidelines as Topic , Retrospective Studies , Sulfonylurea Compounds/administration & dosage , Thiazolidinediones/administration & dosage , Treatment Outcome
19.
Pain Med ; 17(9): 1658-63, 2016 09.
Article in English | MEDLINE | ID: mdl-27121891

ABSTRACT

OBJECTIVE: Patients in remote areas lack access to specialist care and pain management services. In order to provide pain management care to patients remote from our center, we created a telemedicine pain clinic (telepain) at Massachusetts General Hospital (MGH) in Boston, MA to extend services to the Island of Martha's Vineyard. DESIGN: Over 13 months, 238 telepain video clinic evaluations were conducted. A pain physician visited the island 1-2 days per month and performed 121 interventions. Given the novelty of telemedicine clinics, we surveyed patients to gauge satisfaction and identify perceived weaknesses in our approach that could be addressed. Forty-nine consecutive patients answered a 14-question, 5-point balanced Likert-scale survey with 1 (no, definitely not) being most negative and 5 (yes, definitely) being most positive. SETTING: Patients on Martha's Vineyard referred for pain management consultation services via telemedicine. PATIENTS: Forty-nine consecutive patients evaluated via telemedicine. INTERVENTIONS: Likert-scale survey administered. MEASURES: Questions measured patient impressions of video-based visits with their doctor, convenience of the visit, concerns about privacy, and whether they would recommend such a visit, among other items. RESULTS: Mean respondent scores for each question were >4.3 indicating a favorable impression of the telepain clinic experience. Lowest mean scores were found when respondents were asked to compare the care they received by telepain versus an in-person visit, or whether they were able to develop a friendly relationship with the doctor. CONCLUSIONS: The results suggest an overall positive reception of telepain by patients, yet highlight the challenge of building a patient-physician relationship remotely.
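Each survey item above is a 5-point balanced Likert scale summarized by its mean respondent score (with >4.3 read as favorable). A minimal sketch of that per-item summary; the response values below are made up for illustration:

```python
def likert_item_summary(responses):
    """Mean score and share of favorable (4 or 5) answers for one 5-point item."""
    mean = sum(responses) / len(responses)
    favorable = sum(1 for r in responses if r >= 4) / len(responses)
    return mean, favorable
```

On a hypothetical item scored [5, 5, 4, 4, 3], this returns a mean of 4.2 with 80% of answers favorable.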


Subject(s)
Pain Management/methods , Telemedicine/methods , Humans , Massachusetts , Patient Satisfaction , Surveys and Questionnaires
20.
J Gen Intern Med ; 31(6): 638-46, 2016 06.
Article in English | MEDLINE | ID: mdl-26921160

ABSTRACT

BACKGROUND: Type 2 diabetes patients often initiate treatment with a sulfonylurea and subsequently intensify their therapy with insulin. However, information on optimal treatment regimens for these patients is limited. OBJECTIVE: To compare risk of cardiovascular disease (CVD) and hypoglycemia between sulfonylurea initiators who switch to or add insulin. DESIGN: This was a retrospective cohort assembled using national Veterans Health Administration (VHA), Medicare, and National Death Index databases. PARTICIPANTS: Veterans who initiated diabetes treatment with a sulfonylurea between 2001 and 2008 and intensified their regimen with insulin were followed through 2011. MAIN MEASURES: The association between insulin versus sulfonylurea + insulin and time to CVD or hypoglycemia was evaluated using Cox proportional hazard models in a 1:1 propensity score-matched cohort. CVD included hospitalization for acute myocardial infarction or stroke, or cardiovascular mortality. Hypoglycemia included hospitalizations or emergency visits for hypoglycemia, or outpatient blood glucose measurements <60 mg/dL. Subgroups included age < 65 and ≥ 65 years and estimated glomerular filtration rate ≥ 60 and < 60 ml/min. KEY FINDINGS: There were 1646 and 3728 sulfonylurea monotherapy initiators who switched to insulin monotherapy or added insulin, respectively. The 1596 propensity score-matched patients in each group had similar baseline characteristics at insulin initiation. The rates of CVD per 1000 person-years among insulin versus sulfonylurea + insulin users were 49.3 and 56.0, respectively [hazard ratio (HR) 0.85, 95 % confidence interval (CI) 0.64, 1.12]. Rates of first and recurrent hypoglycemia events per 1000 person-years were 74.0 and 100.0 among insulin users compared to 78.9 and 116.8 among sulfonylurea plus insulin users, yielding HR (95 % CI) of 0.94 (0.76, 1.16) and 0.87 (0.69, 1.10), respectively. Subgroup analysis results were consistent with the main findings. 
CONCLUSIONS: Compared to sulfonylurea users who added insulin, those who switched to insulin alone had numerically lower CVD and hypoglycemia events, but these differences in risk were not statistically significant.
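The comparison above rests on a 1:1 propensity score-matched cohort. One common way to build such a cohort is greedy nearest-neighbour matching without replacement within a caliper; the sketch below illustrates that general technique (the paper does not specify its matching algorithm, and the ids, scores, and caliper are invented for illustration):

```python
def greedy_match_1to1(treated, control, caliper=0.05):
    """Greedy 1:1 nearest-neighbour match on propensity score, without replacement.

    treated, control: dicts mapping subject id -> propensity score.
    Returns a list of (treated_id, control_id) pairs whose score
    difference is within the caliper.
    """
    pool = dict(control)  # controls still available for matching
    pairs = []
    for tid, ps in sorted(treated.items(), key=lambda kv: kv[1]):
        if not pool:
            break
        # Closest remaining control by absolute score difference.
        cid = min(pool, key=lambda c: abs(pool[c] - ps))
        if abs(pool[cid] - ps) <= caliper:
            pairs.append((tid, cid))
            del pool[cid]  # matched without replacement
    return pairs
```

A treated subject with no control within the caliper is left unmatched, which is why the matched groups above (1596 per arm) are smaller than the 1646 switchers in the full cohort.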


Subject(s)
Diabetes Mellitus, Type 2/drug therapy , Hypoglycemic Agents/therapeutic use , Insulin/therapeutic use , Sulfonylurea Compounds/therapeutic use , Adult , Aged , Cardiovascular Diseases/chemically induced , Cardiovascular Diseases/epidemiology , Cohort Studies , Comparative Effectiveness Research/methods , Diabetes Mellitus, Type 2/epidemiology , Drug Substitution , Drug Therapy, Combination , Female , Humans , Hypoglycemia/chemically induced , Hypoglycemia/epidemiology , Hypoglycemic Agents/adverse effects , Insulin/adverse effects , Male , Middle Aged , Retrospective Studies , Sulfonylurea Compounds/adverse effects , United States/epidemiology , Veterans