Results 1 - 20 of 107
1.
Glomerular Dis ; 4(1): 105-118, 2024.
Article in English | MEDLINE | ID: mdl-39015841

ABSTRACT

Introduction: Patients with primary glomerular disease (GN) have unique management needs. We describe the design of a user-centered, patient-facing electronic health (eHealth) tool to support GN management. Methods: We surveyed patients and GN expert nephrologists on disease management tasks, educational needs, and barriers and facilitators of eHealth tool use. Results were summarized and presented to patients, nephrologists, engineers, and a behavioral and implementation science expert in stakeholder meetings to jointly design an eHealth tool. Key themes from the meetings are described using rapid qualitative analysis. Results: Sixty-six patients with minimal change disease, focal segmental glomerulosclerosis, IgA nephropathy, and membranous nephropathy responded to the survey, as well as 25 nephrologists from the NIH-funded Cure Glomerulonephropathy study network. Overall, patients performed fewer management tasks and acknowledged fewer informational needs than recommended by nephrologists. Patients were more knowledgeable about eHealth tools than nephrologists. Nine patient stakeholders reflected on the survey findings and noted a lack of awareness of key recommended management tasks and little guidance from nephrologists on using eHealth. Key themes and concepts from the stakeholder meetings about eHealth tool development included the need for customizable design, trustworthy sources, seamless integration with other apps and clinical workflow, and reliable data tracking. The final design of our eHealth tool, the UrApp System, has 5 core features: "Profile" personalizes data tracking, educational information, and support for provider discussions, and captures other user preferences; "Data Tracking" displays patient health data with the ability to communicate important trends to patients and nephrologists; "Resources" provides trusted educational information in a personalized manner; "Calendar" displays key events and generates reminders; and "Journal" facilitates information documentation using written or audio notes. Conclusion: Our theory- and evidence-based, stakeholder-engaged design process produced a design for an eHealth tool to support the unique needs of patients with GN, optimized for effectiveness and implementation.

2.
Neurourol Urodyn ; 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38860440

ABSTRACT

INTRODUCTION: Patients with neurogenic lower urinary tract dysfunction (NGLUTD) who require catheterization, either with clean intermittent catheters (CIC) or indwelling catheters, suffer from frequent urinary tract infections (UTIs). This study assessed the efficacy, patient persistence, satisfaction, and impact on quality of life (QoL) of nightly bladder instillations of 15 mg gentamicin. METHODS: This was a prospective survey of 36 patients with NGLUTD and recurrent UTIs prescribed long-term gentamicin to prevent UTIs. Eligible patients completed a questionnaire about their use of and satisfaction with gentamicin therapy, as well as survey questionnaires to address QoL. A retrospective chart review was also performed to obtain medical history, confirm drug persistence, and obtain accurate UTI data for the 12 months preceding and following the start of instillations. RESULTS: The rate of laboratory-proven symptomatic UTI requiring antibiotic treatment decreased from 3.9 to 1.1 infections per year, with no increase in antibiotic resistance and no significant side effects reported by patients. Eight patients stopped therapy before a full year for various reasons, but the remaining 72% of patients have continued the therapy, now for a mean of 4.2 years. Satisfaction among those continuing the medication was very high. CONCLUSION: Nightly bladder instillations of 15 mg gentamicin in patients with NGLUTD who use indwelling catheters or CIC are very effective and safe, with high patient satisfaction. This therapy can be maintained long-term with continued efficacy.

4.
Article in English | MEDLINE | ID: mdl-38692485

ABSTRACT

BACKGROUND: Oral immunotherapy (OIT) is a promising treatment for food allergy. Prior studies demonstrate significant differences among food-allergic individuals across race, ethnicity, and socioeconomic groups. Disparities in OIT have not been evaluated. OBJECTIVE: We assessed disparities in the use of OIT in patients with peanut allergy based on race, ethnicity, and socioeconomic status at a single academic medical center. METHODS: We identified 1028 peanut-allergic patients younger than 18 years receiving care in the University of Michigan food allergy clinics. Of these, 148 patients who underwent peanut OIT (treatment group) were compared with the 880 patients who avoided peanut (control group). Pertinent demographic and socioeconomic characteristics were compared. RESULTS: There were no differences in gender or ethnicity between the OIT and control groups. However, Black patients comprised 18% of the control group but only 4.1% of the OIT treatment group (P < .0001). The proportion of patients with private insurance was significantly higher in the treatment group compared with the control group (93.2% vs 82.2%, P = .0004). Finally, the neighborhood affluence index, a census-based measure of the relative socioeconomic prosperity of a neighborhood, was significantly higher in the OIT group than the control group (0.51 ± 0.18 vs 0.47 ± 0.19, P = .015), whereas the neighborhood disadvantage index, a census-based measure of the relative socioeconomic disadvantage of a neighborhood, was significantly lower (0.082 ± 0.062 vs 0.10 ± 0.093, P = .020). CONCLUSIONS: Significant racial and economic disparities exist at our institution between peanut-allergic individuals who receive OIT and those who do not. Efforts to understand the basis for these disparities are important to ensure that patients have equitable access to OIT.

5.
Neuroimaging Clin N Am ; 34(2): 215-224, 2024 May.
Article in English | MEDLINE | ID: mdl-38604706

ABSTRACT

This review article discusses the role of MR imaging-based biomarkers in understanding and managing hemorrhagic strokes, focusing on intracerebral hemorrhage (ICH) and aneurysmal subarachnoid hemorrhage. ICH is a severe type of stroke with high mortality and morbidity rates, primarily caused by the rupture of small blood vessels in the brain, resulting in hematoma formation. MR imaging-based biomarkers, including brain iron quantification, ultra-early erythrolysis detection, and diffusion tensor imaging, offer valuable insights for hemorrhagic stroke management. These biomarkers could improve early diagnosis, risk stratification, treatment monitoring, and patient outcomes in the future, revolutionizing our approach to hemorrhagic strokes.


Subjects
Hemorrhagic Stroke, Stroke, Humans, Diffusion Tensor Imaging, Iron, Brain/diagnostic imaging, Cerebral Hemorrhage/complications, Cerebral Hemorrhage/diagnostic imaging, Stroke/diagnostic imaging, Biomarkers, Magnetic Resonance Imaging
6.
Am J Sports Med ; 52(6): 1527-1534, 2024 May.
Article in English | MEDLINE | ID: mdl-38600806

ABSTRACT

BACKGROUND: Patellofemoral instability commonly occurs during sports activities. The return to sports (RTS) rate for pediatric patients after bilateral medial patellofemoral ligament reconstruction (MPFLR) is unknown. PURPOSE/HYPOTHESIS: The purpose of this study was to evaluate RTS outcomes for pediatric patients undergoing bilateral MPFLR. It was hypothesized that (1) fewer pediatric patients would RTS after bilateral MPFLR compared with unilateral MPFLR and that (2) for those in the bilateral cohort who were able to RTS, fewer patients would attain the same level of play as or a higher level than the preinjury level. STUDY DESIGN: Cohort study; Level of evidence, 3. METHODS: We prospectively collected RTS data on retrospectively identified matched cohorts of patients aged ≤18 years who underwent unilateral and bilateral MPFLR. We matched each participant with bilateral MPFLR to 2 participants with unilateral MPFLR by concomitant procedure, age, and sex. Postoperative complications and preoperative imaging measurements were collected from medical records. Patient-reported outcomes were obtained using a current Single Assessment Numeric Evaluation score collected at the time of primary outcome data collection. RESULTS: We matched 16 participants (mean age, 14 years) who underwent bilateral MPFLR to 32 participants (mean age, 14.3 years) in a corresponding unilateral MPFLR cohort. We found a significant decrease in RTS rates for pediatric patients after bilateral MPFLR when compared with unilateral MPFLR (69% vs 94%; P = .03). Among those who returned to sports, there was no difference in the level of play achieved. For participants who did not RTS or returned at a lower level of play after bilateral MPFLR, 57% cited fear of reinjury as the primary reason. There were no differences in postoperative complications or current Single Assessment Numeric Evaluation scores between cohorts. The bilateral cohort had a significantly higher Caton-Deschamps index compared with the unilateral cohort, although the absolute difference was small (1.3 vs 1.2; P = .005). CONCLUSION: We found that pediatric patients have a lower RTS rate after bilateral MPFLR when compared with a matched unilateral MPFLR cohort. Among those who returned to sports, there was no difference in the level of play achieved. Fear of reinjury was a commonly cited reason for not returning to sports.


Subjects
Patellofemoral Joint, Return to Sport, Humans, Adolescent, Male, Female, Child, Retrospective Studies, Patellofemoral Joint/surgery, Joint Instability/surgery, Athletic Injuries/surgery, Plastic Surgery Procedures, Patient Reported Outcome Measures, Articular Ligaments/surgery
7.
Am J Kidney Dis ; 2024 Mar 05.
Article in English | MEDLINE | ID: mdl-38452919

ABSTRACT

RATIONALE & OBJECTIVE: Glomerular disorders have a highly variable clinical course, and biomarkers that reflect the molecular mechanisms underlying their progression are needed. Based on our previous work identifying plasminogen as a direct cause of podocyte injury, we designed this study to test the association between urine plasmin(ogen) (ie, plasmin and its precursor plasminogen) and end-stage kidney disease (ESKD). STUDY DESIGN: Multicenter cohort study. SETTING & PARTICIPANTS: 1,010 patients enrolled in the CureGN Cohort with biopsy-proven glomerular disease (focal segmental glomerulosclerosis, membranous nephropathy, and immunoglobulin A nephropathy). PREDICTORS: The main predictor was urine plasmin(ogen) at baseline. Levels were measured by an electrochemiluminescent immunoassay developed de novo. Traditional clinical and analytical characteristics were used for adjustment. The ratio of urine plasmin(ogen)/expected plasmin(ogen) was evaluated as a predictor in a separate model. OUTCOME: Progression to ESKD. ANALYTICAL APPROACH: Cox regression was used to examine the association between urinary plasmin(ogen) and time to ESKD. Urinary markers were log2 transformed to approximate normal distribution and normalized to urinary creatinine (Log2uPlasminogen/cr, Log2 urinary protein/cr [UPCR]). Expected plasmin(ogen) was calculated by multiple linear regression. RESULTS: Adjusted Log2uPlasminogen/cr was significantly associated with ESKD (HR per doubling Log2 uPlasminogen/cr 1.31 [95% CI, 1.22-1.40], P<0.001). Comparison of the predictive performance of the models including Log2 uPlasminogen/cr, Log2 UPCR, or both markers showed the plasmin(ogen) model superiority. The ratio of measured/expected urine plasmin(ogen) was independently associated with ESKD: HR, 0.41 (95% CI, 0.22-0.77) if ratio<0.8 and HR 2.42 (95% CI, 1.54-3.78) if ratio>1.1 (compared with ratio between 0.8 and 1.1). LIMITATIONS: Single plasmin(ogen) determination does not allow for the study of changes over time. The use of a cohort of mostly white patients and the restriction to patients with 3 glomerular disorders limits the external validity of our analysis. CONCLUSIONS: Urinary plasmin(ogen) and the ratio of measured/expected plasmin(ogen) are independently associated with ESKD in a cohort of patients with glomerular disease. Taken together with our previous experimental findings, urinary plasmin(ogen) could be a useful biomarker in prognostic decision making and a target for the development of novel therapies in patients with proteinuria and glomerular disease. PLAIN-LANGUAGE SUMMARY: Glomerular diseases are an important cause of morbidity and mortality in patients of all ages. Knowing the individual risk of progression to dialysis or transplantation would help to plan the follow-up and treatment of these patients. Our work studies the usefulness of urinary plasminogen as a marker of progression in this context, since previous studies indicate that plasminogen may be involved in the mechanisms responsible for the progression of these disorders. Our work in a sample of 1,010 patients with glomerular disease demonstrates that urinary plasminogen (as well as the ratio of measured to expected plasminogen) is associated with the risk of progression to end-stage kidney disease. Urine plasminogen exhibited good performance and, if further validated, could enable risk stratification for timely interventions in patients with proteinuria and glomerular disease.

8.
J Cyst Fibros ; 2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38490920

ABSTRACT

BACKGROUND: Iron deficiency (ID) is a common extrapulmonary manifestation in cystic fibrosis (CF). CF transmembrane conductance regulator (CFTR) modulator therapies, particularly highly effective modulator therapy (HEMT), have drastically improved health status in a majority of people with CF. We hypothesized that CFTR modulator use is associated with improved markers of ID. METHODS: In a multicenter retrospective cohort study across 4 United States CF centers from 2012 to 2022, the association between modulator therapies and ID laboratory outcomes was estimated using multivariable linear mixed effects models, overall and by key subgroups. Summary statistics describe the prevalence and trends of ID, defined a priori as transferrin saturation (TSAT) <20% or serum iron <60 µg/dL (<10.7 µmol/L). RESULTS: A total of 568 patients with 2571 person-years of follow-up were included in the analyses. Compared to off modulator therapy, HEMT was associated with +8.4% TSAT (95% confidence interval [CI], +6.3 to +10.6%; p < 0.0001) and +34.4 µg/dL serum iron (95% CI, +26.7 to +42.1 µg/dL; p < 0.0001) overall; +5.4% TSAT (95% CI, +2.8 to +8.0%; p = 0.0001) and +22.1 µg/dL serum iron (95% CI, +13.5 to +30.8 µg/dL; p < 0.0001) in females; and +11.4% TSAT (95% CI, +7.9 to +14.8%; p < 0.0001) and +46.0 µg/dL serum iron (95% CI, +33.3 to +58.8 µg/dL; p < 0.0001) in males. Ferritin was not different in those taking modulator therapy relative to those off modulator therapy. Hemoglobin was overall higher with use of modulator therapy. The prevalence of ID was high throughout the study period (32.8% in those treated with HEMT). CONCLUSIONS: ID remains a prevalent comorbidity in CF, despite the availability of HEMT. Modulator use, particularly of HEMT, is associated with improved markers of ID (TSAT, serum iron) and anemia (hemoglobin).
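A minimal sketch, under assumed column names, of a linear mixed-effects model of the type described above: repeated TSAT measurements nested within people with CF, with modulator category as the exposure and a random intercept per patient. The covariate set is illustrative, not the study's.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per iron-study draw, with indicator
# columns for the modulator category in effect at that draw (off-modulator = reference).
labs = pd.read_csv("cf_iron_labs.csv")

model = smf.mixedlm(
    "tsat ~ hemt + other_modulator + age + sex + calendar_year",
    data=labs,
    groups=labs["patient_id"],  # random intercept for each person with CF
)
result = model.fit()
print(result.summary())  # hemt coefficient corresponds to the TSAT difference vs. off-modulator
```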

9.
Alcohol Clin Exp Res (Hoboken) ; 48(4): 680-691, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38546532

ABSTRACT

BACKGROUND: While sleep and circadian rhythms are recognized contributors to the risk for alcohol use and related problems, few studies have examined whether objective sleep and circadian measures can predict future alcohol use in humans, and no such studies have been conducted in adults. This study examined whether any baseline sleep and/or circadian characteristics of otherwise healthy adults predicted their alcohol use over the subsequent 12 months. METHODS: Participants (21-42 years) included 28 light and 50 heavy drinkers. At baseline, a comprehensive range of self-reported and objective sleep/circadian measures was assessed via questionnaires, wrist actigraphy, and measurement of dim light melatonin onset and circadian photoreceptor responsivity. Following this, the number of alcoholic drinks per week and binge drinking episodes per month were assessed quarterly over the subsequent 12 months. Anticipated effects of alcohol (stimulation, sedation, and rewarding aspects) were also assessed quarterly over the 12 months. Analyses included generalized linear mixed-effects models and causal mediation analysis. RESULTS: Across the range of measures, only self-reported insomnia symptoms and a longer total sleep time at baseline predicted more drinks per week and binges per month (ps < 0.02). There was a trend for the anticipated alcohol effect of wanting more alcohol at the 6-month timepoint to mediate the relationship between insomnia symptoms at baseline and drinks per week at 12 months (p = 0.069). CONCLUSIONS: These results suggest that in otherwise healthy adults, insomnia symptoms, even if subclinical, are a significant predictor of future drinking and appear to outweigh the influence of circadian factors on future drinking. Insomnia symptoms may be a modifiable target for reducing the risk of alcohol misuse.
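A simplified stand-in (not the study's exact specification) for the generalized linear mixed-effects models described above: quarterly drinks per week modeled from baseline insomnia symptoms and total sleep time, with a random intercept per participant. File and column names are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical quarterly follow-up data: one row per participant per quarter.
quarterly = pd.read_csv("alcohol_followup.csv")

model = smf.mixedlm(
    "drinks_per_week ~ baseline_insomnia + baseline_total_sleep_time + quarter",
    data=quarterly,
    groups=quarterly["participant_id"],  # repeated measures within participant
)
print(model.fit().summary())
```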

10.
Abdom Radiol (NY) ; 49(6): 2145-2154, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38400982

ABSTRACT

PURPOSE: Radiologists with diverse training, specialization, and habits interpret imaging in the Emergency Department. It is necessary to understand whether this variation predicts differences in patient outcomes. The purpose of this study was to determine whether attending radiologist variation predicts major clinical outcomes in adult Emergency Department patients imaged with ultrasound for right upper quadrant (RUQ) pain. METHODS: Consecutive ED patients imaged with ultrasound for RUQ pain from 10/8/2016 to 8/10/2022 were included (N = 7097). The primary outcome was prediction of hospital admission by signing attending radiologist. Secondary outcomes included: ED and hospital length of stay (LOS), 30-day mortality, 30-day re-presentation rate, subspecialty consultation, advanced imaging follow-up (HIDA, MRI, CT), and intervention (ERCP, drainage, or surgery). Sample size was determined a priori (detectable effect size: w = 0.06). Data were adjusted for demographic data, Elixhauser comorbidities, number of ED visits in the prior year, clinical data, and system factors (38 covariates). P-values were corrected for multiple comparisons (false discovery rate-adjusted p-values). RESULTS: The included ultrasounds were read by 35 radiologists (median exams/radiologist: 145 [74.5-241.5]). Signing radiologist did not predict hospitalization (p = 0.85), abdominopelvic surgery or intervention within 30 days, re-presentation to the Emergency Department within 30 days, or subspecialty consultation. Radiologist did predict a difference in Emergency Department length of stay (p < 0.001), although this difference was small and imprecise. HIDA was mentioned variably by radiologists (range 0-19%, p < 0.001), and mention of HIDA in the ultrasound report increased the odds of a HIDA scan being performed in the next 72 hours 10-fold (odds ratio 10.4 [8.0-13.4], p < 0.001). CONCLUSION: Radiologist variability did not predict meaningful outcome differences for patients with right upper quadrant pain undergoing ultrasound in the Emergency Department, but when radiologists mention HIDA in their reports, it predicts a 10-fold increase in the odds that a HIDA scan is performed. Radiologists are relied on for interpretation that shapes subsequent patient care, and it is important to consider how radiologist variability can influence both outcomes and resource utilization.
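A toy illustration of the false discovery rate correction mentioned in the methods, using the Benjamini-Hochberg procedure from statsmodels; the p-values here are made up.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

# Raw p-values for several radiologist-level outcome comparisons (invented values).
raw_p = np.array([0.004, 0.03, 0.21, 0.85, 0.047, 0.62])

rejected, fdr_adjusted_p, _, _ = multipletests(raw_p, alpha=0.05, method="fdr_bh")
print(np.round(fdr_adjusted_p, 3))  # FDR-adjusted p-values
print(rejected)                     # which comparisons remain significant
```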


Subjects
Abdominal Pain, Hospital Emergency Service, Radiologists, Ultrasonography, Humans, Female, Male, Middle Aged, Ultrasonography/methods, Radiologists/statistics & numerical data, Abdominal Pain/diagnostic imaging, Retrospective Studies, Adult, Length of Stay/statistics & numerical data, Aged
11.
Clin Microbiol Infect ; 30(4): 499-506, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38163481

ABSTRACT

OBJECTIVES: Diagnostic error in the use of respiratory cultures for ventilator-associated pneumonia (VAP) fuels misdiagnosis and antibiotic overuse within intensive care units. In this prospective quasi-experimental study (NCT05176353), we aimed to evaluate the safety, feasibility, and efficacy of a novel VAP-specific bundled diagnostic stewardship intervention (VAP-DSI) to mitigate VAP over-diagnosis and overtreatment. METHODS: We developed and implemented a VAP-DSI using an interruptive clinical decision support tool and modifications to clinical laboratory workflows. Interventions included gatekeeping access to respiratory culture ordering, preferential use of non-bronchoscopic bronchoalveolar lavage for culture collection, and suppression of culture results for samples with minimal alveolar neutrophilia. Rates of adverse safety outcomes, positive respiratory cultures, and antimicrobial utilization were compared between mechanically ventilated patients (MVPs) in the 1-year post-intervention study cohort (2022-2023) and 5-year pre-intervention MVP controls (2017-2022). RESULTS: VAP-DSI implementation was not associated with increases in adverse safety outcomes but was associated with a 20% reduction in the rate of positive respiratory cultures per 1000 MVP days (pre-intervention rate 127 [95% CI: 122-131], post-intervention rate 102 [95% CI: 92-112], p < 0.01). Significant reductions in broad-spectrum antibiotic days of therapy per 1000 MVP days were noted after VAP-DSI implementation (pre-intervention rate 1199 [95% CI: 1177-1205], post-intervention rate 1149 [95% CI: 1116-1184], p = 0.03). DISCUSSION: Implementation of a VAP-DSI was safe and associated with significant reductions in rates of positive respiratory cultures and broad-spectrum antimicrobial use. This trial of a VAP-DSI represents a novel avenue for intensive care unit antimicrobial stewardship. Multicentre trials of VAP-DSIs are warranted.


Subjects
Ventilator-Associated Pneumonia, Humans, Anti-Bacterial Agents/therapeutic use, Intensive Care Units, Ventilator-Associated Pneumonia/diagnosis, Ventilator-Associated Pneumonia/drug therapy, Ventilator-Associated Pneumonia/microbiology, Prospective Studies, Feasibility Studies
12.
Allergy Asthma Proc ; 45(1): 24-32, 2024 Jan 01.
Article in English | MEDLINE | ID: mdl-38151730

ABSTRACT

Background: Mask use is recommended to reduce the transmission of severe acute respiratory syndrome coronavirus 2. The safety of mask use in adults and children with asthma is unknown. Objective: The objective of this study was to evaluate the effect of mask use on peripheral oxygen saturation (SpO2) in those with and those without asthma. Methods: A two-stage cross-sectional study was performed. In the first stage, SpO2 was measured in adults and children with and without asthma while at rest during mask use. In the second stage, children aged 6-17 years performed a 6-minute walk test while wearing masks. SpO2 was measured before the exercise and at 3 and 6 minutes into exercise. Subjective dyspnea was evaluated by using the Pediatric Dyspnea Scale (PDS). Results: In the first stage, SpO2 levels in 393 subjects were analyzed. In the second stage, 50 pediatric subjects were included, 25 with and 25 without asthma. There was no difference in SpO2 levels between those with and those without asthma among adults and children wearing masks while at rest, with a median SpO2 of 98% in both groups. There was no difference in oxygen saturation or reported level of dyspnea between the children with asthma and children without asthma performing the 6-minute walk test while wearing masks. Median SpO2 levels were at or near 99% in the asthma and non-asthma groups at all time points. Median PDS scores were similar between the asthma and non-asthma groups. Conclusion: Mask use did not affect SpO2 in adults and children at rest or in children performing low-to-moderate intensity exercise. These findings were consistent in those with and without asthma.


Subjects
Asthma, Oxygen Saturation, Adult, Humans, Child, Cross-Sectional Studies, Dyspnea/etiology, SARS-CoV-2
13.
JMIR Form Res ; 7: e43099, 2023 Sep 14.
Article in English | MEDLINE | ID: mdl-37707948

ABSTRACT

BACKGROUND: Caregivers of people with chronic illnesses often face negative stress-related health outcomes and are unavailable for traditional face-to-face interventions due to the intensity and constraints of their caregiver role. Just-in-time adaptive interventions (JITAIs) have emerged as a design framework that is particularly suited for interventional mobile health studies that deliver in-the-moment prompts that aim to promote healthy behavioral and psychological changes while minimizing user burden and expense. While JITAIs have the potential to improve caregivers' health-related quality of life (HRQOL), their effectiveness for caregivers remains poorly understood. OBJECTIVE: The primary objective of this study is to evaluate the dose-response relationship of a fully automated JITAI-based self-management intervention involving personalized mobile app notifications targeted at decreasing the level of caregiver strain, anxiety, and depression. The secondary objective is to investigate whether the effectiveness of this mobile health intervention was moderated by the caregiver group. We also explored whether the effectiveness of this intervention was moderated by (1) previous HRQOL measures, (2) the number of weeks in the study, (3) step count, and (4) minutes of sleep. METHODS: We examined 36 caregivers from 3 disease groups (10 from spinal cord injury, 11 from Huntington disease, and 25 from allogeneic hematopoietic cell transplantation) in the intervention arm of a larger randomized controlled trial (subjects in the other arm received no prompts from the mobile app) designed to examine the acceptability and feasibility of this intensive type of trial design. A series of multivariate linear models implementing a weighted and centered least squares estimator were used to assess the JITAI efficacy and effect. RESULTS: We found preliminary support for a positive dose-response relationship between the number of administered JITAI messages and JITAI efficacy in improving caregiver strain, anxiety, and depression; while most of these associations did not meet conventional levels of significance, there was a significant association between high-frequency JITAI and caregiver strain. Specifically, administering 5-6 messages per week as opposed to no messages resulted in a significant decrease in the HRQOL score of caregiver strain with an estimate of -6.31 (95% CI -11.76 to -0.12; P=.046). In addition, we found that the caregiver groups and the participants' levels of depression in the previous week moderated JITAI efficacy. CONCLUSIONS: This study provides preliminary evidence to support the effectiveness of the self-management JITAI and offers practical guidance for designing future personalized JITAI strategies for diverse caregiver groups. TRIAL REGISTRATION: ClinicalTrials.gov NCT04556591; https://clinicaltrials.gov/ct2/show/NCT04556591.
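A heavily simplified illustration, not the study's estimator, of a weighted and centered least-squares regression for a proximal intervention effect: the randomized prompt indicator is centered at its known randomization probability and the regression is weighted. All file and column names are assumptions.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical person-week observations from the JITAI arm.
obs = pd.read_csv("jitai_person_weeks.csv")

# Center the randomized prompt indicator at its randomization probability.
obs["prompt_centered"] = obs["prompt_sent"] - obs["randomization_prob"]

X = sm.add_constant(obs[["prompt_centered", "baseline_strain", "study_week"]])
fit = sm.WLS(obs["caregiver_strain"], X, weights=obs["analysis_weight"]).fit()
print(fit.summary())  # coefficient on prompt_centered estimates the proximal effect
```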

14.
J Oral Maxillofac Surg ; 81(10): 1301-1310, 2023 10.
Article in English | MEDLINE | ID: mdl-37507104

ABSTRACT

PURPOSE: Penicillins are potent antibiotics for managing odontogenic infections, but 10% of the population is labeled as allergic to these drugs. This has limited their use and resulted in increased utilization of health care resources as well as complications associated with alternative antibiotics. The purpose of the study was to measure the association between a penicillin allergy label and treatment outcomes in a sample of patients treated for complicated odontogenic infections. Additionally, we sought to investigate antibiotic resistance patterns in these patients. MATERIALS AND METHODS: A retrospective cohort study was performed at the Michigan Medicine health care system to include patients who were treated for complicated odontogenic infections by oral and maxillofacial surgery between 2016 and 2020. Complicated odontogenic infection was defined as any odontogenic infection requiring admission and surgical management in the operating room. The primary predictor variable was the penicillin allergy label, which was determined by chart review and not confirmed with formal testing. Outcomes were measures of disease severity. The primary outcome variable was hospital length of stay. Secondary outcome variables were ICU admission (yes/no), repeat computed tomography scan(s), repeat surgery (yes/no), and re-admission (yes/no). Covariates included age, sex (male/female), tobacco use status, diabetes, immunocompromised state, number of spaces involved, white blood cell count upon admission, and insurance status. For our secondary aim, the primary predictor variable was again the penicillin allergy label and the outcome variable was antibiotic resistance as determined by wound culture results following surgical intervention. Negative binomial regression and logistic regression analyses were performed. P < .05 was considered significant. RESULTS: A total of 150 patients met the inclusion criteria and, of those, 17.3% were labeled as penicillin allergic. Patients labeled as penicillin allergic did not differ significantly from patients without a penicillin allergy label in terms of treatment outcomes. Age, diabetes, and immunosuppression were associated with an increased length of stay. Patients labeled as penicillin allergic were at significantly higher risk for antibiotic resistance (relative risk = 2.34; 95% confidence interval, 1.66 to 3.32; P < .001), specifically clindamycin resistance (relative risk = 3.17; 95% confidence interval, 1.93 to 5.18; P < .001). CONCLUSIONS: Penicillin allergy was significantly associated with clindamycin resistance. There were similar outcomes among patients with and without a penicillin allergy label despite antibiotic differences. Delabeling efforts for patients with a reported penicillin allergy must be considered, and local nomograms for antibiotic selection should be used by providers when seeking alternative antibiotics.
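A minimal sketch, with hypothetical column names, of the two analyses named above: negative binomial regression for hospital length of stay and logistic regression for a binary secondary outcome, each with the penicillin allergy label as the predictor and a few illustrative covariates.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical one-row-per-admission dataset.
infections = pd.read_csv("odontogenic_admissions.csv")

# Length of stay (days) modeled with negative binomial regression.
nb = smf.glm(
    "length_of_stay ~ pcn_allergy_label + age + diabetes + immunocompromised",
    data=infections,
    family=sm.families.NegativeBinomial(),
).fit()
print(nb.summary())

# A binary outcome (e.g., ICU admission) modeled with logistic regression.
logit = smf.logit(
    "icu_admission ~ pcn_allergy_label + age + diabetes + immunocompromised",
    data=infections,
).fit()
print(logit.summary())
```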


Subjects
Diabetes Mellitus, Drug Hypersensitivity, Hypersensitivity, Humans, Male, Female, Clindamycin, Retrospective Studies, Anti-Bacterial Agents/therapeutic use, Penicillins/adverse effects, Drug Hypersensitivity/drug therapy, Drug Hypersensitivity/epidemiology, Hypersensitivity/drug therapy
15.
J Clin Invest ; 133(16)2023 08 15.
Article in English | MEDLINE | ID: mdl-37402149

ABSTRACT

BACKGROUND: Food allergy (FA) is a growing health problem requiring physiologic confirmation via the oral food challenge (OFC). Many OFCs result in clinical anaphylaxis, causing discomfort and risk while limiting OFC utility. Transepidermal water loss (TEWL) measurement provides a potential solution to detect food anaphylaxis in real time prior to clinical symptoms. We evaluated whether TEWL changes during an OFC could predict anaphylaxis onset. METHODS: Physicians and nurses blinded to the TEWL results conducted and adjudicated the results of all 209 OFCs in this study. A study coordinator measured TEWL throughout the OFC and had no input on the OFC conduct. TEWL was measured 2 ways in 2 separate groups. First, TEWL was measured using static, discrete measurements. Second, TEWL was measured using continuous monitoring. Participants who consented provided blood samples before and after the OFCs for biomarker analyses. RESULTS: TEWL rose significantly (2.93 g/m2/h) during reactions and did not rise during nonreacting OFCs (-1.00 g/m2/h). Systemic increases in tryptase and IL-3 were also detected during reactions, providing supporting biochemical evidence of anaphylaxis. The TEWL rise occurred 48 minutes earlier than clinically evident anaphylaxis. Continuous monitoring detected a significant rise in TEWL that presaged positive OFCs, but no rise was seen in the OFCs that resulted in no reaction, providing high predictive specificity (96%) for anaphylaxis against nonreactions 38 minutes prior to anaphylaxis onset. CONCLUSIONS: During OFCs, a TEWL rise anticipated a positive clinical challenge. TEWL presents a monitoring modality that may predict food anaphylaxis and facilitate improvements in OFC safety and tolerability.


Subjects
Anaphylaxis, Food Hypersensitivity, Humans, Anaphylaxis/diagnosis, Anaphylaxis/etiology, Food Hypersensitivity/diagnosis, Food, Allergens
16.
J Patient Rep Outcomes ; 7(1): 57, 2023 06 26.
Article in English | MEDLINE | ID: mdl-37358716

ABSTRACT

PURPOSE: Establishing the psychometric reliability and validity of new measures is an ongoing process. More work is needed to confirm the clinical utility of the TBI-CareQOL measurement development system in both an independent cohort of caregivers of persons with traumatic brain injury (TBI) and in additional caregiver groups. METHODS: An independent cohort of caregivers of people with TBI (n = 139), as well as three new diverse caregiver cohorts (n = 19 caregivers of persons with spinal cord injury, n = 21 caregivers of persons with Huntington disease, and n = 30 caregivers of persons with cancer), completed 11 TBI-CareQOL measures (caregiver strain; caregiver-specific anxiety; anxiety; depression; anger; self-efficacy; positive affect and well-being; perceived stress; satisfaction with social roles and activities; fatigue; sleep-related impairment), as well as two additional measures to examine convergent and discriminant validity (PROMIS Global Health; the Caregiver Appraisal Scale). RESULTS: Findings support the internal consistency reliability (all alphas > 0.70, with the vast majority being > 0.80 across the different cohorts) of the TBI-CareQOL measures. All measures were free of ceiling effects, and the vast majority were also free of floor effects. Convergent validity was supported by moderate to high correlations between the TBI-CareQOL and related measures, while discriminant validity was supported by low correlations between the TBI-CareQOL measures and unrelated constructs. CONCLUSION: Findings indicate that the TBI-CareQOL measures have clinical utility in caregivers of people with TBI, as well as in other caregiver groups. As such, these measures should be considered as important outcome measures for clinical trials aiming to improve caregiver outcomes.
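For reference, a minimal sketch of Cronbach's alpha, the internal-consistency statistic cited above, computed on a made-up respondents-by-items matrix.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Toy data: 100 respondents answering an 8-item scale on a 1-5 response format.
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(100, 8)))
print(round(cronbach_alpha(items), 2))
```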


Subjects
Traumatic Brain Injuries, Military Personnel, Veterans, Humans, Caregivers, Reproducibility of Results, Quality of Life, Surveys and Questionnaires, Cross-Sectional Studies, Traumatic Brain Injuries/diagnosis
17.
JAMA Surg ; 158(7): e231112, 2023 07 01.
Article in English | MEDLINE | ID: mdl-37133836

ABSTRACT

Importance: Intravenous (IV) contrast medium is sometimes withheld due to risk of complication or lack of availability in patients undergoing computed tomography (CT) for abdominal pain. The risk from withholding contrast medium is understudied. Objective: To determine the diagnostic accuracy of unenhanced abdominopelvic CT using contemporaneous contrast-enhanced CT as the reference standard in emergency department (ED) patients with acute abdominal pain. Design, Setting, and Participants: This was an institutional review board-approved, multicenter retrospective diagnostic accuracy study of 201 consecutive adult ED patients who underwent dual-energy contrast-enhanced CT for the evaluation of acute abdominal pain from April 1, 2017, through April 22, 2017. Three blinded radiologists interpreted these scans to establish the reference standard by majority rule. IV and oral contrast media were then digitally subtracted using dual-energy techniques. Six different blinded radiologists from 3 institutions (3 specialist faculty and 3 residents) interpreted the resulting unenhanced CT examinations. Participants included a consecutive sample of ED patients with abdominal pain who underwent dual-energy CT. Exposure: Contrast-enhanced and virtual unenhanced CT derived from dual-energy CT. Main outcome: Diagnostic accuracy of unenhanced CT for primary (ie, principal cause[s] of pain) and actionable secondary (ie, incidental findings requiring management) diagnoses. The Gwet interrater agreement coefficient was calculated. Results: There were 201 included patients (female, 108; male, 93) with a mean age of 50.1 (SD, 20.9) years and mean BMI of 25.5 (SD, 5.4). Overall accuracy of unenhanced CT was 70% (faculty, 68% to 74%; residents, 69% to 70%). Faculty had higher accuracy than residents for primary diagnoses (82% vs 76%; adjusted odds ratio [OR], 1.83; 95% CI, 1.26-2.67; P = .002) but lower accuracy for actionable secondary diagnoses (87% vs 90%; OR, 0.57; 95% CI, 0.35-0.93; P < .001). This was because faculty made fewer false-negative primary diagnoses (38% vs 62%; OR, 0.23; 95% CI, 0.13-0.41; P < .001) but more false-positive actionable secondary diagnoses (63% vs 37%; OR, 2.11, 95% CI, 1.26-3.54; P = .01). False-negative (19%) and false-positive (14%) results were common. Interrater agreement for overall accuracy was moderate (Gwet agreement coefficient, 0.58). Conclusion: Unenhanced CT was approximately 30% less accurate than contrast-enhanced CT for evaluating abdominal pain in the ED. This should be balanced with the risk of administering contrast material to patients with risk factors for kidney injury or hypersensitivity reaction.
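A minimal sketch, on an assumed one-row-per-reader-per-case file, of how per-reader accuracy against the contrast-enhanced reference standard could be tabulated; it does not reproduce the Gwet agreement analysis.

```python
import pandas as pd

# Hypothetical reads: each blinded unenhanced-CT interpretation paired with the
# reference-standard primary diagnosis from the contrast-enhanced consensus read.
reads = pd.read_csv("unenhanced_ct_reads.csv")

reads["primary_correct"] = (
    reads["unenhanced_primary_dx"] == reads["reference_primary_dx"]
)
accuracy = (
    reads.groupby(["reader_role", "reader_id"])["primary_correct"]
    .mean()
    .rename("accuracy")
)
print(accuracy)  # compare faculty vs. resident accuracy for primary diagnoses
```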


Subjects
Acute Abdomen, X-Ray Computed Tomography, Adult, Humans, Male, Female, Middle Aged, Retrospective Studies, X-Ray Computed Tomography/methods, Abdominal Pain/diagnostic imaging, Abdominal Pain/etiology, Hospital Emergency Service
18.
J Palliat Med ; 26(9): 1188-1197, 2023 09.
Article in English | MEDLINE | ID: mdl-37022771

ABSTRACT

Aim: Our aim was to examine how code status orders for patients hospitalized with COVID-19 changed over time as the pandemic progressed and outcomes improved. Methods: This retrospective cohort study was performed at a single academic center in the United States. Adults admitted between March 1, 2020, and December 31, 2021, who tested positive for COVID-19, were included. The study period included four institutional hospitalization surges. Demographic and outcome data were collected and code status orders during admission were trended. Data were analyzed with multivariable analysis to identify predictors of code status. Results: A total of 3615 patients were included, with full code (62.7%) being the most common final code status order, followed by do-not-attempt-resuscitation (DNAR) (18.1%). Time of admission (per six-month interval) was an independent predictor of a final full code status compared with DNAR/partial code status (p = 0.04). Limited resuscitation preference (DNAR or partial) decreased from over 20% in the first two surges to 10.8% and 15.6% of patients in the last two surges. Other independent predictors of final code status included body mass index (p < 0.05), Black versus White race (0.64, p = 0.01), time spent in the intensive care unit (4.28, p < 0.001), age (2.11, p < 0.001), and Charlson comorbidity index (1.05, p < 0.001). Conclusions: Over time, adults admitted to the hospital with COVID-19 were less likely to have a DNAR or partial code status order, with a persistent decrease occurring after March 2021. A trend toward decreased code status documentation as the pandemic progressed was observed.


Subjects
COVID-19, Humans, Adult, United States, Retrospective Studies, Resuscitation Orders, Pandemics, Hospitalization
19.
J Gen Intern Med ; 38(9): 2164-2178, 2023 07.
Article in English | MEDLINE | ID: mdl-36964423

ABSTRACT

BACKGROUND: Housing security is a key social determinant of behavior related to health outcomes. OBJECTIVE: The purpose of this study was to develop a new patient-reported outcome measure that evaluates aspects of housing security for use in the Re-Engineered Discharge for Diabetes-Computer Adaptive Test (REDD-CAT) measurement system. DESIGN: Qualitative data, literature reviews, and cross-sectional survey study. PARTICIPANTS: A total of 225 people with type 2 diabetes mellitus (T2DM) provided responses to the items in this item pool. MAIN MEASURES: A new item pool that evaluates important aspects of housing security was developed using stakeholder data from focus groups of persons with T2DM. KEY RESULTS: For the Housing Affordability scale, factor analysis (both exploratory and confirmatory) supported the retention of six items. Of these items, none exhibited sparse cells or problems with monotonicity; no items were deleted due to low item-adjusted total score correlations. For the six affordability items, a constrained graded response model indicated no items exhibited misfit; thus, all were retained. No items indicated differential item functioning (examined for age, sex, education, race, and socioeconomic status). Thus, the final Affordability item bank comprised six items. A Housing Safety index (three items) and a Home Features index (eight items) were also developed. Reliability (i.e., internal consistency and test-retest reliability) and validity (i.e., convergent, discriminant, and known-groups) of the new measures were also supported. CONCLUSIONS: The REDD-CAT Housing Security Measure provides a reliable and valid assessment of housing affordability, safety, and home features in people with type 2 diabetes mellitus. Future work is needed to establish the clinical utility of this measure in other clinical populations.
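A minimal sketch of a one-factor exploratory factor analysis like the one described for the Housing Affordability items, using the factor_analyzer package; the item file and column layout are assumptions.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Hypothetical responses: one row per respondent, one column per candidate
# Housing Affordability item (afford_1 ... afford_6).
responses = pd.read_csv("housing_affordability_items.csv")

fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(responses)
print(fa.loadings_)  # inspect whether all six items load on a single factor
```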


Subjects
Type 2 Diabetes Mellitus, Housing, Humans, Computers, Conservation of Natural Resources, Cross-Sectional Studies, Psychometrics, Reproducibility of Results, Security Measures, Surveys and Questionnaires, Male, Female
20.
Palliat Med Rep ; 4(1): 79-88, 2023.
Article in English | MEDLINE | ID: mdl-36969738

ABSTRACT

Objective: With Huntington disease (HD), a fatal neurodegenerative disease where the prevalence of suicidal thoughts and behavior (STB) remains elevated as compared to other neurological disorders, it is unknown whether STB and health-related quality of life (HRQoL) affect plans for the end of life or more broadly, advance care planning (ACP). Conversely, it is unknown whether ACP would provoke future changes to STB and HRQoL. Therefore, we sought to evaluate whether STB and HRQoL patient-reported outcomes (PROs) contribute to ACP and whether ACP relates to changes in STB and HRQoL at 24 months. Methods: HD-validated clinician- and patient-assessments (i.e., HRQoL PROs) were obtained at baseline enrollment, 12 and 24 months through our multi-center study (HDQLIFE™) throughout the United States among people with premanifest, early-stage, and late-stage manifest HD. We used linear mixed-effects models to determine the relationships between STB and HRQoL at baseline and HDQLIFE End of Life Planning at follow-up. Separate linear mixed-effects models were used to assess the relationship between HDQLIFE End of Life Planning at baseline, and HRQoL and STB at 12 and 24 months. False discovery rate adjustments were used to account for multiple comparisons. Results: At baseline enrollment, STB and HRQoL were not related to HDQLIFE End of Life Planning at 12 or 24 months. Similarly, at baseline, HDQLIFE End of Life Planning demonstrated no association with STB or HRQoL at 12 or 24 months. Interpretation: STB and HRQoL PROs do not significantly affect patient engagement with ACP. Most importantly, engaging in ACP does not cause untoward effects on HRQoL or STB for this rare neurodegenerative disease where the lifetime prevalence of STB approaches 30%.
