Results 1 - 8 of 8
1.
Article in English | MEDLINE | ID: mdl-38523131

ABSTRACT

INTRODUCTION: Retained hemothorax (HTX) is a common complication following thoracic trauma. Small studies demonstrate the benefit of thoracic cavity irrigation at the time of tube thoracostomy for the prevention of retained HTX. We sought to assess the effectiveness of chest irrigation in preventing retained HTX leading to a secondary surgical intervention. METHODS: We performed a single-center retrospective study from 2017-2021 at a Level I trauma center comparing bedside thoracic cavity irrigation via tube thoracostomy (TT) versus no irrigation. Using the trauma registry, patients with traumatic HTX were identified. Exclusion criteria were TT placement at an outside hospital, no TT within 24 hours of admission, thoracotomy or video-assisted thoracoscopic surgery (VATS) prior to or within 6 hours after TT placement, VATS as part of rib fixation or diaphragmatic repair, and death within 96 hours of admission. Bivariate and multivariable analyses were conducted. RESULTS: A total of 370 patients met the inclusion criteria, of whom 225 (61%) were irrigated. Patients who were irrigated were more likely to have suffered a penetrating injury (41% vs 30%, p = 0.03) and less likely to have a flail chest (10% vs 21%, p = 0.01) (Table 1). On bivariate analysis, irrigation was associated with lower rates of VATS (6% vs 19%, p < 0.001) and retained HTX (10% vs 21%, p < 0.001) (Figure 1). The irrigated cohort had a shorter TT duration (4 vs 6 days, p < 0.001) and hospital length of stay (LOS) (7 vs 9 days, p = 0.04). On multivariable analysis, thoracic cavity irrigation was associated with lower odds of VATS (aOR: 0.37, 95%CI: 0.30-0.54) and retained HTX (aOR: 0.42, 95%CI: 0.25-0.74), and with a shorter TT duration (β: -1.58, 95%CI: -2.52, -0.75). CONCLUSION: Our 5-year experience with thoracic irrigation confirms findings from smaller studies that irrigation prevents retained HTX and decreases the need for surgical intervention. LEVEL OF EVIDENCE: Level III, Therapeutic/Care Management.
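To make the multivariable step concrete, the following is a minimal sketch of how adjusted odds ratios of this kind are commonly estimated with logistic regression. The data file, column names, and covariate set are hypothetical and do not reproduce the study's actual model.

```python
# Minimal sketch only: estimating adjusted odds ratios (aOR) for a binary
# outcome such as retained hemothorax. The file and column names are
# hypothetical; the study's actual covariate set is not reproduced here.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("htx_cohort.csv")  # hypothetical registry extract, one row per patient

# Logistic regression of retained HTX (0/1) on irrigation (0/1), adjusting for
# baseline differences such as penetrating mechanism and flail chest.
fit = smf.logit("retained_htx ~ irrigated + penetrating + flail_chest", data=df).fit()

ci = fit.conf_int()  # confidence limits on the log-odds scale
summary = pd.DataFrame({
    "aOR": np.exp(fit.params),
    "CI 2.5%": np.exp(ci[0]),
    "CI 97.5%": np.exp(ci[1]),
})
print(summary)
```

An analogous linear model of chest tube duration would yield a coefficient in days, corresponding to the reported β rather than an odds ratio.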

2.
Article in English | MEDLINE | ID: mdl-38437527

ABSTRACT

BACKGROUND: Delays in initiating venous thromboembolism (VTE) prophylaxis in patients suffering from traumatic brain injury (TBI) persist despite guidelines recommending early initiation. We hypothesized that the expansion of a Trauma Program Performance Improvement (PI) team would improve compliance with early (24-48 hour) initiation of VTE prophylaxis and would decrease VTE events in TBI patients. METHODS: We performed a single-center retrospective review of all TBI patients admitted to a Level I trauma center before (2015-2016) and after (2019-2020) the expansion of the Trauma Performance Improvement and Patient Safety (PIPS) team and the creation of trauma process and outcome dashboards. Exclusion criteria included discharge or death within 48 hours of admission, expanding intracranial hemorrhage on CT scan, and a neurosurgical intervention (craniotomy, pressure monitor, or drains) prior to chemoprophylaxis initiation. RESULTS: A total of 1,112 patients met the inclusion criteria, of whom 54% (n = 604) were admitted after the Trauma PIPS expansion. Following the addition of a dedicated PIPS nurse to the trauma program and the creation of process dashboards, the time from stable CT to VTE prophylaxis initiation decreased (52 hours to 35 hours; p < 0.001) and more patients received chemoprophylaxis at 24-48 hours after stable head CT (59% vs 36%, p < 0.001). There was no significant difference in time from first head CT to stable CT (9 vs 9 hours; p = 0.15). The Contemporary (post-expansion) group had a lower rate of VTE events (1% vs 4%; p < 0.001) with no increase in bleeding events (2% vs 2%; p = 0.97). On multivariable analysis, being in the Early (pre-expansion) cohort was an independent predictor of VTE events (aOR: 3.74; 95%CI: 1.45-6.16). CONCLUSION: A collaborative multidisciplinary Trauma PIPS team improves guideline compliance. Initiation of VTE chemoprophylaxis within 24-48 hours of a stable head CT is safe and effective. LEVEL OF EVIDENCE: Level III, Therapeutic/Care Management.
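As a rough illustration of the process-measure comparison (not the study's actual analysis), the sketch below compares time from stable head CT to chemoprophylaxis between the two eras with a nonparametric test. The data file and column names are hypothetical.

```python
# Rough sketch: comparing time from stable head CT to VTE chemoprophylaxis
# between the pre- and post-expansion eras. File and column names are hypothetical.
import pandas as pd
from scipy.stats import mannwhitneyu

df = pd.read_csv("tbi_vte_cohort.csv")  # hypothetical chart-review extract

early = df.loc[df["era"] == "2015-2016", "hours_ct_to_prophylaxis"]
contemporary = df.loc[df["era"] == "2019-2020", "hours_ct_to_prophylaxis"]

stat, p = mannwhitneyu(early, contemporary, alternative="two-sided")
print(f"median hours: {early.median()} vs {contemporary.median()}, p = {p:.3g}")

# Share of patients started within the 24-48 hour guideline window, by era
in_window = df["hours_ct_to_prophylaxis"].between(24, 48)
print(in_window.groupby(df["era"]).mean())
```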

3.
J Trauma Acute Care Surg; 95(6): 935-942, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37418689

ABSTRACT

BACKGROUND: Understanding the expectations of early career acute care surgeons will help clarify the practice and employment models that will attract and retain high-quality surgeons, thereby sustaining our workforce. This study aimed to outline the clinical and academic preferences and priorities of early career acute care surgeons and to better define full-time employment. METHODS: A survey on clinical responsibilities, employment preferences, work priorities, and compensation was distributed to early career acute care surgeons in the first 5 years of practice. A subset of agreeable respondents underwent virtual semistructured interviews. Both quantitative and thematic analyses were used to describe current responsibilities, expectations, and perspectives. RESULTS: Of 471 surgeons, 167 responded (35%), the majority of whom were assistant professors within the first 3 years of practice (80%). The median desired clinical volume was 24 clinical weeks and 48 call shifts per year, 4 weeks less than their median current clinical volume. Most respondents (61%) preferred a service-based model. The top priorities cited in choosing a job were geography, work schedule, and compensation. Qualitative interviews identified themes related to defining full-time employment, first job expectations and realities, and the often-misaligned priorities of system and surgeon. CONCLUSION: Understanding the perspectives of early career surgeons entering the workforce is important, particularly in the field of acute care surgery, where no standard workload or practice model exists. The wide variety of expectations, practice models, and schedule preferences may lead to a mismatch between surgeon desires and employment expectations. Consistent employment standards across our specialty would provide a framework for sustainability. LEVEL OF EVIDENCE: Prognostic and Epidemiological; Level III.


Subject(s)
Surgeons; Humans; Employment; Surveys and Questionnaires; Workload; Personnel Staffing and Scheduling; Career Choice
4.
JAMA Ophthalmol; 141(6): 534-541, 2023 Jun 01.
Article in English | MEDLINE | ID: mdl-37140901

ABSTRACT

Importance: Diagnostic information from administrative claims and electronic health record (EHR) data may serve as an important resource for surveillance of vision and eye health, but the accuracy and validity of these sources are unknown. Objective: To estimate the accuracy of diagnosis codes in administrative claims and EHRs compared with retrospective medical record review. Design, Setting, and Participants: This cross-sectional study compared the presence and prevalence of eye disorders based on diagnostic codes in EHR and claims records vs clinical medical record review at University of Washington-affiliated ophthalmology or optometry clinics from May 2018 to April 2020. Patients 16 years and older with an eye examination in the previous 2 years were included, oversampled for diagnosed major eye diseases and visual acuity loss. Exposures: Patients were assigned to vision and eye health condition categories based on diagnosis codes present in their billing claims history and EHR using the diagnostic case definitions of the US Centers for Disease Control and Prevention Vision and Eye Health Surveillance System (VEHSS), as well as clinical assessment based on retrospective medical record review. Main Outcomes and Measures: Accuracy was measured as area under the receiver operating characteristic curve (AUC) of claims- and EHR-based diagnostic coding vs retrospective review of clinical assessments and treatment plans. Results: Among 669 participants (mean [range] age, 66.1 [16-99] years; 357 [53.4%] female), identification of diseases in billing claims and EHR data using VEHSS case definitions was accurate for diabetic retinopathy (claims AUC, 0.94; 95% CI, 0.91-0.98; EHR AUC, 0.97; 95% CI, 0.95-0.99), glaucoma (claims AUC, 0.90; 95% CI, 0.88-0.93; EHR AUC, 0.93; 95% CI, 0.90-0.95), age-related macular degeneration (claims AUC, 0.87; 95% CI, 0.83-0.92; EHR AUC, 0.96; 95% CI, 0.94-0.98), and cataracts (claims AUC, 0.82; 95% CI, 0.79-0.86; EHR AUC, 0.91; 95% CI, 0.89-0.93). However, several condition categories showed low validity with AUCs below 0.7, including diagnosed disorders of refraction and accommodation (claims AUC, 0.54; 95% CI, 0.49-0.60; EHR AUC, 0.61; 95% CI, 0.56-0.67), diagnosed blindness and low vision (claims AUC, 0.56; 95% CI, 0.53-0.58; EHR AUC, 0.57; 95% CI, 0.54-0.59), and orbital and external diseases (claims AUC, 0.63; 95% CI, 0.57-0.69; EHR AUC, 0.65; 95% CI, 0.59-0.70). Conclusions and Relevance: In this cross-sectional study of current and recent ophthalmology patients with high rates of eye disorders and vision loss, identification of major vision-threatening eye disorders based on diagnosis codes in claims and EHR records was accurate. However, vision loss, refractive error, and other broadly defined or lower-risk disorder categories were less accurately identified by diagnosis codes in claims and EHR data.
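As an illustration of the validation step, the sketch below computes the AUC of a binary, code-based case indicator against the chart-review reference standard. The dataset, column names, and condition labels are hypothetical, and the VEHSS case definitions themselves are not reproduced.

```python
# Illustrative sketch: AUC of binary, diagnosis-code-based case indicators
# against a chart-review reference standard. File and column names are
# hypothetical; VEHSS case definitions are not reproduced here.
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.read_csv("vision_validation.csv")  # hypothetical analytic file

conditions = ["diabetic_retinopathy", "glaucoma", "amd", "cataract"]
for source in ("claims", "ehr"):
    for cond in conditions:
        y_true = df[f"chart_{cond}"]      # 0/1 from medical record review
        y_code = df[f"{source}_{cond}"]   # 0/1 from diagnosis codes
        # With a binary predictor, the AUC equals (sensitivity + specificity) / 2.
        print(source, cond, round(roc_auc_score(y_true, y_code), 2))
```

Confidence intervals around each AUC would typically be obtained by bootstrapping over patients.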


Subject(s)
Big Data; Glaucoma; Humans; Female; Aged; Male; Retrospective Studies; Cross-Sectional Studies; Routinely Collected Health Data; Blindness
5.
JMIR Public Health Surveill; 9: e44552, 2023 Mar 07.
Article in English | MEDLINE | ID: mdl-36881468

ABSTRACT

BACKGROUND: Self-reported measures of blindness and vision problems are collected in many national surveys. Recently released surveillance estimates on the prevalence of vision loss used self-reported data to predict variation in the prevalence of objectively measured acuity loss among population groups for whom examination data are not available. However, the validity of self-reported measures for predicting prevalence and disparities in visual acuity has not been established. OBJECTIVE: This study aimed to estimate the diagnostic accuracy of self-reported vision loss measures compared to best-corrected visual acuity (BCVA), inform the design and selection of questions for future data collection, and identify the concordance between self-reported vision and measured acuity at the population level to support ongoing surveillance efforts. METHODS: We calculated accuracy and correlation between self-reported visual function versus BCVA at the individual and population level among patients from the University of Washington ophthalmology or optometry clinics with a prior eye examination, randomly oversampled for visual acuity loss or diagnosed eye diseases. Self-reported visual function was collected via telephone survey. BCVA was determined based on retrospective chart review. Diagnostic accuracy of questions at the person level was measured based on the area under the receiver operating characteristic curve (AUC), whereas population-level accuracy was determined based on correlation. RESULTS: The survey question, "Are you blind or do you have serious difficulty seeing, even when wearing glasses?" had the highest accuracy for identifying patients with blindness (BCVA ≤20/200; AUC=0.797). The highest accuracy for detecting any vision loss (BCVA <20/40) was achieved by responses of "fair," "poor," or "very poor" to the question, "At the present time, would you say your eyesight, with glasses or contact lenses if you wear them, is excellent, good, fair, poor, or very poor" (AUC=0.716). At the population level, the relative relationship between prevalence based on survey questions and BCVA remained stable for most demographic groups, with the only exceptions being groups with small sample sizes, and these differences were generally not significant. CONCLUSIONS: Although survey questions are not considered sufficiently accurate to be used as a diagnostic test at the individual level, we did find relatively high levels of accuracy for some questions. At the population level, we found that the relative prevalence estimates from the 2 most accurate survey questions were highly correlated with the prevalence of measured visual acuity loss among nearly all demographic groups. The results of this study suggest that self-reported vision questions fielded in national surveys are likely to yield an accurate and stable signal of vision loss across different population groups, although the actual measure of prevalence from these questions is not directly analogous to that of BCVA.
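The population-level check described above can be sketched as follows: per-group prevalence based on the survey question is compared with per-group prevalence of measured acuity loss, and the two vectors are correlated. The data file, column names, and grouping variable are hypothetical.

```python
# Sketch of the population-level concordance check. File, column names, and
# the grouping variable are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("survey_bcva_linked.csv")  # hypothetical linked survey/chart file

by_group = df.groupby("demographic_group").agg(
    self_report_prev=("reports_serious_difficulty_seeing", "mean"),  # 0/1 survey answer
    measured_prev=("bcva_worse_than_20_40", "mean"),                 # 0/1 from chart BCVA
)

r, p = pearsonr(by_group["self_report_prev"], by_group["measured_prev"])
print(by_group)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```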


Subject(s)
Blindness; Telephone; Humans; Retrospective Studies; Blindness/epidemiology; Blindness/etiology; Self Report; Visual Acuity
7.
Ocul Immunol Inflamm; 30(2): 357-363, 2022 Feb 17.
Article in English | MEDLINE | ID: mdl-35442873

ABSTRACT

The objective grading of anterior chamber inflammation (ACI) has remained a challenge in the field of uveitis. While the grading criteria produced by the Standardization of Uveitis Nomenclature (SUN) International Workshop have been widely adopted, limitations remain, including interobserver variability and grading that is confined to discrete categories rather than a continuous measurement. Since the earliest iterations of optical coherence tomography (OCT), ACI has been assessed using anterior segment OCT and shown to correlate with slit-lamp findings. However, this approach has not been widely adopted. Barriers to standardization include variability in OCT devices across clinical settings, lack of standardized image acquisition protocols, varying quantification methods, and the difficulty of distinguishing inflammatory cells from other cell types. Modern OCT devices and techniques in artificial intelligence show promise in expanding the clinical applicability of anterior segment OCT for the grading of ACI.


Subject(s)
Uveitis, Anterior; Uveitis; Anterior Chamber/diagnostic imaging; Artificial Intelligence; Humans; Inflammation/diagnostic imaging; Tomography, Optical Coherence/methods; Uveitis, Anterior/diagnostic imaging
8.
Ophthalmol Sci; 1(3): 100041, 2021 Sep.
Article in English | MEDLINE | ID: mdl-36275940

ABSTRACT

Purpose: To evaluate whether cataract surgery is associated with decreased risks of central retinal vein occlusion (CRVO) or branch retinal vein occlusion (BRVO) development using the American Academy of Ophthalmology Intelligent Research in Sight (IRIS®) Registry. Design: Retrospective database study of the IRIS Registry data. Participants: Patients in the IRIS Registry who underwent cataract surgery and 1:1 matched control participants from the IRIS Registry using a decision tree classifier as a propensity model. Methods: Control and treatment groups initially were selected using Current Procedural Terminology codes for uncomplicated cataract surgery and other straightforward criteria. To accomplish treatment-control matching, a decision tree classifier was trained to classify patients as treatment versus control based on a set of chosen predictors for treatment, of which best-corrected visual acuity and age were the most important. Treatment and control participants subsequently were matched using the classifier, the visit dates, and the practice identifiers. Cox regression was performed on the matched groups to measure the hazard ratio (HR) of retinal vein occlusion development adjusted for age, sex, race, primary insurance type, and previous diagnosis of diabetic retinopathy (DR), glaucoma, and narrow angles. Main Outcome Measure: The HR of retinal vein occlusion developing in patients who underwent cataract surgery compared with matched control participants. Results: The HRs for CRVO and BRVO developing during the first year after either cataract surgery or the baseline visit, in patients who underwent cataract surgery compared with matched control participants who did not, were 1.26 (95% confidence interval [CI], 1.16-1.38; P < 0.001) and 1.27 (95% CI, 1.19-1.36; P < 0.001), respectively, after controlling for age, sex, race, insurance, and history of DR, glaucoma, and narrow angles. Diabetic retinopathy was the strongest predictor associated with CRVO (HR, 2.79; 95% CI, 2.43-3.20; P < 0.001) and BRVO (HR, 2.35; 95% CI, 2.09-2.64; P < 0.001) development after cataract surgery. Conclusions: Cataract surgery is associated with a small increase in the risk of retinal vein occlusions within the first year; however, the incidence is low and likely not clinically significant.
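The hazard-ratio step can be sketched with a standard Cox proportional hazards model once the matched cohort is assembled. The decision-tree propensity matching is not reproduced here; the data file and column names are hypothetical, and categorical covariates are assumed to be already dummy-coded.

```python
# Sketch of the Cox regression step on an already-matched cohort. The decision
# tree matching is not reproduced; file and column names are hypothetical, and
# categorical covariates (sex, race, insurance) are assumed to be dummy-coded.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("matched_cohort.csv")  # hypothetical matched treatment/control file

covariates = ["cataract_surgery", "age", "male", "dr_history",
              "glaucoma_history", "narrow_angles"]

cph = CoxPHFitter()
cph.fit(df[["days_to_rvo_or_censor", "rvo_event"] + covariates],
        duration_col="days_to_rvo_or_censor",  # follow-up capped at 1 year
        event_col="rvo_event")                 # 1 = CRVO/BRVO diagnosed
cph.print_summary()  # hazard ratios are reported as exp(coef)
```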
