Results 1 - 20 of 38
1.
Dig Dis Sci ; 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38658506

ABSTRACT

BACKGROUND AND AIMS: This study evaluates the cost burdens of inpatient care for chronic hepatitis B (CHB). We aimed to stratify patients based on the presence of cirrhosis and conduct subgroup analyses of patient demographics and medical characteristics. METHODS: The 2016-2019 National Inpatient Sample was used to select individuals diagnosed with CHB. Weighted charge estimates were derived and converted to admission costs, adjusted for inflation to the year 2016 and presented in United States dollars. These adjusted values were stratified using select patient variables. To assess the goodness-of-fit of each trend, we graphed the data across the respective years in chronological sequence and expressed each trend in the format (R², p-value). Analysis of CHB patients was carried out in three groups: the composite CHB population, the subset of patients with cirrhosis, and the subset of patients without cirrhosis. RESULTS: The total costs of hospitalizations in CHB patients were $603.82, $737.92, $758.29, and $809.01 million from 2016 to 2019, respectively. We did not observe significant cost trends in the composite CHB population or in the cirrhosis and non-cirrhosis cohorts. However, we did find rising costs associated with age older than 65 (0.97, 0.02), white race (0.98, 0.01), Hispanic ethnicity (1.00, 0.001), and Medicare coverage (0.95, 0.02), the significance of which persisted regardless of the presence of cirrhosis. Additionally, inpatients without cirrhosis who had comorbid metabolic dysfunction-associated steatotic liver disease (MASLD) were also observed to have rising costs (0.96, 0.02). CONCLUSIONS: We did not find a significant increase in overall costs for CHB inpatients, regardless of the presence of cirrhosis. However, certain groups are more susceptible to escalating costs. Therefore, increased screening and nuanced vaccination planning should be optimized to prevent and mitigate these growing cost burdens on vulnerable populations.
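For readers reproducing the trend notation above, a minimal sketch of how an (R², p-value) pair for a yearly cost trend can be computed with an ordinary least-squares fit is shown below. It uses the published composite totals from this abstract; subgroup analyses in the study would substitute their own yearly values.

```python
# Sketch: fitting a linear trend to yearly aggregate costs and reporting
# (R^2, p-value), as in the abstract's trend notation. The values below are
# the published composite totals; subgroup inputs are not reproduced here.
from scipy.stats import linregress

years = [2016, 2017, 2018, 2019]
total_costs_million_usd = [603.82, 737.92, 758.29, 809.01]  # composite CHB totals

fit = linregress(years, total_costs_million_usd)
r_squared = fit.rvalue ** 2
print(f"R^2 = {r_squared:.2f}, p = {fit.pvalue:.3f}")
```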

2.
Ophthalmol Sci ; 4(4): 100468, 2024.
Article in English | MEDLINE | ID: mdl-38560278

ABSTRACT

Purpose: Use of the electronic health record (EHR) has motivated the need for data standardization. A gap in knowledge exists regarding variations in existing terminologies for defining diabetic retinopathy (DR) cohorts. This study aimed to review the literature and analyze variations regarding codified definitions of DR. Design: Literature review and quantitative analysis. Subjects: Published manuscripts. Methods: Four graders reviewed PubMed and Google Scholar for peer-reviewed studies. Studies were included if they used codified definitions of DR (e.g., billing codes). Data elements such as author names, publication year, purpose, data set type, and DR definitions were manually extracted. Each study was reviewed by ≥ 2 authors to validate inclusion eligibility. Quantitative analyses of the codified definitions were then performed to characterize the variation between DR cohort definitions. Main Outcome Measures: Number of studies included and numeric counts of billing codes used to define codified cohorts. Results: In total, 43 studies met the inclusion criteria. Half of the included studies used datasets based on structured EHR data (i.e., data registries, institutional EHR review), and half used claims data. All but 1 of the studies used billing codes such as the International Classification of Diseases 9th or 10th edition (ICD-9 or ICD-10), either alone or in addition to another terminology for defining disease. Of the 27 included studies that used ICD-9 and the 20 studies that used ICD-10 codes, the most common codes used pertained to the full spectrum of DR severity. Diabetic retinopathy complications (e.g., vitreous hemorrhage) were also used to define some DR cohorts. Conclusions: Substantial variations exist among codified definitions for DR cohorts within retrospective studies. Variable definitions may limit generalizability and reproducibility of retrospective studies. More work is needed to standardize disease cohorts. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.

3.
Liver Int ; 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38661296

ABSTRACT

BACKGROUND AND AIMS: The presence of steatosis in a donor liver and its relation to post-transplantation outcomes are not well defined. This study evaluates the effect of the presence and severity of micro- and macro-steatosis of a donor graft on post-transplantation outcomes. METHODS: The UNOS-STAR registry (2005-2019) was used to select patients who received a liver transplant graft with hepatic steatosis. The study cohort was stratified by the presence of macro- or micro-vesicular steatosis, and further stratified by histologic grade of steatosis. The primary endpoints of all-cause mortality and graft failure were compared using sequential Cox regression analysis. Analysis of specific causes of mortality was further performed. RESULTS: There were 9184 recipients with no macro-steatosis (control), 150 with grade 3 macro-steatosis, 822 with grade 2 macro-steatosis, and 12,585 with grade 1 macro-steatosis. There were 10,320 recipients without micro-steatosis (control), 478 with grade 3 micro-steatosis, 1539 with grade 2 micro-steatosis, and 10,404 with grade 1 micro-steatosis. There was no significant difference in all-cause mortality or graft failure among recipients who received a donor organ with any evidence of macro- or micro-steatosis, compared to those receiving non-steatotic grafts. There was increased mortality due to cardiac arrest among recipients of a grade 2 macro-steatosis donor organ. CONCLUSION: This study shows no significant difference in all-cause mortality or graft failure among recipients who received a donor liver with any degree of micro- or macro-steatosis. Further analysis identified increased mortality due to specific aetiologies among recipients receiving donor organs with varying grades of macro- and micro-steatosis.
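As an illustration of the kind of model named in the methods (not the study's UNOS-STAR analysis), a minimal Cox proportional hazards sketch using the lifelines library is shown below; the column names and example rows are hypothetical.

```python
# Hypothetical sketch of a Cox proportional hazards fit for all-cause mortality
# by donor macro-steatosis grade, using the lifelines library. The data below
# are invented for illustration; this is not the UNOS-STAR registry analysis.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "followup_years": [1.2, 4.5, 0.8, 6.1, 3.3, 2.0, 5.4, 2.9],
    "died":           [0,   1,   0,   1,   0,   1,   0,   1],
    "macro_grade":    [0,   1,   2,   0,   3,   1,   0,   2],  # 0 = no macro-steatosis
    "recipient_age":  [54,  61,  47,  66,  58,  50,  63,  45],
})

# A small penalty keeps the fit stable on this tiny illustrative dataset.
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="followup_years", event_col="died")
cph.print_summary()  # hazard ratios for macro_grade and recipient_age
```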

4.
Nat Ecol Evol ; 8(5): 924-935, 2024 May.
Article in English | MEDLINE | ID: mdl-38499871

ABSTRACT

Wildlife must adapt to human presence to survive in the Anthropocene, so it is critical to understand species responses to humans in different contexts. We used camera trapping as a lens to view mammal responses to changes in human activity during the COVID-19 pandemic. Across 163 species sampled in 102 projects around the world, changes in the amount and timing of animal activity varied widely. Under higher human activity, mammals were less active in undeveloped areas but unexpectedly more active in developed areas while exhibiting greater nocturnality. Carnivores were most sensitive, showing the strongest decreases in activity and greatest increases in nocturnality. Wildlife managers must consider how habituation and uneven sensitivity across species may cause fundamental differences in human-wildlife interactions along gradients of human influence.


Subject(s)
COVID-19 , Human Activities , Mammals , Animals , Humans , COVID-19/epidemiology , Animals, Wild , Ecosystem
5.
Ophthalmol Sci ; 4(3): 100458, 2024.
Article in English | MEDLINE | ID: mdl-38317868

ABSTRACT

Objective: To determine if baseline diabetic retinopathy (DR) severity mediates the relationship between health insurance status and DR progression. Design: Retrospective cohort study. Subjects: Seven hundred sixteen patients aged ≥ 18 years with a diagnosis of type 1 or 2 diabetes mellitus and a diagnosis of nonproliferative DR (NPDR) were identified from the electronic health record of a tertiary academic center between June 2012 and February 2022. Methods: NPDR severity at baseline was the proposed mediator in the relationship between insurance status and proliferative DR (PDR) progression. Logistic regression was used to determine the association between insurance status and NPDR severity at baseline, and Cox proportional hazards regression was used to assess the association between insurance status and time to PDR progression. To analyze the mediation effect of NPDR severity at baseline, a counterfactual approach, which decomposes a total effect into a natural direct effect and a natural indirect effect, was applied. Main Outcome Measures: Time to progression from first NPDR diagnosis to first PDR diagnosis. Results: Of the 716 patients, 581 (81%) had Medicare or private insurance, 107 (15%) had Medicaid, and 28 (4.0%) were uninsured at their baseline eye visit. Uninsured or Medicaid patients had a higher proportion of moderate or severe NPDR at their baseline eye visit and a higher proportion of progression to PDR. After adjusting for confounders and NPDR severity at baseline, patients who were uninsured had a significantly greater risk of progression to PDR compared with patients with Medicare/private insurance (hazard ratio [HR]: 2.63; 95% confidence interval [CI]: 1.10-6.25). Patients with Medicaid also had an increased risk of progression to PDR compared with patients with Medicare/private insurance, although not statistically significant (HR: 1.53; 95% CI: 0.81-2.89). NPDR severity at baseline mediated 41% of the effect of insurance status (uninsured vs. Medicare/private insurance) on PDR progression. Conclusions: Patients who were uninsured were more likely to have an advanced stage of NPDR at their baseline eye visit and were at significantly greater risk of progression to PDR compared with patients who had Medicare or were privately insured. Mediation analysis revealed that differences in baseline NPDR severity by insurance explained a significant proportion of the relationship between insurance status and DR progression. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
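The study uses a counterfactual decomposition into natural direct and indirect effects; as a much simpler stand-in, the sketch below shows the "difference method" approximation of the proportion mediated on the log-hazard scale. The total-effect hazard ratio is hypothetical, only the adjusted HR of 2.63 comes from the abstract, and this approximation is not the method the authors applied.

```python
# Difference-method approximation of proportion mediated on the log-hazard scale.
# NOT the counterfactual natural direct/indirect effect decomposition used in the
# study. hr_total is hypothetical; hr_direct reuses the adjusted HR from the abstract.
import math

hr_total = 5.2    # hypothetical total effect of being uninsured (mediator not adjusted)
hr_direct = 2.63  # direct effect after adjusting for baseline NPDR severity (abstract)

prop_mediated = 1 - math.log(hr_direct) / math.log(hr_total)
print(f"approximate proportion mediated ≈ {prop_mediated:.0%}")
```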

6.
Ophthalmic Epidemiol ; : 1-4, 2023 Sep 28.
Article in English | MEDLINE | ID: mdl-37771107

ABSTRACT

Purpose: To compare the quality of optic nerve photographs from three different handheld fundus cameras and to assess the reproducibility and agreement of vertical cup-to-disk ratio (VCDR) measurements from each camera. Methods: Adult patients from a comprehensive ophthalmology clinic and an intravitreous injection clinic in northern Thailand were recruited for this cross-sectional study. Each participant had optic nerve photography performed with each of 3 handheld cameras: the Volk iNview, Volk Pictor Plus, and Peek Retina. Images were graded for VCDR in a masked fashion by two photo-graders, and images with > 0.2 discrepancy in VCDR were assessed by a third photo-grader. Results: A total of 355 eyes underwent imaging with the three handheld fundus cameras. Optic nerve images were judged ungradable in 130 (37%) eyes imaged with Peek Retina, compared to 36 (10%) and 55 (15%) eyes imaged with the iNview and Pictor Plus, respectively. For 193 eyes with gradable images from all 3 cameras, inter-rater reliability for VCDR measurements was poor or moderate for each of the cameras, with intraclass correlation coefficients ranging from 0.41 to 0.52. A VCDR ≥ 0.6 was found in 6 eyes on iNview images, 9 eyes on Pictor Plus images, and 3 eyes on Peek images, with poor agreement between cameras (e.g., no eyes graded as VCDR ≥ 0.6 on images from both the iNview and Pictor Plus). Conclusions: Inter-rater reliability of VCDR grades from the 3 handheld cameras was poor. Cameras did not agree on which eyes had large VCDRs.
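For context on the reliability statistic reported above, the sketch below computes a two-way random-effects, absolute-agreement, single-rater ICC, commonly written ICC(2,1), from a small table of hypothetical VCDR grades; the study's actual grading data are not reproduced.

```python
# Sketch: ICC(2,1) (two-way random effects, absolute agreement, single rater)
# for VCDR grades from two graders. The ratings below are hypothetical.
import numpy as np

ratings = np.array([  # rows = eyes, columns = graders
    [0.3, 0.4],
    [0.5, 0.5],
    [0.6, 0.4],
    [0.2, 0.3],
    [0.7, 0.6],
    [0.4, 0.6],
])
n, k = ratings.shape
grand = ratings.mean()
row_means = ratings.mean(axis=1)
col_means = ratings.mean(axis=0)

msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between-subjects mean square
msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between-raters mean square
sse = np.sum((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2)
mse = sse / ((n - 1) * (k - 1))                         # residual mean square

icc_2_1 = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
print(f"ICC(2,1) = {icc_2_1:.2f}")
```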

7.
Ecol Evol ; 13(9): e10464, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37720065

ABSTRACT

Outdoor recreation is widespread, with uncertain effects on wildlife. The human shield hypothesis (HSH) suggests that recreation could have differential effects on predators and prey, with predator avoidance of humans creating a spatial refuge 'shielding' prey from people. The generality of the HSH remains to be tested across larger scales, wherein human shielding may prove generalizable, or diminish with variability in ecological contexts. We combined data from 446 camera traps and 79,279 sampling days across 10 landscapes spanning 15,840 km² in western Canada. We used hierarchical models to quantify the influence of recreation and landscape disturbance (roads, logging) on ungulate prey (moose, mule deer and elk) and carnivore (wolf, grizzly bear, cougar and black bear) site use. We found limited support for the HSH and strong responses to recreation at local but not larger spatial scales. Only mule deer showed positive but weak landscape-level responses to recreation. Elk were positively associated with local recreation, while moose and mule deer responses were negative, contrary to HSH predictions. Mule deer showed a more complex interaction between recreation and land-use disturbance, with more negative responses to recreation at lower road density or higher logged areas. Contrary to HSH predictions, carnivores did not avoid recreation, and grizzly bear site use was positively associated with it. We also tested the effects of roads and logging on temporal activity overlap between mule deer and recreation, expecting deer to minimize interaction with humans by partitioning time in areas subject to more habitat disturbance. However, temporal overlap between people and deer increased with road density. Our findings highlight the complex ecological patterns that emerge at macroecological scales. There is a need for expanded monitoring of human and wildlife use of recreation areas, particularly multi-scale and multi-species approaches to studying the interacting effects of recreation and land-use change on wildlife.

8.
Am J Ophthalmol ; 256: 97-107, 2023 12.
Article in English | MEDLINE | ID: mdl-37625509

ABSTRACT

PURPOSE: To describe 1-year secondary outcomes in the Tube Versus Trabeculectomy IRIS® (Intelligent Research In Sight) Registry Study (TVTIRIS), and to compare to the TVT randomized controlled trial (TVTRCT). DESIGN: TVTIRIS was a retrospective cohort study. METHODS: The 2013-2017 IRIS Registry was used to identify eyes that received a tube shunt (tube) or trabeculectomy after a previous trabeculectomy and/or cataract surgery and had 1 year of follow-up. The TVTRCT compared a Baerveldt 350-mm² glaucoma implant to trabeculectomy in similar eyes. RESULTS: In the TVTIRIS cohort, the tube (n = 236, 56.3%) and trabeculectomy (n = 183, 43.7%) groups had similar and significant reductions in intraocular pressure (IOP) from baseline to 1 year. In the tube group, IOP (mean ± SD) decreased from 26.6 ± 6.5 mm Hg at baseline to 14.3 ± 4.8 mm Hg at 1 year. In the trabeculectomy group, IOP decreased from 25.3 ± 6.4 mm Hg at baseline to 13.5 ± 5.2 mm Hg at 1 year. The trabeculectomy groups from both studies had similar 1-year IOP reduction (P = .18), although the TVTRCT cohort used fewer medications at all time points (P < .01). There were more pronounced differences in the mean IOP and medications between the tube groups in the 2 studies, presumably due to the inclusion of valved tubes in TVTIRIS. More reoperations occurred in TVTIRIS. CONCLUSIONS: The TVTIRIS tube and trabeculectomy groups had comparable 1-year IOP reduction, although trabeculectomy eyes used fewer glaucoma medications. The trabeculectomy group in TVTIRIS and TVTRCT had similar IOP and medication reduction at 1 year. Randomized controlled trials and electronic health record data both provide invaluable insight into surgical outcomes.


Subject(s)
Glaucoma Drainage Implants , Glaucoma , Trabeculectomy , Humans , Retrospective Studies , Mitomycin , Glaucoma/surgery , Intraocular Pressure , Treatment Outcome
9.
Ophthalmol Sci ; 3(2): 100276, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36950087

ABSTRACT

Purpose: To develop models for progression of nonproliferative diabetic retinopathy (NPDR) to proliferative diabetic retinopathy (PDR) and determine if incorporating updated information improves model performance. Design: Retrospective cohort study. Participants: Electronic health record (EHR) data from a tertiary academic center, University of California San Francisco (UCSF), and a safety-net hospital, Zuckerberg San Francisco General (ZSFG) Hospital, were used to identify patients with a diagnosis of NPDR, age ≥ 18 years, a diagnosis of type 1 or 2 diabetes mellitus, ≥ 6 months of ophthalmology follow-up, and no prior diagnosis of PDR before the index date (date of first NPDR diagnosis in the EHR). Methods: Four survival models were developed: Cox proportional hazards, Cox with backward selection, Cox with LASSO regression, and random survival forest. For each model, three variable sets were compared to determine the impact of including updated clinical information: Static0 (data up to the index date), Static6m (data updated 6 months after the index date), and Dynamic (data in Static0 plus data change during the 6-month period). The UCSF data were split into 80% training and 20% testing (internal validation). The ZSFG data were used for external validation. Model performance was evaluated by Harrell's concordance index (C-index). Main Outcome Measures: Time to PDR. Results: The UCSF cohort included 1130 patients, and 92 (8.1%) patients progressed to PDR. The ZSFG cohort included 687 patients, and 30 (4.4%) patients progressed to PDR. All models performed similarly (C-indices ∼ 0.70) in internal validation. The random survival forest with the Static6m set performed best in external validation (C-index 0.76). Insurance and age were selected or ranked as highly important by all models. Other key predictors were NPDR severity, diabetic neuropathy, number of strokes, mean hemoglobin A1c, and number of hospital admissions. Conclusions: Our models for progression of NPDR to PDR achieved acceptable predictive performance and validated well in an external setting. Updating the baseline variables with new clinical information did not consistently improve the predictive performance. Financial Disclosures: Proprietary or commercial disclosure may be found after the references.
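A minimal sketch of the train/validate pattern described in the methods (fit a survival model on one split, report Harrell's C-index on held-out data) is shown below using the lifelines library and its bundled Rossi recidivism dataset as a stand-in; the study's EHR predictors (insurance, age, NPDR severity, HbA1c, and so on) are not included.

```python
# Sketch (not the study's code): train/test split of a survival dataset, a Cox
# model fit on the training portion, and Harrell's C-index on the held-out
# portion. The bundled Rossi dataset is only a stand-in for the EHR cohorts.
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index
from lifelines.datasets import load_rossi

df = load_rossi()                          # stand-in data: duration 'week', event 'arrest'
train = df.sample(frac=0.8, random_state=0)
test = df.drop(train.index)

cph = CoxPHFitter()
cph.fit(train, duration_col="week", event_col="arrest")

# Higher partial hazard implies an earlier expected event, so negate for the C-index.
c_index = concordance_index(test["week"],
                            -cph.predict_partial_hazard(test),
                            test["arrest"])
print(f"held-out C-index = {c_index:.2f}")
```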

10.
JAMA Ophthalmol ; 141(1): 56-61, 2023 01 01.
Article in English | MEDLINE | ID: mdl-36454548

ABSTRACT

Importance: Telehealth in ophthalmology has traditionally focused on preventive disease screening with limited use in outpatient evaluation. The unique conditions of the COVID-19 pandemic afforded the opportunity to evaluate different implementations of teleophthalmology at scale, providing insight into expanding teleophthalmology care. Objective: To compare telehealth use in ophthalmology with other specialties and assess the feasibility of augmenting ophthalmic telehealth encounters with asynchronous testing during the COVID-19 pandemic. Design, Setting, and Participants: This quality improvement study evaluated retrospective, longitudinal, observational data from the first 18 months of the COVID-19 pandemic (January 1, 2020, through July 31, 2021) for 881,080 patients receiving care from outpatient primary care, cardiology, neurology, gastroenterology, surgery, neurosurgery, urology, orthopedic surgery, otolaryngology, obstetrics/gynecology, and ophthalmology clinics of the University of California, San Francisco. Asynchronous testing was evaluated for teleophthalmology encounters. Interventions: A hybrid care model wherein ophthalmic testing data were acquired asynchronously and used to augment telehealth encounters. Main Outcomes and Measures: Telehealth as a percentage of total volume of ambulatory care and use of asynchronous testing for ophthalmic conditions. Results: The volume of in-person outpatient visits dropped by 83.3% (39,488 of 47,390) across the evaluated specialties at the onset of shelter-in-place orders for the COVID-19 pandemic, and the initial use of telehealth increased for these specialties before stabilizing over the 18-month study period. In ophthalmology, telehealth use peaked at 488 of 1575 encounters (31.0%) early in the pandemic and returned to mostly in-person visits as COVID-19 restrictions lifted. Elective use of telehealth was highest in gastroenterology, urology, neurology, and neurosurgery and lowest in ophthalmology. Asynchronous testing was combined with 126 teleophthalmology encounters, resulting in a change of clinical management for 32 patients (25.4%) and no change for 91 (72.2%). Conclusions and Relevance: Telehealth increased across various specialties during the COVID-19 pandemic. Combining teleophthalmic visits with asynchronous testing suggested that this approach is feasible for subspecialty-level evaluation. Additional study is needed to evaluate whether asynchronous testing outside the same institution could provide an effective and lasting approach for expanding the reach of ophthalmic telehealth.


Subject(s)
COVID-19 , Ophthalmology , Telemedicine , Pregnancy , Female , Humans , COVID-19/epidemiology , Telemedicine/methods , Pandemics/prevention & control , Retrospective Studies
11.
PLoS One ; 17(11): e0276448, 2022.
Article in English | MEDLINE | ID: mdl-36445857

ABSTRACT

The urban-wildland interface is expanding and increasing the risk of human-wildlife conflict. Some wildlife species adapt to or avoid living near people, while others select for anthropogenic resources and are thus more prone to conflict. To promote human-wildlife coexistence, wildlife and land managers need to understand how conflict relates to habitat and resource use in the urban-wildland interface. We investigated black bear (Ursus americanus) habitat use across a gradient of human disturbance in a North American hotspot of human-black bear conflict. We used camera traps to monitor bear activity from July 2018 to July 2019, and compared bear habitat use to environmental and anthropogenic variables and spatiotemporal probabilities of conflict. Bears predominantly used areas of high vegetation productivity and increased their nocturnality near people. Still, bears used more high-conflict areas in summer and autumn, specifically rural lands with ripe crops. Our results suggest that bears are generally modifying their behaviours in the urban-wildland interface through spatial and temporal avoidance of humans, which may facilitate coexistence. However, conflict still occurs, especially in autumn when hyperphagia and peak crop availability attract bears to abundant rural food resources. To improve conflict mitigation practices, we recommend targeting seasonal rural attractants with pre-emptive fruit picking, bear-proof compost containment, and other forms of behavioural deterrence. By combining camera-trap monitoring of a large carnivore along an anthropogenic gradient with conflict mapping, we provide a framework for evidence-based improvements in human-wildlife coexistence.


Subject(s)
Ecosystem , Ursidae , Animals , Humans , Animals, Wild , Crops, Agricultural , Seasons , Anthropogenic Effects
12.
Ecol Evol ; 12(7): e9108, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35866017

ABSTRACT

Human disturbance directly affects animal populations and communities, but indirect effects of disturbance on species behaviors are less well understood. For instance, disturbance may alter predator activity and cause knock-on effects to predator-sensitive foraging in prey. Camera traps provide an emerging opportunity to investigate such disturbance-mediated impacts to animal behaviors across multiple scales. We used camera trap data to test predictions about predator-sensitive behavior in three ungulate species (caribou, Rangifer tarandus; white-tailed deer, Odocoileus virginianus; moose, Alces alces) across two western boreal forest landscapes varying in disturbance. We quantified behavior as the number of camera trap photos per detection event and tested its relationship to inferred human-mediated predation risk between a landscape with greater industrial disturbance and predator activity and a "control" landscape with lower human and predator activity. We also assessed the finer-scale influence on behavior of variation in predation risk (relative to habitat variation) across camera sites within the more disturbed landscape. We predicted that animals in areas with greater predation risk (e.g., more wolf activity, less cover) would travel faster past cameras and generate fewer photos per detection event, while animals in areas with less predation risk would linger (rest, forage, investigate), generating more photos per event. Our predictions were supported at the landscape-level, as caribou and moose had more photos per event in the control landscape where disturbance-mediated predation risk was lower. At a finer-scale within the disturbed landscape, no prey species showed a significant behavioral response to wolf activity, but the number of photos per event decreased for white-tailed deer with increasing line of sight (m) along seismic lines (i.e., decreasing visual cover), consistent with a predator-sensitive response. The presence of juveniles was associated with shorter behavioral events for caribou and moose, suggesting greater predator sensitivity for females with calves. Only moose demonstrated a positive behavioral association (i.e., longer events) with vegetation productivity (16-day NDVI), suggesting that for other species bottom-up influences of forage availability were generally weaker than top-down influences from predation risk. Behavioral insights can be gleaned from camera trap surveys and provide complementary information about animal responses to predation risk, and thus about the indirect impacts of human disturbances on predator-prey interactions.

14.
PLOS Digit Health ; 1(11): e0000131, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36812561

ABSTRACT

The objective of this study was to compare the sensitivity and specificity of handheld fundus cameras in detecting diabetic retinopathy (DR), diabetic macular edema (DME), and macular degeneration. Participants in the study, conducted at Maharaj Nakorn Hospital in northern Thailand between September 2018 and May 2019, underwent an ophthalmologist examination as well as mydriatic fundus photography with three handheld fundus cameras (iNview, Peek Retina, Pictor Plus). Photographs were graded and adjudicated by masked ophthalmologists. Outcome measures included the sensitivity and specificity of each fundus camera for detecting DR, DME, and macular degeneration, relative to ophthalmologist examination. Fundus photographs of 355 eyes from 185 participants were captured with each of the three retinal cameras. Of the 355 eyes, 102 had DR, 71 had DME, and 89 had macular degeneration on ophthalmologist examination. The Pictor Plus was the most sensitive camera for each of the diseases (73-77%) and also achieved relatively high specificity (77-91%). The Peek Retina was the most specific (96-99%), though in part because of its low sensitivity (6-18%). The iNview had slightly lower estimates of sensitivity (55-72%) and specificity (86-90%) compared to the Pictor Plus. These findings demonstrated that the handheld cameras achieved high specificity but variable sensitivities in detecting DR, DME, and macular degeneration. The Pictor Plus, iNview, and Peek Retina would each have distinct advantages and disadvantages when used in tele-ophthalmology retinal screening programs.
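The sensitivity and specificity figures above come from comparing each camera's disease call against the ophthalmologist exam as the reference standard; a minimal sketch of that calculation, with hypothetical labels, is shown below.

```python
# Sketch: sensitivity and specificity of a camera's DR call against the
# ophthalmologist exam as the reference standard. Labels are hypothetical.
def sensitivity_specificity(camera_positive, exam_positive):
    tp = sum(c and e for c, e in zip(camera_positive, exam_positive))
    tn = sum(not c and not e for c, e in zip(camera_positive, exam_positive))
    fp = sum(c and not e for c, e in zip(camera_positive, exam_positive))
    fn = sum(not c and e for c, e in zip(camera_positive, exam_positive))
    return tp / (tp + fn), tn / (tn + fp)

camera = [True, False, True, True, False, False, True, False]
exam   = [True, False, False, True, False, True, True, False]
sens, spec = sensitivity_specificity(camera, exam)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```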

15.
BMC Ophthalmol ; 21(1): 440, 2021 Dec 20.
Article in English | MEDLINE | ID: mdl-34930191

ABSTRACT

BACKGROUND: The authors sought to evaluate visual outcomes in patients with varying etiologies of neovascular glaucoma (NVG), who were treated with glaucoma drainage devices (GDD). METHODS: This was a retrospective case series of patients at a large academic teaching institution who had surgical intervention for neovascular glaucoma between September 2011 and May 2019. Eyes were included if there was documented neovascularization of the iris/angle with an intraocular pressure (IOP) > 21 mmHg at presentation. Eyes also had to have been treated with surgical intervention that included a GDD. Primary outcome measure was visual acuity at the 1-year post-operative visit. Secondary outcome measure was qualified success after surgery defined by: pressure criteria (5 mmHg < IOP ≤ 21 mmHg), no re-operation for elevated IOP, and no loss of light perception (LP) vision. RESULTS: One hundred twenty eyes met inclusion criteria; 61.7% had an etiology of proliferative diabetic retinopathy (PDR), 23.3% had retinal vein occlusions (RVO), and the remaining 15.0% suffered from other etiologies. Of patients treated with GDD, eyes with PDR had better vision compared to eyes with RVO at final evaluation (p = 0.041). There was a statistically significant difference (p = 0.027) in the mean number of glaucoma medications, with Ahmed eyes (n = 70) requiring 1.9 medications and Baerveldt eyes (n = 46) requiring 1.3 medications at final evaluation. CONCLUSIONS: In our study, many patients with NVG achieved meaningful vision, as defined by World Health Organization (WHO) guidelines, and IOP control after GDD. Outcomes differed between patients with PDR and RVO in favor of the PDR group. Different GDD devices had similar performance profiles for visual acuity and IOP outcomes. Direct prospective comparison of Baerveldt, Ahmed, and cyclophotocoagulation represents the next phase of discovery.


Subject(s)
Glaucoma, Neovascular , Glaucoma, Neovascular/etiology , Glaucoma, Neovascular/surgery , Humans , Prospective Studies , Retrospective Studies , Treatment Outcome
16.
Clin Ophthalmol ; 15: 3205-3211, 2021.
Article in English | MEDLINE | ID: mdl-34349497

ABSTRACT

PURPOSE: To evaluate the agreement of a home vision screening test compared to standard in-office technician-measured Snellen visual acuity to allow for remote screening and triaging of patients. PATIENTS AND METHODS: In this prospective study, English-speaking patients with in-office ophthalmology appointments from May to August 2020 and visual acuity better than 20/125 were asked to complete a home vision test one week before their scheduled in-office appointment. The home vision test was a modified ETDRS chart displayed in a PDF document that could be printed or viewed on a monitor. The primary outcome was the mean difference between office-based and home visual acuity. RESULTS: Eighty-two eyes of 45 patients were included in the study, with 45 study eyes analyzed. The mean difference between office-based and home visual acuity was -0.02 logMAR (SD 0.15, P=0.28) among study eyes. Of these eyes, 91% demonstrated agreement between the two methods within 0.2 logMAR of the mean difference, and 60% had agreement within 0.1 logMAR of the mean difference. There were no significant demographic or ocular risk factors leading to a greater difference between the tests. CONCLUSION: There was good agreement between the home and in-office Snellen tests for patients with vision better than 20/125. The home vision test can be used to remotely determine if there is a significant vision change of >0.2 logMAR, or approximately 2 lines of visual acuity.
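A small sketch of the agreement analysis described above, the mean home-versus-office logMAR difference and the share of eyes falling within 0.1 and 0.2 logMAR of that mean, is shown below with hypothetical acuity values.

```python
# Sketch of the agreement calculation: mean home-vs-office logMAR difference and
# the share of eyes within 0.1 and 0.2 logMAR of that mean. Values are hypothetical.
import numpy as np

office = np.array([0.10, 0.30, 0.00, 0.18, 0.48, 0.20])  # logMAR, office Snellen
home   = np.array([0.12, 0.20, 0.00, 0.30, 0.40, 0.18])  # logMAR, home test

diff = office - home
mean_diff = diff.mean()
within_01 = np.mean(np.abs(diff - mean_diff) <= 0.1)
within_02 = np.mean(np.abs(diff - mean_diff) <= 0.2)
print(f"mean difference = {mean_diff:+.2f} logMAR (SD {diff.std(ddof=1):.2f})")
print(f"within 0.1 logMAR of mean: {within_01:.0%}; within 0.2: {within_02:.0%}")
```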

17.
Am J Ophthalmol Case Rep ; 23: 101126, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34222712

ABSTRACT

PURPOSE: To describe a unique case of inadvertent filtering bleb creation after intravitreal injections. OBSERVATIONS: An 84-year-old woman with a history of wet age-related macular degeneration requiring intravitreal injections presented with a Seidel-positive conjunctival cyst. The cyst was in an area where she had received multiple injections and was suspected to be an inadvertent filtering bleb secondary to a full-thickness scleral hole created by these injections. She underwent surgical closure of the fistula and repair of the bleb. CONCLUSIONS AND IMPORTANCE: This case emphasizes the importance of recognizing this potential complication of intraocular injections and outlines steps that should be taken to prevent poor outcomes and vision loss.

19.
Am J Ophthalmol ; 227: 87-99, 2021 07.
Article in English | MEDLINE | ID: mdl-33657420

ABSTRACT

PURPOSE: This study compared 1-year results for the composite treatment outcome from the Tube Versus Trabeculectomy (TVT) randomized controlled trial (RCT) to those from an IRIS® (Intelligent Research In Sight) Registry cohort of analogous eyes. DESIGN: Retrospective clinical study with comparison to an RCT. METHODS: Subjects' eyes in the IRIS Registry either received a glaucoma drainage implant (tube) or underwent trabeculectomy after a previous trabeculectomy and/or cataract extraction and had data for 1-year follow-up analyses. OUTCOME: Eyes were classified as failing if they had hypotony (intraocular pressure (IOP) ≤5 mm Hg) or inadequate IOP control (IOP >21 mm Hg or not reduced at least 20% below baseline) on 2 consecutive follow-up visits after 3 months, a reoperation for glaucoma, or no light perception vision, and as successful otherwise. Failure risk was compared by treatment, demographic, and clinical variables and was compared to analogous failure risks from the TVT RCT. RESULTS: The TVT IRIS Registry cohort included 419 eyes, 236 tube eyes (56.3%) and 183 trabeculectomy eyes (43.7%). In this cohort, there was no significant failure risk difference (12.3% for tube eyes and 16.4% for trabeculectomy eyes, P = .231). Comparing the studies, there was a significantly greater risk of failure in the TVT IRIS Registry tube eyes than in the TVT RCT tube eyes (3.8%; P <.001). Reasons for treatment failure included reoperations for glaucoma (none in the TVT RCT at 1 year). CONCLUSIONS: Our results were different from those in the TVT RCT. Possible reasons include non-Baerveldt tubes, greater severity among tube eyes, and practice patterns that reflect real-world data, which are different from those in RCTs.


Subject(s)
Glaucoma Drainage Implants , Glaucoma, Open-Angle/surgery , Prosthesis Implantation , Trabeculectomy , Aged , Aged, 80 and over , Cataract Extraction , Female , Glaucoma, Open-Angle/physiopathology , Humans , Intraocular Pressure/physiology , Male , Middle Aged , Registries , Reoperation , Retrospective Studies , Treatment Outcome , Visual Acuity/physiology
20.
Clin Ophthalmol ; 15: 243-251, 2021.
Article in English | MEDLINE | ID: mdl-33519186

ABSTRACT

BACKGROUND: There is limited long-term data comparing selective laser trabeculoplasty (SLT) to the newer micropulse laser trabeculoplasty (MLT) using a laser emitting at 532 nm. In this study, we determine the effectiveness and safety of MLT compared to SLT. DESIGN: Retrospective comparative cohort study. PARTICIPANTS: A total of 85 consecutive eyes received SLT and 43 consecutive eyes received MLT. METHODS: Patients with open-angle glaucoma receiving their first treatment of laser trabeculoplasty were included. Exclusion criteria were prior laser trabeculoplasty, laser cyclophotocoagulation or glaucoma surgery, and follow-up of less than 1 year. MAIN OUTCOME MEASURES: The primary outcome was success at 1 year, defined as a reduction in intraocular pressure (IOP) of ≥20% from baseline or meeting a prespecified target IOP, with no additional glaucoma medication or subsequent glaucoma intervention. RESULTS: Baseline IOP was 18.0 mmHg (95% CI=16.4-19.5) in the MLT group on an average of 1.8 (95% CI=1.4-2.2) glaucoma medications compared to 18.2 mmHg (95% CI=17.2-19.3) for the SLT group on an average of 2.0 (95% CI=1.6-2.3) medications. At 1-hour post-laser, the SLT group had more transient IOP spikes (MLT 5% vs SLT 16%, P=0.10). There was a trend toward increased success in the SLT group compared to MLT at 1 year (relative risk=1.4, 95% CI=0.8-2.5, P=0.30). CONCLUSION AND RELEVANCE: Eyes had similar success after MLT compared to SLT at 1 year. Laser trabeculoplasty with either method could be offered as treatment, with consideration of MLT in those eyes where IOP spikes should be avoided.
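The relative risk reported above can be reproduced from simple success counts; the sketch below shows the calculation with a log-scale 95% confidence interval, using hypothetical counts rather than the study's outcome data.

```python
# Relative risk of 1-year success (SLT vs MLT) with a 95% CI via the log method.
# Counts are hypothetical stand-ins, not the study's outcome data.
import math

slt_success, slt_total = 45, 85
mlt_success, mlt_total = 16, 43

rr = (slt_success / slt_total) / (mlt_success / mlt_total)
se_log_rr = math.sqrt(1 / slt_success - 1 / slt_total + 1 / mlt_success - 1 / mlt_total)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```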
