Results 1 - 3 of 3
1.
Cornea; 39(12): 1503-1509, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32833849

ABSTRACT

PURPOSE: To evaluate the reliability of manual annotation when quantifying corneal anatomical features and microbial keratitis (MK) morphological features on slit-lamp photography (SLP) images.

METHODS: Prospectively enrolled patients with MK underwent SLP at the initial encounter at 2 academic eye hospitals. Patients who presented with an epithelial defect (ED) were eligible for analysis. Features, which included the ED, corneal limbus (L), pupil (P), stromal infiltrate (SI), white blood cell (WBC) infiltration at the SI edge, and hypopyon (H), were annotated independently by 2 physicians on SLP images. Intraclass correlation coefficients (ICCs) were applied for reliability assessment; Dice similarity coefficients (DSCs) were used to investigate the area overlap between readers.

RESULTS: Seventy-five patients with MK and an ED underwent SLP. DSCs indicated good to fair annotation overlap between graders (L = 0.97, P = 0.80, ED = 0.94, SI = 0.82, H = 0.82, WBC = 0.83) and between repeat annotations by the same grader (L = 0.97, P = 0.81, ED = 0.94, SI = 0.85, H = 0.84, WBC = 0.82). ICC scores showed good intergrader (L = 0.98, P = 0.78, ED = 1.00, SI = 0.67, H = 0.97, WBC = 0.86) and intragrader (L = 0.99, P = 0.92, ED = 0.99, SI = 0.94, H = 0.99, WBC = 0.92) reliability. When the reliability statistics were recalculated for the annotated SI area in the subset of cases where both graders agreed that WBC infiltration was present or absent, the intergrader ICC improved to 0.91 and the DSC to 0.86; the intragrader ICC remained the same, while the DSC improved to 0.87.

CONCLUSIONS: Manual annotation supports the usefulness of area quantification in the evaluation of MK; however, variability is intrinsic to the task, so annotation protocols need optimization. Future directions may include using multiple annotators per image or automated annotation software.
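The overlap statistic reported above, the Dice similarity coefficient, is twice the area of intersection of two annotations divided by the sum of their areas. The study does not publish its implementation; the snippet below is a minimal sketch of that computation on two hypothetical binary masks (array shapes, grader names, and pixel regions are illustrative assumptions, not the study's data).

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary annotation masks.

    Both masks are boolean arrays of the same shape, where True marks pixels
    labeled as the feature (e.g., epithelial defect) by a grader.
    """
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # neither grader marked anything: treat as perfect agreement
    return 2.0 * intersection / total

# Hypothetical example: two graders' annotations of the same slit-lamp photograph
grader_1 = np.zeros((480, 640), dtype=bool)
grader_2 = np.zeros((480, 640), dtype=bool)
grader_1[100:300, 200:400] = True
grader_2[120:310, 210:405] = True
print(f"DSC = {dice_coefficient(grader_1, grader_2):.2f}")
```

A DSC of 1.0 means the two annotated regions coincide exactly; values near 0.8, as reported for the stromal infiltrate and WBC features, indicate substantial but imperfect overlap.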


Subject(s)
Epithelium, Corneal/pathology; Eye Infections, Bacterial/pathology; Eye Infections, Fungal/pathology; Keratitis/pathology; Adult; Aged; Bacteria/isolation & purification; Corneal Stroma/pathology; Eye Infections, Bacterial/microbiology; Eye Infections, Fungal/microbiology; Female; Fungi/isolation & purification; Humans; Keratitis/microbiology; Leukocyte Count; Limbus Corneae/pathology; Male; Middle Aged; Prospective Studies; Reproducibility of Results; Slit Lamp Microscopy
2.
Cornea; 39(5): 628-633, 2020 May.
Article in English | MEDLINE | ID: mdl-31977729

ABSTRACT

PURPOSE: To investigate the sources of measurement variability when quantifying the morphology of microbial keratitis (MK) from slit-lamp photography (SLP) images using a semiautomated, image-analysis algorithm.

METHODS: Prospectively enrolled patients with MK underwent SLP to obtain images of their epithelial defects (EDs). Eyes were stained with fluorescein and imaged multiple times under blue light, at low and high magnifications. A masked research assistant chose the 3 best images and annotated each 3 times to provide seed regions corresponding to the ED and healthy cornea. The algorithm returned the ED area for each seeded image. Eyes without EDs and algorithm failures were excluded. Variance components were estimated with a random effects model, and intraclass correlation coefficients were estimated to assess intragrader reliability.

RESULTS: A total of 42 eyes from 42 participants with MK were photographed. After excluding poor-quality images, eyes with no EDs, and algorithm failures, 34 patients with 92 images and 274 seeds were analyzed. No significant differences in the average ED area were found between seedings or between high and low SLP magnifications (all P > 0.5, paired t tests). Minimal measurement variability was attributable to the image (0.9%), magnification (0.2%), or seed (0.1%). Most variability was attributable to differences in ED sizes between patients (85.2%); a further 13.7% was unexplained. Multiple iterations of the algorithm on the same image showed good consistency (intraclass correlation coefficient = 0.98; 95% confidence interval, 0.97-0.99).

CONCLUSIONS: Image-analysis algorithms showed good reliability for measuring the ED area from SLP images. Most measurement variability was attributable to between-patient differences, not imaging settings or the user's application of the algorithm.
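The reliability figure reported above is an intraclass correlation: the share of total variance attributable to true between-eye differences rather than repeat-measurement noise. The study fit a fuller random effects model that also separated image, magnification, and seed components; the sketch below shows only the simpler one-way ICC(1,1) decomposition on hypothetical ED-area measurements (the eye labels, values, and column names are illustrative assumptions, not the study's data).

```python
import pandas as pd

# Hypothetical repeated ED-area measurements (mm^2): each eye measured
# several times (e.g., different images/seedings of the same eye).
data = pd.DataFrame({
    "eye":  ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "area": [12.1, 12.4, 11.9, 30.2, 29.8, 30.5, 5.6, 5.4, 5.7],
})

# One-way random-effects decomposition: between-eye vs. within-eye variance.
k = int(data.groupby("eye")["area"].size().iloc[0])   # repeats per eye (balanced design assumed)
grand_mean = data["area"].mean()
group_means = data.groupby("eye")["area"].mean()

ss_between = k * ((group_means - grand_mean) ** 2).sum()
ss_within = ((data["area"] - data["eye"].map(group_means)) ** 2).sum()

n_groups = group_means.size
ms_between = ss_between / (n_groups - 1)
ms_within = ss_within / (n_groups * (k - 1))

# ICC(1,1): proportion of total variance due to between-eye differences.
icc = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
print(f"ICC(1,1) = {icc:.3f}")
```

When, as in the study, between-patient differences dominate (85.2% of variance) and repeat measurements of the same eye vary little, this ratio approaches 1, which is why the reported ICC of 0.98 reflects good algorithmic consistency.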


Subject(s)
Algorithms; Cornea/pathology; Eye Infections, Bacterial/diagnosis; Image Processing, Computer-Assisted/methods; Keratitis/diagnosis; Female; Humans; Male; Middle Aged; Prospective Studies; Reproducibility of Results; Slit Lamp Microscopy
3.
JAMA Ophthalmol; 137(8): 929-931, 2019 Aug 01.
Article in English | MEDLINE | ID: mdl-31145441

ABSTRACT

IMPORTANCE: Electronic health records (EHRs) contain an abundance of health information; however, researchers need to understand data accuracy to ask appropriate research questions.

OBJECTIVE: To investigate the concordance of the names of medications for microbial keratitis between the structured, formal EHR medication list and the text of clinicians' progress notes.

DESIGN, SETTING, AND PARTICIPANTS: This cross-sectional study, conducted in the cornea section of an ophthalmology department in a tertiary care, referral academic medical center, examined the medications of 53 patients with microbial keratitis treated until disease resolution from July 1, 2015, to August 1, 2018. Documentation of medications was compared between the structured medication list extracted from the EHR server and the medications written into the clinical progress note and transcribed by the study team.

EXPOSURE: Medication treatment for microbial keratitis.

MAIN OUTCOMES AND MEASURES: Medication mismatch frequency.

RESULTS: The study sample included 24 men and 29 women, with a mean (SD) age of 51.8 (19.6) years. Of the 247 medications identified, 57 (23.1%) differed between the progress notes and the formal EHR-based medication list. Reasons included medications not prescribed via the EHR ordering system (25 [43.9%]), outside medications not reconciled in the internal EHR medication list (23 [40.4%]), and medications prescribed via the EHR ordering system and present in the formal list but not described in the clinical note (9 [15.8%]). Fortified antimicrobials represented the largest category of medication mismatch between modalities (17 of 70 [24.3%]). Nearly one-third of patients (17 [32.1%]) had at least 1 medication mismatch in their record.

CONCLUSIONS AND RELEVANCE: Almost 1 in 4 medications were mismatched between the progress note and the formal medication list in the EHR. These findings suggest that EHR data should be checked for internal consistency before use in research.
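The mismatch frequency reported above is, in essence, a set comparison between the medications named in the progress note and those in the structured EHR list. The study's abstraction was done manually by the study team; the snippet below is only a conceptual sketch of that comparison, with invented medication names and a crude normalization rule (both are illustrative assumptions, not the study's method).

```python
def normalize(name: str) -> str:
    """Crude normalization so trivial formatting differences don't count as mismatches."""
    return " ".join(name.lower().split())

# Hypothetical single encounter: structured EHR list vs. progress-note medications.
ehr_list = {"Moxifloxacin 0.5% drops", "Cyclopentolate 1% drops"}
note_meds = {"moxifloxacin 0.5% drops", "Fortified tobramycin 14 mg/mL drops"}

ehr_norm = {normalize(m) for m in ehr_list}
note_norm = {normalize(m) for m in note_meds}

only_in_note = note_norm - ehr_norm   # e.g., outside or fortified meds never ordered in the EHR
only_in_ehr = ehr_norm - note_norm    # ordered in the EHR but not described in the note
mismatches = only_in_note | only_in_ehr

total = len(ehr_norm | note_norm)
print(f"Mismatch frequency: {len(mismatches)}/{total} = {len(mismatches) / total:.1%}")
```

Applied across all encounters, the analogous tally is what yields the study's headline figure of 57 of 247 medications (23.1%) differing between the two documentation sources.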
