1.
Ophthalmology; 124(3): 343-351, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28024825

ABSTRACT

OBJECTIVE: With the increasing prevalence of diabetes, annual screening for diabetic retinopathy (DR) by expert human grading of retinal images is challenging. Automated DR image assessment systems (ARIAS) may provide clinically effective and cost-effective detection of retinopathy. We aimed to determine whether ARIAS can be safely introduced into DR screening pathways to replace human graders. DESIGN: Observational measurement comparison study of human graders following a national screening program for DR versus ARIAS. PARTICIPANTS: Retinal images from 20 258 consecutive patients attending routine annual diabetic eye screening between June 1, 2012, and November 4, 2013. METHODS: Retinal images were manually graded following a standard national protocol for DR screening and were processed by 3 ARIAS: iGradingM, Retmarker, and EyeArt. Discrepancies between manual grades and ARIAS results were sent to a reading center for arbitration. MAIN OUTCOME MEASURES: Screening performance (sensitivity, false-positive rate) and diagnostic accuracy (95% confidence intervals of screening-performance measures) were determined. Economic analysis estimated the cost per appropriate screening outcome. RESULTS: Sensitivity point estimates (95% confidence intervals) of the ARIAS were as follows: EyeArt 94.7% (94.2%-95.2%) for any retinopathy, 93.8% (92.9%-94.6%) for referable retinopathy (human graded as either ungradable, maculopathy, preproliferative, or proliferative), 99.6% (97.0%-99.9%) for proliferative retinopathy; Retmarker 73.0% (72.0%-74.0%) for any retinopathy, 85.0% (83.6%-86.2%) for referable retinopathy, 97.9% (94.9%-99.1%) for proliferative retinopathy. iGradingM classified all images as either having disease or being ungradable. EyeArt and Retmarker saved costs compared with manual grading both as a replacement for initial human grading and as a filter prior to primary human grading, although the latter approach was less cost-effective. CONCLUSIONS: Retmarker and EyeArt systems achieved acceptable sensitivity for referable retinopathy when compared with that of human graders and had sufficient specificity to make them cost-effective alternatives to manual grading alone. ARIAS have the potential to reduce costs in developed-world health care economies and to aid delivery of DR screening in developing or remote health care settings.
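The sensitivity figures and confidence intervals quoted above are standard screening-performance calculations. A minimal Python sketch of how such a point estimate and a 95% interval can be computed is shown below; the counts and the helper name are illustrative placeholders, not data or code from the study.

```python
# A minimal sketch (not the study's actual code) of how screening sensitivity
# and a 95% confidence interval of the kind quoted above can be computed.
# The counts below are placeholders, not data from the paper.
from math import sqrt

def sensitivity_with_ci(true_positives: int, false_negatives: int, z: float = 1.96):
    """Return sensitivity and a Wilson score 95% CI for a screening test."""
    n = true_positives + false_negatives          # all patients with disease
    p = true_positives / n                        # point estimate of sensitivity
    centre = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return p, centre - half, centre + half

# Illustrative counts only: e.g. 938 of 1000 referable cases flagged by an ARIAS.
sens, lo, hi = sensitivity_with_ci(true_positives=938, false_negatives=62)
print(f"sensitivity {sens:.1%}, 95% CI {lo:.1%} to {hi:.1%}")
```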


Subject(s)
Cost-Benefit Analysis; Diabetic Retinopathy/diagnosis; Diabetic Retinopathy/economics; Image Interpretation, Computer-Assisted; Adolescent; Adult; Aged; Aged, 80 and over; Child; Decision Trees; Economics, Medical; False Negative Reactions; Female; Humans; Image Interpretation, Computer-Assisted/methods; Male; Mass Screening/methods; Middle Aged; Physical Examination/methods; Predictive Value of Tests; Reproducibility of Results; Sensitivity and Specificity; Software
2.
Health Technol Assess; 20(92): 1-72, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27981917

ABSTRACT

BACKGROUND: Diabetic retinopathy screening in England involves labour-intensive manual grading of retinal images. Automated retinal image analysis systems (ARIASs) may offer an alternative to manual grading. OBJECTIVES: To determine the screening performance and cost-effectiveness of ARIASs used either to replace level 1 human graders or to pre-screen images in the NHS diabetic eye screening programme (DESP), and to examine technical issues associated with implementation. DESIGN: Observational retrospective measurement comparison study with a real-time evaluation of technical issues and a decision-analytic model to evaluate cost-effectiveness. SETTING: An NHS DESP. PARTICIPANTS: Consecutive diabetic patients who attended a routine annual NHS DESP visit. INTERVENTIONS: Retinal images were manually graded and processed by three ARIASs: iGradingM (version 1.1; originally Medalytix Group Ltd, Manchester, UK, purchased by Digital Healthcare, Cambridge, UK, at the initiation of the study and in turn by EMIS Health, Leeds, UK, after its conclusion), Retmarker (version 0.8.2, Retmarker Ltd, Coimbra, Portugal) and EyeArt (Eyenuk Inc., Woodland Hills, CA, USA). The final manual grade was used as the reference standard. A reading centre masked to all prior grading arbitrated a subset of discrepancies between manual grading and each ARIAS, producing a reference standard manual grade modified by arbitration. MAIN OUTCOME MEASURES: Screening performance (sensitivity, specificity, false-positive rate and likelihood ratios) and diagnostic accuracy [95% confidence intervals (CIs)] of ARIASs. A secondary analysis explored the influence of camera type and patients' ethnicity, age and sex on screening performance. Economic analysis estimated the cost per appropriate screening outcome identified. RESULTS: A total of 20,258 patients with 102,856 images were entered into the study. The sensitivity point estimates of the ARIASs were as follows: EyeArt 94.7% (95% CI 94.2% to 95.2%) for any retinopathy, 93.8% (95% CI 92.9% to 94.6%) for referable retinopathy and 99.6% (95% CI 97.0% to 99.9%) for proliferative retinopathy; and Retmarker 73.0% (95% CI 72.0% to 74.0%) for any retinopathy, 85.0% (95% CI 83.6% to 86.2%) for referable retinopathy and 97.9% (95% CI 94.9% to 99.1%) for proliferative retinopathy. iGradingM classified all images as either 'disease' or 'ungradable', limiting further analysis of iGradingM. The sensitivity and false-positive rates for EyeArt were not affected by ethnicity, sex or camera type, but sensitivity declined marginally with increasing patient age. The screening performance of Retmarker appeared to vary with patients' age and ethnicity and with camera type. Both EyeArt and Retmarker were cost saving relative to manual grading, either as a replacement for level 1 human grading or when used prior to level 1 human grading, although the latter was less cost-effective. A threshold analysis estimated the highest ARIAS cost per patient at which ARIASs remained no more expensive per appropriate outcome than human grading when used to replace level 1 graders: £3.82 per patient for Retmarker and £2.71 per patient for EyeArt. LIMITATIONS: The non-randomised study design limited the health economic analysis, but the same retinal images were processed by all ARIASs in this measurement comparison study.
CONCLUSIONS: Retmarker and EyeArt achieved acceptable sensitivity and false-positive rates for referable retinopathy (compared with human graders as the reference standard) and appear to be cost-effective alternatives to a purely manual grading approach. Future work is required to develop technical specifications to optimise deployment and to address potential governance issues. FUNDING: The National Institute for Health Research (NIHR) Health Technology Assessment programme, a Fight for Sight grant (Hirsch grant award) and the Department of Health's NIHR Biomedical Research Centre for Ophthalmology at Moorfields Eye Hospital and the University College London Institute of Ophthalmology.
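The threshold analysis described in the RESULTS section above rearranges a cost-per-appropriate-outcome comparison to find the highest per-patient ARIAS price at which automated grading stays no more expensive than level 1 human grading. The sketch below shows that rearrangement under stated assumptions; all figures and function names are hypothetical and do not come from the study's economic model.

```python
# A hedged sketch of the kind of threshold analysis described above: find the
# highest per-patient ARIAS cost at which automated grading remains no more
# expensive per appropriate screening outcome than level 1 human grading.
# All figures here are hypothetical placeholders, not values from the study.

def cost_per_appropriate_outcome(cost_per_patient, n_patients, n_appropriate_outcomes):
    return cost_per_patient * n_patients / n_appropriate_outcomes

def arias_threshold_cost(human_cost_per_patient, n_patients,
                         human_outcomes, arias_outcomes):
    """Maximum ARIAS cost per patient before it becomes more expensive per
    appropriate outcome than human grading (simple algebraic rearrangement)."""
    human_cpo = cost_per_appropriate_outcome(human_cost_per_patient,
                                             n_patients, human_outcomes)
    return human_cpo * arias_outcomes / n_patients

# Hypothetical example: human grading at £4.50 per patient, 20,000 patients,
# 18,500 vs 18,000 appropriate outcomes for human vs ARIAS grading.
print(f"threshold: £{arias_threshold_cost(4.50, 20_000, 18_500, 18_000):.2f} per patient")
```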


Subject(s)
Diabetic Retinopathy/diagnosis; Image Processing, Computer-Assisted/economics; Image Processing, Computer-Assisted/methods; Mass Screening/methods; Adolescent; Adult; Age Factors; Aged; Aged, 80 and over; Child; Cost-Benefit Analysis; Diabetic Retinopathy/ethnology; Diabetic Retinopathy/pathology; England; Ethnicity; False Positive Reactions; Female; Humans; Image Processing, Computer-Assisted/instrumentation; Male; Mass Screening/standards; Middle Aged; Retrospective Studies; Sensitivity and Specificity; Software; State Medicine; Technology Assessment, Biomedical; Young Adult
3.
J Med Screen; 22(3): 112-8, 2015 Sep.
Article in English | MEDLINE | ID: mdl-25742804

ABSTRACT

OBJECTIVES: Diabetic retinopathy screening in England involves labour-intensive manual grading of digital retinal images. We present the plan for an observational retrospective study of whether automated systems could replace one or more steps of human grading. METHODS: Patients aged 12 or older who attended the Diabetes Eye Screening programme, Homerton University Hospital (London), between 1 June 2012 and 4 November 2013 had macular and disc-centred retinal images taken. All screening episodes were manually graded and will additionally be graded by three automated systems. Each system will process all screening episodes, and screening performance (sensitivity, false positive rate, likelihood ratios) and diagnostic accuracy (95% confidence intervals of screening performance measures) will be quantified. A subset of gradings will be validated by an approved Reading Centre. Additional analyses will explore the effect of altering each automated system's thresholds for disease detection on screening performance. RESULTS: Of 20,258 patients with diabetes, 2,782 were referred to ophthalmologists for further examination. The prevalences of maculopathy (M1), pre-proliferative retinopathy (R2), and proliferative retinopathy (R3) were 7.9%, 3.1%, and 1.2%, respectively; 4,749 (23%) patients were diagnosed with background retinopathy (R1), and 1.5% were considered ungradable by human graders. CONCLUSIONS: Retinopathy prevalence was similar to that in other English diabetic screening programmes, so findings should be generalizable. The study population size will allow detection of differences in screening performance between the human and automated grading systems as small as 2%. The project will compare the performance and economic costs of manual versus automated grading systems.
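For orientation, the referral and background-retinopathy counts quoted in the RESULTS section can be turned into the percentages reported there with simple proportion arithmetic. The short sketch below does exactly that; only the layout and helper name are illustrative, the counts are those given in the abstract.

```python
# A small sketch reproducing the prevalence-style arithmetic quoted above.
# Only the referral and background-retinopathy counts appear in the abstract;
# the helper name and layout are illustrative, not from the paper.

screened = 20_258          # patients attending screening (from the abstract)
referred = 2_782           # referred to ophthalmologists (from the abstract)
background_r1 = 4_749      # background retinopathy (from the abstract)

def pct(numerator: int, denominator: int) -> str:
    return f"{100 * numerator / denominator:.1f}%"

print("referral rate:          ", pct(referred, screened))       # ~13.7%
print("background retinopathy: ", pct(background_r1, screened))  # ~23.4%, reported as 23%
```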


Subject(s)
Diabetic Retinopathy/diagnosis; Diagnosis, Computer-Assisted/methods; Diagnosis, Computer-Assisted/standards; Mass Screening/methods; Mass Screening/standards; Adolescent; Adult; Aged; Aged, 80 and over; Child; England; Female; Humans; Male; Middle Aged; Ophthalmology/methods; Ophthalmology/standards; Pattern Recognition, Automated; Retrospective Studies; Young Adult