Results 1 - 10 of 10
1.
Transl Vis Sci Technol ; 11(1): 11, 2022 01 03.
Article in English | MEDLINE | ID: mdl-35015061

ABSTRACT

Purpose: To compare supervised transfer learning to semisupervised learning for their ability to learn in-depth knowledge with limited data in the optical coherence tomography (OCT) domain. Methods: Transfer learning with EfficientNet-B4 and semisupervised learning with SimCLR are used in this work. The largest public OCT dataset, consisting of 108,312 images in four categories (choroidal neovascularization, diabetic macular edema, drusen, and normal), is used. In addition, two smaller datasets are constructed, containing 31,200 images for the limited version and 4000 for the mini version of the dataset. To illustrate the effectiveness of the developed models, local interpretable model-agnostic explanations and class activation maps are used as explainability techniques. Results: The proposed transfer learning approach using the EfficientNet-B4 model trained on the limited dataset achieves an accuracy of 0.976 (95% confidence interval [CI], 0.963, 0.983), sensitivity of 0.973, and specificity of 0.991. The semisupervised solution with SimCLR, using 10% labeled data and the limited dataset, performs with an accuracy of 0.946 (95% CI, 0.932, 0.960), sensitivity of 0.941, and specificity of 0.983. Conclusions: Semisupervised learning has great potential for datasets that contain both labeled and unlabeled inputs, typically with a significantly smaller number of labeled samples. The semisupervised solution, provided with merely 10% labeled data, achieves performance very similar to the supervised transfer learning that uses 100% labeled samples. Translational Relevance: Semisupervised learning enables building performant models with less expert effort and time by exploiting the abundant unlabeled data available alongside the labeled samples.
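A minimal sketch of the supervised transfer-learning setup described above, assuming a torchvision EfficientNet-B4 backbone pretrained on ImageNet and a generic four-class OCT image folder. The dataset path, image size, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal transfer-learning sketch: fine-tune an ImageNet-pretrained EfficientNet-B4
# on a 4-class OCT dataset (CNV, DME, drusen, normal). Paths and hyperparameters are
# illustrative assumptions, not the paper's exact configuration.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# EfficientNet-B4 is commonly trained on ~380x380 inputs; normalize with ImageNet statistics.
tfm = transforms.Compose([
    transforms.Resize((380, 380)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: oct_limited/train/<class_name>/*.jpeg
train_ds = datasets.ImageFolder("oct_limited/train", transform=tfm)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=16, shuffle=True)

model = models.efficientnet_b4(weights=models.EfficientNet_B4_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 4)  # 4 OCT classes
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # illustrative number of epochs
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```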


Subject(s)
Deep Learning , Diabetic Retinopathy , Macular Edema , Algorithms , Diabetic Retinopathy/diagnosis , Humans , Macular Edema/diagnosis , Supervised Machine Learning
2.
Eye (Lond) ; 36(3): 524-532, 2022 03.
Article in English | MEDLINE | ID: mdl-33731888

ABSTRACT

BACKGROUND: In diabetic retinopathy (DR) screening programmes, feature-based grading guidelines are used by human graders. However, recent deep learning approaches have focused on end-to-end learning based on labelled data at the whole-image level. Most predictions from such software offer a direct grading output without information about the retinal features responsible for the grade. In this work, we demonstrate a feature-based retinal image analysis system that aims to support flexible grading and to monitor progression. METHODS: The system was evaluated against images that had been graded according to two different grading systems: the International Clinical Diabetic Retinopathy and Diabetic Macular Oedema Severity Scale and the UK's National Screening Committee guidelines. RESULTS: External evaluation was carried out on large datasets collected from three nations (Kenya, Saudi Arabia and China). At the referable DR level, sensitivity did not vary significantly between the different DR grading schemes (91.2-94.2%), and there were excellent specificity values above 93% in all image sets. More importantly, no cases of severe non-proliferative DR, proliferative DR or DMO were missed. CONCLUSIONS: We demonstrate the potential of an AI feature-based DR grading system that is not constrained to any specific grading scheme.
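Feature-based grading means lesions are detected first and a severity grade is derived afterwards, so the same detected features can be re-mapped onto a different scheme. A toy sketch of that re-mapping idea; the feature set, thresholds, and referral rule below are simplified placeholders, not the actual ICDR or UK NSC criteria.

```python
# Toy illustration of feature-based grading: detected lesion evidence is mapped onto a
# severity scale after detection, so the same features can be re-graded under a
# different scheme. Thresholds are simplified placeholders, not real grading criteria.
from dataclasses import dataclass

@dataclass
class RetinalFeatures:
    microaneurysms: int
    haemorrhages: int
    new_vessels: bool          # neovascularisation
    macular_exudates: bool     # crude proxy for diabetic macular oedema

def grade_icdr_like(f: RetinalFeatures) -> str:
    """Map detected features to a simplified ICDR-style DR grade (placeholder rules)."""
    if f.new_vessels:
        return "proliferative DR"
    if f.haemorrhages > 20:
        return "severe non-proliferative DR"
    if f.haemorrhages > 0:
        return "moderate non-proliferative DR"
    if f.microaneurysms > 0:
        return "mild non-proliferative DR"
    return "no apparent DR"

def referable(f: RetinalFeatures) -> bool:
    """Referable under this toy scheme: moderate NPDR or worse, or any macular exudates."""
    grade = grade_icdr_like(f)
    return grade not in ("no apparent DR", "mild non-proliferative DR") or f.macular_exudates

print(grade_icdr_like(RetinalFeatures(3, 0, False, False)))   # mild non-proliferative DR
print(referable(RetinalFeatures(5, 2, False, True)))          # True
```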


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Macular Edema , Diabetic Retinopathy/diagnosis , Humans , Mass Screening/methods , Retina , Software
3.
Transl Vis Sci Technol ; 9(2): 44, 2020 08.
Article in English | MEDLINE | ID: mdl-32879754

ABSTRACT

Purpose: The aim of this work is to demonstrate how a retinal image analysis system, DAPHNE, supports the optimization of diabetic retinopathy (DR) screening programs for grading color fundus photography. Methods: Retinal image sets, graded by trained and certified human graders, were acquired from Saudi Arabia, China, and Kenya. Each image was subsequently analyzed by the DAPHNE automated software. The sensitivity, specificity, and positive and negative predictive values for the detection of referable DR or diabetic macular edema (DME) were evaluated, taking human grading or clinical assessment outcomes as the gold standard. The automated software's ability to identify co-pathology and to correctly label DR lesions was also assessed. Results: In all three datasets the agreement between the automated software and human grading was between 0.84 and 0.88. Sensitivity did not vary significantly between populations (94.28%-97.1%), with specificity ranging from 90.33% to 92.12%. There were excellent negative predictive values above 93% in all image sets. The software was able to monitor DR progression between baseline and follow-up images, with the changes visualized. No cases of proliferative DR or DME were missed in the referable recommendations. Conclusions: The DAPHNE automated software demonstrated its ability not only to grade images but also to reliably monitor and visualize progression. It therefore has the potential to assist timely image analysis in patients with diabetes in varied populations and to help discover subtle signs of sight-threatening disease onset. Translational Relevance: This article takes research on machine vision and evaluates its readiness for clinical use.
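The agreement and screening metrics reported above can be computed from a 2x2 table of automated output versus the reference standard. A small sketch, assuming simple binary referable/non-referable counts; the numbers are placeholders, not the study data.

```python
# Sensitivity, specificity, PPV, NPV and chance-corrected agreement (Cohen's kappa) from a
# 2x2 table of automated output versus the human/clinical reference standard for referable
# DR. Counts below are placeholders for illustration only.
def screening_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),   # referable cases correctly flagged
        "specificity": tn / (tn + fp),   # non-referable cases correctly passed
        "ppv":         tp / (tp + fp),   # flagged cases that are truly referable
        "npv":         tn / (tn + fn),   # passed cases that are truly non-referable
    }

def cohen_kappa(tp: int, fp: int, fn: int, tn: int) -> float:
    """Chance-corrected agreement between automated and reference binary grades."""
    n = tp + fp + fn + tn
    observed = (tp + tn) / n
    expected = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    return (observed - expected) / (1 - expected)

print(screening_metrics(tp=480, fp=70, fn=20, tn=930))
print(round(cohen_kappa(tp=480, fp=70, fn=20, tn=930), 3))
```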


Subject(s)
Diabetic Retinopathy , Macular Edema , China , Diabetic Retinopathy/diagnosis , Humans , Kenya/epidemiology , Saudi Arabia
4.
Proc Natl Acad Sci U S A ; 117(8): 4152-4157, 2020 02 25.
Article in English | MEDLINE | ID: mdl-32029596

ABSTRACT

Whenever a genetically homogeneous population of bacterial cells is exposed to antibiotics, a tiny fraction of cells survives the treatment, a phenomenon known as bacterial persistence [G.L. Hobby et al., Exp. Biol. Med. 50, 281-285 (1942); J. Bigger, The Lancet 244, 497-500 (1944)]. Despite its biomedical relevance, the origin of the phenomenon is still unknown, and as a rare, phenotypically resistant subpopulation, persisters are notoriously hard to study and define. Using computerized tracking, we show that persisters are small at birth and replicate slowly. We also determine that the high-persister mutant strain of Escherichia coli, HipQ, is associated with the phenotype of reduced phenotypic inheritance (RPI). We identify the gene responsible for RPI, ydcI, which encodes a transcription factor, and propose a mechanism whereby loss of phenotypic inheritance causes an increased frequency of persisters. These results provide insight into the generation and maintenance of phenotypic variation and suggest potential targets for the development of therapeutic strategies that tackle persistence in bacterial infections.


Subject(s)
DNA-Binding Proteins/metabolism , Drug Resistance, Bacterial/genetics , Escherichia coli Proteins/metabolism , Escherichia coli/drug effects , Transcription Factors/metabolism , Ampicillin/pharmacology , Anti-Bacterial Agents/pharmacology , DNA-Binding Proteins/genetics , Escherichia coli/genetics , Escherichia coli/physiology , Escherichia coli Proteins/genetics , Microfluidics , Models, Biological , Mutation , Transcription Factors/genetics
5.
Eye (Lond) ; 33(2): 313-319, 2019 02.
Article in English | MEDLINE | ID: mdl-30206417

ABSTRACT

PURPOSE: Objective feedback is important for the continuous development of surgical skills. Motion tracking, which has previously been validated across an entire cataract procedure, can be a useful adjunct. We aimed to measure quantitative differences between junior and senior surgeons' performance in three distinct segments. We further explored whether automated analysis of trainee surgical videos through PhacoTracking could be aligned with metrics from the EyeSi virtual reality simulator, allowing focused improvement of these areas in a controlled environment. METHODS: Prospective cohort analysis comparing junior vs. senior surgeons' real-life performance in distinct segments of cataract surgery: continuous curvilinear capsulorhexis (CCC), phacoemulsification, and irrigation and aspiration (I&A). EyeSi metrics that could be aligned with motion tracking parameters were identified. Motion tracking parameters (instrument path length, number of movements, and total time) were measured. A t-test was used between the two cohorts for each component to test for significance (p < 0.05). RESULTS: A total of 120 segments from videos of 20 junior and 20 senior surgeons were analysed. Significant differences between junior and senior surgeons were found during CCC (path length p = 0.0004; number of movements p < 0.0001; time taken p < 0.0001), phacoemulsification (path length p < 0.0001; number of movements p < 0.0001; time taken p < 0.0001), and I&A (path length p = 0.006; number of movements p = 0.013; time taken p = 0.036). CONCLUSION: Individual segments of cataract surgery analysed using motion tracking appear to discriminate between junior and senior surgeons. Alignment of motion tracking and EyeSi parameters could enable independent, task-specific, objective and quantitative feedback for each segment of surgery, thus mirroring the widely utilised modular training approach.
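A sketch of the statistical comparison described above: an independent-samples t-test on a per-surgeon motion-tracking metric for one surgical segment. The arrays are placeholder values, not the study's measurements.

```python
# Independent-samples t-test comparing a motion-tracking metric (e.g. path length during
# CCC) between junior and senior surgeon cohorts. Values are placeholder data.
import numpy as np
from scipy import stats

junior_path_length = np.array([412.0, 388.5, 455.2, 430.1, 401.8, 467.3])  # arbitrary units
senior_path_length = np.array([298.4, 310.7, 285.9, 322.5, 301.2, 295.0])

t_stat, p_value = stats.ttest_ind(junior_path_length, senior_path_length)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Significant difference between junior and senior cohorts for this segment.")
```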


Subject(s)
Capsulorhexis/methods , Clinical Competence , Image Processing, Computer-Assisted , Operating Rooms , Phacoemulsification/methods , Task Performance and Analysis , Capsulorhexis/education , Education, Medical, Graduate/methods , Educational Measurement/methods , Humans , Internship and Residency , Medical Staff, Hospital , Ophthalmology/education , Phacoemulsification/education , Prospective Studies
6.
BMJ Open ; 8(2): e018478, 2018 02 17.
Article in English | MEDLINE | ID: mdl-29455164

ABSTRACT

OBJECTIVES: To investigate differences in surgical time, the distance the surgical instrument travelled, and the number of movements required to complete manual phacoemulsification cataract surgery versus femtosecond laser cataract surgery. DESIGN: Non-randomised comparative case series. SETTING: Single surgery site, Moorfields Eye Hospital, UK. PARTICIPANTS: 40 cataract surgeries of 40 patients. INTERVENTIONS: Laser-assisted and manual phacoemulsification cataract surgery. Laser-assisted surgery cases were performed using the AMO Catalys platform. PRIMARY AND SECONDARY OUTCOME MEASURES: The computer vision tracking software PhacoTracking was applied to the recordings to establish the distance the instrument travelled, the total number of movements (the number of times an instrument stops and starts moving), and the time taken for surgical steps, including phacoemulsification and irrigation-aspiration (IA), as well as overall surgery time. The time taken for laser docking and delivery was not included in the analyses. RESULTS: Data on 19 laser-assisted and 19 manual phacoemulsification surgeries were analysed (two cases were excluded due to insufficient video-recording quality). There were no differences in the number of instrument moves, the distance the instrument travelled or the time taken to complete the phacoemulsification stage. However, for IA, the number of instrument moves (manual: mean 20 (SD 15) vs laser: mean 38 (SD 22), P=0.008) and the time taken (manual: mean 75 s (SD 24) vs laser: mean 108 s (SD 36), P=0.003) were significantly greater for laser cases. For laser versus manual cases overall, there was no difference in the number of moves or the distance the instrument travelled, but laser cases took longer (mean 88 s, P=0.049). CONCLUSIONS: Laser cataract surgery cases took longer to complete even without accounting for the time taken to complete the laser procedure itself. This appears to be partly because IA required more instrument manoeuvres and took longer to complete. Data from a large randomised series would better elucidate this relationship.
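The tracked metrics themselves can be illustrated from a sequence of instrument-tip coordinates: total path length is the summed frame-to-frame displacement, and a "movement" can be counted each time the instrument's speed rises above a stillness threshold. A simplified sketch with hypothetical coordinates, frame rate, and threshold; the published software may define these quantities differently.

```python
# Illustrative computation of instrument path length and number of movements from tracked
# tip coordinates. Coordinates, frame rate and speed threshold are hypothetical choices.
import numpy as np

def motion_metrics(xy: np.ndarray, fps: float = 25.0, speed_threshold: float = 2.0):
    """xy: (n_frames, 2) array of instrument-tip positions in pixels."""
    step = np.linalg.norm(np.diff(xy, axis=0), axis=1)      # per-frame displacement (px)
    path_length = step.sum()
    moving = step * fps > speed_threshold                   # speed (px/s) above threshold
    # A "movement" starts each time the instrument goes from still to moving.
    n_movements = int(np.sum(moving[1:] & ~moving[:-1]) + moving[0])
    duration_s = len(xy) / fps
    return path_length, n_movements, duration_s

rng = np.random.default_rng(0)
track = np.cumsum(rng.normal(0, 1.5, size=(500, 2)), axis=0)  # fake 20 s track
print(motion_metrics(track))
```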


Subject(s)
Cataract Extraction/methods , Laser Therapy/methods , Phacoemulsification/methods , Case-Control Studies , Cataract , Humans , Postoperative Complications/etiology , Software , Treatment Outcome , Video Recording , Visual Acuity
7.
R Soc Open Sci ; 4(5): 170207, 2017 May.
Article in English | MEDLINE | ID: mdl-28573031

ABSTRACT

Cell growth experiments with a microfluidic device produce large-scale time-lapse image data, which contain important information on cell growth and patterns in their genealogy. To extract such information, we propose a scheme to segment and track bacterial cells automatically. In contrast with most published approaches, which often split segmentation and tracking into two independent procedures, we focus on designing an algorithm that describes cell properties evolving between consecutive frames by feeding segmentation and tracking results from one frame to the next. The cell boundaries are extracted by minimizing the distance regularized level set evolution (DRLSE) model. Each individual cell is identified and tracked by detecting the cell septum and membrane and by minimizing a trajectory energy function along the time-lapse series. Experiments show that by applying this scheme, cell growth and division can be measured automatically. The results show the efficiency of the approach when it is tested on different datasets and compared with other existing algorithms. The proposed approach demonstrates great potential for large-scale bacterial cell growth analysis.
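The tracking step links each segmented cell in one frame to a cell in the next frame by minimizing an assignment cost. A much-simplified sketch of that linking idea, using centroid distance and relative area change as the cost and the Hungarian algorithm for the assignment; the paper's DRLSE segmentation and full trajectory-energy formulation are not reproduced here, and the cost weights are arbitrary.

```python
# Simplified frame-to-frame cell linking: build a cost matrix from centroid distance and
# relative area change, then solve the assignment with the Hungarian algorithm. This
# illustrates only the linking idea, not the paper's DRLSE segmentation or its full
# trajectory-energy model.
import numpy as np
from scipy.optimize import linear_sum_assignment

def link_cells(prev, curr, w_dist=1.0, w_area=50.0):
    """prev, curr: lists of dicts with 'centroid' (x, y) and 'area' per segmented cell.
    Returns a list of (prev_index, curr_index) matches."""
    cost = np.zeros((len(prev), len(curr)))
    for i, p in enumerate(prev):
        for j, c in enumerate(curr):
            dist = np.linalg.norm(np.subtract(p["centroid"], c["centroid"]))
            area_change = abs(p["area"] - c["area"]) / max(p["area"], 1e-9)
            cost[i, j] = w_dist * dist + w_area * area_change
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))

frame_t  = [{"centroid": (10.0, 12.0), "area": 95.0}, {"centroid": (40.0, 18.0), "area": 110.0}]
frame_t1 = [{"centroid": (41.5, 19.0), "area": 108.0}, {"centroid": (11.0, 13.5), "area": 97.0}]
print(link_cells(frame_t, frame_t1))   # expected: [(0, 1), (1, 0)]
```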

8.
IEEE Trans Biomed Eng ; 64(5): 990-1002, 2017 05.
Article in English | MEDLINE | ID: mdl-27362756

ABSTRACT

GOAL: Reliable recognition of microaneurysms (MAs) is an essential task when developing an automated analysis system for diabetic retinopathy (DR) detection. In this study, we propose an integrated approach for automated MA detection with high accuracy. METHODS: Candidate objects are first located by applying a dark-object filtering process. Their cross-section profiles along multiple directions are processed through singular spectrum analysis. The correlation coefficient between each processed profile and a typical MA profile is measured and used as a scale factor to adjust the shape of the candidate profile. This increases the difference in profiles between true MAs and non-MA candidates. A set of statistical features of those profiles is then extracted for a K-nearest neighbor classifier. RESULTS: Experiments show that by applying this process, MAs can be separated well from the retinal background as well as from the most common interfering objects and artifacts. CONCLUSION: The results demonstrate the robustness of the approach when tested on large-scale datasets, with clinically acceptable sensitivity and specificity. SIGNIFICANCE: The proposed approach has great potential for use in an automated DR screening tool or in large-scale eye epidemiology studies.
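The candidate-filtering step rests on singular spectrum analysis (SSA) of each 1-D cross-section intensity profile, followed by correlation against a typical MA profile used as a scale factor. A compact sketch of a basic SSA reconstruction and that correlation-based scaling; the window length, component count, and Gaussian-shaped template are illustrative assumptions rather than the paper's settings.

```python
# Basic singular spectrum analysis (SSA) of a 1-D cross-section profile, plus correlation
# against an assumed Gaussian-shaped MA template used to scale the candidate profile.
# Window length, number of components and the template are illustrative choices only.
import numpy as np

def ssa_reconstruct(profile: np.ndarray, window: int = 8, n_components: int = 2) -> np.ndarray:
    """Reconstruct a smoothed profile from the leading SSA components."""
    n = len(profile)
    k = n - window + 1
    # Trajectory (Hankel) matrix of lagged windows, shape (window, k).
    traj = np.column_stack([profile[i:i + window] for i in range(k)])
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    approx = (u[:, :n_components] * s[:n_components]) @ vt[:n_components]
    # Diagonal averaging (Hankelisation) back to a 1-D series.
    recon = np.zeros(n)
    counts = np.zeros(n)
    for col in range(k):
        recon[col:col + window] += approx[:, col]
        counts[col:col + window] += 1
    return recon / counts

profile = np.exp(-0.5 * ((np.arange(31) - 15) / 3.0) ** 2) \
          + 0.1 * np.random.default_rng(1).normal(size=31)
template = np.exp(-0.5 * ((np.arange(31) - 15) / 3.0) ** 2)   # assumed "typical MA" shape

smoothed = ssa_reconstruct(profile)
corr = np.corrcoef(smoothed, template)[0, 1]
scaled_profile = corr * smoothed   # correlation used as a scale factor, as described above
print(f"correlation with template: {corr:.3f}")
```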


Subject(s)
Aneurysm/pathology , Diabetic Retinopathy/pathology , Fluorescein Angiography/methods , Pattern Recognition, Automated/methods , Retinal Artery/pathology , Algorithms , Aneurysm/diagnostic imaging , Diabetic Retinopathy/diagnostic imaging , Fundus Oculi , Humans , Image Interpretation, Computer-Assisted/methods , Machine Learning , Reproducibility of Results , Retinal Artery/diagnostic imaging , Sensitivity and Specificity
9.
J Ophthalmol ; 2016: 4176547, 2016.
Article in English | MEDLINE | ID: mdl-28074155

ABSTRACT

Patients without diabetic retinopathy (DR) represent a large proportion of the caseload seen by DR screening services, so reliable recognition of the absence of DR in digital fundus images (DFIs) is a prime focus of automated DR screening research. We investigate the use of a novel automated DR detection algorithm to assess retinal DFIs for the absence of DR. A retrospective, masked, and controlled image-based study was undertaken. 17,850 DFIs of patients from six different countries were assessed for DR by the automated system and by human graders. The system's performance was compared across DFIs from the different countries/racial groups. The sensitivities for detection of DR by the automated system were Kenya 92.8%, Botswana 90.1%, Norway 93.5%, Mongolia 91.3%, China 91.9%, and UK 90.1%. The specificities were Kenya 82.7%, Botswana 83.2%, Norway 81.3%, Mongolia 82.5%, China 83.0%, and UK 79%. There was little variability in the calculated sensitivities and specificities across the six countries involved in the study. These data suggest the possible scalability of an automated DR detection platform that enables rapid identification of patients without DR across a wide range of races.

10.
PLoS One ; 8(7): e66730, 2013.
Article in English | MEDLINE | ID: mdl-23840865

ABSTRACT

In any diabetic retinopathy screening program, about two-thirds of patients have no retinopathy. However, on average, it takes a human expert about one and a half times longer to decide that an image is normal than to recognize an abnormal case with obvious features. In this work, we present an automated system for filtering out normal cases to facilitate a more effective use of grading time. The key aim of any such tool is to achieve high sensitivity and specificity to ensure patient safety and service efficiency. There are many challenges to overcome, given the variation in images and in the characteristics to be identified. The system combines computed evidence obtained from various processing stages, including segmentation of candidate regions, classification, and contextual analysis through Hidden Markov Models. Furthermore, evolutionary algorithms are employed to optimize the Hidden Markov Models, the feature selection, and the heterogeneous ensemble classifiers. In order to evaluate its capability of identifying normal images across diverse populations, a population-oriented study was undertaken comparing the software's output to grading by humans. This is relevant because population-based studies collect large numbers of images from subjects expected to have no abnormality and require timely and cost-effective grading. Altogether, 9954 previously unseen images taken from various populations were tested. All test images were masked, so the automated system had not been exposed to them before. The system was trained using image subregions taken from about 400 sample images. Sensitivities of 92.2% and specificities of 90.4% were achieved, varying between populations and population clusters. Of all images the automated system classified as normal, 98.2% were truly normal when compared with the manual grading results. These results demonstrate the scalability and strong potential of such an integrated computational intelligence system as an effective tool to assist a grading service.
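One ingredient of the contextual analysis is scoring an observation sequence under a Hidden Markov Model. A self-contained sketch of the standard forward algorithm for a discrete-observation HMM; the states, observation symbols, and probabilities are toy values, not the paper's trained models (which were additionally optimized with evolutionary algorithms).

```python
# Standard forward algorithm for a discrete-observation HMM: computes the likelihood of an
# observation sequence given the model. States, symbols and probabilities are toy values,
# not the paper's trained, evolutionarily optimized models.
import numpy as np

def forward_likelihood(obs, start_p, trans_p, emit_p):
    """obs: sequence of observation-symbol indices.
    start_p: (n_states,), trans_p: (n_states, n_states), emit_p: (n_states, n_symbols)."""
    alpha = start_p * emit_p[:, obs[0]]                 # initialisation
    for symbol in obs[1:]:
        alpha = (alpha @ trans_p) * emit_p[:, symbol]   # induction step
    return alpha.sum()                                  # P(observations | model)

# Toy 2-state model: "background" vs "lesion-context", 3 observation symbols.
start_p = np.array([0.7, 0.3])
trans_p = np.array([[0.9, 0.1],
                    [0.2, 0.8]])
emit_p  = np.array([[0.6, 0.3, 0.1],    # background emits symbol 0 most often
                    [0.1, 0.3, 0.6]])   # lesion-context emits symbol 2 most often

print(forward_likelihood([0, 0, 2, 2, 1], start_p, trans_p, emit_p))
```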


Subject(s)
Diabetic Retinopathy/diagnosis , Fundus Oculi , Image Processing, Computer-Assisted/methods , Mass Screening/methods , Algorithms , Artificial Intelligence , Diabetic Retinopathy/pathology , Humans , Image Processing, Computer-Assisted/economics , Markov Chains , Mass Screening/economics