Results 1 - 5 of 5
1.
Ophthalmology ; 126(4): 552-564, 2019 04.
Article in English | MEDLINE | ID: mdl-30553900

ABSTRACT

PURPOSE: To understand the impact of deep learning diabetic retinopathy (DR) algorithms on physician readers in computer-assisted settings. DESIGN: Evaluation of diagnostic technology. PARTICIPANTS: One thousand seven hundred ninety-six retinal fundus images from 1612 diabetic patients. METHODS: Ten ophthalmologists (5 general ophthalmologists, 4 retina specialists, 1 retina fellow) read images for DR severity based on the International Clinical Diabetic Retinopathy disease severity scale in each of 3 conditions: unassisted, grades only, or grades plus heatmap. Grades-only assistance comprised a histogram of DR predictions (grades) from a trained deep-learning model. For grades plus heatmap, we additionally showed explanatory heatmaps. MAIN OUTCOME MEASURES: For each experiment arm, we computed sensitivity and specificity of each reader and of the algorithm for different levels of DR severity against an adjudicated reference standard. We also measured accuracy (exact 5-class level agreement and Cohen's quadratically weighted κ), reader-reported confidence (5-point Likert scale), and grading time. RESULTS: Readers graded more accurately with model assistance than without for the grades-only condition (P < 0.001). Grades plus heatmaps improved accuracy for patients with DR (P < 0.001) but reduced accuracy for patients without DR (P = 0.006). Both forms of assistance increased readers' sensitivity for moderate-or-worse DR (unassisted: mean, 79.4% [95% confidence interval (CI), 72.3%-86.5%]; grades only: mean, 87.5% [95% CI, 85.1%-89.9%]; grades plus heatmap: mean, 88.7% [95% CI, 84.9%-92.5%]) without a corresponding drop in specificity (unassisted: mean, 96.6% [95% CI, 95.9%-97.4%]; grades only: mean, 96.1% [95% CI, 95.5%-96.7%]; grades plus heatmap: mean, 95.5% [95% CI, 94.8%-96.1%]). Algorithmic assistance increased the accuracy of retina specialists above that of the unassisted reader or the model alone, and increased grading confidence and grading time across all readers. For most cases, grades plus heatmap was no more effective than grades only. Over the course of the experiment, grading time decreased across all conditions, most sharply for grades plus heatmap. CONCLUSIONS: Deep learning algorithms can improve the accuracy of, and confidence in, DR diagnosis in an assisted-read setting. They also may increase grading time, although these effects may be ameliorated with experience.


Subject(s)
Algorithms , Deep Learning , Diabetic Retinopathy/classification , Diabetic Retinopathy/diagnosis , Diagnosis, Computer-Assisted/methods , Female , Humans , Male , Ophthalmologists/standards , Photography/methods , ROC Curve , Reference Standards , Reproducibility of Results , Sensitivity and Specificity
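The accuracy metric named in the first record, Cohen's quadratically weighted κ on the 5-level DR severity scale, can be sketched from its standard definition as follows (a minimal illustration, not the authors' code; grades and graders here are hypothetical):

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, n_classes=5):
    """Cohen's kappa with quadratic disagreement weights for ordinal
    grades in 0..n_classes-1 (here, a 5-level severity scale)."""
    rater_a, rater_b = np.asarray(rater_a), np.asarray(rater_b)
    # Observed joint distribution of the two gradings.
    conf = np.zeros((n_classes, n_classes))
    for a, b in zip(rater_a, rater_b):
        conf[a, b] += 1
    conf /= conf.sum()
    # Expected joint distribution if the gradings were independent.
    expected = np.outer(conf.sum(axis=1), conf.sum(axis=0))
    # Quadratic weights penalize large ordinal disagreements more heavily.
    i, j = np.meshgrid(np.arange(n_classes), np.arange(n_classes), indexing="ij")
    weights = (i - j) ** 2 / (n_classes - 1) ** 2
    return 1.0 - (weights * conf).sum() / (weights * expected).sum()
```

Perfect agreement yields κ = 1 and chance-level agreement yields κ ≈ 0, with disagreements far apart on the ordinal scale (e.g. grading proliferative DR as no DR) penalized quadratically.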
2.
Invest Ophthalmol Vis Sci ; 59(7): 2861-2868, 2018 06 01.
Article in English | MEDLINE | ID: mdl-30025129

ABSTRACT

Purpose: We evaluate how deep learning can be applied to extract novel information, such as refractive error, from retinal fundus imaging. Methods: Retinal fundus images used in this study were 45- and 30-degree field of view images from the UK Biobank and Age-Related Eye Disease Study (AREDS) clinical trials, respectively. Refractive error was measured by autorefraction in UK Biobank and by subjective refraction in AREDS. We trained a deep learning algorithm to predict refractive error from a total of 226,870 images and validated it on 24,007 UK Biobank and 15,750 AREDS images. Our model used the "attention" method to identify features correlated with refractive error. Results: The resulting algorithm had a mean absolute error (MAE) of 0.56 diopters (95% confidence interval [CI]: 0.55-0.56) for estimating spherical equivalent on the UK Biobank data set and 0.91 diopters (95% CI: 0.89-0.93) for the AREDS data set. The baseline expected MAE (obtained by simply predicting the mean of the population) was 1.81 diopters (95% CI: 1.79-1.84) for UK Biobank and 1.63 diopters (95% CI: 1.60-1.67) for AREDS. Attention maps suggested that the foveal region was one of the most important areas used by the algorithm to make this prediction, though other regions also contributed. Conclusions: To our knowledge, the ability to estimate refractive error with high accuracy from retinal fundus photos was not previously known; this result demonstrates that deep learning can be applied to make novel predictions from medical images.


Subject(s)
Deep Learning , Fundus Oculi , Refractive Errors/diagnosis , Retina/diagnostic imaging , Adult , Aged , Algorithms , Datasets as Topic , Female , Humans , Male , Middle Aged , Refraction, Ocular , Vision Tests , Visual Fields/physiology
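The baseline MAE reported in the second record (obtained by predicting the population mean for every image) is a standard sanity check for any regression model. A minimal sketch, using hypothetical refraction values rather than study data:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error, in the units of the inputs (here, diopters)."""
    return float(np.mean(np.abs(np.asarray(y_true, dtype=float) -
                                np.asarray(y_pred, dtype=float))))

# Hypothetical spherical-equivalent refractions in diopters (not study data).
refractions = np.array([-2.5, -1.0, 0.25, 0.5, 1.75, -4.0])

# Baseline: predict the population mean for every eye.
baseline_pred = np.full(refractions.shape, refractions.mean())
baseline_mae = mae(refractions, baseline_pred)
```

A trained model is only interesting insofar as its MAE beats this baseline, which is the point of the 0.56- versus 1.81-diopter comparison on UK Biobank.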
3.
Nat Biomed Eng ; 2(3): 158-164, 2018 03.
Article in English | MEDLINE | ID: mdl-31015713

ABSTRACT

Traditionally, medical discoveries are made by observing associations, making hypotheses from them and then designing and running experiments to test the hypotheses. However, with medical images, observing and quantifying associations can often be difficult because of the wide variety of features, patterns, colours, values and shapes that are present in real data. Here, we show that deep learning can extract new knowledge from retinal fundus images. Using deep-learning models trained on data from 284,335 patients and validated on two independent datasets of 12,026 and 999 patients, we predicted cardiovascular risk factors not previously thought to be present or quantifiable in retinal images, such as age (mean absolute error within 3.26 years), gender (area under the receiver operating characteristic curve (AUC) = 0.97), smoking status (AUC = 0.71), systolic blood pressure (mean absolute error within 11.23 mmHg) and major adverse cardiac events (AUC = 0.70). We also show that the trained deep-learning models used anatomical features, such as the optic disc or blood vessels, to generate each prediction.


Subject(s)
Cardiovascular Diseases , Deep Learning , Image Interpretation, Computer-Assisted/methods , Retina/diagnostic imaging , Aged , Aged, 80 and over , Algorithms , Cardiovascular Diseases/diagnostic imaging , Cardiovascular Diseases/epidemiology , Female , Fundus Oculi , Humans , Male , Middle Aged , Risk Factors
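The AUC figures quoted in the third record (0.97 for gender, 0.70 for major adverse cardiac events) have a simple probabilistic reading: the chance that a randomly chosen positive case receives a higher model score than a randomly chosen negative one. A minimal pairwise implementation of that definition (illustrative only; it is quadratic in sample size, and production code would use a rank-based routine):

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC as the Mann-Whitney statistic: P(score_pos > score_neg),
    with ties counted as half."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    # Compare every positive score against every negative score.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)
```

An AUC of 0.5 is chance level, so 0.70 for cardiac events is a modest but genuine signal, while 0.97 for gender is near-perfect discrimination.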
4.
Nucleic Acids Res ; 44(11): e102, 2016 06 20.
Article in English | MEDLINE | ID: mdl-27036861

ABSTRACT

Scalable production of DNA nanostructures remains a substantial obstacle to realizing new applications of DNA nanotechnology. Typical DNA nanostructures comprise hundreds of DNA oligonucleotide strands, where each unique strand requires a separate synthesis step. New design methods that reduce the strand count for a given shape while maintaining overall size and complexity would be highly beneficial for efficiently producing DNA nanostructures. Here, we report a method for folding a custom template strand by binding individual staple sequences to multiple locations on the template. We built several nanostructures for well-controlled testing of various design rules, and demonstrate folding of a 6-kb template by as few as 10 unique strand sequences binding to 10 ± 2 locations on the template strand.


Subject(s)
DNA/chemistry , Nanostructures , Nucleic Acid Conformation , Base Sequence , Nanotechnology , Oligonucleotides/chemistry
5.
Biochim Biophys Acta ; 1858(7 Pt A): 1499-506, 2016 Jul.
Article in English | MEDLINE | ID: mdl-27033412

ABSTRACT

Cell-penetrating peptides (CPPs) have emerged as a potentially powerful tool for drug delivery due to their ability to efficiently transport a whole host of biologically active cargoes into cells. Although concerted efforts have shed some light on the cellular internalization pathways of CPPs, quantification of CPP uptake has proved problematic. Here we describe an experimental approach that combines two powerful biophysical techniques, fluorescence-activated cell sorting (FACS) and fluorescence correlation spectroscopy (FCS), to directly, accurately and precisely measure the cellular uptake of fluorescently-labeled molecules. This rapid and technically simple approach is highly versatile and can readily be applied to characterize all major CPP properties that normally require multiple assays, including amount taken up by cells (in moles/cell), uptake efficiency, internalization pathways, intracellular distribution, intracellular degradation and toxicity threshold. The FACS-FCS approach provides a means for quantifying any intracellular biochemical entity, whether expressed in the cell or introduced exogenously and transported across the plasma membrane.


Subject(s)
Cell Membrane/metabolism , Cell-Penetrating Peptides/analysis , Staining and Labeling/methods , Ammonium Chloride/pharmacology , Biotin/chemistry , Cell Membrane/drug effects , Cell Membrane Permeability/drug effects , Cell-Penetrating Peptides/metabolism , Chlorpromazine/pharmacology , Cytochalasin D/pharmacology , Endocytosis/drug effects , Filipin/pharmacology , Flow Cytometry , Fluorescent Dyes/chemistry , HeLa Cells , Humans , Kinetics , Protein Transport/drug effects , Spectrometry, Fluorescence/methods , Streptavidin/chemistry , Succinimides/chemistry , beta-Cyclodextrins/pharmacology