Results 1 - 7 of 7
1.
Ophthalmology; 129(2): 139-146, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34352302

ABSTRACT

PURPOSE: To develop and evaluate an automated, portable algorithm that differentiates active corneal ulcers from healed scars using only external photographs.
DESIGN: A convolutional neural network (CNN) was trained and tested using photographs of corneal ulcers and scars.
PARTICIPANTS: De-identified photographs of corneal ulcers were obtained from the Steroids for Corneal Ulcers Trial (SCUT), the Mycotic Ulcer Treatment Trial (MUTT), and the Byers Eye Institute at Stanford University.
METHODS: Photographs of corneal ulcers (n = 1313) and scars (n = 1132) from SCUT and MUTT were used to train a CNN. The CNN was tested on 2 different patient populations, from eye clinics in India (n = 200) and the Byers Eye Institute at Stanford University (n = 101). Accuracy was evaluated against gold-standard clinical classifications. Feature importances for the trained model were visualized using gradient-weighted class activation mapping.
MAIN OUTCOME MEASURES: Accuracy of the CNN was assessed via the F1 score. The area under the receiver operating characteristic (ROC) curve (AUC) was used to measure the sensitivity-specificity trade-off.
RESULTS: The CNN correctly classified 115 of 123 active ulcers and 65 of 77 scars in patients with corneal ulcers from India (F1 score, 92.0% [95% confidence interval (CI), 88.2%-95.8%]; sensitivity, 93.5% [95% CI, 89.1%-97.9%]; specificity, 84.42% [95% CI, 79.42%-89.42%]; ROC AUC, 0.9731). It correctly classified 43 of 55 active ulcers and 42 of 46 scars in patients with corneal ulcers from Northern California (F1 score, 84.3% [95% CI, 77.2%-91.4%]; sensitivity, 78.2% [95% CI, 67.3%-89.1%]; specificity, 91.3% [95% CI, 85.8%-96.8%]; ROC AUC, 0.9474). The CNN visualizations correlated with clinically relevant features such as corneal infiltrate, hypopyon, and conjunctival injection.
CONCLUSIONS: The CNN classified corneal ulcers and scars with high accuracy and generalized to patient populations outside its training data. It focused on clinically relevant features when making a diagnosis and demonstrated potential as an inexpensive diagnostic approach that may aid triage in communities with limited access to eye care.
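As a quick sanity check, the summary statistics reported above follow directly from the classification counts in the abstract. A minimal sketch (the helper function is illustrative, not part of the study's code; active ulcer is taken as the positive class):

```python
def binary_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity, and F1 from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)        # fraction of active ulcers caught
    specificity = tn / (tn + fp)        # fraction of scars correctly ruled out
    f1 = 2 * tp / (2 * tp + fp + fn)    # harmonic mean of precision and recall
    return sensitivity, specificity, f1

# India test set: 115 of 123 active ulcers and 65 of 77 scars correct.
sens, spec, f1 = binary_metrics(tp=115, fn=8, tn=65, fp=12)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}, F1 {f1:.1%}")
# -> sensitivity 93.5%, specificity 84.4%, F1 92.0%
```

The Northern California counts (43 of 55 ulcers, 42 of 46 scars) reproduce the reported 78.2% sensitivity, 91.3% specificity, and 84.3% F1 in the same way.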


Subject(s)
Cicatrix/diagnostic imaging; Corneal Ulcer/diagnostic imaging; Deep Learning; Eye Infections, Bacterial/diagnostic imaging; Eye Infections, Fungal/diagnostic imaging; Photography; Wound Healing/physiology; Algorithms; Area Under Curve; Cicatrix/physiopathology; Corneal Ulcer/classification; Corneal Ulcer/microbiology; Eye Infections, Bacterial/classification; Eye Infections, Bacterial/microbiology; Eye Infections, Fungal/classification; Eye Infections, Fungal/microbiology; False Positive Reactions; Humans; Predictive Value of Tests; ROC Curve; Retrospective Studies; Sensitivity and Specificity; Slit Lamp Microscopy
3.
Nat Med; 25(1): 24-29, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30617335

ABSTRACT

Here we present deep-learning techniques for healthcare, centering our discussion on deep learning in computer vision, natural language processing, reinforcement learning, and generalized methods. We describe how these computational techniques can impact a few key areas of medicine and explore how to build end-to-end systems. Our discussion of computer vision focuses largely on medical imaging, and we describe the application of natural language processing to domains such as electronic health record data. Similarly, reinforcement learning is discussed in the context of robotic-assisted surgery, and generalized deep-learning methods for genomics are reviewed.


Subject(s)
Deep Learning; Delivery of Health Care; Diagnostic Imaging; Electronic Health Records; Humans; Natural Language Processing
4.
Nature; 546(7660): 686, 2017 Jun 28.
Article in English | MEDLINE | ID: mdl-28658222

ABSTRACT

This corrects the article DOI: 10.1038/nature21056.

5.
Nature; 542(7639): 115-118, 2017 Feb 2.
Article in English | MEDLINE | ID: mdl-28117445

ABSTRACT

Skin cancer, the most common human malignancy, is primarily diagnosed visually, beginning with an initial clinical screening and followed potentially by dermoscopic analysis, a biopsy, and histopathological examination. Automated classification of skin lesions using images is a challenging task owing to the fine-grained variability in the appearance of skin lesions. Deep convolutional neural networks (CNNs) show potential for general and highly variable tasks across many fine-grained object categories. Here we demonstrate classification of skin lesions using a single CNN, trained end-to-end directly from images, using only pixels and disease labels as inputs. We train a CNN on a dataset of 129,450 clinical images (two orders of magnitude larger than previous datasets) spanning 2,032 different diseases. We test its performance against 21 board-certified dermatologists on biopsy-proven clinical images with two critical binary classification use cases: keratinocyte carcinomas versus benign seborrheic keratoses, and malignant melanomas versus benign nevi. The first case represents the identification of the most common cancers; the second, the identification of the deadliest skin cancer. The CNN achieves performance on par with all tested experts across both tasks, demonstrating an artificial intelligence capable of classifying skin cancer with a level of competence comparable to dermatologists. Outfitted with deep neural networks, mobile devices could extend the reach of dermatologists outside the clinic: with 6.3 billion smartphone subscriptions projected by the year 2021 (ref. 13), such devices could provide low-cost universal access to vital diagnostic care.


Subject(s)
Dermatologists/standards; Neural Networks, Computer; Skin Neoplasms/classification; Skin Neoplasms/diagnosis; Automation; Cell Phone/statistics & numerical data; Datasets as Topic; Humans; Keratinocytes/pathology; Keratosis, Seborrheic/classification; Keratosis, Seborrheic/diagnosis; Keratosis, Seborrheic/pathology; Melanoma/classification; Melanoma/diagnosis; Melanoma/pathology; Nevus/classification; Nevus/diagnosis; Nevus/pathology; Photography; Reproducibility of Results; Skin Neoplasms/pathology
6.
IEEE Trans Pattern Anal Mach Intell; 35(5): 1039-1050, 2013 May.
Article in English | MEDLINE | ID: mdl-23520250

ABSTRACT

We describe a method for 3D object scanning that aligns depth scans taken from around an object with a Time-of-Flight (ToF) camera. ToF cameras can capture depth scans at video rate and, owing to their comparatively simple technology, have the potential for economical high-volume production. Our easy-to-use, cost-effective scanning solution based on such a sensor could make 3D scanning technology more accessible to everyday users. The algorithmic challenge is that the sensor's random noise is substantial and there is a nontrivial systematic bias. In this paper, we show the surprising result that 3D scans of reasonable quality can nonetheless be obtained from a sensor of such low data quality. Established filtering and scan-alignment techniques from the literature fail to achieve this goal. In contrast, our algorithm combines a 3D superresolution method with a probabilistic scan-alignment approach that explicitly accounts for the sensor's noise characteristics.
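The paper's superresolution and alignment machinery is not reproduced here, but the core intuition behind fusing many noisy depth frames is easy to demonstrate: averaging redundant measurements with zero-mean noise shrinks the random error roughly by the square root of the frame count. A toy sketch (all scene values and noise levels are assumed for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical flat ground-truth depth map (metres) and per-frame noise level.
truth = np.full((48, 64), 1.5)
noise_sigma = 0.02              # assume 2 cm of zero-mean Gaussian noise per frame

# Simulate 30 ToF frames of the same static scene.
frames = truth + rng.normal(0.0, noise_sigma, size=(30, *truth.shape))

single_err = np.abs(frames[0] - truth).mean()
fused_err = np.abs(frames.mean(axis=0) - truth).mean()
print(f"single frame: {single_err * 100:.2f} cm, "
      f"30-frame average: {fused_err * 100:.2f} cm")
```

Note that plain averaging only attacks the random noise; the systematic bias the abstract mentions would survive averaging unchanged, which is one reason the authors model the sensor's noise characteristics explicitly instead.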

7.
IEEE Trans Biomed Eng; 58(1): 159-171, 2011 Jan.
Article in English | MEDLINE | ID: mdl-20934939

ABSTRACT

Recent advances in optical imaging have led to the development of miniature microscopes that can be brought to the patient for visualizing tissue structures in vivo. These devices have the potential to revolutionize health care by replacing tissue biopsy with in vivo pathology. One of the primary limitations of these microscopes, however, is that the constrained field of view can make image interpretation and navigation difficult. In this paper, we show that image mosaicing can be a powerful tool for widening the field of view and creating image maps of microanatomical structures. First, we present an efficient algorithm for pairwise image mosaicing that can be implemented in real time. Then, we address two of the main challenges associated with image mosaicing in medical applications: cumulative image registration errors and scene deformation. To deal with cumulative errors, we present a global alignment algorithm that draws upon techniques commonly used in probabilistic robotics. To accommodate scene deformation, we present a local alignment algorithm that incorporates deformable surface models into the mosaicing framework. These algorithms are demonstrated on image sequences acquired in vivo with various imaging devices including a hand-held dual-axes confocal microscope, a miniature two-photon microscope, and a commercially available confocal microendoscope.
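The abstract does not spell out the pairwise registration step, but a classic building block for real-time mosaicing of this kind is phase correlation, which recovers the translation between overlapping frames from the normalized cross-power spectrum. A minimal sketch for pure integer translation (the function name and synthetic test images are illustrative, not from the paper):

```python
import numpy as np

def phase_correlation_shift(ref, moved):
    """Estimate the integer (dy, dx) circular shift mapping `ref` onto `moved`."""
    f_ref, f_mov = np.fft.fft2(ref), np.fft.fft2(moved)
    cross = f_mov * np.conj(f_ref)
    cross /= np.abs(cross) + 1e-12         # keep phase only -> sharp peak
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2:                        # wrap to signed shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
moved = np.roll(ref, shift=(5, -3), axis=(0, 1))   # simulate camera motion
print(phase_correlation_shift(ref, moved))          # -> (5, -3)
```

Chaining pairwise estimates like this accumulates drift over a long sequence, which is exactly the cumulative-error problem the paper's global alignment stage addresses.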


Subject(s)
Endoscopes; Image Processing, Computer-Assisted/methods; Microscopy, Confocal; Algorithms; Animals; Brain/anatomy & histology; Brain/blood supply; Endoscopy/methods; Hand; Humans; Mice; Microscopy, Confocal/instrumentation; Microscopy, Confocal/methods; Miniaturization; Robotics/instrumentation; Skin/anatomy & histology