Results 1 - 8 of 8
1.
Am J Surg Pathol ; 2024 May 29.
Article in English | MEDLINE | ID: mdl-38809272

ABSTRACT

The detection of lymph node metastases is essential for breast cancer staging, although it is a tedious and time-consuming task in which pathologists' sensitivity is suboptimal. Artificial intelligence (AI) can help pathologists detect lymph node metastases, which could help alleviate workload issues. We studied how pathologists' performance varied when aided by AI. An AI algorithm was trained using more than 32 000 breast sentinel lymph node whole slide images (WSIs) matched with their corresponding pathology reports from more than 8000 patients. The algorithm highlighted areas suspicious of harboring metastasis. Three pathologists were asked to review a dataset of 167 breast sentinel lymph node WSIs enriched for challenging cases, of which 69 harbored cancer metastases of different sizes and 98 were benign. The pathologists read the dataset twice, both times digitally, with and without AI assistance, with slide and reading orders randomized to reduce bias and the two reads separated by a 3-week washout period. Their slide-level diagnoses were recorded, and they were timed during their reads. The average reading time per slide was 129 seconds during the unassisted phase versus 58 seconds during the AI-assisted phase, an overall efficiency gain of 55% (P<0.001). These efficiency gains applied to both benign and malignant WSIs. Two of the 3 reading pathologists experienced significant sensitivity improvements, from 74.5% to 93.5% (P≤0.006). This study highlights that AI can help pathologists shorten their reading times by more than half while also improving their metastasis detection rate.
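The 55% figure follows directly from the two mean reading times quoted in the abstract; a quick check of the arithmetic:

```python
unassisted = 129  # mean seconds per slide, unassisted phase
assisted = 58     # mean seconds per slide, AI-assisted phase

# Relative reduction in mean reading time.
gain = (unassisted - assisted) / unassisted
print(f"Efficiency gain: {gain:.0%}")  # → Efficiency gain: 55%
```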

2.
Sci Rep ; 11(1): 12576, 2021 06 15.
Article in English | MEDLINE | ID: mdl-34131165

ABSTRACT

Reflectance confocal microscopy (RCM) is an effective non-invasive tool for cancer diagnosis. However, acquiring and reading RCM images requires extensive training and experience, and novice clinicians exhibit high discordance in diagnostic accuracy. Quantitative tools to standardize image acquisition could reduce both the required training and the diagnostic variability. To perform diagnostic analysis, clinicians collect a set of RCM mosaics (RCM images concatenated in a raster fashion to extend the field of view) at 4-5 specific layers in skin, all localized at the junction between the epidermal and dermal layers (dermal-epidermal junction, DEJ), necessitating locating that junction before mosaic acquisition. In this study, we automate DEJ localization using deep recurrent convolutional neural networks to delineate skin strata in stacks of RCM images collected at consecutive depths. Success will enable automated and quantitative mosaic acquisition, reducing inter-operator variability and standardizing imaging. Testing our model against an expert-labeled dataset of 504 RCM stacks, we achieved [Formula: see text] classification accuracy and a nine-fold reduction in the number of anatomically impossible errors compared to the previous state of the art.
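The abstract counts "anatomically impossible errors" without defining them; one plausible reading, taken here as an assumption, is a predicted stratum sequence that jumps back to a shallower layer while imaging at monotonically increasing depth. A minimal sketch (the label set is hypothetical, not the paper's):

```python
# Hypothetical shallow-to-deep ordering of skin strata; the paper's
# exact label set is not given in the abstract.
ORDER = {"corneum": 0, "epidermis": 1, "dej": 2, "dermis": 3}

def impossible_transitions(predictions):
    """Count consecutive predictions in a depth stack that move to a
    shallower stratum, which cannot happen when images are acquired
    at strictly increasing depths."""
    ranks = [ORDER[label] for label in predictions]
    return sum(1 for a, b in zip(ranks, ranks[1:]) if b < a)

print(impossible_transitions(["corneum", "epidermis", "dej", "dermis"]))  # → 0
print(impossible_transitions(["epidermis", "corneum", "dej", "dermis"]))  # → 1
```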


Subject(s)
Early Detection of Cancer , Microscopy, Confocal/methods , Skin Neoplasms/diagnosis , Epidermis/diagnostic imaging , Epidermis/pathology , Female , Humans , Image Processing, Computer-Assisted/methods , Male , Neural Networks, Computer , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/pathology
3.
Sci Rep ; 11(1): 3679, 2021 02 11.
Article in English | MEDLINE | ID: mdl-33574486

ABSTRACT

Reflectance confocal microscopy (RCM) is a non-invasive imaging tool that reduces the need for invasive histopathology in skin cancer diagnosis by providing high-resolution mosaics showing the architectural patterns of skin, which are used to identify malignancies in vivo. RCM mosaics are similar to dermatopathology sections in that both require extensive training to interpret. However, the modalities differ in orientation, as RCM mosaics are horizontal (parallel to the skin surface) while histopathology sections are vertical, and in contrast mechanism: RCM relies on a single (reflectance) mechanism that yields grayscale images, whereas histopathology uses multi-factor color-stained contrast. Image analysis and machine learning methods can potentially provide a diagnostic aid that helps clinicians interpret RCM mosaics, eventually easing adoption and making the use of RCM in routine clinical practice more efficient. However, standard supervised machine learning may require a prohibitive volume of hand-labeled training data. In this paper, we present a weakly supervised machine learning model to perform semantic segmentation of architectural patterns encountered in RCM mosaics. Unlike the more widely used fully supervised segmentation models that require pixel-level annotations, which are labor-intensive and error-prone to obtain, here we focus on training models using only patch-level labels (e.g., a single field of view within an entire mosaic). We segment RCM mosaics into "benign" and "aspecific (nonspecific)" regions, where aspecific regions represent the loss of regular architecture due to injury and/or inflammation, pre-malignancy, or malignancy. We adopt EfficientNet, a deep neural network (DNN) proven to accomplish classification tasks accurately, to generate class activation maps, and use a Gaussian weighting kernel to stitch smaller images back into larger fields of view. The trained DNN achieved an average area under the curve of 0.969 and a Dice coefficient of 0.778, demonstrating the feasibility of spatially localizing aspecific regions in RCM images and making the diagnostic decision model more interpretable to clinicians.
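The Gaussian-weighted stitching of patch-level class activation maps back into a mosaic can be sketched as a weighted overlap-add; the patch size, positions, and kernel width below are illustrative, not the paper's values:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """2D Gaussian weighting kernel, peaked at the patch center."""
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))

def stitch(patch_maps, positions, mosaic_shape, patch_size, sigma=8.0):
    """Blend per-patch activation maps into one mosaic-level map.
    Overlapping patches are averaged with Gaussian weights so patch
    centers dominate and seams between patches are suppressed."""
    acc = np.zeros(mosaic_shape)
    wsum = np.zeros(mosaic_shape)
    k = gaussian_kernel(patch_size, sigma)
    for m, (r, c) in zip(patch_maps, positions):
        acc[r:r + patch_size, c:c + patch_size] += m * k
        wsum[r:r + patch_size, c:c + patch_size] += k
    return acc / np.maximum(wsum, 1e-8)
```

Because each output pixel is a weighted average, overlapping patches blend smoothly instead of producing visible seams at patch borders.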


Subject(s)
Image Processing, Computer-Assisted , Microscopy, Confocal , Skin Neoplasms/diagnosis , Skin/ultrastructure , Humans , Machine Learning , Neural Networks, Computer , Semantics , Skin/diagnostic imaging , Skin/pathology , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/pathology
4.
Med Image Anal ; 67: 101841, 2021 01.
Article in English | MEDLINE | ID: mdl-33142135

ABSTRACT

In-vivo optical microscopy is advancing into routine clinical practice for non-invasively guiding diagnosis and treatment of cancer and other diseases, and is thus beginning to reduce the need for traditional biopsy. However, reading and analysis of optical microscopic images are generally still qualitative, relying mainly on visual examination. Here we present an automated semantic segmentation method called "Multiscale Encoder-Decoder Network (MED-Net)" that provides pixel-wise labeling into classes of patterns in a quantitative manner. The novelty in our approach is the modeling of textural patterns at multiple scales (magnifications, resolutions). This mimics the traditional procedure for examining pathology images, which routinely starts with low magnification (low resolution, large field of view) followed by closer inspection of suspicious areas with higher magnification (higher resolution, smaller fields of view). We trained and tested our model on non-overlapping partitions of 117 reflectance confocal microscopy (RCM) mosaics of melanocytic lesions, an extensive dataset for this application, collected at four clinics in the US and two in Italy. With patient-wise cross-validation, we achieved pixel-wise mean sensitivity and specificity of 74% and 92%, respectively, with a 0.74 Dice coefficient over six classes. In a second scenario, we partitioned the data clinic-wise and tested the generalizability of the model over multiple clinics. In this setting, we achieved pixel-wise mean sensitivity and specificity of 77% and 94%, respectively, with a 0.77 Dice coefficient. We compared MED-Net against state-of-the-art semantic segmentation models and achieved better quantitative segmentation performance. Our results also suggest that, due to its nested multiscale architecture, the MED-Net model annotated RCM mosaics more coherently, avoiding unrealistically fragmented annotations.
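The Dice coefficient reported in this and the previous entry measures overlap between a predicted mask and a reference mask; a minimal sketch for the binary case:

```python
import numpy as np

def dice(pred, ref):
    """Dice coefficient of two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom
```

Identical masks score 1.0 and disjoint masks 0.0; averaging the per-class scores to obtain a single multi-class figure, as in the 0.74 over six classes reported here, is one common convention and is an assumption on our part.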


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Humans , Microscopy, Confocal
5.
J Invest Dermatol ; 140(6): 1214-1222, 2020 06.
Article in English | MEDLINE | ID: mdl-31838127

ABSTRACT

In vivo reflectance confocal microscopy (RCM) enables clinicians to examine lesions' morphological and cytological information in epidermal and dermal layers while reducing the need for biopsies. As RCM is being adopted more widely, the workflow is expanding from real-time diagnosis at the bedside to include a capture, store, and forward model with image interpretation and diagnosis occurring offsite, similar to radiology. As the patient may no longer be present at the time of image interpretation, quality assurance is key during image acquisition. Herein, we introduce a quality assurance process that automatically quantifies diagnostically uninformative areas within the lesional area using RCM and coregistered dermoscopy images together. We trained and validated a pixel-level segmentation model on 117 RCM mosaics collected by international collaborators. The model delineates diagnostically uninformative areas with 82% sensitivity and 93% specificity. We further tested the model on a separate set of 372 coregistered RCM-dermoscopic image pairs and illustrate how the results of the RCM-only model can be improved via a multimodal (RCM + dermoscopy) approach, which can help quantify the uninformative regions within the lesional area. Our data suggest that machine learning-based automatic quantification offers a feasible, objective quality control measure for RCM imaging.
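The quality measure described above reduces to the fraction of the lesional area flagged as uninformative; a minimal sketch over binary pixel masks (the mask names are hypothetical):

```python
import numpy as np

def uninformative_fraction(uninformative_mask, lesion_mask):
    """Fraction of the lesional area flagged as diagnostically
    uninformative; both inputs are boolean pixel masks of equal shape."""
    lesion_px = lesion_mask.sum()
    if lesion_px == 0:
        return 0.0  # no lesion delineated: nothing to score
    overlap = np.logical_and(uninformative_mask, lesion_mask).sum()
    return overlap / lesion_px
```

A capture-and-forward workflow could then reject or re-acquire mosaics whose fraction exceeds a chosen quality threshold before the images are sent offsite.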


Subject(s)
Dermoscopy/methods , Image Processing, Computer-Assisted/methods , Machine Learning , Skin Diseases/diagnosis , Skin/diagnostic imaging , Dermoscopy/standards , Diagnosis, Differential , Feasibility Studies , Humans , Microscopy, Confocal/methods , Microscopy, Confocal/standards , Quality Control
6.
J Biom Biostat ; 9(5)2018.
Article in English | MEDLINE | ID: mdl-31131151

ABSTRACT

Predicting disease status for a complex human disease using genomic data is an important, yet challenging, step in personalized medicine. Among many challenges, the so-called curse of dimensionality results in unsatisfactory performance from many state-of-the-art machine learning algorithms. A major recent advance in machine learning is the rapid development of deep learning algorithms that can efficiently extract meaningful features from high-dimensional and complex datasets through a stacked, hierarchical learning process. Deep learning has shown breakthrough performance in several areas, including image recognition, natural language processing, and speech recognition. However, the performance of deep learning in predicting disease status from genomic datasets is still not well studied. In this article, we review the four relevant articles that we found through a thorough literature search. All four articles first used autoencoders to project high-dimensional genomic data to a low-dimensional space and then applied state-of-the-art machine learning algorithms to predict disease status based on the low-dimensional representations. These deep learning approaches outperformed existing prediction methods, such as prediction based on transcript-wise screening and prediction based on principal component analysis. The limitations of the current deep learning approach and possible improvements are also discussed.
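The two-step pipeline the review describes, compressing with an autoencoder and then classifying on the low-dimensional codes, can be sketched on synthetic data. Since a linear autoencoder's optimal bottleneck spans the same subspace as PCA, a truncated SVD stands in for a trained encoder here; the data, dimensions, and nearest-centroid classifier are all illustrative assumptions, not the reviewed methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for genomic data: two latent factors drive 500
# features (mimicking co-expressed genes), and disease status follows
# the first factor. All sizes are illustrative.
n, p = 200, 500
factors = rng.normal(size=(n, 2))
loadings = rng.normal(size=(2, p))
X = factors @ loadings + 0.5 * rng.normal(size=(n, p))
y = (factors[:, 0] > 0).astype(int)

# Step 1 -- compress to a low-dimensional representation. A linear
# autoencoder minimizing reconstruction error recovers the principal
# subspace, so truncated SVD serves as the encoder in this sketch.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 10                     # bottleneck size (illustrative)
Z = Xc @ Vt[:k].T          # low-dimensional codes

# Step 2 -- predict disease status from the codes with a simple
# nearest-centroid classifier standing in for the downstream learners.
centroids = np.stack([Z[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Z[:, None, :] - centroids) ** 2).sum(axis=-1), axis=1)
accuracy = (pred == y).mean()
print(f"accuracy on the toy data: {accuracy:.2f}")
```

The sketch recovers the latent-factor subspace well because the class signal is spread across many correlated features, which is exactly the regime where projecting before classifying helps against the curse of dimensionality.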

7.
JAMA Ophthalmol ; 134(6): 651-7, 2016 Jun 01.
Article in English | MEDLINE | ID: mdl-27077667

ABSTRACT

IMPORTANCE: Published definitions of plus disease in retinopathy of prematurity (ROP) reference arterial tortuosity and venous dilation within the posterior pole based on a standard published photograph. One possible explanation for limited interexpert reliability in diagnosing plus disease is that experts deviate from the published definitions. OBJECTIVE: To identify vascular features used by experts for diagnosis of plus disease through quantitative image analysis. DESIGN, SETTING, AND PARTICIPANTS: A computer-based image analysis system (Imaging and Informatics in ROP [i-ROP]) was developed using a set of 77 digital fundus images, and the system was designed to classify images compared with a reference standard diagnosis (RSD). System performance was analyzed as a function of the field of view (circular crops with a radius of 1-6 disc diameters) and vessel subtype (arteries only, veins only, or all vessels). Routine ROP screening was conducted from June 29, 2011, to October 14, 2014, in neonatal intensive care units at 8 academic institutions, with a subset of 73 images independently classified by 11 ROP experts for validation. The RSD was compared with the majority diagnosis of the experts. MAIN OUTCOMES AND MEASURES: The primary outcome measure was the accuracy of i-ROP system classification of plus disease compared with the RSD, as a function of field of view and vessel type. Secondary outcome measures included the accuracy of the 11 experts compared with the RSD. RESULTS: Accuracy of plus disease diagnosis by the i-ROP computer-based system was highest (95%; 95% CI, 94%-95%) when it incorporated vascular tortuosity from both arteries and veins and used the widest field of view (6-disc diameter radius). Accuracy was 90% or less when using only arterial tortuosity and 85% or less when using a 2- to 3-disc diameter view similar to the standard published photograph. Diagnostic accuracy of the i-ROP system (95%) was comparable to that of the 11 expert physicians (mean 87%; range 79%-99%). CONCLUSIONS AND RELEVANCE: Experts in ROP appear to consider findings beyond the posterior retina when diagnosing plus disease and to consider tortuosity of both arteries and veins, in contrast with published definitions. It is feasible for a computer-based image analysis system to perform comparably to ROP experts using manually segmented images.
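Both ROP entries rely on point-based tortuosity measurements of segmented vessels. A common definition, used here as an assumption since the abstracts do not give a formula, is the ratio of a vessel centerline's arc length to its chord length:

```python
import numpy as np

def tortuosity(points):
    """Arc-to-chord tortuosity of a vessel centerline given as an
    (n, 2) array of pixel coordinates: exactly 1.0 for a straight
    vessel, larger for more tortuous ones."""
    points = np.asarray(points, dtype=float)
    arc = np.linalg.norm(np.diff(points, axis=0), axis=1).sum()
    chord = np.linalg.norm(points[-1] - points[0])
    return arc / chord

print(tortuosity([(0, 0), (1, 0), (2, 0)]))  # → 1.0
```

Aggregating such per-vessel scores across arteries and veins, and across fields of view of different radii, mirrors the feature combinations the study compares.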


Subject(s)
Arteries/abnormalities , Image Processing, Computer-Assisted , Joint Instability/diagnosis , Retinal Vessels/pathology , Retinopathy of Prematurity/diagnosis , Skin Diseases, Genetic/diagnosis , Vascular Malformations/diagnosis , Diagnosis, Computer-Assisted , Expert Systems , Humans , Infant, Newborn , Infant, Premature , Intensive Care Units, Neonatal , Joint Instability/classification , Reproducibility of Results , Retinopathy of Prematurity/classification , Skin Diseases, Genetic/classification , Vascular Malformations/classification
8.
Transl Vis Sci Technol ; 4(6): 5, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26644965

ABSTRACT

PURPOSE: We developed and evaluated the performance of a novel computer-based image analysis system for grading plus disease in retinopathy of prematurity (ROP), and identified the image features, shapes, and sizes that best correlate with expert diagnosis. METHODS: A dataset of 77 wide-angle retinal images from infants screened for ROP was collected. A reference standard diagnosis was determined for each image by combining image grading from 3 experts with the clinical diagnosis from ophthalmoscopic examination. Manually segmented images were cropped into a range of shapes and sizes, and a computer algorithm was developed to extract tortuosity and dilation features from arteries and veins. Each feature was fed into our system to identify the set of characteristics that yielded the highest performance compared to the reference standard; we refer to the resulting system as the "i-ROP" system. RESULTS: Among the tested crop shapes, sizes, and measured features, point-based measurements of arterial and venous tortuosity (combined), together with a large circular cropped image (with radius 6 times the disc diameter), provided the highest diagnostic accuracy. The i-ROP system achieved 95% accuracy for classifying preplus and plus disease compared to the reference standard. This was comparable to the performance of the 3 individual experts (96%, 94%, 92%) and significantly higher than the mean performance of 31 nonexperts (81%). CONCLUSIONS: This comprehensive analysis of computer-based plus disease diagnosis suggests that it may be feasible to develop a fully automated system, based on wide-angle retinal images, that performs comparably to expert graders at three-level plus disease discrimination. TRANSLATIONAL RELEVANCE: Computer-based image analysis, using objective and quantitative retinal vascular features, has the potential to complement clinical ROP diagnosis by ophthalmologists.
