Results 1 - 8 of 8
1.
Sci Rep ; 14(1): 3202, 2024 02 08.
Article in English | MEDLINE | ID: mdl-38331955

ABSTRACT

Developing a clinical AI model requires a large dataset that has been highly curated and carefully annotated by multiple medical experts, which increases development time and cost. Self-supervised learning (SSL) enables AI models to leverage unlabelled data to acquire domain-specific background knowledge that can enhance their performance on various downstream tasks. In this work, we introduce CypherViT, a cluster-based histopathology phenotype representation learned by a self-supervised multi-class-token hierarchical Vision Transformer (ViT). CypherViT is a novel backbone that can be integrated into an SSL pipeline, accommodating both coarse- and fine-grained feature learning for histopathological images via a hierarchical feature agglomerative attention module with multiple classification (cls) tokens in the ViT. Our qualitative analysis shows that our approach learns semantically meaningful regions of interest that align with morphological phenotypes. To validate the model, we use the DINO SSL framework to train CypherViT on a substantial dataset of unlabeled breast cancer histopathological images. The trained model proves to be a generalizable and robust feature extractor for colorectal cancer images. Notably, it demonstrates promising performance in patch-level tissue phenotyping tasks across four public datasets. Our quantitative experiments highlight significant advantages over existing state-of-the-art SSL models and traditional transfer-learning methods such as ImageNet pre-training.
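
The multi-class-token idea can be illustrated with a minimal sketch: several cls tokens are prepended to the patch embeddings so that, after self-attention, each cls token forms a distinct global summary of the patches. All dimensions, the single-head attention, and the random weights below are illustrative stand-ins; the actual CypherViT architecture is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, wq, wk, wv):
    """Single-head self-attention over a (n_tokens, dim) matrix."""
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = softmax(q @ k.T / np.sqrt(k.shape[1]))
    return scores @ v

dim, n_patches, n_cls = 16, 9, 3              # 3 cls tokens, e.g. coarse/mid/fine
patches = rng.normal(size=(n_patches, dim))   # stand-in patch embeddings
cls_tokens = rng.normal(size=(n_cls, dim))    # learnable in a real model

tokens = np.concatenate([cls_tokens, patches], axis=0)  # cls tokens prepended
wq, wk, wv = (rng.normal(size=(dim, dim)) * 0.1 for _ in range(3))
out = self_attention(tokens, wq, wk, wv)

# Each of the first n_cls output rows is a separate global summary that a
# hierarchical head could pool at a different feature granularity.
print(out[:n_cls].shape)  # (3, 16)
```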


Subject(s)
Electric Power Supplies, Self-Management, Humans, Knowledge, Phenotype, Supervised Machine Learning
2.
J Pathol Inform ; 13: 100116, 2022.
Article in English | MEDLINE | ID: mdl-36268099

ABSTRACT

Background: Identification of HER2 protein overexpression and/or amplification of the HER2 gene is required to qualify breast cancer patients for HER2-targeted therapies. In situ hybridization (ISH) assays that identify HER2 gene amplification function as a stand-alone test for determining HER2 status and rely on manual quantification of the number of HER2 gene and chromosome 17 copies to determine HER2 amplification.

Methods: To assist pathologists, we developed the uPath HER2 Dual ISH Image Analysis for Breast (uPath HER2 DISH IA) algorithm as an adjunctive aid in the determination of HER2 gene status in breast cancer specimens. The objective of this study was to compare uPath HER2 DISH image analysis vs manual-read scoring of VENTANA HER2 DISH-stained breast carcinoma specimens, with ground truth (GT) gene status as the reference. Three reader pathologists reviewed 220 formalin-fixed, paraffin-embedded (FFPE) breast cancer cases by both the manual and uPath HER2 DISH IA methods. Scoring results from manual read (MR) and computer-assisted scoring (image analysis, IA) were compared against the GT gene status generated by consensus of a panel of pathologists, and the differences in agreement rates of HER2 gene status between manual, computer-assisted, and GT gene status were determined.

Results: The positive percent agreement (PPA) and negative percent agreement (NPA) rates for image analysis (IA) vs GT were 97.2% (95% confidence interval [CI]: 95.0, 99.3) and 94.3% (95% CI: 90.8, 97.3), respectively. Comparison of agreement rates showed that the lower bounds of the 95% CIs for the difference in PPA and NPA for IA vs MR were -0.9% and -6.2%, respectively. Further, inter- and intra-reader agreement rates for the IA method were observed with point estimates of at least 96.7%.

Conclusions: Overall, our data show that uPath HER2 DISH IA is non-inferior to manual scoring and support its use as an aid for pathologists in the routine diagnosis of breast cancer.
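
The PPA and NPA statistics reported above are simple to compute once each case has a binary (amplified / not amplified) call and a GT label. A minimal sketch, with illustrative counts rather than the study's raw data:

```python
def percent_agreement(calls, truth):
    """Return (PPA, NPA) in percent for paired binary calls vs ground truth."""
    tp = sum(c and t for c, t in zip(calls, truth))
    fn = sum((not c) and t for c, t in zip(calls, truth))
    tn = sum((not c) and (not t) for c, t in zip(calls, truth))
    fp = sum(c and (not t) for c, t in zip(calls, truth))
    ppa = 100.0 * tp / (tp + fn)   # agreement on GT-positive cases
    npa = 100.0 * tn / (tn + fp)   # agreement on GT-negative cases
    return ppa, npa

# Illustrative toy data: 10 cases, 4 GT-positive and 6 GT-negative.
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
calls = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
ppa, npa = percent_agreement(calls, truth)
print(ppa, round(npa, 1))  # 75.0 83.3
```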

3.
Med Image Anal ; 39: 206-217, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28528295

ABSTRACT

Bruch's membrane opening-minimum rim width (BMO-MRW) is a recently proposed structural parameter that estimates the remaining nerve fiber bundles in the retina and is superior to other conventional structural parameters for diagnosing glaucoma. Measuring this parameter requires identifying BMO locations within spectral domain-optical coherence tomography (SD-OCT) volumes. While most automated approaches for segmentation of the BMO either segment the 2D projection of the BMO points or identify BMO points in individual B-scans, in this work we propose a machine-learning graph-based approach for true 3D segmentation of the BMO from glaucomatous SD-OCT volumes. The problem is formulated as an optimization problem of finding a 3D path within the SD-OCT volume. In particular, the SD-OCT volumes are transferred to the radial domain, where the closed-loop BMO points in the original volume form a path within the radial volume. The estimated 3D locations of the BMO points are identified by finding their projected locations using a graph-theoretic approach and mapping the projected locations onto the Bruch's membrane (BM) surface. Dynamic programming is employed to find the 3D BMO locations as the minimum-cost path within the volume. To compute the cost function needed for finding the minimum-cost path, a random forest classifier is utilized to learn a BMO model, obtained by extracting intensity features from the volumes in the training set, and to compute the required 3D cost function. The proposed method was tested on 44 glaucoma patients and evaluated using manual delineations. Results show that the proposed method successfully identifies the 3D BMO locations, with significantly smaller errors than existing 3D BMO identification approaches.
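
The dynamic-programming step can be sketched in miniature: given a cost per candidate position in each radial slice, accumulate costs slice by slice under a smoothness constraint and backtrack the minimum-cost path. The tiny 2D cost array, the ±1-row smoothness constraint, and the omission of the closed-loop (wrap-around) constraint are all simplifications for illustration.

```python
def min_cost_path(cost):
    """cost[s][r]: cost of placing the boundary at row r in radial slice s.
    Returns (total_cost, path) for the minimum-cost path with |Δr| <= 1."""
    n_slices, n_rows = len(cost), len(cost[0])
    acc = [row[:] for row in cost]                  # accumulated costs
    back = [[0] * n_rows for _ in range(n_slices)]  # backtracking pointers
    for s in range(1, n_slices):
        for r in range(n_rows):
            lo, hi = max(0, r - 1), min(n_rows, r + 2)
            prev = min(range(lo, hi), key=lambda j: acc[s - 1][j])
            acc[s][r] = cost[s][r] + acc[s - 1][prev]
            back[s][r] = prev
    end = min(range(n_rows), key=lambda r: acc[-1][r])
    path = [end]
    for s in range(n_slices - 1, 0, -1):
        path.append(back[s][path[-1]])
    return acc[-1][end], path[::-1]

cost = [[5, 1, 5],
        [5, 5, 1],
        [5, 1, 5]]
print(min_cost_path(cost))  # (3, [1, 2, 1])
```

In the paper's setting the costs come from the random forest's learned BMO model rather than from a hand-written array.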


Subject(s)
Bruch Membrane/diagnostic imaging, Glaucoma/diagnostic imaging, Imaging, Three-Dimensional/methods, Machine Learning, Tomography, Optical Coherence/methods, Bruch Membrane/pathology, Glaucoma/pathology, Humans, Optic Disk/diagnostic imaging, Optic Disk/pathology
4.
Comput Med Imaging Graph ; 55: 87-94, 2017 01.
Article in English | MEDLINE | ID: mdl-27507325

ABSTRACT

The internal limiting membrane (ILM) separates the retina and optic nerve head (ONH) from the vitreous. In optical coherence tomography volumes of glaucoma patients, current approaches for segmentation of the ILM in the peripapillary and macular regions are considered robust, but they commonly produce ILM segmentation errors at the ONH due to the presence of blood vessels and/or characteristic glaucomatous deep cupping. Because a precise segmentation of the ILM surface at the ONH is required for computing several newer structural measurements, including Bruch's membrane opening-minimum rim width (BMO-MRW) and cup volume, we propose a multimodal multiresolution graph-based method to precisely segment the ILM surface within ONH-centered spectral-domain optical coherence tomography (SD-OCT) volumes. In particular, the gradient vector flow (GVF) field, computed from a multiresolution initial segmentation, is employed to calculate a set of non-overlapping GVF-based columns perpendicular to the initial segmentation. The GVF columns are used to resample the volume and also serve as the columns for the graph construction. The ILM surface in the resampled volume is fairly smooth and does not contain steep slopes. This prior shape knowledge, along with blood vessel information obtained from registered fundus photographs, is incorporated into a graph-theoretic approach to identify the location of the ILM surface. The proposed method was tested on the SD-OCT volumes of 44 subjects at various stages of glaucoma, and significantly smaller segmentation errors were obtained than with current approaches.
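
The GVF field itself is the classic iterative diffusion of an edge map's gradient. A toy 2D version is sketched below; the parameters mu and n_iter are illustrative, and periodic (np.roll) boundary handling is used for brevity rather than the edge replication a careful implementation would use.

```python
import numpy as np

def gvf(f, mu=0.1, n_iter=200):
    """Diffuse the gradient of edge map f into a smooth vector field (u, v)."""
    fy, fx = np.gradient(f)
    mag2 = fx**2 + fy**2
    u, v = fx.copy(), fy.copy()
    for _ in range(n_iter):
        for w, g in ((u, fx), (v, fy)):
            # 5-point Laplacian with periodic boundaries.
            lap = (np.roll(w, 1, 0) + np.roll(w, -1, 0)
                   + np.roll(w, 1, 1) + np.roll(w, -1, 1) - 4 * w)
            w += mu * lap - mag2 * (w - g)   # smoothness vs. data fidelity
    return u, v

f = np.zeros((8, 8))
f[4, 4] = 1.0          # toy edge map with a single bright point
u, v = gvf(f)
print(u.shape, v.shape)  # (8, 8) (8, 8)
```

In the paper the field is computed from the initial ILM segmentation, and the resulting flow lines define the non-overlapping columns used for resampling and graph construction.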


Subject(s)
Algorithms, Glaucoma/diagnostic imaging, Optic Disk/diagnostic imaging, Tomography, Optical Coherence/methods, Diagnostic Techniques, Ophthalmological, Humans, Retinal Vessels/diagnostic imaging
5.
Biomed Opt Express ; 7(12): 5252-5267, 2016 Dec 01.
Article in English | MEDLINE | ID: mdl-28018740

ABSTRACT

With the availability of different retinal imaging modalities such as fundus photography and spectral domain optical coherence tomography (SD-OCT), a robust and accurate registration scheme is needed to utilize their complementary information. The few existing fundus-OCT registration approaches contain a vessel segmentation step, as the retinal blood vessels are the most dominant structures common to the pair of images. However, errors in the vessel segmentation from either modality may cause corresponding errors in the registration. In this paper, we propose a feature-based registration method for registering fundus photographs and SD-OCT projection images that benefits from vasculature structural information without requiring blood vessel segmentation. In particular, after a preprocessing step, a set of control points (CPs) is identified by looking for corners in the images. Next, each CP is represented by a feature vector that encodes the local structural information by computing histograms of oriented gradients (HOG) from the neighborhood of each CP. The best-matching CPs are identified by calculating the distance between their corresponding feature vectors. After removing the incorrect matches, the best affine transform that registers fundus photographs to SD-OCT projection images is computed using the random sample consensus (RANSAC) method. The proposed method was tested on 44 pairs of fundus and SD-OCT projection images of glaucoma patients, and the results showed that it successfully registers the multimodal images, producing a registration error of 25.34 ± 12.34 µm (0.84 ± 0.41 pixels).
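
The descriptor-matching step can be sketched as nearest-neighbour matching of feature vectors with a ratio test to suppress ambiguous matches. The 2D toy descriptors and the 0.8 ratio threshold are illustrative (the ratio test is a common heuristic, not necessarily the paper's exact rejection rule); real HOG descriptors would be much longer vectors computed from each CP's neighborhood.

```python
import math

def match_control_points(desc_a, desc_b, ratio=0.8):
    """Greedy nearest-neighbour matching with a Lowe-style ratio test.
    desc_a/desc_b: lists of equal-length feature vectors."""
    def dist(p, q):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(p, q)))
    matches = []
    for i, d in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: dist(d, desc_b[j]))
        best = ranked[0]
        second = ranked[1] if len(ranked) > 1 else None
        # Accept only if clearly better than the runner-up match.
        if second is None or dist(d, desc_b[best]) < ratio * dist(d, desc_b[second]):
            matches.append((i, best))
    return matches

a = [[1.0, 0.0], [0.0, 1.0]]
b = [[0.9, 0.1], [0.1, 0.9], [5.0, 5.0]]
print(match_control_points(a, b))  # [(0, 0), (1, 1)]
```

The surviving matches would then be fed to RANSAC to estimate the affine transform while discarding remaining outliers.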

7.
IEEE Trans Med Imaging ; 34(9): 1854-66, 2015 Sep.
Article in English | MEDLINE | ID: mdl-25781623

ABSTRACT

In this work, a multimodal approach is proposed that uses the complementary information from fundus photographs and spectral domain optical coherence tomography (SD-OCT) volumes to segment the optic disc and cup boundaries. The problem is formulated as an optimization problem whose optimal solution is obtained using a machine-learning-based graph-theoretic method. In particular, first the fundus photograph is registered to the 2D projection of the SD-OCT volume. Three in-region cost functions are designed using a random forest classifier corresponding to the three regions of cup, rim, and background. Next, the volumes are resampled to create radial scans in which the Bruch's membrane opening (BMO) endpoints are easier to detect. Similar to the in-region cost function design, the disc-boundary cost function is designed using a random forest classifier whose features are created by applying the Haar stationary wavelet transform (SWT) to the radial projection image. A multisurface graph-based approach utilizes the in-region and disc-boundary cost images to segment the boundaries of the optic disc and cup under feasibility constraints. The approach is evaluated on 25 multimodal image pairs from 25 subjects in a leave-one-out fashion (by subject). The performance of the graph-theoretic approach is compared using three sets of cost functions: 1) unimodal (OCT only) in-region costs, 2) multimodal in-region costs, and 3) multimodal in-region and disc-boundary costs. Results show that the multimodal approaches outperform the unimodal approach in segmenting the optic disc and cup.
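
The leave-one-out (by subject) protocol amounts to 25 folds, each training on 24 subjects and evaluating on the held-out one. A minimal sketch with hypothetical subject IDs; training and segmentation themselves are out of scope here:

```python
def leave_one_out(subjects):
    """Return (train_subjects, held_out_subject) pairs, one per subject."""
    folds = []
    for held_out in subjects:
        train = [s for s in subjects if s != held_out]
        folds.append((train, held_out))
    return folds

subjects = [f"subject_{i:02d}" for i in range(1, 26)]  # hypothetical IDs
folds = leave_one_out(subjects)
print(len(folds), len(folds[0][0]))  # prints: 25 24
```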


Subject(s)
Diagnostic Techniques, Ophthalmological, Imaging, Three-Dimensional/methods, Multimodal Imaging/methods, Optic Disk/blood supply, Algorithms, Humans, Machine Learning
8.
IEEE Trans Biomed Eng ; 58(5): 1183-92, 2011 May.
Article in English | MEDLINE | ID: mdl-21147592

ABSTRACT

Retinal images are used in several applications, such as ocular fundus surgery and human recognition, and they play an important role in the early detection of some diseases, such as diabetes, by comparing the states of the retinal blood vessels. Intrinsic characteristics of retinal images make the blood vessel detection process difficult. Here, we propose a new algorithm to detect the retinal blood vessels effectively. Because the curvelet transform represents edges well, modifying the curvelet transform coefficients to enhance the retinal image edges better prepares the image for the segmentation stage. The directionality of the multistructure elements method makes it an effective tool for edge detection; hence, morphology operators using multistructure elements are applied to the enhanced image to find the retinal image ridges. Afterward, morphological operators by reconstruction eliminate the ridges not belonging to the vessel tree while preserving the thin vessels unchanged. To increase the efficiency of the morphological operators by reconstruction, they are applied using multistructure elements. A simple thresholding method along with connected components analysis (CCA) identifies the remaining ridges belonging to vessels. To utilize CCA more efficiently, we apply the CCA and length filtering locally instead of over the whole image. Experimental results on a well-known database, DRIVE, with more than 94% accuracy in about 50 s for blood vessel detection, show that blood vessels can be effectively detected by applying our method to retinal images.
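
The final thresholding + CCA + length-filtering step can be sketched as follows. The ridge image, threshold, and minimum component size are illustrative; 8-connectivity is used, and for brevity CCA runs over the whole toy image rather than in local windows as in the paper.

```python
from collections import deque

def vessel_mask(ridge, threshold, min_size):
    """Threshold a ridge image, then keep only connected components of
    at least min_size pixels (a simple length filter)."""
    h, w = len(ridge), len(ridge[0])
    binary = [[ridge[y][x] >= threshold for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    keep = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not seen[y][x]:
                comp, queue = [], deque([(y, x)])   # BFS over one component
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    comp.append((cy, cx))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if 0 <= ny < h and 0 <= nx < w \
                                    and binary[ny][nx] and not seen[ny][nx]:
                                seen[ny][nx] = True
                                queue.append((ny, nx))
                if len(comp) >= min_size:           # length filtering
                    for cy, cx in comp:
                        keep[cy][cx] = True
    return keep

ridge = [[0, 9, 0, 0, 0],
         [0, 9, 0, 9, 0],
         [0, 9, 0, 0, 0]]
mask = vessel_mask(ridge, threshold=5, min_size=2)
print(sum(map(sum, mask)))  # 3: the isolated single-pixel ridge is filtered out
```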


Subject(s)
Algorithms, Diagnostic Techniques, Ophthalmological, Image Processing, Computer-Assisted/methods, Retinal Vessels/anatomy & histology, Databases, Factual, Fundus Oculi, Humans, Models, Statistical