Results 1 - 10 of 10
1.
IEEE Trans Med Imaging; 43(7): 2599-2609, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38381642

ABSTRACT

Methods for unsupervised domain adaptation (UDA) help improve the performance of deep neural networks on unseen domains without any labeled data. Especially in medical disciplines such as histopathology, this is crucial since large datasets with detailed annotations are scarce. While the majority of existing UDA methods focus on the adaptation from a labeled source to a single unlabeled target domain, many real-world applications with a long life cycle involve more than one target domain. Thus, the ability to sequentially adapt to multiple target domains becomes essential. In settings where the data from previously seen domains cannot be stored, e.g., due to data protection regulations, this becomes a challenging continual learning problem. To this end, we propose to use generative feature-driven image replay in conjunction with a dual-purpose discriminator that not only enables the generation of images with realistic features for replay, but also promotes feature alignment during domain adaptation. We evaluate our approach extensively on a sequence of three histopathological datasets for tissue-type classification, achieving state-of-the-art results. We present detailed ablation experiments studying our proposed method components and demonstrate a possible use case of our continual UDA method for an unsupervised patch-based segmentation task given high-resolution tissue images. Our code is available at: https://github.com/histocartography/multi-scale-feature-alignment.
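
The interplay between replay generation and feature alignment can be pictured with a small, hypothetical PyTorch sketch. The single linear layers, the feature size, and the two-column discriminator head are illustrative assumptions rather than the authors' architecture (see the linked repository for the real implementation): one discriminator output tells real target features from generated replay features, while the other separates the two domains.

```python
# Hypothetical sketch, not the paper's implementation: a discriminator
# whose two outputs serve replay generation and domain alignment.
import torch
import torch.nn as nn

feat_dim = 256
extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, feat_dim))
generator = nn.Linear(64, feat_dim)      # replays past-domain-like features from noise
discriminator = nn.Linear(feat_dim, 2)   # column 0: real/replayed, column 1: domain
bce = nn.BCEWithLogitsLoss()

def discriminator_loss(target_images, noise):
    """Dual purpose: spot replayed features AND separate the two domains."""
    f_target = extractor(target_images)  # features from the new, unlabeled domain
    f_replay = generator(noise)          # generated stand-ins for past-domain features
    logit_t, logit_r = discriminator(f_target), discriminator(f_replay)
    ones, zeros = torch.ones(len(noise)), torch.zeros(len(noise))
    realness = bce(logit_t[:, 0], ones) + bce(logit_r[:, 0], zeros)
    domain = bce(logit_t[:, 1], zeros) + bce(logit_r[:, 1], ones)  # replay plays "source"
    return realness + domain

loss = discriminator_loss(torch.randn(8, 3, 32, 32), torch.randn(8, 64))
```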


Subject(s)
Image Processing, Computer-Assisted; Image Processing, Computer-Assisted/methods; Humans; Algorithms; Unsupervised Machine Learning; Deep Learning; Animals; Databases, Factual; Neural Networks, Computer
2.
Med Image Anal; 89: 102915, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37633177

ABSTRACT

The identification and segmentation of histological regions of interest can provide significant support to pathologists in their diagnostic tasks. However, segmentation methods are constrained by the difficulty of obtaining pixel-level annotations, which are tedious and expensive to collect for whole-slide images (WSIs). Though several methods have been developed to exploit image-level weak supervision for WSI classification, the task of segmentation using WSI-level labels has received very little attention, and research in this direction typically requires additional supervision beyond image labels, which is difficult to obtain in real-world practice. In this study, we propose WholeSIGHT, a weakly-supervised method that can simultaneously segment and classify WSIs of arbitrary shapes and sizes. Formally, WholeSIGHT first constructs a tissue-graph representation of a WSI, where the nodes and edges depict tissue regions and their interactions, respectively. During training, a graph classification head classifies the WSI and produces node-level pseudo-labels via post-hoc feature attribution. These pseudo-labels are then used to train a node classification head for WSI segmentation. During testing, both heads simultaneously render segmentation and class predictions for an input WSI. We evaluate the performance of WholeSIGHT on three public prostate cancer WSI datasets. Our method achieves state-of-the-art weakly-supervised segmentation performance on all datasets while yielding better or comparable classification relative to state-of-the-art weakly-supervised WSI classification methods. Additionally, we assess the generalization capability of our method in terms of segmentation and classification performance, uncertainty estimation, and model calibration. Our code is available at: https://github.com/histocartography/wholesight.
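
As a rough illustration of the two-head scheme, the hypothetical sketch below uses plain input gradients as a stand-in for the paper's post-hoc feature attribution; the one-step message passing, dimensions, and thresholding rule are all assumptions.

```python
# Hypothetical sketch: a slide-level head whose attributions become
# node pseudo-labels for training a node-level segmentation head.
import torch
import torch.nn as nn

class TinyGraphModel(nn.Module):
    def __init__(self, in_dim, n_classes):
        super().__init__()
        self.mp = nn.Linear(in_dim, 64)        # one message-passing step
        self.slide_head = nn.Linear(64, n_classes)
        self.node_head = nn.Linear(64, n_classes)

    def forward(self, x, adj):
        h = torch.relu(self.mp(adj @ x))       # aggregate neighboring node features
        return self.slide_head(h.mean(dim=0)), self.node_head(h)

def node_pseudo_labels(model, x, adj, slide_label):
    """Gradient saliency of each node for the slide-level logit."""
    x = x.clone().requires_grad_(True)
    slide_logits, _ = model(x, adj)
    slide_logits[slide_label].backward()
    saliency = x.grad.abs().sum(dim=1)         # per-node importance score
    labels = torch.full((x.size(0),), -1)      # -1 = leave node unlabeled
    labels[saliency > saliency.median()] = slide_label
    return labels                              # then supervises the node head

model = TinyGraphModel(in_dim=16, n_classes=4)
pl = node_pseudo_labels(model, torch.randn(30, 16), torch.eye(30), slide_label=2)
```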


Subject(s)
Prostatic Neoplasms; Male; Humans; Prostatic Neoplasms/diagnostic imaging; Calibration; Uncertainty
3.
Med Image Anal; 89: 102924, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37597316

ABSTRACT

Deep learning models can achieve high accuracy when trained on large amounts of labeled data. However, real-world scenarios often involve several challenges: Training data may become available in installments, may originate from multiple domains, and may not contain labels for training. Certain settings, for instance medical applications, often involve further restrictions that prohibit retention of previously seen data due to privacy regulations. In this work, to address such challenges, we study unsupervised segmentation in continual learning scenarios that involve domain shift. To that end, we introduce GarDA (Generative Appearance Replay for continual Domain Adaptation), a generative-replay-based approach that can adapt a segmentation model sequentially to new domains with unlabeled data. In contrast to single-step unsupervised domain adaptation (UDA), continual adaptation to a sequence of domains enables leveraging and consolidating information from multiple domains. Unlike previous approaches in incremental UDA, our method does not require access to previously seen data, making it applicable in many practical scenarios. We evaluate GarDA on three datasets with different organs and modalities, where it substantially outperforms existing techniques. Our code is available at: https://github.com/histocartography/generative-appearance-replay.
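
The replay mechanism can be sketched as knowledge distillation through generated images; everything below (the stand-in architectures, a single KL term) is an assumption for illustration, not GarDA itself.

```python
# Minimal sketch of generative replay for continual segmentation:
# a frozen pre-adaptation model labels generated old-domain images.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in modules; the real generator/segmenter architectures differ.
generator = nn.Sequential(nn.Linear(64, 3 * 32 * 32), nn.Unflatten(1, (3, 32, 32)))
segmenter = nn.Conv2d(3, 5, kernel_size=1)      # 5 tissue classes, per-pixel logits
old_segmenter = nn.Conv2d(3, 5, kernel_size=1)  # frozen copy from before adaptation

def replay_loss(z):
    """Distill old-domain knowledge without storing old-domain data."""
    replay = generator(z)                       # images with past-domain appearance
    with torch.no_grad():
        soft_targets = old_segmenter(replay)    # the old model supplies the labels
    log_probs = F.log_softmax(segmenter(replay), dim=1)
    return F.kl_div(log_probs, F.softmax(soft_targets, dim=1), reduction="batchmean")

loss = replay_loss(torch.randn(4, 64))          # combined with a loss on new-domain data
```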

4.
Brief Bioinform; 24(3), 2023 May 19.
Article in English | MEDLINE | ID: mdl-37122067

ABSTRACT

Understanding the interactions between the biomolecules that govern cellular behaviors remains an open question in biology. Recent advances in single-cell technologies have enabled the simultaneous quantification of multiple biomolecules in the same cell, opening new avenues for understanding cellular complexity and heterogeneity. Still, the resulting multimodal single-cell datasets present unique challenges arising from their high dimensionality and multiple sources of acquisition noise. Computational methods that match cells across different modalities offer an appealing way to address these challenges. In this work, we propose MatchCLOT, a novel method for modality matching inspired by recent promising developments in contrastive learning and optimal transport. MatchCLOT uses contrastive learning to learn a common representation between two modalities and applies entropic optimal transport as an approximate maximum-weight bipartite matching algorithm. Our model obtains state-of-the-art performance on two curated benchmarking datasets and an independent test dataset, improving on the top-scoring method by 26.1% while preserving the underlying biological structure of the multimodal data. Importantly, MatchCLOT offers large gains in computational time and memory that, in contrast to existing methods, allow it to scale well with the number of cells. As single-cell datasets become increasingly large, MatchCLOT offers an accurate and efficient solution to the problem of modality matching.
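
The matching step can be illustrated with a short sketch using the POT package; the cosine cost, uniform marginals, and regularization value are assumptions for demonstration, not MatchCLOT's exact configuration.

```python
# Sketch: match cells across two modalities with entropic optimal
# transport (Sinkhorn) on cosine costs between learned embeddings.
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

def match_modalities(emb_a, emb_b, reg=0.05):
    emb_a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    emb_b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    cost = 1.0 - emb_a @ emb_b.T                     # low cost = similar embeddings
    n, m = cost.shape
    w_a, w_b = np.full(n, 1 / n), np.full(m, 1 / m)  # uniform marginals
    plan = ot.sinkhorn(w_a, w_b, cost, reg)          # entropic OT coupling
    return plan.argmax(axis=1)                       # hard assignment per cell

rng = np.random.default_rng(0)
shared = rng.normal(size=(100, 16))                  # stand-in for a learned common space
matches = match_modalities(shared + 0.1 * rng.normal(size=(100, 16)),
                           shared + 0.1 * rng.normal(size=(100, 16)))
```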


Subject(s)
Algorithms; Learning; Benchmarking; Entropy; Research Design
5.
Database (Oxford); 2022, 2022 Oct 17.
Article in English | MEDLINE | ID: mdl-36251776

ABSTRACT

Breast cancer is the most commonly diagnosed cancer and causes the highest number of cancer deaths among women. Advances in diagnostic activities combined with large-scale screening policies have significantly lowered mortality rates for breast cancer patients. However, the manual inspection of tissue slides by pathologists is cumbersome, time-consuming, and subject to significant inter- and intra-observer variability. Recently, the advent of whole-slide scanning systems has enabled the rapid digitization of pathology slides and the development of Artificial Intelligence (AI)-assisted digital workflows. However, AI techniques, especially Deep Learning, require a large amount of high-quality annotated data to learn from. Constructing such task-specific datasets poses several challenges, such as data-acquisition constraints, time-consuming and expensive annotation, and anonymization of patient information. In this paper, we introduce the BReAst Carcinoma Subtyping (BRACS) dataset, a large cohort of annotated Hematoxylin and Eosin (H&E)-stained images, to advance AI development in the automatic characterization of breast lesions. BRACS contains 547 Whole-Slide Images (WSIs) and 4539 Regions Of Interest (ROIs) extracted from the WSIs. Each WSI and its ROIs are annotated into lesion categories by the consensus of three board-certified pathologists. Specifically, BRACS includes three lesion types, i.e., benign, malignant, and atypical, which are further subtyped into seven categories. It is, to the best of our knowledge, the largest annotated dataset for breast cancer subtyping at both the WSI and ROI levels. Furthermore, by including the understudied atypical lesions, BRACS offers a unique opportunity for leveraging AI to better understand their characteristics. We encourage AI practitioners to develop and evaluate novel algorithms on the BRACS dataset to further breast cancer diagnosis and patient care. Database URL: https://www.bracs.icar.cnr.it/.
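
For readers organizing the labels programmatically, the three-type/seven-subtype taxonomy described above can be encoded as a simple mapping. The abbreviations below are the ones commonly used for BRACS, but verify them against the database before relying on this sketch.

```python
# Illustrative BRACS label taxonomy: seven subtypes under three lesion types.
SUBTYPE_TO_LESION_TYPE = {
    "N": "benign",        # normal
    "PB": "benign",       # pathological benign
    "UDH": "benign",      # usual ductal hyperplasia
    "FEA": "atypical",    # flat epithelial atypia
    "ADH": "atypical",    # atypical ductal hyperplasia
    "DCIS": "malignant",  # ductal carcinoma in situ
    "IC": "malignant",    # invasive carcinoma
}

assert SUBTYPE_TO_LESION_TYPE["ADH"] == "atypical"
```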


Subject(s)
Artificial Intelligence; Breast Neoplasms; Algorithms; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/genetics; Breast Neoplasms/pathology; Eosine Yellowish-(YS); Female; Hematoxylin; Humans
6.
Med Image Anal; 75: 102264, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34781160

ABSTRACT

Cancer diagnosis, prognosis, and therapy response predictions from tissue specimens depend highly on the phenotype and topological distribution of the constituting histological entities. Thus, adequate tissue representations that encode histological entities are imperative for computer-aided cancer patient care. To this end, several approaches have leveraged cell-graphs, capturing the cell microenvironment, to depict the tissue. These allow for utilizing graph theory and machine learning to map the tissue representation to tissue functionality and quantify their relationship. Though cellular information is crucial, it alone is insufficient to comprehensively characterize complex tissue structure. We herein treat the tissue as a hierarchical composition of multiple types of histological entities from fine to coarse level, capturing multivariate tissue information at multiple levels. We propose a novel multi-level hierarchical entity-graph representation of tissue specimens to model hierarchical compositions that encode histological entities as well as their intra- and inter-entity-level interactions. Subsequently, a hierarchical graph neural network is proposed to operate on the hierarchical entity-graph and map the tissue structure to tissue functionality. Specifically, for input histology images, we utilize well-defined cells and tissue regions to build HierArchical Cell-to-Tissue (HACT) graph representations, and devise HACT-Net, a message-passing graph neural network, to classify the HACT representations. As part of this work, we introduce the BReAst Carcinoma Subtyping (BRACS) dataset, a large cohort of Haematoxylin & Eosin stained breast tumor regions-of-interest, to evaluate and benchmark our proposed methodology against pathologists and state-of-the-art computer-aided diagnostic approaches. Through comparative assessment and ablation studies, our proposed method is demonstrated to yield superior classification results compared to alternative methods as well as individual pathologists. The code, data, and models can be accessed at https://github.com/histocartography/hact-net.
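
The hierarchical entity-graph idea can be reduced to a toy PyTorch sketch: a cell graph, a tissue graph, and an assignment matrix pooling cells into their enclosing tissue region. The layer choices, dimensions, and single pass per level are assumptions, not HACT-Net (which is available at the link above).

```python
# Toy two-level hierarchical graph network: cell-level message passing,
# pooling into tissue nodes, tissue-level message passing, ROI logits.
import torch
import torch.nn as nn

class HierarchicalGNN(nn.Module):
    def __init__(self, cell_dim, tissue_dim, n_classes):
        super().__init__()
        self.cell_mp = nn.Linear(cell_dim, 64)
        self.tissue_mp = nn.Linear(tissue_dim + 64, 64)
        self.cls = nn.Linear(64, n_classes)

    def forward(self, x_cell, adj_cell, x_tissue, adj_tissue, assign):
        h_cell = torch.relu(self.cell_mp(adj_cell @ x_cell))   # cell-level pass
        pooled = assign.T @ h_cell                             # cells -> tissue regions
        h = torch.relu(self.tissue_mp(adj_tissue @ torch.cat([x_tissue, pooled], 1)))
        return self.cls(h.mean(dim=0))                         # ROI-level class logits

model = HierarchicalGNN(cell_dim=16, tissue_dim=32, n_classes=7)
n_cells, n_regions = 200, 12
logits = model(torch.randn(n_cells, 16), torch.eye(n_cells),
               torch.randn(n_regions, 32), torch.eye(n_regions),
               torch.randint(0, 2, (n_cells, n_regions)).float())
```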


Subject(s)
Histological Techniques; Neural Networks, Computer; Benchmarking; Humans; Prognosis
7.
Med Image Anal; 67: 101859, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33129150

ABSTRACT

Classification of digital pathology images is imperative in cancer diagnosis and prognosis. Recent advancements in deep learning and computer vision have greatly benefited the pathology workflow by providing automated solutions for classification tasks. However, acquiring high-quality, task-specific annotated training data at scale is costly and time-consuming, and the annotations are subject to intra- and inter-observer variability, which challenges the adoption of such tools. To address these challenges, we propose a classification framework via co-representation learning that maximizes the learning capability of deep neural networks while using a reduced amount of training data. The framework captures the class-label information and the local spatial distribution information by jointly optimizing a categorical cross-entropy objective and a deep metric learning objective, respectively. The deep metric learning objective enhances classification especially in the low-training-data regime. Further, a neighborhood-aware multiple-similarity sampling strategy and a soft multi-pair objective, which optimizes interactions between multiple informative sample pairs, are proposed to accelerate deep metric learning. We evaluate the proposed framework on five benchmark datasets from three digital pathology tasks, i.e., nuclei classification, mitosis detection, and tissue type classification. For all the datasets, our framework achieves state-of-the-art performance when using only approximately 50% of the training data. When using the complete training data, the proposed framework outperforms the state-of-the-art on all five datasets.
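
The joint objective can be sketched as follows; the specific pairwise terms below are a generic metric-learning stand-in, not the paper's multiple-similarity sampling or soft multi-pair formulation.

```python
# Sketch of a joint objective: cross-entropy plus a pairwise metric
# term that pulls same-class embeddings together and pushes others apart.
import torch
import torch.nn.functional as F

def joint_loss(logits, embeddings, labels, margin=0.5, alpha=1.0):
    ce = F.cross_entropy(logits, labels)
    emb = F.normalize(embeddings, dim=1)
    sim = emb @ emb.T                              # cosine similarity matrix
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool)
    pos = (1 - sim)[same & ~eye].mean()            # pull positives together
    neg = F.relu(sim - margin)[~same].mean()       # push negatives below a margin
    return ce + alpha * (pos + neg)

labels = torch.tensor([0, 0, 1, 1, 2, 2])          # batch sampled so each class has a pair
loss = joint_loss(torch.randn(6, 3), torch.randn(6, 8), labels)
```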


Subject(s)
Deep Learning; Humans; Neural Networks, Computer; Workflow
8.
Front Med (Lausanne); 6: 173, 2019.
Article in English | MEDLINE | ID: mdl-31428614

ABSTRACT

Clinical morphological analysis of histopathology samples is an effective method in cancer diagnosis. Computational pathology methods can be employed to automate this analysis, providing improved objectivity and scalability. More specifically, computational techniques can be used to segment glands, an essential factor in cancer diagnosis. Automatic delineation of glands is a challenging task given the large variability in glandular morphology across tissues and pathological subtypes. A deep-learning-based gland segmentation method can be developed to address this task, but it requires a large number of accurate gland annotations from several tissue slides. Such a large dataset needs to be generated manually by experienced pathologists, which is laborious, time-consuming, expensive, and suffers from the subjectivity of the annotator. So far, deep learning techniques have produced promising results on a few organ-specific gland segmentation tasks; however, the demand for organ-specific gland annotations hinders the extensibility of these techniques to other organs. This work investigates the idea of cross-domain (i.e., cross-organ) approximation, which aims at reducing the need for organ-specific annotations. Unlike the parenchyma, the stromal component of tissue, which lies between the glands, is more consistent across organs. It is hypothesized that an automatic method that can precisely segment the stroma would pave the way for cross-organ gland segmentation. Two proposed Dense-U-Nets are trained on H&E-stained colon adenocarcinoma samples, targeting gland and stroma segmentation, respectively. The trained networks are evaluated on two independent datasets, namely an H&E-stained colon adenocarcinoma dataset and an H&E-stained invasive breast cancer dataset, using the Dice coefficient and the Hausdorff distance computed between the ground-truth and predicted gland masks. The network targeting stroma segmentation performs similarly to the network targeting gland segmentation on the colon dataset, whereas the former performs significantly better than the latter on the breast dataset, showcasing the higher generalization capacity of the stroma segmentation approach. The conducted experiments validate the efficacy of the proposed stroma segmentation approach toward multi-organ gland segmentation.
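
The evaluation protocol lends itself to a short sketch: recover glands as the complement of the predicted stroma (the central hypothesis above) and score them with the Dice coefficient and Hausdorff distance. The toy masks below are stand-ins for real network outputs and annotations.

```python
# Sketch: gland masks from stroma predictions, scored by Dice and Hausdorff.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)  # epsilon guards empty masks

rng = np.random.default_rng(0)
stroma_pred = rng.random((64, 64)) > 0.5   # stand-in for the stroma network output
gland_pred = ~stroma_pred                  # glands = non-stroma, per the hypothesis
gland_gt = np.zeros((64, 64), dtype=bool)
gland_gt[16:48, 16:48] = True              # stand-in ground-truth gland mask

print("Dice:", dice(gland_pred, gland_gt))
hd = max(directed_hausdorff(np.argwhere(gland_pred), np.argwhere(gland_gt))[0],
         directed_hausdorff(np.argwhere(gland_gt), np.argwhere(gland_pred))[0])
print("Hausdorff:", hd)                    # symmetric Hausdorff over mask coordinates
```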

9.
Nat Biomed Eng; 3(6): 478-490, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30962588

ABSTRACT

Immunohistochemistry is the gold-standard method for cancer-biomarker identification and patient stratification. Yet, owing to signal saturation, its use as a quantitative assay is limited, as it cannot distinguish tumours with similar biomarker-expression levels. Here, we introduce a quantitative microimmunohistochemistry assay that enables the acquisition of dynamic information, via a metric of the evolution of the immunohistochemistry signal during tissue staining, for the quantification of relative antigen density on tissue surfaces. We used the assay to stratify 30 patient-derived breast-cancer samples into conventional classes and to determine the proximity of each sample to the other classes. We also show that the assay enables the quantification of multiple biomarkers (human epidermal growth factor receptor 2, oestrogen receptor and progesterone receptor) in a standard breast-cancer panel. The integration of quantitative microimmunohistochemistry into current pathology workflows may lead to improvements in the precision of biomarker quantification.
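
To make the idea of a dynamic staining metric concrete, here is a hypothetical sketch: fit a saturating-exponential model to the signal measured during staining, and use the fitted rate as a stand-in for relative antigen density. The functional form and parameters are illustrative assumptions, not the paper's metric.

```python
# Hypothetical sketch: extract kinetic parameters from the IHC signal
# measured over staining time, rather than from the saturated endpoint.
import numpy as np
from scipy.optimize import curve_fit

def signal(t, s_max, k):
    return s_max * (1.0 - np.exp(-k * t))  # saturating binding curve

t = np.linspace(0, 120, 13)                # staining time points (seconds)
rng = np.random.default_rng(0)
y = signal(t, 1.0, 0.05) + 0.01 * rng.normal(size=t.size)  # simulated readout
(s_max, k), _ = curve_fit(signal, t, y, p0=(1.0, 0.01))
print(f"saturation={s_max:.2f}, rate={k:.3f} 1/s")  # rate ~ relative antigen density
```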


Subject(s)
Breast Neoplasms/pathology; Immunohistochemistry/methods; Staining and Labeling; Algorithms; Antigens, Neoplasm/metabolism; Biomarkers, Tumor/metabolism; Cell Line, Tumor; Female; Humans; Kinetics; Neoplasm Staging; Receptor, ErbB-2/metabolism
10.
IEEE Trans Biomed Eng; 66(10): 2952-2963, 2019 Oct.
Article in English | MEDLINE | ID: mdl-30762525

ABSTRACT

Accurate profiling of tumors using immunohistochemistry (IHC) is essential in cancer diagnosis. The inferences drawn from IHC-stained images depend to a great extent on the quality of immunostaining, which is in turn strongly affected by assay parameters. The tissue available for optimizing assay parameters is often limited, and with current practices in pathology, exploring the entire assay parameter space is not feasible. Thus, the evaluation of IHC-stained slides is conventionally a subjective task, in which diagnoses are commonly drawn from suboptimal images. In this work, we introduce a framework to analyze IHC staining quality and its sensitivity to process parameters. To that end, histopathological sections are first segmented automatically. Then, machine learning techniques are employed to extract disease-specific staining quality metrics (SQMs) targeting a quantitative assessment of staining quality. Finally, an approach to efficiently analyze the parameter space is introduced to infer sensitivity to process parameters. We present results on microscale IHC tissue samples of five breast tumor classes, based on disease state and protein expression. A disease-type classification F1-score of 0.82 and a contrast-level classification F1-score of 0.95 were achieved. With the proposed SQMs, an area under the curve of 0.85 was achieved on average over different disease types. Our methodology provides a promising step toward automatically evaluating and quantifying the staining quality of IHC-stained tissue sections, and it can potentially standardize immunostaining across diagnostic laboratories.
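
A schematic version of the pipeline might look like the sketch below: compute a simple per-section contrast feature and train a classifier on such features. The contrast definition, the random-forest choice, and the synthetic data are all assumptions for illustration, not the paper's SQMs.

```python
# Hypothetical SQM-style pipeline: per-section quality features -> classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def contrast_metric(stain_intensity, mask):
    fg = stain_intensity[mask]               # stained (e.g., membrane) pixels
    bg = stain_intensity[~mask]              # surrounding tissue pixels
    return (fg.mean() - bg.mean()) / (fg.std() + bg.std() + 1e-8)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                # stand-in feature vectors (one per section)
y = rng.integers(0, 2, size=100)             # stand-in contrast-level labels
clf = RandomForestClassifier(random_state=0).fit(X[:80], y[:80])
print("F1:", f1_score(y[80:], clf.predict(X[80:])))
```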


Subject(s)
Breast Neoplasms/pathology; Immunohistochemistry/methods; Machine Learning; Staining and Labeling/methods; Automation, Laboratory; Biomarkers, Tumor/metabolism; Coloring Agents; Female; Humans; Microfluidic Analytical Techniques; Sensitivity and Specificity