Results 1 - 7 of 7
1.
Diagn Pathol; 18(1): 67, 2023 May 17.
Article in English | MEDLINE | ID: mdl-37198691

ABSTRACT

BACKGROUND: Deep learning models applied to healthcare applications, including digital pathology, have been increasing in scope and importance in recent years. Many of these models have been trained on The Cancer Genome Atlas (TCGA) repository of digital images or use it as a validation source. One crucial factor that seems to have been widely ignored is the internal bias that originates from the institutions that contributed whole-slide images (WSIs) to the TCGA dataset, and its effect on models trained on this dataset. METHODS: A total of 8,579 paraffin-embedded, hematoxylin and eosin-stained digital slides were selected from the TCGA dataset. More than 140 medical institutions (acquisition sites) contributed to this dataset. Two deep neural networks (DenseNet121 and KimiaNet) were used to extract deep features at 20× magnification. DenseNet121 was pre-trained on non-medical objects; KimiaNet has the same structure but was trained for cancer-type classification on TCGA images. The extracted deep features were later used to detect each slide's acquisition site and to represent slides in image search. RESULTS: DenseNet's deep features could distinguish acquisition sites with 70% accuracy, whereas KimiaNet's deep features revealed acquisition sites with more than 86% accuracy. These findings suggest that there are acquisition-site-specific patterns that can be picked up by deep neural networks. It was also shown that these medically irrelevant patterns can interfere with other applications of deep learning in digital pathology, namely image search. This study shows that there are acquisition-site-specific patterns that can be used to identify tissue acquisition sites without any explicit training. Furthermore, it was observed that a model trained for cancer subtype classification had exploited such medically irrelevant patterns to classify cancer types. Digital scanner configuration and noise, tissue stain variation and artifacts, and source-site patient demographics are among the factors that likely account for the observed bias. Therefore, researchers should be cautious of such bias when using histopathology datasets to develop and train deep networks.


Subject(s)
Neoplasms; Humans; Neoplasms/genetics; Neural Networks, Computer; Coloring Agents; Hematoxylin; Eosine Yellowish-(YS)
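A minimal sketch of the kind of site-bias probe described above, assuming patches and their institution labels are already loaded; the ImageNet-pretrained DenseNet121 backbone is standard torchvision functionality, while the logistic-regression probe and helper names are illustrative assumptions (KimiaNet features would be plugged into the same probe).

```python
# Hypothetical probe: do off-the-shelf DenseNet121 features encode the TCGA
# acquisition site? Patch loading and site labels are assumed to exist.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# ImageNet-pretrained DenseNet121; drop the classifier to get 1024-d features.
backbone = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
backbone.classifier = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def deep_feature(patch_pil):
    """Return a 1024-d deep feature vector for one 20x H&E patch (PIL image)."""
    x = preprocess(patch_pil).unsqueeze(0)       # (1, 3, 224, 224)
    return backbone(x).squeeze(0).numpy()        # (1024,)

def acquisition_site_accuracy(features, site_labels):
    """Cross-validated accuracy of predicting the contributing institution from
    slide-level deep features; high accuracy indicates acquisition-site bias."""
    probe = LogisticRegression(max_iter=2000)
    return cross_val_score(probe, features, site_labels, cv=5).mean()
```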
2.
Artif Intell Med; 132: 102368, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36207081

ABSTRACT

Despite recent progress in using Deep Neural Networks (DNNs) to characterize histopathology images, compactly representing a gigapixel whole-slide image (WSI) via salient features to enable computational pathology is still an urgent need and a significant challenge. In this paper, we propose a novel WSI characterization approach to represent, search, and classify biopsy specimens using a compact feature vector (CFV) extracted from a multitude of deep feature vectors. Since the non-optimal design and training of deep networks may result in many irrelevant and redundant features and may also cause computational bottlenecks, we propose a low-cost stochastic method that optimizes the output of pre-trained deep networks using evolutionary algorithms to generate a very small set of features that accurately represents each tissue/biopsy. The performance of the proposed method has been assessed using WSIs from the publicly available TCGA image data. In addition to yielding a very compact representation (i.e., 11,000 times smaller than the initial set of features), the optimized features achieved 93% classification accuracy, an 11% improvement over published benchmarks. The experimental results reveal that the proposed method can reliably select salient features of the biopsy sample. Furthermore, the proposed approach holds the potential to greatly facilitate the adoption of digital pathology by enabling a new generation of WSI representations for efficient storage and more user-friendly visualization.


Subject(s)
Algorithms; Neural Networks, Computer
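A rough sketch of evolutionary feature selection in the spirit of the abstract above, assuming precomputed deep features `X` (samples × features) and tissue labels `y`; the genetic-algorithm ingredients used here (one-point crossover, bit-flip mutation, a k-NN fitness with a sparsity penalty) are illustrative choices, not the paper's exact low-cost stochastic method.

```python
# Illustrative genetic algorithm that searches for a compact subset of
# deep-feature dimensions; X and y are assumed to be precomputed.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Accuracy of a cheap classifier restricted to the selected features,
    lightly penalized by the number of features the mask keeps."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(3), X[:, mask], y, cv=3).mean()
    return acc - 0.001 * mask.sum()

def evolve_feature_mask(X, y, pop_size=30, n_keep=64, generations=40):
    """Return a boolean mask selecting a compact feature vector."""
    n_feat = X.shape[1]
    # Start from sparse random masks keeping roughly n_keep features each.
    pop = rng.random((pop_size, n_feat)) < (n_keep / n_feat)
    for _ in range(generations):
        scores = np.array([fitness(m, X, y) for m in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]    # top half survives
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feat)                      # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_feat) < 0.01                   # bit-flip mutation
            children.append(np.where(flip, ~child, child))
        pop = np.vstack([parents, children])
    return pop[np.argmax([fitness(m, X, y) for m in pop])]
```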
3.
IEEE J Biomed Health Inform; 26(9): 4611-4622, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35687644

ABSTRACT

This paper investigates the effect of magnification on content-based image search in digital pathology archives and proposes a multi-magnification image representation. Image search in large archives of digital pathology slides provides researchers and medical professionals with an opportunity to match records of current and past patients and to learn from evidently diagnosed and treated cases. When working with microscopes, pathologists switch between magnification levels while examining tissue specimens to find and evaluate various morphological features. Inspired by this conventional pathology workflow, we have investigated several magnification levels in digital pathology and their combinations to minimize the gap between AI-enabled image search methods and clinical settings. The proposed search framework does not rely on any regional annotation and can potentially be applied to millions of unlabelled (raw) whole-slide images. This paper suggests two approaches for combining magnification levels and compares their performance. The first approach obtains a single-vector deep feature representation for a digital slide, whereas the second approach works with a multi-vector deep feature representation. We report search results for 20×, 10×, and 5× magnifications and their combinations on a subset of The Cancer Genome Atlas (TCGA) repository. The experiments verify that cell-level information at the highest magnification is essential for diagnostic search, while low-magnification information may further improve the assessment depending on the tumor type. Our multi-magnification approach achieved up to an 11% F1-score improvement when searching among urinary tract and brain tumor subtypes compared to single-magnification image search.


Subject(s)
Image Processing, Computer-Assisted; Microscopy; Humans; Image Processing, Computer-Assisted/methods; Microscopy/methods; Workflow
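A minimal sketch of the single-vector variant described above, under assumptions: one deep feature vector per magnification level is concatenated into a single slide descriptor and compared by cosine similarity; feature extraction and the archive layout are assumed, and the function names are hypothetical.

```python
# Hypothetical single-vector multi-magnification descriptor and search.
import numpy as np

MAGNIFICATIONS = ("20x", "10x", "5x")

def slide_descriptor(per_mag_features):
    """per_mag_features: dict mapping magnification -> 1-D deep feature vector.
    Returns one L2-normalized multi-magnification slide descriptor."""
    parts = [np.asarray(per_mag_features[m], dtype=np.float32) for m in MAGNIFICATIONS]
    v = np.concatenate(parts)
    return v / (np.linalg.norm(v) + 1e-12)

def search(query, archive, top_k=5):
    """archive: list of (slide_id, descriptor) pairs with normalized descriptors.
    Returns the top_k slide ids ranked by cosine similarity to the query."""
    sims = [(sid, float(query @ d)) for sid, d in archive]
    return [sid for sid, _ in sorted(sims, key=lambda t: t[1], reverse=True)[:top_k]]
```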
4.
Am J Pathol; 191(12): 2172-2183, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34508689

ABSTRACT

Although deep learning networks applied to digital images have shown impressive results for many pathology-related tasks, their black-box nature and limited interpretability are significant obstacles to their widespread clinical utility. This study investigates the visualization of deep features (DFs) to characterize two lung cancer subtypes, adenocarcinoma and squamous cell carcinoma. It demonstrates that a subset of DFs, called prominent DFs, can accurately distinguish these two cancer subtypes. Visualization of such individual DFs allows for a better understanding of histopathologic patterns at both the whole-slide and patch levels and aids discrimination of these cancer types. These DFs were visualized at the whole-slide level through DF-specific heatmaps and at the tissue-patch level through the generation of activation maps. In addition, these prominent DFs can distinguish carcinomas of organs other than the lung. This framework may serve as a platform for evaluating the interpretability of any deep network for diagnostic decision making.


Subject(s)
Adenocarcinoma of Lung/diagnosis; Carcinoma, Squamous Cell/diagnosis; Deep Learning; Lung Neoplasms/diagnosis; Adenocarcinoma of Lung/pathology; Carcinoma, Squamous Cell/pathology; Datasets as Topic; Diagnosis, Differential; Feasibility Studies; Female; Humans; Image Interpretation, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods; Lung Neoplasms/pathology; Male; Neural Networks, Computer; Reproducibility of Results; Sensitivity and Specificity
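An illustrative sketch, under stated assumptions, of how prominent DFs might be ranked and painted back onto a slide: each feature dimension is scored by how well it alone separates the two subtypes (a per-feature AUC, which is an assumption rather than the paper's criterion), and one feature's per-patch activations are placed onto the slide grid as a DF-specific heatmap.

```python
# Hypothetical ranking of "prominent" deep features and a DF-specific heatmap.
import numpy as np
from sklearn.metrics import roc_auc_score

def prominent_features(patch_features, labels, top_n=10):
    """patch_features: (n_patches, n_features); labels: 0 = adenocarcinoma,
    1 = squamous cell carcinoma. Ranks feature dimensions by how well each
    one alone separates the two classes."""
    scores = []
    for j in range(patch_features.shape[1]):
        auc = roc_auc_score(labels, patch_features[:, j])
        scores.append(max(auc, 1.0 - auc))      # direction-agnostic separability
    return np.argsort(scores)[::-1][:top_n]

def feature_heatmap(patch_features, patch_coords, feature_index, grid_shape):
    """Paint one deep feature's per-patch activation back onto the slide grid,
    giving a DF-specific heatmap at the whole-slide level."""
    heat = np.full(grid_shape, np.nan, dtype=np.float32)
    for (row, col), feats in zip(patch_coords, patch_features):
        heat[row, col] = feats[feature_index]
    return heat
```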
5.
Sci Rep; 11(1): 9817, 2021 May 10.
Article in English | MEDLINE | ID: mdl-33972606

ABSTRACT

Fast diagnosis and treatment of pneumothorax, a collapsed or dropped lung, is crucial to avoid fatalities. Pneumothorax is typically detected on a chest X-ray image through visual inspection by experienced radiologists. However, the detection rate is quite low due to the difficulty of visually identifying small lung collapses. Therefore, there is an urgent need for automated detection systems to assist radiologists. Although deep learning classifiers generally deliver high accuracy in many applications, they may not be useful in clinical practice due to the lack of high-quality, representative labeled image sets. Alternatively, searching an archive of past cases to find matching images may serve as a "virtual second opinion" by accessing the metadata of matched, evidently diagnosed cases. To use image search as a triaging or diagnosis assistant, we must first tag all chest X-ray images with expressive identifiers, i.e., deep features. Then, given a query chest X-ray image, the majority vote among the top k retrieved images can provide a more explainable output. In this study, we searched a repository of more than 550,000 chest X-ray images. We developed the Autoencoding Thorax Net (AutoThorax-Net for short) for image search in chest radiographs. Experimental results show that image search based on AutoThorax-Net features can achieve high identification performance, providing a path toward real-world deployment. We achieved an AUC of 92% for a semi-automated search in 194,608 images (pneumothorax and normal) and an AUC of 82% for a fully automated search in 551,383 images (normal, pneumothorax, and many other chest diseases).
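A minimal sketch of the top-k majority vote described above, assuming every archived chest X-ray has already been tagged with a deep feature vector (for example, from an autoencoder such as AutoThorax-Net); cosine similarity and k = 11 are illustrative choices.

```python
# Hypothetical "virtual second opinion": retrieve the k most similar archived
# X-rays and take a majority vote over their confirmed diagnoses.
from collections import Counter
import numpy as np

def top_k_vote(query_feature, archive_features, archive_diagnoses, k=11):
    """archive_features: (n_images, d); archive_diagnoses: n_images labels.
    Returns (predicted_label, retrieved_indices) so the matched cases and
    their metadata can be shown to the radiologist alongside the vote."""
    q = query_feature / (np.linalg.norm(query_feature) + 1e-12)
    A = archive_features / (np.linalg.norm(archive_features, axis=1, keepdims=True) + 1e-12)
    idx = np.argsort(A @ q)[::-1][:k]           # cosine-similar neighbours
    vote = Counter(archive_diagnoses[i] for i in idx).most_common(1)[0][0]
    return vote, idx
```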

6.
Med Image Anal; 70: 102032, 2021 May.
Article in English | MEDLINE | ID: mdl-33773296

ABSTRACT

Feature vectors provided by pre-trained deep artificial neural networks have become a dominant source for image representation in recent literature. Their contribution to the performance of image analysis can be improved through fine-tuning. As an ultimate solution, one might even train a deep network from scratch with domain-relevant images, a highly desirable option that is generally impeded in pathology by the lack of labeled images and the computational expense. In this study, we propose a new network, namely KimiaNet, that employs the DenseNet topology with four dense blocks, fine-tuned and trained with histopathology images in different configurations. We used more than 240,000 image patches of 1000×1000 pixels, acquired at 20× magnification through our proposed "high-cellularity mosaic" approach, to enable the use of weak labels for 7126 whole-slide images of formalin-fixed, paraffin-embedded human pathology samples publicly available through The Cancer Genome Atlas (TCGA) repository. We tested KimiaNet on three public datasets, namely TCGA, endometrial cancer images, and colorectal cancer images, by evaluating search and classification performance when the corresponding features of different networks are used for image representation. In addition, we designed and trained multiple convolutional batch-normalized ReLU (CBR) networks. The results show that KimiaNet provides superior results compared to the original DenseNet and smaller CBR networks when used as a feature extractor for histopathology images.


Subject(s)
Neoplasms; Neural Networks, Computer; Humans; Image Processing, Computer-Assisted; Neoplasms/diagnostic imaging
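A rough sketch, under assumptions, of the "high-cellularity mosaic" idea: candidate 20× patches are ranked by a crude cellularity proxy (here, the fraction of dark, blue-dominant pixels as a stand-in for nuclear density on H&E, which is not necessarily the estimate used in KimiaNet's actual pipeline), and the most cellular patches inherit the slide-level weak label.

```python
# Hypothetical high-cellularity patch selection with weak slide-level labels.
import numpy as np

def cellularity_proxy(patch_rgb):
    """patch_rgb: uint8 array (H, W, 3). Fraction of dark, blue-dominant
    pixels, a crude stand-in for nuclear density on an H&E patch."""
    r = patch_rgb[..., 0].astype(int)
    g = patch_rgb[..., 1].astype(int)
    b = patch_rgb[..., 2].astype(int)
    nuclei = (b > r) & (b > g) & (r + g + b < 400)
    return float(nuclei.mean())

def high_cellularity_mosaic(patches, slide_label, top_n=20):
    """patches: list of uint8 RGB arrays cut from one WSI at 20x.
    Returns (selected_patches, labels): the top_n most cellular patches,
    each weakly labelled with the slide-level cancer type."""
    order = np.argsort([cellularity_proxy(p) for p in patches])[::-1][:top_n]
    selected = [patches[i] for i in order]
    return selected, [slide_label] * len(selected)
```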
7.
Am J Pathol; 191(10): 1702-1708, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33636179

ABSTRACT

One of the major obstacles to reaching diagnostic consensus is observer variability. With the recent success of artificial intelligence, particularly deep networks, the question arises as to whether this fundamental challenge of diagnostic imaging can now be resolved. This article briefly reviews the problem and how both supervised and unsupervised AI technologies could eventually help to overcome it.


Subject(s)
Artificial Intelligence; Image Processing, Computer-Assisted; Observer Variation; Pathology; Humans; Natural Language Processing; Neural Networks, Computer