Results 1 - 20 of 266
1.
Sci Rep ; 14(1): 23489, 2024 Oct 08.
Article in English | MEDLINE | ID: mdl-39379448

ABSTRACT

Automated segmentation of biomedical images has been recognized as an important step in computer-aided diagnosis systems for the detection of abnormalities. Despite its importance, segmentation remains an open challenge due to variations in color, texture, shape, and boundaries. Semantic segmentation often requires deeper neural networks to achieve higher accuracy, making the segmentation model more complex and slower. Because large numbers of biomedical images must be processed, more efficient and cheaper image processing techniques for accurate segmentation are needed. In this article, we present a modified deep semantic segmentation model that combines an EfficientNet-B3 backbone with UNet for reliable segmentation. We trained our model on a non-melanoma skin cancer histopathology dataset, segmenting each image into 12 classes. Our method outperforms the existing literature, increasing average class accuracy from 79% to 83% and overall accuracy from 85% to 94%.
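The two accuracy figures reported above are distinct metrics: overall accuracy weights every pixel equally, while average class accuracy is the mean of per-class recalls, so rare tissue classes count as much as common ones. A minimal sketch with toy counts (not the paper's data):

```python
def overall_accuracy(cm):
    """Fraction of correctly labeled pixels: trace over total of the confusion matrix."""
    correct = sum(cm[i][i] for i in range(len(cm)))
    total = sum(sum(row) for row in cm)
    return correct / total

def average_class_accuracy(cm):
    """Mean of per-class recalls (diagonal entry / row sum), so rare classes
    weigh as much as common ones."""
    recalls = [cm[i][i] / sum(cm[i]) for i in range(len(cm)) if sum(cm[i]) > 0]
    return sum(recalls) / len(recalls)

# Toy 2-class confusion matrix (rows = true class, columns = predicted class):
# the majority class dominates overall accuracy but not average class accuracy.
cm = [[90, 10],
      [5, 5]]
print(round(overall_accuracy(cm), 3))        # 0.864
print(round(average_class_accuracy(cm), 3))  # 0.7
```

A large gap between the two metrics, as in this toy example, signals that small classes are being segmented poorly.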


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Semantics; Skin Neoplasms; Skin; Humans; Skin Neoplasms/diagnostic imaging; Skin Neoplasms/pathology; Image Processing, Computer-Assisted/methods; Skin/diagnostic imaging; Skin/pathology; Deep Learning; Algorithms
2.
Front Immunol ; 15: 1453232, 2024.
Article in English | MEDLINE | ID: mdl-39372403

ABSTRACT

Objective: To develop a predictive model using weakly supervised deep learning to accurately forecast major pathological response (MPR) in patients with resectable non-small cell lung cancer (NSCLC) undergoing neoadjuvant chemoimmunotherapy (NICT), leveraging whole slide images (WSIs). Methods: This retrospective study examined pre-treatment WSIs from 186 patients with NSCLC using a weakly supervised learning framework. We employed advanced deep learning architectures, including DenseNet121, ResNet50, and Inception V3, to analyze WSIs at both the micro (patch) and macro (slide) levels. The training process incorporated innovative data augmentation and normalization techniques to bolster the robustness of the models. We evaluated the performance of these models against traditional clinical predictors and integrated them with a novel pathomics signature, developed using multi-instance learning algorithms that aggregate features from patch-level probability distributions. Results: Univariate and multivariable analyses confirmed histology as a statistically significant prognostic factor for MPR (P < 0.05). In patch-level evaluations, DenseNet121 led in the validation set with an area under the curve (AUC) of 0.656, surpassing ResNet50 (AUC = 0.626) and Inception V3 (AUC = 0.654), and showed strong generalization in external testing (AUC = 0.611). Further evaluation through visual inspection of patch-level data integrated into WSIs revealed XGBoost's superior class differentiation and generalization, achieving the highest AUC of 0.998 in training and robust scores of 0.818 in validation and 0.805 in testing. Integrating pathomics features with clinical data into a nomogram yielded AUCs of 0.819 in validation and 0.820 in testing, enhancing discriminative accuracy.
Gradient-weighted Class Activation Mapping (Grad-CAM) and feature aggregation methods notably boosted the model's interpretability and feature modeling. Conclusion: The application of weakly supervised deep learning to WSIs offers a powerful tool for predicting MPR in NSCLC patients treated with NICT.
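The patch-to-slide aggregation step mentioned above can be illustrated by collapsing each slide's variable-length list of patch probabilities into a fixed-length feature vector; the histogram-plus-statistics design below is a common multi-instance learning baseline and an assumption, not the paper's exact signature:

```python
def aggregate_patch_probs(probs, n_bins=10):
    """Summarize a variable number of patch-level probabilities as a
    fixed-length slide-level feature vector: a normalized histogram of the
    probability distribution plus the mean and the maximum."""
    hist = [0] * n_bins
    for p in probs:
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into the last bin
        hist[idx] += 1
    total = len(probs)
    features = [h / total for h in hist]  # normalized histogram
    features.append(sum(probs) / total)   # mean patch probability
    features.append(max(probs))           # most suspicious patch
    return features

feats = aggregate_patch_probs([0.1, 0.2, 0.85, 0.9, 1.0])
print(len(feats))  # 12
```

A downstream classifier such as XGBoost can then be trained on these fixed-length vectors regardless of how many patches each slide contains.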


Subjects
Carcinoma, Non-Small-Cell Lung; Deep Learning; Lung Neoplasms; Neoadjuvant Therapy; Humans; Carcinoma, Non-Small-Cell Lung/therapy; Carcinoma, Non-Small-Cell Lung/pathology; Retrospective Studies; Lung Neoplasms/therapy; Lung Neoplasms/pathology; Female; Male; Middle Aged; Aged; Supervised Machine Learning; Immunotherapy/methods; Treatment Outcome; Adult
3.
J Pathol Clin Res ; 10(6): e70004, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39358807

ABSTRACT

EGFR mutations are a major prognostic factor in lung adenocarcinoma. However, current detection methods require sufficient samples and are costly. Deep learning is promising for mutation prediction in histopathological image analysis but has limitations in that it does not sufficiently reflect tumor heterogeneity and lacks interpretability. In this study, we developed a deep learning model to predict the presence of EGFR mutations by analyzing histopathological patterns in whole slide images (WSIs). We also introduced the EGFR mutation prevalence (EMP) score, which quantifies EGFR prevalence in WSIs based on patch-level predictions, and evaluated its interpretability and utility. Our model estimates the probability of EGFR prevalence in each patch by partitioning the WSI based on multiple-instance learning and predicts the presence of EGFR mutations at the slide level. We utilized a patch-masking scheduler training strategy to enable the model to learn various histopathological patterns of EGFR. This study included 868 WSI samples from lung adenocarcinoma patients collected from three medical institutions: Hallym University Medical Center, Inha University Hospital, and Chungnam National University Hospital. For the test dataset, 197 WSIs were collected from Ajou University Medical Center to evaluate the presence of EGFR mutations. Our model demonstrated prediction performance with an area under the receiver operating characteristic curve of 0.7680 (0.7607-0.7720) and an area under the precision-recall curve of 0.8391 (0.8326-0.8430). The EMP score showed Spearman correlation coefficients of 0.4705 (p = 0.0087) for p.L858R and 0.5918 (p = 0.0037) for exon 19 deletions in 64 samples subjected to next-generation sequencing analysis. Additionally, high EMP scores were associated with papillary and acinar patterns (p = 0.0038 and p = 0.0255, respectively), whereas low EMP scores were associated with solid patterns (p = 0.0001). 
These results validate the reliability of our model and suggest that it can provide crucial information for rapid screening and treatment plans.
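A plausible minimal reading of the patch-based EMP score (the exact definition is the paper's; the threshold and formula here are assumptions for illustration) is the fraction of a slide's patches predicted mutation-positive:

```python
def emp_score(patch_probs, threshold=0.5):
    """Hypothetical EMP-style score: the fraction of patches whose predicted
    EGFR-mutation probability clears the threshold. The real definition is
    given in the paper; this sketch only illustrates the patch-to-slide idea."""
    if not patch_probs:
        return 0.0
    positive = sum(1 for p in patch_probs if p >= threshold)
    return positive / len(patch_probs)

print(emp_score([0.9, 0.8, 0.3, 0.2]))  # 0.5
```

A score of this form is directly interpretable as the prevalence of EGFR-like morphology across the slide, which is what the reported Spearman correlations against sequencing results evaluate.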


Subjects
Adenocarcinoma of Lung; Deep Learning; ErbB Receptors; Lung Neoplasms; Mutation; Humans; ErbB Receptors/genetics; Adenocarcinoma of Lung/genetics; Adenocarcinoma of Lung/pathology; Lung Neoplasms/genetics; Lung Neoplasms/pathology; DNA Mutational Analysis; Female; Image Interpretation, Computer-Assisted
4.
Discov Oncol ; 15(1): 545, 2024 Oct 10.
Article in English | MEDLINE | ID: mdl-39390246

ABSTRACT

This study aims to develop a deep learning (DL) model based on whole-slide images (WSIs) to predict the pathological stage of clear cell renal cell carcinoma (ccRCC). Histopathological images of 513 ccRCC patients were downloaded from The Cancer Genome Atlas (TCGA) database and randomly divided into training and validation sets at an 8:2 ratio. The CLAM algorithm was used to establish the DL model, and the stability of the model was evaluated on an external validation set. DL features were extracted from the model to construct a prognostic risk model, which was validated on an external dataset. The DL model showed excellent predictive ability, with an area under the curve (AUC) of 0.875 and an average accuracy of 0.809, indicating that it could reliably distinguish ccRCC patients at different stages from histopathological images. In addition, the prognostic risk model constructed from DL features showed that overall survival in the high-risk group was significantly lower than in the low-risk group (P = 0.003), and the AUCs for predicting 1-, 3-, and 5-year overall survival were 0.68, 0.69, and 0.69, respectively, indicating high sensitivity and specificity. Results on the validation set were consistent with the above. The DL model can therefore accurately predict the pathological stage and prognosis of ccRCC patients and provides a useful reference for clinical diagnosis.

5.
BMC Cancer ; 24(1): 1069, 2024 Aug 29.
Article in English | MEDLINE | ID: mdl-39210289

ABSTRACT

BACKGROUND: Thyroid cancer is a common thyroid malignancy. Most thyroid lesions require intraoperative frozen-section pathological diagnosis, which provides important information for precision surgery. With the development of digital whole slide images (WSIs), deep learning methods for histopathological classification of the thyroid gland (paraffin sections) have achieved outstanding results. The current study aims to clarify whether deep learning can assist pathological diagnosis of intraoperative frozen thyroid lesions. METHODS: We propose an artificial intelligence-assisted diagnostic system for frozen thyroid lesions that applies prior knowledge in tandem with a dichotomous judgment of whether the lesion is cancerous and a subsequent four-way judgment of the cancer type, categorizing frozen thyroid lesions into five classes: papillary thyroid carcinoma, medullary thyroid carcinoma, anaplastic thyroid carcinoma, follicular thyroid tumor, and non-cancerous lesion. We obtained 4409 frozen digital pathology sections (WSIs) of the thyroid from the First Affiliated Hospital of Sun Yat-sen University (SYSUFH) to train and test the model, and performance was validated by six-fold cross-validation; 101 papillary microcarcinoma sections were used to validate the system's sensitivity, and 1388 thyroid WSIs were used for evaluation on an external dataset. The deep learning models were compared on several metrics, including accuracy, F1 score, recall, precision, and AUC (area under the curve). RESULTS: We developed the first deep learning-based frozen thyroid diagnostic classifier for histopathological WSI classification of papillary carcinoma, medullary carcinoma, follicular tumor, anaplastic carcinoma, and non-carcinoma lesions. On test slides, the system had an accuracy of 0.9459, a precision of 0.9475, and an AUC of 0.9955. On the papillary carcinoma test slides, the system accurately predicted even lesions as small as 2 mm in diameter.
Tested with the acceleration component, slide cutting can be performed in 346.12 s and visual inference predictions can be obtained in 98.61 s, meeting the time requirements for intraoperative diagnosis. Our study employs a deep learning approach for high-precision classification of intraoperative frozen thyroid lesions in the clinical setting, with potential clinical implications for assisting pathologists and enabling precision surgery of thyroid lesions.
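The dichotomous-then-four-way scheme described above amounts to a two-stage cascade; a sketch with hypothetical probability inputs standing in for the real model outputs:

```python
CANCER_TYPES = ["papillary thyroid carcinoma", "medullary thyroid carcinoma",
                "anaplastic thyroid carcinoma", "follicular thyroid tumor"]

def classify_lesion(p_cancer, subtype_probs, threshold=0.5):
    """Two-stage cascade mirroring the scheme above: a binary cancerous vs.
    non-cancerous gate, then a four-way subtype decision for cancerous lesions.
    The probability inputs stand in for real model outputs."""
    if p_cancer < threshold:
        return "non-cancerous lesion"
    best = max(range(len(CANCER_TYPES)), key=lambda i: subtype_probs[i])
    return CANCER_TYPES[best]

print(classify_lesion(0.92, [0.7, 0.1, 0.1, 0.1]))  # papillary thyroid carcinoma
print(classify_lesion(0.20, [0.7, 0.1, 0.1, 0.1]))  # non-cancerous lesion
```

Splitting the decision this way lets the first-stage gate be tuned for high sensitivity independently of the subtype classifier.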


Assuntos
Aprendizado Profundo , Secções Congeladas , Câncer Papilífero da Tireoide , Neoplasias da Glândula Tireoide , Humanos , Neoplasias da Glândula Tireoide/patologia , Neoplasias da Glândula Tireoide/diagnóstico , Neoplasias da Glândula Tireoide/cirurgia , Câncer Papilífero da Tireoide/patologia , Câncer Papilífero da Tireoide/diagnóstico , Câncer Papilífero da Tireoide/cirurgia , Carcinoma Papilar/patologia , Carcinoma Papilar/cirurgia , Carcinoma Papilar/diagnóstico , Adenocarcinoma Folicular/patologia , Adenocarcinoma Folicular/diagnóstico , Adenocarcinoma Folicular/cirurgia , Glândula Tireoide/patologia , Glândula Tireoide/cirurgia , Carcinoma Neuroendócrino/patologia , Carcinoma Neuroendócrino/diagnóstico , Carcinoma Neuroendócrino/cirurgia , Feminino , Masculino , Pessoa de Meia-Idade , Adulto , Período Intraoperatório , Carcinoma Anaplásico da Tireoide/patologia , Carcinoma Anaplásico da Tireoide/diagnóstico , Carcinoma Anaplásico da Tireoide/cirurgia
6.
Phys Med Biol ; 69(18), 2024 Sep 13.
Article in English | MEDLINE | ID: mdl-39191290

ABSTRACT

Objective. In this study, we propose a semi-supervised learning (SSL) scheme using a patch-based deep learning (DL) framework to tackle the challenge of high-precision classification of seven lung tumor growth patterns, despite having only a small amount of labeled data in whole slide images (WSIs). This scheme aims to enhance generalization with limited data and reduce dependence on large amounts of labeled data, addressing the common challenge of high demand for labeled data in medical image analysis. Approach. The study employs an SSL approach enhanced by a dynamic confidence threshold mechanism that adjusts based on the quantity and quality of the pseudo-labels generated. This dynamic thresholding helps avoid both the class imbalance of pseudo-labels and the low pseudo-label counts that can result from a higher fixed threshold. Furthermore, the research introduces a multi-teacher knowledge distillation (MTKD) technique, which adaptively weights predictions from multiple teacher models to transfer reliable knowledge and safeguard student models from low-quality teacher predictions. Main results. The framework underwent rigorous training and evaluation on a dataset of 150 WSIs, each representing one of the seven growth patterns. The experimental results demonstrate that the framework classifies lung tumor growth patterns in histopathology images with high accuracy, performing comparably to fully supervised models and human pathologists. In addition, the framework's evaluation metrics on a publicly available dataset exceed those of previous studies, indicating good generalizability. Significance. This research demonstrates that an SSL approach can achieve results comparable to fully supervised models and expert pathologists, opening new possibilities for efficient and cost-effective medical image analysis.
The implementation of dynamic confidence thresholding and MTKD techniques represents a significant advancement in applying DL to complex medical image analysis tasks. This advancement could lead to faster and more accurate diagnoses, ultimately improving patient outcomes and fostering the overall progress of healthcare technology.
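One simplified way to realize a dynamic confidence threshold of the kind described above (the update rule here is an assumption, not the paper's exact formula) is to lower the acceptance threshold for classes that have so far produced few pseudo-labels:

```python
def dynamic_thresholds(base, class_counts, floor=0.5):
    """Per-class pseudo-label thresholds: classes with fewer accepted
    pseudo-labels get a lower threshold (down to `floor`) so their pool can
    catch up, countering pseudo-label class imbalance."""
    max_count = max(class_counts) if max(class_counts) > 0 else 1
    return [max(floor, base * (c / max_count)) for c in class_counts]

def accept_pseudo_label(probs, thresholds):
    """Return the argmax class index if its probability clears that class's
    threshold; otherwise reject the sample (return None)."""
    best = max(range(len(probs)), key=lambda i: probs[i])
    return best if probs[best] >= thresholds[best] else None

# Class 0 already has many pseudo-labels, so it keeps the strict base threshold;
# the under-represented classes are relaxed down to the floor.
th = dynamic_thresholds(0.95, [100, 10, 50])
print(th)  # [0.95, 0.5, 0.5]
```

A fixed 0.95 threshold would reject the same under-represented classes that this scheme deliberately admits.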


Subjects
Adenocarcinoma of Lung; Image Processing, Computer-Assisted; Lung Neoplasms; Humans; Lung Neoplasms/diagnostic imaging; Lung Neoplasms/pathology; Adenocarcinoma of Lung/diagnostic imaging; Adenocarcinoma of Lung/pathology; Image Processing, Computer-Assisted/methods; Supervised Machine Learning; Deep Learning
7.
Med Image Anal ; 97: 103294, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39128377

ABSTRACT

Multiple instance learning (MIL)-based methods have been widely adopted to process whole slide images (WSIs) in computational pathology. Due to the sparse slide-level supervision, these methods usually localize tumor regions poorly, leading to poor interpretability, and they lack robust uncertainty estimation of prediction results, leading to poor reliability. To address these two limitations, we propose an explainable and evidential multiple instance learning (E2-MIL) framework for whole slide image classification. E2-MIL is composed of three modules: a detail-aware attention distillation module (DAM), a structure-aware attention refinement module (SRM), and an uncertainty-aware instance classifier (UIC). Specifically, DAM helps the global network locate more detail-aware positive instances by utilizing complementary sub-bags to learn detailed attention knowledge from the local network. In addition, a masked self-guidance loss is introduced to help bridge the gap between slide-level labels and instance-level classification tasks. SRM generates a structure-aware attention map that locates the entire tumor region structure by effectively modeling the spatial relations between clustered instances. Moreover, UIC provides accurate instance-level classification results and robust predictive uncertainty estimation, improving model reliability based on subjective logic theory. Extensive experiments on three large multi-center subtyping datasets demonstrate the superior slide-level and instance-level performance of E2-MIL.


Subjects
Image Interpretation, Computer-Assisted; Humans; Image Interpretation, Computer-Assisted/methods; Reproducibility of Results; Algorithms; Machine Learning
8.
Quant Imaging Med Surg ; 14(8): 5831-5844, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39144041

ABSTRACT

Background: Axillary lymph node (ALN) status is a crucial prognostic indicator for breast cancer metastasis, with manual interpretation of whole slide images (WSIs) being the current standard practice. However, this method is subjective and time-consuming. Recent advancements in deep learning-based methods for medical image analysis have shown promise in improving clinical diagnosis. This study leverages these advancements to develop a deep learning model, based on features extracted from primary tumor biopsies, for preoperatively identifying ALN metastasis in early-stage breast cancer patients with negative nodes. Methods: We present DLCNBC-SA, a deep learning-based network tailored for core needle biopsy and clinical data feature extraction that integrates a self-attention mechanism (CNBC-SA). The proposed model consists of a convolutional neural network (CNN)-based feature extractor and an improved self-attention module, which preserves the independence of features in WSIs and enriches their representation. To validate the performance of the proposed model, we conducted comparative experiments and ablation studies on publicly available datasets, with verification through quantitative analysis. Results: The comparative experiments illustrate the superior performance of the proposed model on the binary ALN classification task compared with alternative methods. Our method achieved outstanding performance [area under the curve (AUC): 0.882], significantly surpassing the state-of-the-art (SOTA) method on the same dataset (AUC: 0.862). The ablation experiments reveal that incorporating RandomRotation data augmentation and the Adadelta optimizer effectively enhances the performance of the proposed model.
Conclusions: The experimental results demonstrate that the proposed model outperforms the SOTA model on the same dataset, establishing its reliability as an assistant to pathologists analyzing WSIs of breast cancer and potentially enhancing both the efficiency and accuracy of the diagnostic process.

9.
Virchows Arch ; 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39107524

ABSTRACT

The aim of the present study was to develop and validate a quantitative image analysis (IA) algorithm to aid pathologists in assessing bright-field HER2 in situ hybridization (ISH) tests in solid cancers. A cohort of 80 sequential cases (40 HER2-negative and 40 HER2-positive) was evaluated for HER2 gene amplification with bright-field ISH. We developed an IA algorithm using the ISH module of the HALO software to automatically quantify HER2 and CEP17 copy numbers per cell as well as the HER2/CEP17 ratio. We observed a high correlation between visual and IA quantification for the HER2/CEP17 ratio and the average HER2 and CEP17 copy numbers per cell (Pearson's correlation coefficients of 0.842, 0.916, and 0.765, respectively). IA counted from 124 to 47,044 cells per case (median, 5565 cells). With IA, the margin of error for the HER2/CEP17 ratio decreased from a median of 0.23 to 0.02, and for the average HER2 copy number per cell from a median of 0.49 to 0.04. Curve estimation regression models showed that a minimum of 469 or 953 invasive cancer cells per case is needed to reach an average margin of error below 0.1 for the HER2/CEP17 ratio or the average HER2 copy number per cell, respectively. Lastly, IA took on average 212.1 s per case, evaluating about 130 cells/s and requiring 6.7 s/mm2. The concordance of the IA software with visual scoring was 95%, with a sensitivity of 90% and a specificity of 100%. All four discordant cases achieved concordant results after adjustment of the region of interest. In conclusion, this validation study underscores the usefulness of IA in HER2 ISH testing, displaying excellent concordance with visual scoring and significantly reducing margins of error.
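The scored quantities follow directly from per-cell signal counts: average HER2 and CEP17 copy numbers and their ratio. A minimal sketch with hypothetical cell counts (the 2.0 ratio cutoff is the conventional ISH amplification threshold, used here only for illustration):

```python
def her2_ish_metrics(her2_counts, cep17_counts):
    """From per-cell HER2 and CEP17 signal counts, compute the average copy
    number of each probe per cell and the HER2/CEP17 ratio."""
    n = len(her2_counts)
    avg_her2 = sum(her2_counts) / n
    avg_cep17 = sum(cep17_counts) / n
    return avg_her2, avg_cep17, avg_her2 / avg_cep17

# Four hypothetical tumor cells, not data from the study.
avg_her2, avg_cep17, ratio = her2_ish_metrics([6, 8, 5, 9], [2, 2, 2, 2])
print(ratio)         # 3.5
print(ratio >= 2.0)  # True: amplified under the conventional ratio cutoff
```

Counting thousands of cells, as the IA software does, shrinks the sampling error of these averages, which is exactly the margin-of-error reduction the study reports.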

10.
J Med Imaging (Bellingham) ; 11(4): 047501, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39087085

ABSTRACT

Purpose: Endometrial cancer (EC) is one of the most common types of cancer affecting women. While hematoxylin and eosin (H&E) staining remains the standard for histological analysis, immunohistochemistry (IHC) provides molecular-level visualizations. Our study proposes a digital staining method to generate the hematoxylin-3,3'-diaminobenzidine (H-DAB) IHC stain of Ki-67 for the whole slide image of the EC tumor from its H&E counterpart. Approach: We employed a color unmixing technique to yield stain density maps from the optical density (OD) of the stains and utilized the U-Net for end-to-end inference. The effectiveness of the proposed method was evaluated using the Pearson correlation between the digital and physical stains' labeling index (LI), a key metric indicating tumor proliferation. Two cross-validation schemes were designed in our study: intraslide validation and cross-case validation (CCV). In the widely used intraslide scheme, the training and validation sets might include different regions from the same slide; the rigorous CCV scheme strictly prohibits any validation slide from contributing to training. Results: The proposed method yielded a high-resolution digital stain with preserved histological features and a reliable correlation with the physical stain in terms of the Ki-67 LI. In the intraslide scheme, using intraslide patches resulted in a biased accuracy (e.g., R = 0.98) significantly higher than that of CCV. The CCV scheme retained a fair correlation (e.g., R = 0.66) between the LIs calculated from the digital stain and its physical IHC counterpart. Inferring the OD of the IHC stain from that of the H&E stain enhanced the correlation metric, outperforming the baseline model using the RGB space. Conclusions: Our study revealed that molecule-level insights can be obtained from H&E images using deep learning.
Furthermore, the improvement brought via OD inference indicated a possible method for creating more generalizable models for digital staining via per-stain analysis.
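The optical density used for color unmixing follows the Beer-Lambert relation OD = -log10(I / I0), with I the measured channel intensity and I0 the illumination maximum (255 for 8-bit images). A minimal per-channel conversion:

```python
import math

def to_optical_density(rgb, i0=255.0, eps=1e-6):
    """Convert 8-bit channel intensities to optical density via Beer-Lambert:
    OD = -log10(I / I0). Darker pixels (more stain absorbed) have higher OD;
    eps guards against log(0) for fully dark pixels."""
    return [-math.log10(max(c, eps) / i0) for c in rgb]

print(to_optical_density([255, 255, 255]))  # white pixel: OD ~ 0 in every channel
od = to_optical_density([128, 64, 32])      # darker channels give higher OD
```

Working in OD space makes stain contributions approximately additive, which is why unmixing into per-stain density maps is done there rather than in raw RGB.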

11.
Med Image Anal ; 97: 103257, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38981282

ABSTRACT

The alignment of tissue between histopathological whole-slide images (WSIs) is crucial for research and clinical applications. Advances in computing, deep learning, and the availability of large WSI datasets have revolutionised WSI analysis, yet the current state of the art in WSI registration remains unclear. To address this, we conducted the ACROBAT challenge, based on the largest WSI registration dataset to date, comprising 4,212 WSIs from 1,152 breast cancer patients. The challenge objective was to align WSIs of tissue stained with routine diagnostic immunohistochemistry to their H&E-stained counterparts. We compare the performance of eight WSI registration algorithms, including an investigation of the impact of different WSI properties and clinical covariates. We find that conceptually distinct WSI registration methods can achieve highly accurate registration and identify covariates that impact performance across methods. These results provide a comparison of current WSI registration methods and guide researchers in selecting and developing methods.


Assuntos
Algoritmos , Neoplasias da Mama , Humanos , Neoplasias da Mama/diagnóstico por imagem , Neoplasias da Mama/patologia , Feminino , Interpretação de Imagem Assistida por Computador/métodos , Imuno-Histoquímica
12.
Cancer Cytopathol ; 2024 Jul 14.
Article in English | MEDLINE | ID: mdl-39003588

ABSTRACT

BACKGROUND: This study evaluated the diagnostic effectiveness of the AIxURO platform, an artificial intelligence-based tool, to support urine cytology for bladder cancer management, which typically requires experienced cytopathologists and substantial diagnosis time. METHODS: One cytopathologist and two cytotechnologists reviewed 116 urine cytology slides and corresponding whole-slide images (WSIs) from urology patients. They used three diagnostic modalities: microscopy, WSI review, and AIxURO, per The Paris System for Reporting Urinary Cytology (TPS) criteria. Performance metrics, including TPS-guided and binary diagnosis, inter- and intraobserver agreement, and screening time, were compared across all methods and reviewers. RESULTS: AIxURO improved diagnostic accuracy by increasing sensitivity (from 25.0%-30.6% to 63.9%), positive predictive value (PPV; from 21.6%-24.3% to 31.1%), and negative predictive value (NPV; from 91.3%-91.6% to 95.3%) for atypical urothelial cell (AUC) cases. For suspicious for high-grade urothelial carcinoma (SHGUC) cases, it improved sensitivity (from 15.2%-27.3% to 33.3%), PPV (from 31.3%-47.4% to 61.1%), and NPV (from 91.6%-92.7% to 93.3%). Binary diagnoses exhibited an improvement in sensitivity (from 77.8%-82.2% to 90.0%) and NPV (from 91.7%-93.4% to 95.8%). Interobserver agreement across all methods showed moderate consistency (κ = 0.57-0.61), with the cytopathologist demonstrating higher intraobserver agreement than the two cytotechnologists across the methods (κ = 0.75-0.88). AIxURO significantly reduced screening time by 52.3%-83.2% from microscopy and 43.6%-86.7% from WSI review across all reviewers. Screening-positive (AUC+) cases required more time than negative cases across all methods and reviewers. CONCLUSIONS: AIxURO demonstrates the potential to improve both sensitivity and efficiency in bladder cancer diagnostics via urine cytology. 
Its integration into the cytopathological screening workflow could markedly decrease screening times, which would improve overall diagnostic processes.
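The sensitivity, PPV, and NPV figures above all derive from the same 2x2 confusion counts; for reference, with toy counts rather than the study's data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 screening metrics from true/false positive/negative counts."""
    return {
        "sensitivity": tp / (tp + fn),  # of truly positive cases, fraction caught
        "specificity": tn / (tn + fp),  # of truly negative cases, fraction cleared
        "ppv": tp / (tp + fp),          # of positive calls, fraction correct
        "npv": tn / (tn + fn),          # of negative calls, fraction correct
    }

m = diagnostic_metrics(tp=18, fp=12, fn=2, tn=84)
print(round(m["sensitivity"], 3))  # 0.9
print(round(m["npv"], 3))          # 0.977
```

Note that NPV and PPV, unlike sensitivity and specificity, shift with the prevalence of positives in the reviewed cohort, which is worth remembering when comparing the figures across studies.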

13.
Front Oncol ; 14: 1346237, 2024.
Article in English | MEDLINE | ID: mdl-39035745

ABSTRACT

Pancreatic cancer is one of the most lethal cancers worldwide, with a 5-year survival rate of less than 5%, the lowest of all cancer types. Pancreatic ductal adenocarcinoma (PDAC) is the most common and aggressive pancreatic cancer and has been classified as a health emergency in the past few decades. The histopathological diagnosis and prognostic evaluation of PDAC are time-consuming, laborious, and challenging under current clinical practice conditions. Pathological artificial intelligence (AI) research has been actively conducted recently, but accessing medical data is challenging: the amount of open pathology data is small, and the absence of openly available annotation data drawn by medical staff makes pathology AI research difficult. Here, we provide easily accessible, high-quality annotation data to address these obstacles. For data evaluation, we used supervised learning with a deep convolutional neural network to segment 11 PDAC histopathological whole slide images (WSIs) from an open WSI dataset, annotated directly by medical staff. We visualized the segmentation results with a Dice score of 73% on the WSIs, including PDAC areas, thus identifying areas important for PDAC diagnosis and demonstrating high data quality. Additionally, pathologists assisted by AI can significantly increase their work efficiency. The pathological AI guidelines we propose are effective in developing histopathological AI for PDAC and are significant for the clinical field.
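The Dice score used to evaluate the segmentations is twice the overlap between predicted and ground-truth masks divided by the sum of their sizes, Dice = 2|A∩B| / (|A| + |B|); on flattened binary masks:

```python
def dice_score(pred, truth):
    """Dice coefficient for two equal-length binary masks (flattened).
    Two empty masks are treated as a perfect match."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    return 2 * intersection / size if size else 1.0

pred  = [1, 1, 1, 0, 0, 0]
truth = [1, 1, 0, 1, 0, 0]
print(dice_score(pred, truth))  # 2*2 / (3+3) = 0.666...
```

Dice rewards overlap relative to the combined mask sizes, so it is less dominated by the large background region than plain pixel accuracy.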

14.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38960406

ABSTRACT

Spatial transcriptomics data play a crucial role in cancer research, providing a nuanced understanding of the spatial organization of gene expression within tumor tissues. Unraveling the spatial dynamics of gene expression can unveil key insights into tumor heterogeneity and aid in identifying potential therapeutic targets. However, in many large-scale cancer studies, spatial transcriptomics data are limited, with bulk RNA-seq and corresponding whole slide image (WSI) data being more common (e.g., the TCGA project). To address this gap, there is a critical need for methodologies that can estimate gene expression at near-cell (spot) level resolution from existing WSI and bulk RNA-seq data. This approach is essential for reanalyzing expansive cohort studies and uncovering novel biomarkers that were overlooked in the initial assessments. In this study, we present STGAT (Spatial Transcriptomics Graph Attention Network), a novel approach leveraging graph attention networks (GAT) to discern spatial dependencies among spots. Trained on spatial transcriptomics data, STGAT estimates gene expression profiles at spot-level resolution and predicts whether each spot represents tumor or non-tumor tissue, especially in patient samples where only WSI and bulk RNA-seq data are available. Comprehensive tests on two breast cancer spatial transcriptomics datasets demonstrated that STGAT outperformed existing methods in accurately predicting gene expression. Further analyses using the TCGA breast cancer dataset revealed that gene expression estimated from tumor-only spots (as predicted by STGAT) provides more accurate molecular signatures for breast cancer subtype and tumor stage prediction, and leads to improved patient survival and disease-free survival analyses. Availability: Code is available at https://github.com/compbiolabucf/STGAT.


Assuntos
Perfilação da Expressão Gênica , RNA-Seq , Transcriptoma , Humanos , RNA-Seq/métodos , Perfilação da Expressão Gênica/métodos , Neoplasias da Mama/genética , Neoplasias da Mama/metabolismo , Regulação Neoplásica da Expressão Gênica , Biologia Computacional/métodos , Feminino , Biomarcadores Tumorais/genética , Biomarcadores Tumorais/metabolismo
15.
Lab Invest ; 104(8): 102094, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38871058

ABSTRACT

Accurate assessment of epidermal growth factor receptor (EGFR) mutation status and subtype is critical for the treatment of non-small cell lung cancer patients. Conventional molecular testing methods for detecting EGFR mutations have limitations. In this study, an artificial intelligence-powered deep learning framework was developed for the weakly supervised prediction of EGFR mutations in non-small cell lung cancer from hematoxylin and eosin-stained histopathology whole-slide images. The study cohort was partitioned into training and validation subsets. Foreground regions containing tumor tissue were extracted from whole-slide images. A convolutional neural network employing a contrastive learning paradigm was implemented to extract patch-level morphologic features. These features were aggregated using a vision transformer-based model to predict EGFR mutation status and classify patient cases. The established prediction model was validated on unseen data sets. In internal validation with a cohort from the University of Science and Technology of China (n = 172), the model achieved patient-level areas under the receiver-operating characteristic curve (AUCs) of 0.927 and 0.907, sensitivities of 81.6% and 83.3%, and specificities of 93.0% and 92.3%, for surgical resection and biopsy specimens, respectively, in EGFR mutation subtype prediction. External validation with cohorts from the Second Affiliated Hospital of Anhui Medical University and the First Affiliated Hospital of Wannan Medical College (n = 193) yielded patient-level AUCs of 0.849 and 0.867, sensitivities of 79.2% and 80.7%, and specificities of 91.7% and 90.7% for surgical and biopsy specimens, respectively. Further validation with The Cancer Genome Atlas data set (n = 81) showed an AUC of 0.861, a sensitivity of 84.6%, and a specificity of 90.5%. 
Deep learning solutions demonstrate potential advantages for automated, noninvasive, fast, cost-effective, and accurate inference of EGFR alterations from histomorphology. Integration of such artificial intelligence frameworks into routine digital pathology workflows could augment existing molecular testing pipelines.
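The patient-level metrics quoted above (AUC, sensitivity, specificity) follow directly from predicted probabilities and ground-truth mutation labels. A minimal sketch of how they can be computed; the function name and the 0.5 operating threshold are illustrative, not taken from the paper:

```python
def auc_sens_spec(labels, scores, threshold=0.5):
    """Patient-level AUC (Mann-Whitney rank form), plus sensitivity and
    specificity at a fixed operating threshold."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    # AUC = P(score of a positive > score of a negative); ties count half
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg))
    sens = sum(s >= threshold for s in pos) / len(pos)
    spec = sum(s < threshold for s in neg) / len(neg)
    return auc, sens, spec
```

With perfectly ranked scores the rank statistic reaches 1.0; the threshold then trades sensitivity against specificity exactly as in the cohort figures reported above.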


Subjects
Carcinoma, Non-Small-Cell Lung, Deep Learning, ErbB Receptors, Hematoxylin, Lung Neoplasms, Mutation, Humans, ErbB Receptors/genetics, Carcinoma, Non-Small-Cell Lung/genetics, Carcinoma, Non-Small-Cell Lung/pathology, Lung Neoplasms/genetics, Lung Neoplasms/pathology, Eosine Yellowish-(YS), Female, Male, Middle Aged, Aged
16.
Dig Dis Sci ; 69(8): 2985-2995, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38837111

ABSTRACT

BACKGROUND: Colorectal cancer (CRC) is a malignant tumor of the digestive tract with both high incidence and high mortality. Early detection and intervention could improve patients' clinical outcomes and survival. METHODS: This study computationally investigates a set of prognostic tissue and cell features from diagnostic tissue slides. Combined with clinical prognostic variables, the pathological image features can predict the prognosis of CRC patients. Our CRC prognosis prediction pipeline consisted of three sequential modules: (1) a MultiTissue Net to delineate the outlines of different tissue types within the whole-slide image (WSI) of CRC for further ROI selection by pathologists; (2) development of three-level quantitative image metrics related to tissue composition, cell shape, and hidden features from a deep network; and (3) fusion of the multi-level features to build a prognostic CRC model for predicting survival. RESULTS: Experimental results suggest that each group of features has a particular relationship with the prognosis of patients in the independent test set. In the fused-feature combination experiment, the accuracy of predicting patients' prognosis and survival status was 81.52%, with an AUC of 0.77. CONCLUSION: This paper constructs a model that predicts the postoperative survival of patients by using image features and clinical information. Some features were found to be associated with patient prognosis and survival.
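The fusion step in module (3) amounts to combining heterogeneous feature groups (tissue composition, cell shape, deep-network features, clinical variables) into one vector before fitting the prognostic model. A hedged sketch, assuming simple per-group min-max normalisation so that no single group dominates by scale (the paper does not specify its normalisation scheme):

```python
def fuse_features(tissue_feats, cell_feats, deep_feats, clinical):
    """Concatenate multi-level feature groups into one fused vector,
    normalising each group to [0, 1] independently first."""
    def minmax(xs):
        lo, hi = min(xs), max(xs)
        if hi == lo:  # constant group: map everything to 0
            return [0.0 for _ in xs]
        return [(x - lo) / (hi - lo) for x in xs]

    fused = []
    for group in (tissue_feats, cell_feats, deep_feats, clinical):
        fused.extend(minmax(group))
    return fused
```

The fused vector would then feed whatever classifier or survival model the pipeline trains downstream.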


Subjects
Colorectal Neoplasms, Humans, Colorectal Neoplasms/pathology, Colorectal Neoplasms/mortality, Prognosis, Male, Female, Image Interpretation, Computer-Assisted, Predictive Value of Tests
17.
Biomed Phys Eng Express ; 10(5)2024 Jul 17.
Article in English | MEDLINE | ID: mdl-38925106

ABSTRACT

Detecting the Kirsten Rat Sarcoma Virus (KRAS) gene mutation is significant for colorectal cancer (CRC) patients. The KRAS gene encodes a protein involved in the epidermal growth factor receptor (EGFR) signaling pathway, and mutations in this gene can negatively impact the use of monoclonal antibodies in anti-EGFR therapy and affect treatment decisions. Currently, commonly used methods like next-generation sequencing (NGS) identify KRAS mutations but are expensive, time-consuming, and may not be suitable for every cancer patient sample. To address these challenges, we have developed KRASFormer, a novel framework that predicts KRAS gene mutations from Haematoxylin and Eosin (H&E)-stained WSIs, which are widely available for most CRC patients. KRASFormer consists of two stages: the first stage filters out non-tumour regions and selects only tumour cells using a quality screening mechanism, and the second stage predicts the KRAS gene as either 'wildtype' or 'mutant' using a Vision Transformer-based XCiT method. The XCiT employs cross-covariance attention to capture clinically meaningful long-range representations of textural patterns in tumour tissue and KRAS-mutant cells. We evaluated the performance of the first stage on an independent CRC-5000 dataset, and the second stage on both The Cancer Genome Atlas colon and rectal cancer (TCGA-CRC-DX) and in-house cohorts. The results of our experiments showed that XCiT outperformed existing state-of-the-art methods, achieving AUCs for ROC curves of 0.691 and 0.653 on the TCGA-CRC-DX and in-house datasets, respectively. Our findings emphasize three key consequences: the potential of using H&E-stained tissue slide images to predict KRAS gene mutations as a cost-effective and time-efficient means of guiding treatment choice for CRC patients; the increase in performance metrics of a Transformer-based model; and the value of collaboration between pathologists and data scientists in deriving a morphologically meaningful model.
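The cross-covariance attention at the heart of XCiT can be illustrated in a few lines: instead of the N×N token-token map of standard self-attention, it builds a d×d attention map over feature channels, which stays cheap when the number of tumour-tile tokens N is large. A simplified single-head sketch in plain Python (XCiT additionally uses a learned temperature and multiple heads; the softmax axis chosen here is one common convention, not necessarily the paper's exact implementation):

```python
import math

def l2norm_cols(M):
    """L2-normalise each column (feature channel) over the token axis."""
    n, d = len(M), len(M[0])
    out = [[0.0] * d for _ in range(n)]
    for j in range(d):
        norm = math.sqrt(sum(M[i][j] ** 2 for i in range(n))) or 1.0
        for i in range(n):
            out[i][j] = M[i][j] / norm
    return out

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def xc_attention(Q, K, V, tau=1.0):
    """Cross-covariance attention: a d-by-d map mixes feature channels,
    avoiding the N-by-N token-token map of standard self-attention."""
    Qh, Kh = l2norm_cols(Q), l2norm_cols(K)
    n, d = len(Q), len(Q[0])
    # A[j][k] = (K̂ᵀ Q̂)[j][k] / tau, softmaxed over k for each channel j
    A = []
    for j in range(d):
        row = [sum(Kh[i][j] * Qh[i][k] for i in range(n)) / tau for k in range(d)]
        A.append(softmax(row))
    # output tokens: V (N×d) times A (d×d)
    return [[sum(V[i][j] * A[j][k] for j in range(d)) for k in range(d)]
            for i in range(n)]
```

Because the attention map scales with d² rather than N², runtime grows only linearly in the number of tile tokens, which is what makes this attention practical for gigapixel WSIs.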


Subjects
Colorectal Neoplasms, Mutation, Proto-Oncogene Proteins p21(ras), Humans, Colorectal Neoplasms/genetics, Colorectal Neoplasms/pathology, Proto-Oncogene Proteins p21(ras)/genetics, Algorithms, ErbB Receptors/genetics, High-Throughput Nucleotide Sequencing/methods, Image Processing, Computer-Assisted/methods, ROC Curve
18.
ArXiv ; 2024 Apr 11.
Article in English | MEDLINE | ID: mdl-38903738

ABSTRACT

Whole Slide Images (WSI), obtained by high-resolution digital scanning of microscope slides at multiple scales, are the cornerstone of modern Digital Pathology. However, they represent a particular challenge to AI-based/AI-mediated analysis because pathology labeling is typically done at the slide level rather than the tile level. It is not just that medical diagnoses are recorded at the specimen level; the detection of oncogene mutations is also experimentally obtained, and recorded by initiatives like The Cancer Genome Atlas (TCGA), at the slide level. This configures a dual challenge: (a) accurately predicting the overall cancer phenotype and (b) finding out which cellular morphologies are associated with it at the tile level. To address these challenges, a weakly supervised Multiple Instance Learning (MIL) approach was explored for two prevalent cancer types, Invasive Breast Carcinoma (TCGA-BRCA) and Lung Squamous Cell Carcinoma (TCGA-LUSC). This approach was explored for tumor detection at low magnification levels and for TP53 mutations at various levels. Our results show that a novel additive implementation of MIL matched the performance of the reference implementation (AUC 0.96) and was only slightly outperformed by Attention MIL (AUC 0.97). More interestingly from the perspective of the molecular pathologist, these different AI architectures identify distinct sensitivities to morphological features (through the detection of Regions of Interest, RoI) at different magnification levels. Tellingly, TP53 mutation was most sensitive to features at the higher magnifications, where cellular morphology is resolved.
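The two pooling strategies compared above differ only in how tile-level evidence becomes a slide-level prediction. A toy sketch of both, using scalar per-tile logits for brevity (real implementations pool feature vectors through learned layers):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def additive_mil(tile_logits):
    """Additive MIL: the slide logit is the mean of per-tile logits, so each
    tile's contribution to the final call is directly interpretable as an RoI."""
    return sigmoid(sum(tile_logits) / len(tile_logits))

def attention_mil(tile_logits, attn_logits):
    """Attention MIL: a softmax over learned attention logits decides how
    much each tile counts toward the slide-level prediction."""
    m = max(attn_logits)
    w = [math.exp(a - m) for a in attn_logits]
    s = sum(w)
    slide_logit = sum((wi / s) * t for wi, t in zip(w, tile_logits))
    return sigmoid(slide_logit)
```

The additive form trades a little AUC for transparency, which matches the observation above that the architectures differ mainly in which morphological regions they highlight.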

19.
Sci Rep ; 14(1): 13304, 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38858367

ABSTRACT

The limited field of view of high-resolution microscopic images hinders the study of biological samples in a single shot. Stitching of the microscope images (tiles) captured by the whole-slide imaging (WSI) technique solves this problem. However, stitching is challenging due to the repetitive textures of tissues, the non-informative background of the slide, and the large number of tiles, which impact performance and computational time. To address these challenges, we proposed the Fast and Robust Microscopic Image Stitching (FRMIS) algorithm, which relies on pairwise and global alignment. Speeded-Up Robust Features (SURF) were extracted and matched within a small part of the overlapping region to compute the transformation and align two neighboring tiles. In cases where the transformation could not be computed due to an insufficient number of matched features, features were extracted from the entire overlapping region. This enhances the efficiency of the algorithm, since most of the computational load is related to pairwise registration, and reduces the misalignment that may occur when matching duplicated features in tiles with repetitive textures. Global alignment was then achieved by constructing a weighted graph, where the weight of each edge is determined by the normalized inverse of the number of matched features between two tiles. FRMIS has been evaluated on experimental and synthetic datasets from different modalities with different numbers of tiles and overlaps, demonstrating faster stitching than existing algorithms such as the Microscopy Image Stitching Tool (MIST) toolbox. FRMIS outperforms MIST by 481% for bright-field, 259% for phase-contrast, and 282% for fluorescence modalities, while also being robust to uneven illumination.
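The global-alignment step described above can be sketched as a graph problem: tiles are nodes, edge weights are the inverse of the matched-feature counts (so well-matched pairs are cheap), and a minimum spanning tree selects the most reliable chain of pairwise transforms. A small illustration using Kruskal's algorithm; the function name and exact weighting are assumptions for illustration, not the paper's code:

```python
def mst_alignment_order(num_tiles, pair_matches):
    """Given {(tile_a, tile_b): matched_feature_count}, build the weighted
    graph (weight = 1 / match count) and return a minimum-spanning-tree edge
    list via Kruskal's algorithm with union-find. The real pipeline would
    then compose pairwise transforms along the tree edges."""
    edges = sorted((1.0 / m, a, b) for (a, b), m in pair_matches.items() if m > 0)
    parent = list(range(num_tiles))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for _w, a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            tree.append((a, b))
    return tree
```

Because each tile's global position is reached through its strongest matches, a single weak pairwise registration cannot corrupt the whole mosaic.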

20.
Cancers (Basel) ; 16(11)2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38893251

ABSTRACT

The presence of spread through air spaces (STAS) in early-stage lung adenocarcinoma is a significant prognostic factor associated with disease recurrence and poor outcomes. Although current STAS detection methods rely on pathological examination, the advent of artificial intelligence (AI) offers opportunities for automated histopathological image analysis. This study developed a deep learning (DL) model for STAS prediction and investigated the correlation between the prediction results and patient outcomes. To develop the DL-based STAS prediction model, 1053 digital pathology whole-slide images (WSIs) from the competition dataset were enrolled in the training set, and 227 WSIs from the National Taiwan University Hospital were enrolled for external validation. A YOLOv5-based framework comprising preprocessing, candidate detection, false-positive reduction, and patient-based prediction was proposed for STAS prediction. The model achieved an area under the curve (AUC) of 0.83 in predicting STAS presence, with 72% accuracy, 81% sensitivity, and 63% specificity. Additionally, the DL model demonstrated prognostic value for disease-free survival when compared with pathological evaluation. These findings suggest that DL-based STAS prediction could serve as an adjunctive screening tool and facilitate clinical decision-making in patients with early-stage lung adenocarcinoma.
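Patient-based prediction in such a detection pipeline typically reduces many per-candidate confidences to a single score after false-positive reduction. A hypothetical sketch of one way to do this; the thresholds and the top-k rule are illustrative, not taken from the paper:

```python
def patient_stas_score(candidate_confs, fp_filter=0.3, top_k=5):
    """Patient-level STAS score from per-candidate detection confidences:
    discard low-confidence candidates (false-positive reduction), then
    average the top-k survivors so one spurious box cannot dominate."""
    kept = sorted((c for c in candidate_confs if c >= fp_filter), reverse=True)
    if not kept:
        return 0.0  # no credible STAS candidate anywhere on the slides
    top = kept[:top_k]
    return sum(top) / len(top)
```

Sweeping a decision threshold over such a score is what produces the patient-level ROC curve and the operating point (72% accuracy, 81% sensitivity, 63% specificity) reported above.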
