Results 1 - 11 of 11
1.
Article in English | MEDLINE | ID: mdl-38765185

ABSTRACT

Colorectal cancer (CRC) is the third most common cancer in the United States. Tumor budding (TB) detection and quantification are crucial yet labor-intensive steps in determining the CRC stage through the analysis of histopathology images. To help with this process, we adapt the Segment Anything Model (SAM) to CRC histopathology images for TB segmentation using SAM-Adapter. In this approach, we automatically derive task-specific prompts from CRC images and train the SAM model in a parameter-efficient way. We compare the predictions of our model with those of a trained-from-scratch model using a pathologist's annotations. Our model achieves an intersection over union (IoU) of 0.65 and an instance-level Dice score of 0.75, which show promise in matching the pathologist's TB annotations. We believe our study offers a novel solution for identifying TBs on H&E-stained histopathology images and demonstrates the value of adapting foundation models for pathology image segmentation tasks.
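The two overlap metrics reported above can be illustrated on toy binary masks. This is a minimal sketch of the standard pixel-level IoU and Dice formulas (note that the study reports an instance-level Dice, which additionally matches predicted and annotated buds one-to-one before scoring; that matching step is omitted here):

```python
def iou(pred, target):
    """Intersection over union of two binary masks (flat lists of 0/1)."""
    inter = sum(p and t for p, t in zip(pred, target))
    union = sum(p or t for p, t in zip(pred, target))
    return inter / union if union else 1.0

def dice(pred, target):
    """Dice coefficient: 2*|A intersect B| / (|A| + |B|)."""
    inter = sum(p and t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2 * inter / total if total else 1.0

# Toy 6-pixel masks: 2 pixels agree, 4 pixels are in the union.
pred = [1, 1, 0, 0, 1, 0]
target = [1, 0, 0, 1, 1, 0]
print(iou(pred, target))   # 2 / 4 = 0.5
print(dice(pred, target))  # 4 / 6 ~ 0.667
```

Dice is always at least as large as IoU for the same masks, which is consistent with the 0.75 vs. 0.65 figures reported.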

2.
Article in English | MEDLINE | ID: mdl-38756441

ABSTRACT

Current deep learning methods in histopathology are limited by the small amount of available data and the time required to label it. Colorectal cancer (CRC) tumor budding quantification, performed on H&E-stained slides, is crucial for cancer staging and prognosis but is subject to labor-intensive annotation and human bias. Acquiring a large-scale, fully annotated dataset for training a tumor budding (TB) segmentation/detection system is therefore difficult. Here, we present a DatasetGAN-based approach that can generate an essentially unlimited number of images with TB masks from a moderate number of unlabeled images and a few annotated ones. The images generated by our model closely resemble real colon tissue on H&E-stained slides. We test the model by training a downstream segmentation network, UNet++, on the generated images and masks. Our results show that the trained UNet++ model achieves reasonable TB segmentation performance, especially at the instance level. This study demonstrates the potential of developing an annotation-efficient segmentation model for automatic TB detection and quantification.

3.
Article in English | MEDLINE | ID: mdl-38752165

ABSTRACT

Tumor budding refers to a cluster of one to four tumor cells located at the tumor-invasive front. While tumor budding is a prognostic factor for colorectal cancer, counting and grading tumor buds is time-consuming and not highly reproducible, and inter- and intra-reader disagreement on H&E evaluation can be high. This leads to noisy training (imperfect ground truth) of deep learning algorithms, resulting in high variability and a loss of generalizability on unseen datasets. Pan-cytokeratin staining can enhance reader agreement, but it is not routinely used to identify tumor buds and can produce false positives. We therefore aim to develop a weakly supervised deep learning method for tumor bud detection from routine H&E-stained images that does not require strict tissue-level annotations. We also propose Bayesian Multiple Instance Learning (BMIL), which combines multiple annotated regions during training to further enhance generalizability and stability in tumor bud detection. Our dataset consists of 29 colorectal cancer H&E-stained images containing 115 tumor buds per slide on average. In six-fold cross-validation, our method demonstrated an average precision of 0.94 and recall of 0.86. These results provide preliminary evidence of the feasibility of our approach in improving generalizability in tumor budding detection on H&E images while avoiding non-routine immunohistochemical staining methods.
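For a detection task like this, precision and recall are computed from how many predicted buds match an annotated bud. A minimal sketch with hypothetical counts (the matching criterion itself, e.g. an overlap threshold, is abstracted away here):

```python
def detection_pr(n_true, n_pred, n_matched):
    """Detection precision/recall: n_matched predictions hit a ground-truth
    bud; unmatched predictions are false positives, unmatched ground-truth
    buds are false negatives."""
    precision = n_matched / n_pred if n_pred else 0.0
    recall = n_matched / n_true if n_true else 0.0
    return precision, recall

# Hypothetical fold: 10 annotated buds, 8 predictions, 7 correct matches.
p, r = detection_pr(n_true=10, n_pred=8, n_matched=7)
print(p, r)  # 0.875 0.7
```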

4.
Nat Commun ; 14(1): 8000, 2023 Dec 04.
Article in English | MEDLINE | ID: mdl-38044384

ABSTRACT

Conventional spectroscopies are not sufficiently selective to comprehensively understand the behaviour of trapped carriers in perovskite solar cells, particularly under their working conditions. Here we use infrared optical activation spectroscopy (i.e., pump-push-photocurrent), to observe the properties and real-time dynamics of trapped carriers within operando perovskite solar cells. We compare behaviour differences of trapped holes in pristine and surface-passivated FA0.99Cs0.01PbI3 devices using a combination of quasi-steady-state and nanosecond time-resolved pump-push-photocurrent, as well as kinetic and drift-diffusion models. We find a two-step trap-filling process: the rapid filling (~10 ns) of low-density traps in the bulk of perovskite, followed by the slower filling (~100 ns) of high-density traps at the perovskite/hole transport material interface. Surface passivation by n-octylammonium iodide dramatically reduces the number of trap states (~50 times), improving the device performance substantially. Moreover, the activation energy (~280 meV) of the dominant hole traps remains similar with and without surface passivation.

5.
Comput Biol Med ; 167: 107607, 2023 12.
Article in English | MEDLINE | ID: mdl-37890421

ABSTRACT

Multiple instance learning (MIL) models have achieved remarkable success in analyzing whole slide images (WSIs) for disease classification. However, for giga-pixel WSI classification, current MIL models are often incapable of differentiating a WSI with extremely small tumor lesions: the minute tumor-to-normal area ratio in a MIL bag inhibits the attention mechanism from properly weighting the areas corresponding to minor tumor lesions. To overcome this challenge, we propose salient instance inference MIL (SiiMIL), a weakly supervised MIL model for WSI classification. We introduce a novel representation learning method for histopathology images to identify representative normal keys. These keys facilitate the selection of salient instances within WSIs, forming bags with high tumor-to-normal ratios. Finally, an attention mechanism performs slide-level classification on the formed bags. Our results show that salient instance inference can improve the tumor-to-normal area ratio in tumor WSIs. As a result, SiiMIL achieves 0.9225 AUC and 0.7551 recall on the Camelyon16 dataset, outperforming existing MIL models. In addition, SiiMIL generates tumor-sensitive attention heatmaps that are more interpretable to pathologists than those of the widely used attention-based MIL method. Our experiments imply that SiiMIL can accurately identify tumor instances, which may occupy less than 1% of a WSI, so that the ratio of tumor to normal instances within a bag increases by two to four times.
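The attention-based slide-level aggregation that SiiMIL builds on can be sketched as a generic softmax-attention MIL pooling over instance (patch) embeddings. This is a simplified illustration, not the paper's method: the learned scoring vector `w` is hypothetical, and SiiMIL's key-based salient-instance selection is omitted:

```python
import math

def attention_pool(instances, w):
    """Score each instance embedding with a weight vector w, softmax the
    scores into attention weights, and return the attention-weighted bag
    embedding plus the weights themselves (usable as a heatmap)."""
    scores = [sum(wi * xi for wi, xi in zip(w, x)) for x in instances]
    m = max(scores)                              # stabilize the softmax
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    attn = [e / z for e in exps]                 # weights sum to 1
    dim = len(instances[0])
    bag = [sum(a * x[d] for a, x in zip(attn, instances)) for d in range(dim)]
    return bag, attn

# Toy bag of three 2-d patch embeddings; the third scores highest.
bag, attn = attention_pool([[1.0, 0.0], [0.0, 1.0], [2.0, 2.0]], w=[1.0, 1.0])
print(attn)  # the third instance receives by far the largest weight
```

The failure mode the abstract describes follows directly from this pooling: when a bag contains thousands of normal instances and only a handful of tumor instances, even modest attention on normal patches can swamp the tumor signal in the weighted sum, which is why SiiMIL filters the bag down to salient instances first.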


Subject(s)
Image Interpretation, Computer-Assisted , Machine Learning , Neoplasms , Humans , Neoplasms/diagnostic imaging
6.
Semin Cancer Biol ; 97: 70-85, 2023 12.
Article in English | MEDLINE | ID: mdl-37832751

ABSTRACT

Artificial Intelligence (AI)-enhanced histopathology presents unprecedented opportunities to benefit oncology through interpretable methods that require only one overall label per hematoxylin and eosin (H&E) slide, with no tissue-level annotations. We present a structured review of these methods, organized by their degree of verifiability and by commonly recurring application areas in oncological characterization. First, we discuss morphological markers (tumor presence/absence, metastases, subtypes, grades), for which AI-identified regions of interest (ROIs) within whole slide images (WSIs) verifiably overlap with pathologist-identified ROIs. Second, we discuss molecular markers (gene expression, molecular subtyping), which are not verified on H&E itself but rather by overlap with positive regions on adjacent tissue. Third, we discuss genetic markers (mutations, mutational burden, microsatellite instability, chromosomal instability), for which current technologies cannot verify whether AI methods spatially resolve specific genetic alterations. Fourth, we discuss the direct prediction of survival, with which AI-identified histopathological features quantitatively correlate but which is nonetheless not mechanistically verifiable. Finally, we discuss in detail several opportunities and challenges for these one-label-per-slide methods within oncology. Opportunities include reducing the cost of research and clinical care, reducing the workload of clinicians, personalized medicine, and unlocking the full potential of histopathology through new imaging-based biomarkers. Current challenges include explainability and interpretability, validation via adjacent tissue sections, reproducibility, data availability, computational needs, data requirements, domain adaptability, external validation, dataset imbalances, and, finally, commercialization and clinical potential. Ultimately, the relative ease and minimal upfront cost with which relevant data can be collected, together with the plethora of available AI methods for outcome-driven analysis, will surmount these current limitations and realize the innumerable opportunities of AI-driven histopathology for the benefit of oncology.


Subject(s)
Artificial Intelligence , Chromosomal Instability , Humans , Reproducibility of Results , Eosine Yellowish-(YS) , Medical Oncology
7.
Article in English | MEDLINE | ID: mdl-37538448

ABSTRACT

Obstructive sleep apnea (OSA) is a prevalent disease affecting 10 to 15% of Americans and nearly one billion people worldwide. It leads to multiple symptoms, including daytime sleepiness; snoring, choking, or gasping during sleep; fatigue; headaches; non-restorative sleep; and insomnia due to frequent arousals. Although polysomnography (PSG) is the gold standard for OSA diagnosis, it is expensive, time-consuming, and not universally available, so many patients go undiagnosed for lack of access to the test. Given the incomplete access to and high cost of PSG, many studies are seeking alternative diagnostic approaches based on other data modalities. Here, we propose a machine learning model that predicts OSA severity from 2D frontal-view craniofacial images. In a cross-validation study of 280 patients, our method achieves an average AUC of 0.780. In comparison, the craniofacial analysis model proposed in a recent study achieves only 0.638 AUC on our dataset. The proposed model also outperforms the widely used STOP-BANG OSA screening questionnaire, which achieves an AUC of 0.52 on our dataset. Our findings indicate that deep learning has the potential to significantly reduce the cost of OSA diagnosis.
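Since the model comparisons above rest on AUC, here is a minimal sketch of the rank-statistic form of ROC AUC (equivalent to the Wilcoxon-Mann-Whitney statistic): the probability that a randomly chosen positive case scores above a randomly chosen negative one, with ties counted as half. The scores are made up for illustration:

```python
def auc(scores_pos, scores_neg):
    """ROC AUC as a rank statistic: fraction of (positive, negative) pairs
    in which the positive case receives the higher score (ties count 0.5)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# 3 positives, 2 negatives: 5 of 6 pairs are ranked correctly.
print(auc([0.9, 0.8, 0.6], [0.7, 0.3]))  # 5/6 ~ 0.833
```

An AUC of 0.52, as reported for STOP-BANG on this dataset, is barely better than the 0.5 expected from random ranking.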

8.
Cancers (Basel) ; 15(13)2023 Jun 30.
Article in English | MEDLINE | ID: mdl-37444538

ABSTRACT

The early diagnosis of lymph node metastasis in breast cancer is essential for improving treatment outcomes and overall prognosis. Unfortunately, pathologists often fail to identify small or subtle metastatic deposits, leading them to rely on cytokeratin stains for improved detection, although this approach is not without flaws. To address the need for early detection, multiple-instance learning (MIL) has emerged as the preferred deep learning method for automatic tumor detection on whole slide images (WSIs). However, existing methods often miss small lesions because they pay insufficient attention to small regions. Attention-based multiple-instance learning (ABMIL) methods can be particularly problematic, as they may focus too much on normal regions, leaving insufficient attention for small tumor lesions. In this paper, we propose a new ABMIL-based model called normal representative keyset ABMIL (NRK-ABMIL), which addresses this issue by adjusting the attention mechanism to give more attention to lesions. To accomplish this, NRK-ABMIL creates an optimal keyset of normal patch embeddings called the normal representative keyset (NRK). The NRK roughly represents the underlying distribution of all normal patch embeddings and is used to modify the attention mechanism of ABMIL. We evaluated NRK-ABMIL on the publicly available Camelyon16 and Camelyon17 datasets and found that it outperformed existing state-of-the-art methods in accurately identifying small tumor lesions that spread over only a few patches. NRK-ABMIL also performed exceptionally well in identifying medium and large tumor lesions.

9.
PLoS One ; 18(4): e0283562, 2023.
Article in English | MEDLINE | ID: mdl-37014891

ABSTRACT

Breast cancer is the most common malignancy in women, with over 40,000 deaths annually in the United States alone. Clinicians often rely on the Oncotype DX (ODX) breast cancer recurrence score to risk-stratify patients and guide personalized therapy. However, ODX and similar gene assays are expensive, time-consuming, and tissue-destructive. An AI-based model that predicts ODX risk, and thereby identifies patients who will benefit from chemotherapy just as ODX does, would therefore provide a low-cost alternative to the genomic test. To this end, we developed a deep learning framework, Breast Cancer Recurrence Network (BCR-Net), which automatically predicts ODX recurrence risk from histopathology slides. Our proposed framework has two steps. First, it intelligently samples discriminative features from whole-slide histopathology images of breast cancer patients. Then, it weights all features through a multiple instance learning model to predict the recurrence score at the slide level. On a dataset of H&E and Ki67 breast cancer resection whole slide images (WSIs) from 99 anonymized patients, the framework achieved an overall AUC of 0.775 (68.9% and 71.1% accuracies for low and high risk) on H&E WSIs and an overall AUC of 0.811 (80.8% and 79.2% accuracies for low and high risk) on Ki67 WSIs. Our findings provide strong evidence that the framework can automatically risk-stratify patients with a high degree of confidence. Our experiments show that BCR-Net outperforms state-of-the-art WSI classification models. Moreover, BCR-Net is highly efficient, with low computational needs, making it practical to deploy in limited computational settings.


Subject(s)
Breast Neoplasms , Deep Learning , Female , Humans , Breast Neoplasms/pathology , Ki-67 Antigen , Breast/pathology , Risk
10.
Med Image Anal ; 79: 102462, 2022 07.
Article in English | MEDLINE | ID: mdl-35512532

ABSTRACT

Deep learning consistently demonstrates high performance in classifying and segmenting medical images such as CT, PET, and MRI. However, compared with these modalities, whole slide images (WSIs) of stained tissue sections are huge and thus far less efficient to process, especially with deep learning algorithms. To overcome these challenges, we present attention2majority, a weak multiple instance learning model that automatically and efficiently processes WSIs for classification. Our method initially assigns exhaustively sampled label-free patches the label of their respective WSIs and trains a convolutional neural network to perform patch-wise classification. Then, an intelligent sampling method collects patches with high confidence to form weak representations of the WSIs. Lastly, we apply a multi-head attention-based multiple instance learning model for slide-level classification based on the high-confidence (intelligently sampled) patches. Attention2majority was trained and tested on classifying the quality of 127 WSIs of regenerated kidney sections into three categories. On average, attention2majority achieved 97.4%±2.4 AUC in four-fold cross-validation. We demonstrate that the intelligent sampling module within attention2majority is superior to the current state-of-the-art random sampling method: replacing random sampling with intelligent sampling boosts performance from 94.9%±3.1 to 97.4%±2.4 average AUC in four-fold cross-validation. We also tested a variation of attention2majority on the well-known Camelyon16 dataset, which resulted in 89.1%±0.8 AUC. Compared to random sampling, attention2majority demonstrated excellent slide-level interpretability and provided an efficient framework for multi-class slide-level prediction.


Subject(s)
Algorithms , Neural Networks, Computer , Humans , Kidney/diagnostic imaging
11.
Med Image Anal ; 77: 102333, 2022 04.
Article in English | MEDLINE | ID: mdl-34998111

ABSTRACT

The Cerebral Aneurysm Detection and Analysis (CADA) challenge was organized to support the development and benchmarking of algorithms for the detection, analysis, and risk assessment of cerebral aneurysms in X-ray rotational angiography (3DRA) images. 109 anonymized 3DRA datasets were provided for training, and 22 additional datasets were used to test the algorithmic solutions. Cerebral aneurysm detection was assessed using the F2 score based on recall and precision, and the fit of the delivered bounding box was assessed using the distance to the aneurysm. Segmentation quality was measured using the Jaccard index and a combination of surface distance measures. Systematic errors were analyzed using volume correlation and bias. Rupture risk assessment was evaluated using the F2 score. 158 participants from 22 countries registered for the CADA challenge. The U-Net-based detection solutions presented by the community show accuracy similar to that of experts (F2 score 0.92), with a small number of missed aneurysms with diameters under 3.5 mm. The delineation of these structures, based on U-Net variations, is excellent, with a Jaccard score of 0.92. The rupture risk estimation methods achieved an F2 score of 0.71. The performance of the detection and segmentation solutions is thus equivalent to that of human experts. The best rupture risk estimates are obtained by combining different image-based, morphological, and computational fluid dynamics parameters using machine learning methods. Furthermore, we evaluated the best-methods pipeline, from detecting and delineating the vessel dilations to estimating the risk of rupture. This chain of methods achieves an F2 score of 0.70, which is comparable to applying the risk prediction to the ground-truth delineation (0.71).
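The F2 score used throughout the challenge is the beta=2 member of the standard F-beta family, which weights recall more heavily than precision. A minimal sketch:

```python
def f_beta(precision, recall, beta=2.0):
    """F-beta score: weighted harmonic mean of precision and recall.
    beta > 1 (e.g. the challenge's F2) emphasizes recall, penalizing
    missed aneurysms more than false alarms."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# With beta=2, high recall can carry a mediocre precision.
print(f_beta(0.5, 1.0))   # 2.5 / 3 ~ 0.833
print(f_beta(1.0, 0.5))   # the symmetric case scores much lower
```

Choosing beta=2 is natural for a screening-style task like aneurysm detection, where a missed aneurysm is far more costly than an extra candidate for the radiologist to review.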


Subject(s)
Intracranial Aneurysm , Algorithms , Cerebral Angiography/methods , Humans , Imaging, Three-Dimensional/methods , Intracranial Aneurysm/diagnostic imaging , X-Rays