Results 1 - 20 of 21
1.
Article in English | MEDLINE | ID: mdl-38752165

ABSTRACT

Tumor budding refers to a cluster of one to four tumor cells located at the tumor-invasive front. While tumor budding is a prognostic factor for colorectal cancer, counting and grading tumor buds are time-consuming and not highly reproducible, and inter- and intra-reader disagreement on H&E evaluation can be high. This leads to noisy training (imperfect ground truth) of deep learning algorithms, resulting in high variability and a loss of the ability to generalize to unseen datasets. Pan-cytokeratin staining is one potential way to enhance agreement, but it is not routinely used to identify tumor buds and can lead to false positives. Therefore, we aim to develop a weakly supervised deep learning method for tumor bud detection from routine H&E-stained images that does not require strict tissue-level annotations. We also propose Bayesian Multiple Instance Learning (BMIL), which combines multiple annotated regions during training to further enhance the generalizability and stability of tumor bud detection. Our dataset consists of 29 colorectal cancer H&E-stained images that contain 115 tumor buds per slide on average. In six-fold cross-validation, our method demonstrated an average precision and recall of 0.94 and 0.86, respectively. These results provide preliminary evidence of the feasibility of our approach in improving the generalizability of tumor budding detection using H&E images while avoiding the need for non-routine immunohistochemical staining methods.
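The bag-level reasoning at the heart of multiple instance learning can be illustrated with a minimal sketch. This is not the paper's BMIL model; the noisy-or aggregation rule and all probability values below are illustrative assumptions:

```python
import numpy as np

def noisy_or_bag_probability(instance_probs):
    """Aggregate per-instance tumor-bud probabilities into one bag-level
    probability: the bag (annotated region) is positive if at least one
    instance (candidate cell cluster) is positive."""
    instance_probs = np.asarray(instance_probs, dtype=float)
    return 1.0 - np.prod(1.0 - instance_probs)

# A region with one confident candidate yields a high bag probability...
high = noisy_or_bag_probability([0.05, 0.9, 0.1])
# ...while a region with only weak candidates yields a low one.
low = noisy_or_bag_probability([0.05, 0.02, 0.1])
```

Training with such bag-level supervision is what lets a detector learn from region-level annotations without requiring every individual bud to be outlined.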

2.
Article in English | MEDLINE | ID: mdl-38765185

ABSTRACT

Colorectal cancer (CRC) is the third most common cancer in the United States. Tumor budding (TB) detection and quantification are crucial yet labor-intensive steps in determining the CRC stage through the analysis of histopathology images. To help with this process, we adapt the Segment Anything Model (SAM) to CRC histopathology images to segment TBs using SAM-Adapter. In this approach, we automatically derive task-specific prompts from CRC images and train the SAM model in a parameter-efficient way. We compare the predictions of our model with those of a trained-from-scratch model using annotations from a pathologist. Our model achieves an intersection over union (IoU) of 0.65 and an instance-level Dice score of 0.75, which are promising in matching the pathologist's TB annotations. We believe our study offers a novel solution for identifying TBs on H&E-stained histopathology images and demonstrates the value of adapting a foundation model for pathology image segmentation tasks.
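The parameter-efficient idea behind adapter tuning can be sketched as follows. This is a generic bottleneck adapter in NumPy, not the actual SAM-Adapter architecture; all dimensions and weight names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def adapter_forward(h, w_down, w_up):
    """Bottleneck adapter: project a frozen-backbone feature to a small
    dimension, apply ReLU, project back, and add a residual connection.
    During fine-tuning only w_down and w_up are updated; the backbone
    producing h stays frozen."""
    z = np.maximum(w_down @ h, 0.0)  # down-projection + ReLU
    return h + w_up @ z              # up-projection + residual

d, r = 256, 16                       # feature dim, bottleneck dim (r << d)
h = rng.standard_normal(d)           # a feature vector from the frozen backbone
w_down = rng.standard_normal((r, d)) * 0.01
w_up = rng.standard_normal((d, r)) * 0.01
out = adapter_forward(h, w_down, w_up)
```

With these toy dimensions only 2·r·d = 8,192 adapter weights are trainable, versus the much larger frozen backbone, which is what makes the approach parameter-efficient.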

3.
Article in English | MEDLINE | ID: mdl-38756441

ABSTRACT

Current deep learning methods in histopathology are limited by the small amount of available data and the time required to label it. Colorectal cancer (CRC) tumor budding quantification performed on H&E-stained slides is crucial for cancer staging and prognosis but is subject to labor-intensive annotation and human bias. Thus, acquiring a large-scale, fully annotated dataset for training a tumor budding (TB) segmentation/detection system is difficult. Here, we present a DatasetGAN-based approach that can generate an essentially unlimited number of images with TB masks from a moderate number of unlabeled images and a few annotated images. The images generated by our model closely resemble real colon tissue on H&E-stained slides. We test the performance of this model by training a downstream segmentation model, UNet++, on the generated images and masks. Our results show that the trained UNet++ model achieves reasonable TB segmentation performance, especially at the instance level. This study demonstrates the potential of developing an annotation-efficient segmentation model for automatic TB detection and quantification.

4.
Diagn Pathol ; 19(1): 17, 2024 Jan 19.
Article in English | MEDLINE | ID: mdl-38243330

ABSTRACT

BACKGROUND: c-MYC and BCL2 positivity are important prognostic factors for diffuse large B-cell lymphoma. However, manual quantification is subject to significant intra- and inter-observer variability. We developed an automated method for quantification in whole-slide images of tissue sections where manual quantification requires evaluating large areas of tissue with possibly heterogeneous staining. We train this method using annotations of tumor positivity in smaller tissue microarray cores where expression and staining are more homogeneous and then translate this model to whole-slide images. METHODS: Our method applies a technique called attention-based multiple instance learning to regress the proportion of c-MYC-positive and BCL2-positive tumor cells from pathologist-scored tissue microarray cores. This technique does not require annotation of individual cell nuclei and is trained instead on core-level annotations of percent tumor positivity. We translate this model to scoring of whole-slide images by tessellating the slide into smaller core-sized tissue regions and calculating an aggregate score. Our method was trained on a public tissue microarray dataset from Stanford and applied to whole-slide images from a geographically diverse multi-center cohort produced by the Lymphoma Epidemiology of Outcomes study. RESULTS: In tissue microarrays, the automated method had Pearson correlations of 0.843 and 0.919 with pathologist scores for c-MYC and BCL2, respectively. When utilizing standard clinical thresholds, the sensitivity/specificity of our method was 0.743/0.963 for c-MYC and 0.938/0.951 for BCL2. For double-expressors, sensitivity and specificity were 0.720 and 0.974.
When translated to the external WSI dataset scored by two pathologists, Pearson correlations were 0.753 & 0.883 for c-MYC and 0.749 & 0.765 for BCL2, and sensitivity/specificity was 0.857/0.991 & 0.706/0.930 for c-MYC, 0.856/0.719 & 0.855/0.690 for BCL2, and 0.890/1.00 & 0.598/0.952 for double-expressors. Survival analysis demonstrates that for progression-free survival, model-predicted TMA scores significantly stratify double-expressors and non-double-expressors (p = 0.0345), whereas pathologist scores do not (p = 0.128). CONCLUSIONS: We conclude that the proportion of positive stains can be regressed using attention-based multiple instance learning, that these models generalize well to whole-slide images, and that our models can provide non-inferior stratification of progression-free survival outcomes.
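The attention-based pooling underlying this kind of MIL regression can be sketched as follows. This is a minimal NumPy illustration with random weights, not the trained model; the dimensions and the sigmoid output head are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_score(patch_embeddings, V, w, u):
    """Attention-based MIL: weight patch embeddings by learned attention,
    pool them into one bag embedding, and regress a 0-1 positivity
    proportion from the pooled embedding."""
    logits = np.tanh(patch_embeddings @ V.T) @ w  # attention logit per patch
    a = softmax(logits)                           # attention weights sum to 1
    bag = a @ patch_embeddings                    # weighted bag embedding
    return 1.0 / (1.0 + np.exp(-(bag @ u)))       # sigmoid -> proportion in (0, 1)

n, d, k = 50, 128, 64                             # patches, embed dim, attention dim
H = rng.standard_normal((n, d))                   # one core's patch embeddings
score = attention_mil_score(H, rng.standard_normal((k, d)),
                            rng.standard_normal(k), rng.standard_normal(d))
```

A whole-slide score could then be an aggregate (e.g., the mean) of such per-region scores, mirroring the tessellation step described in the abstract.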


Subject(s)
Deep Learning , Lymphoma, Large B-Cell, Diffuse , Humans , Prognosis , Proto-Oncogene Proteins c-myc/metabolism , Proto-Oncogene Proteins c-bcl-2/metabolism , Antineoplastic Combined Chemotherapy Protocols
5.
Semin Cancer Biol ; 97: 70-85, 2023 12.
Article in English | MEDLINE | ID: mdl-37832751

ABSTRACT

Artificial Intelligence (AI)-enhanced histopathology presents unprecedented opportunities to benefit oncology through interpretable methods that require only one overall label per hematoxylin and eosin (H&E) slide with no tissue-level annotations. We present a structured review of these methods, organized by their degree of verifiability and by commonly recurring application areas in oncological characterization. First, we discuss morphological markers (tumor presence/absence, metastases, subtypes, grades) in which AI-identified regions of interest (ROIs) within whole slide images (WSIs) verifiably overlap with pathologist-identified ROIs. Second, we discuss molecular markers (gene expression, molecular subtyping) that are not verified via H&E but rather based on overlap with positive regions on adjacent tissue. Third, we discuss genetic markers (mutations, mutational burden, microsatellite instability, chromosomal instability) whose AI-predicted spatial localization cannot be verified with current technologies. Fourth, we discuss the direct prediction of survival, with which AI-identified histopathological features quantitatively correlate but which is nonetheless not mechanistically verifiable. Finally, we discuss in detail several opportunities and challenges for these one-label-per-slide methods within oncology. Opportunities include reducing the cost of research and clinical care, reducing the workload of clinicians, personalized medicine, and unlocking the full potential of histopathology through new imaging-based biomarkers. Current challenges include explainability and interpretability, validation via adjacent tissue sections, reproducibility, data availability, computational needs, data requirements, domain adaptability, external validation, dataset imbalances, and finally commercialization and clinical potential.
Ultimately, the relative ease and minimal upfront cost with which relevant data can be collected, together with the plethora of available AI methods for outcome-driven analysis, will help surmount these limitations and realize the many opportunities of AI-driven histopathology for the benefit of oncology.


Subject(s)
Artificial Intelligence , Chromosomal Instability , Humans , Reproducibility of Results , Eosine Yellowish-(YS) , Medical Oncology
6.
Article in English | MEDLINE | ID: mdl-37538448

ABSTRACT

Obstructive sleep apnea (OSA) is a prevalent disease affecting 10 to 15% of Americans and nearly one billion people worldwide. It leads to multiple symptoms, including daytime sleepiness; snoring, choking, or gasping during sleep; fatigue; headaches; non-restorative sleep; and insomnia due to frequent arousals. Although polysomnography (PSG) is the gold standard for OSA diagnosis, it is expensive, not universally available, and time-consuming, so many patients go undiagnosed. Given this incomplete access and the high cost of PSG, many studies are seeking alternative diagnostic approaches based on different data modalities. Here, we propose a machine learning model to predict OSA severity from 2D frontal-view craniofacial images. In a cross-validation study of 280 patients, our method achieves an average AUC of 0.780. In comparison, the craniofacial analysis model proposed by a recent study achieves only 0.638 AUC on our dataset. The proposed model also outperforms the widely used STOP-BANG OSA screening questionnaire, which achieves an AUC of 0.52 on our dataset. Our findings indicate that deep learning has the potential to significantly reduce the cost of OSA diagnosis.

7.
Cancers (Basel) ; 15(13)2023 Jun 30.
Article in English | MEDLINE | ID: mdl-37444538

ABSTRACT

The early diagnosis of lymph node metastasis in breast cancer is essential for enhancing treatment outcomes and overall prognosis. Unfortunately, pathologists often fail to identify small or subtle metastatic deposits, leading them to rely on cytokeratin stains for improved detection, although this approach is not without its flaws. To address the need for early detection, multiple-instance learning (MIL) has emerged as the preferred deep learning method for automatic tumor detection on whole slide images (WSIs). However, existing methods often fail to identify some small lesions due to insufficient attention to small regions. Attention-based multiple-instance learning (ABMIL)-based methods can be particularly problematic because they may focus too much on normal regions, leaving insufficient attention for small tumor lesions. In this paper, we propose a new ABMIL-based model called normal representative keyset ABMIL (NRK-ABMIL), which addresses this issue by adjusting the attention mechanism to give more attention to lesions. To accomplish this, NRK-ABMIL creates an optimal keyset of normal patch embeddings called the normal representative keyset (NRK). The NRK roughly represents the underlying distribution of all normal patch embeddings and is used to modify the attention mechanism of the ABMIL. We evaluated NRK-ABMIL on the publicly available Camelyon16 and Camelyon17 datasets and found that it outperformed existing state-of-the-art methods in accurately identifying small tumor lesions that may spread over only a few patches. NRK-ABMIL also performed exceptionally well in identifying medium and large tumor lesions.
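One plausible reading of the keyset idea, sketched with cosine similarity in NumPy: patches that resemble a keyset of normal embeddings have their attention logits penalized, freeing attention for lesions. The penalty term and all parameters here are hypothetical, not NRK-ABMIL's actual formulation:

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def keyset_adjusted_attention(patch_logits, patches, normal_keys, penalty=4.0):
    """Down-weight attention on patches that resemble a keyset of normal
    patch embeddings (hypothetical penalty formulation)."""
    # cosine similarity of each patch to its closest normal key
    P = patches / np.linalg.norm(patches, axis=1, keepdims=True)
    K = normal_keys / np.linalg.norm(normal_keys, axis=1, keepdims=True)
    normal_sim = (P @ K.T).max(axis=1)
    return softmax(patch_logits - penalty * normal_sim)

patches = rng.standard_normal((20, 32))
# keys closely resemble the first 15 patches ("normal" tissue)
keys = patches[:15] + 0.01 * rng.standard_normal((15, 32))
a = keyset_adjusted_attention(np.zeros(20), patches, keys)
```

With uniform logits, the five patches that do not resemble any normal key end up holding most of the attention mass.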

8.
Cancers (Basel) ; 14(23)2022 Nov 24.
Article in English | MEDLINE | ID: mdl-36497258

ABSTRACT

Recent methods in computational pathology have trended towards semi- and weakly-supervised approaches requiring only slide-level labels. Yet even slide-level labels may be absent or irrelevant to the application of interest, such as in clinical trials. Hence, we present a fully unsupervised method to learn meaningful, compact representations of WSIs. Our method initially trains a tile-wise encoder using SimCLR, from which subsets of tile-wise embeddings are extracted and fused via an attention-based multiple-instance learning framework to yield slide-level representations. Intra-slide embeddings are attracted and inter-slide embeddings repelled via a contrastive loss, yielding self-supervised slide-level representations. We applied our method to two tasks: (1) non-small cell lung cancer (NSCLC) subtyping as a classification prototype and (2) breast cancer proliferation scoring (TUPAC16) as a regression prototype, achieving an AUC of 0.8641 ± 0.0115 and a correlation (R2) of 0.5740 ± 0.0970, respectively. Ablation experiments demonstrate that the resulting unsupervised slide-level feature space can be fine-tuned with small datasets for both tasks. Overall, our method approaches computational pathology in a novel manner in which meaningful features can be learned from whole-slide images without the need for slide-level labels, theoretically enabling researchers to benefit from completely unlabeled whole-slide images.
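The attract/repel objective can be illustrated with a simplified NT-Xent-style contrastive loss from the SimCLR family. This sketch operates on toy slide embeddings and is not the paper's exact loss; all shapes and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def nt_xent(za, zb, temperature=0.5):
    """Normalized-temperature cross-entropy: two embeddings of the same
    slide (za[i], zb[i]) are attracted, while embeddings of different
    slides are repelled."""
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zb = zb / np.linalg.norm(zb, axis=1, keepdims=True)
    sim = za @ zb.T / temperature              # pairwise cosine similarities
    # cross-entropy with the matching slide as the positive class
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - np.diag(sim)))

n, d = 8, 16
za = rng.standard_normal((n, d))
aligned = nt_xent(za, za + 0.01 * rng.standard_normal((n, d)))  # matched views
shuffled = nt_xent(za, rng.standard_normal((n, d)))             # unrelated views
```

Matched views of the same slides produce a much lower loss than unrelated embeddings, which is exactly the gradient signal that shapes the slide-level feature space.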

9.
EBioMedicine ; 62: 103094, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33166789

ABSTRACT

BACKGROUND: Identifying which individuals will develop tuberculosis (TB) remains an unresolved problem due to a lack of animal models and computational approaches that effectively address its heterogeneity. To meet these shortcomings, we show that Diversity Outbred (DO) mice reflect human-like genetic diversity and develop human-like lung granulomas when infected with Mycobacterium tuberculosis (M.tb). METHODS: Following M.tb infection, a "supersusceptible" phenotype develops in approximately one-third of DO mice, characterized by rapid morbidity and mortality within 8 weeks. These supersusceptible DO mice develop lung granuloma patterns akin to those of humans. This led us to utilize deep learning to identify supersusceptibility from hematoxylin & eosin (H&E) lung tissue sections utilizing only clinical outcomes (supersusceptible or not-supersusceptible) as labels. FINDINGS: The proposed machine learning model diagnosed supersusceptibility with high accuracy (91.50 ± 4.68%) compared to two expert pathologists using H&E-stained lung sections (94.95% and 94.58%). Two non-experts used the imaging biomarker to diagnose supersusceptibility with high accuracy (88.25% and 87.95%) and agreement (96.00%). A board-certified veterinary pathologist (GB) examined the imaging biomarker and determined that the model was making diagnostic decisions using a form of granuloma necrosis (karyorrhectic and pyknotic nuclear debris). This was corroborated by another board-certified veterinary pathologist. Finally, the imaging biomarker was quantified, providing a novel means to convert visual patterns within granulomas into data suitable for statistical analyses. IMPLICATIONS: Overall, our results have translatable implications for improving our understanding of TB and for the broader field of computational pathology, in which clinical outcomes alone can drive automatic identification of interpretable imaging biomarkers, knowledge discovery, and validation of existing clinical biomarkers.
FUNDING: National Institutes of Health and American Lung Association.


Subject(s)
Biomarkers , Deep Learning , Molecular Imaging , Mycobacterium tuberculosis , Tuberculosis/diagnosis , Tuberculosis/etiology , Algorithms , Animals , Computational Biology/methods , Disease Models, Animal , Disease Susceptibility , Female , Humans , Image Processing, Computer-Assisted , Immunohistochemistry/methods , Machine Learning , Male , Molecular Imaging/methods , Prognosis , Reproducibility of Results
10.
Sci Rep ; 10(1): 2398, 2020 Feb 06.
Article in English | MEDLINE | ID: mdl-32024961

ABSTRACT

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

11.
Sci Rep ; 9(1): 18969, 2019 12 12.
Article in English | MEDLINE | ID: mdl-31831792

ABSTRACT

Automatic identification of tissue structures in the analysis of digital tissue biopsies remains an ongoing problem in digital pathology. Common barriers include a lack of reliable ground truth due to inter- and intra-reader variability, class imbalances, and the inflexibility of discriminative models. To overcome these barriers, we are developing a framework that benefits from a reliable immunohistochemistry ground truth during labeling, overcomes class imbalances through single-task learning, and accommodates any number of classes through a minimally supervised, modular model-per-class paradigm. This study explores an initial application of this framework, based on conditional generative adversarial networks, to automatically identify tumor from non-tumor regions in colorectal H&E slides. The average precision, sensitivity, and F1 score were 95.13 ± 4.44%, 93.05 ± 3.46%, and 94.02 ± 3.23% during validation and 98.75 ± 2.43%, 88.53 ± 5.39%, and 93.31 ± 3.07% on an external test dataset, respectively. With accurate identification of tumor regions, we plan to further develop our framework to establish a tumor front, from which tumor buds can be detected in a restricted region. This model will be integrated into a larger system that will quantitatively determine the prognostic significance of tumor budding.


Subject(s)
Colorectal Neoplasms/diagnosis , Colorectal Neoplasms/metabolism , Colorectal Neoplasms/pathology , Image Processing, Computer-Assisted , Neural Networks, Computer , Female , Humans , Immunohistochemistry , Male
12.
Artif Intell Med ; 95: 82-87, 2019 04.
Article in English | MEDLINE | ID: mdl-30266546

ABSTRACT

In this paper, we propose a pathological image compression framework to address the needs of Big Data image analysis in digital pathology. Big Data image analytics require the analysis of large databases of high-resolution images using distributed storage and computing resources; the transmission of large amounts of data between storage and computing nodes can create a major processing bottleneck. The proposed image compression framework is based on the JPEG2000 Interactive Protocol and aims to minimize the amount of data transferred between the storage and computing nodes as well as to considerably reduce the computational demands of the decompression engine. The proposed framework was integrated into hotspot detection from images of breast biopsies, yielding a considerable reduction in data and computing requirements.


Subject(s)
Big Data , Breast Neoplasms/diagnosis , Data Compression/methods , Female , Humans , Image Processing, Computer-Assisted/methods , Information Storage and Retrieval
13.
PLoS One ; 13(10): e0205387, 2018.
Article in English | MEDLINE | ID: mdl-30359393

ABSTRACT

The development of whole slide scanners has revolutionized the field of digital pathology. Unfortunately, whole slide scanners often produce images with out-of-focus/blurry areas that limit the amount of tissue available for a pathologist to make an accurate diagnosis/prognosis. Moreover, these artifacts hamper the performance of computerized image analysis systems. Such areas are typically identified by visual inspection, a subjective evaluation with high intra- and inter-observer variability that is also tedious and time-consuming. The aim of this study is to develop deep learning-based software, called DeepFocus, that can automatically detect and segment blurry areas in digital whole slide images to address these problems. DeepFocus is built on TensorFlow, an open-source library that exploits data-flow graphs for efficient numerical computation. DeepFocus was trained using 16 different H&E- and IHC-stained slides that were systematically scanned on nine different focal planes, generating 216,000 samples with varying amounts of blurriness. When trained and tested on two independent datasets, DeepFocus achieved an average accuracy of 93.2% (± 9.6%), a 23.8% improvement over an existing method. DeepFocus has the potential to be integrated with whole slide scanners to automatically re-scan problematic areas, hence improving the overall image quality for pathologists and image analysis algorithms.
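DeepFocus itself is a convolutional network, but the underlying notion of a focus measure can be illustrated with a classical baseline, the variance of the Laplacian. This sketch is purely illustrative and is not part of DeepFocus:

```python
import numpy as np

rng = np.random.default_rng(4)

def laplacian_variance(img):
    """Classical focus measure: variance of the Laplacian response.
    Sharp tiles contain strong high-frequency content and score high;
    blurry tiles score low."""
    lap = (-4.0 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

sharp = rng.random((64, 64))                 # high-frequency "tissue texture"
# crude 3x3 box blur of the same tile (wrap-around edges, for brevity)
blurry = sum(np.roll(np.roll(sharp, i, 0), j, 1)
             for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
```

Thresholding such a per-tile score is one simple way to flag candidate out-of-focus regions for re-scanning.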


Subject(s)
Deep Learning , Neural Networks, Computer , Algorithms , Area Under Curve , Image Processing, Computer-Assisted , ROC Curve
14.
Article in English | MEDLINE | ID: mdl-34262235

ABSTRACT

A neutrophil is a type of white blood cell that is responsible for killing pathogenic bacteria but may simultaneously damage host tissue. We established a method to automatically detect neutrophils from slides stained with hematoxylin and eosin (H&E), because there is growing evidence that neutrophils, which respond to Mycobacterium tuberculosis, are cellular biomarkers of lung damage in tuberculosis. The proposed method relies on transfer learning to reuse features extracted from the activation of a deep convolutional network trained on a large dataset. We present a methodology to identify the correct tile size, magnification, and the number of tiles using multidimensional scaling to efficiently train the final layer of this pre-trained network. The method was trained on tiles acquired from 12 whole slide images, resulting in an average accuracy of 93.0%. The trained system successfully identified all neutrophil clusters on an independent dataset of 53 images. The method can be used to automatically, accurately, and efficiently count the number of neutrophil sites in regions-of-interest extracted from whole slide images.
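The transfer-learning recipe of freezing a pre-trained feature extractor and training only the final layer can be sketched as follows, using a logistic-regression head on synthetic "frozen" features. The data, dimensions, and hyperparameters are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(5)

def train_final_layer(features, labels, lr=0.5, steps=300):
    """Train only a logistic-regression head on frozen backbone features,
    as in transfer learning where the earlier layers are reused unchanged."""
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(steps):
        z = np.clip(features @ w + b, -30.0, 30.0)  # avoid exp overflow
        p = 1.0 / (1.0 + np.exp(-z))                # predicted probabilities
        grad = p - labels                           # cross-entropy gradient
        w -= lr * features.T @ grad / len(labels)
        b -= lr * grad.mean()
    return w, b

# synthetic "pre-extracted" features: two well-separated classes
X = np.vstack([rng.standard_normal((40, 8)) + 1.5,
               rng.standard_normal((40, 8)) - 1.5])
y = np.array([1] * 40 + [0] * 40)
w, b = train_final_layer(X, y)
preds = (X @ w + b > 0).astype(int)
```

Because only the head is trained, relatively few labeled tiles suffice, which is the appeal of the approach for annotation-scarce histopathology data.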

15.
Pediatr Dev Pathol ; 20(5): 394-402, 2017.
Article in English | MEDLINE | ID: mdl-28420318

ABSTRACT

A subset of patients with neuroblastoma are at extremely high risk for treatment failure, yet they are not identifiable at diagnosis and therefore have the highest mortality with conventional treatment approaches. Despite a tremendous understanding of the clinical and biological features that correlate with prognosis, neuroblastoma at ultra-high risk for treatment failure remains a diagnostic challenge. As a first step towards improving prognostic risk stratification within the high-risk group of patients, we determined the feasibility of using computerized image analysis and proteomic profiling on single slides from diagnostic tissue specimens. After expert pathologist review of tumor sections to ensure quality and representative material input, we evaluated multiple regions of single slides as well as multiple sections from different patients' tumors using computational histologic analysis and semiquantitative proteomic profiling. Both approaches found that intertumor heterogeneity was greater than intratumor heterogeneity. Unbiased clustering of samples was greatest within a tumor, suggesting that a single section can be representative of the tumor as a whole; there is expected heterogeneity between tumor samples from different individuals, with a high degree of similarity among specimens derived from the same patient. Both techniques are novel supplements to pathologist review of neuroblastoma for refined risk stratification, particularly since we demonstrate these results using only a single slide derived from what is usually a scarce tissue resource. Given the limitations of traditional approaches to upfront stratification, the integration of new modalities with data derived from one section of tumor holds promise as a tool to improve outcomes.


Subject(s)
Biomarkers, Tumor/metabolism , Image Interpretation, Computer-Assisted/methods , Neuroblastoma/diagnosis , Neuroblastoma/pathology , Proteomics , Child, Preschool , Feasibility Studies , Humans , Neuroblastoma/metabolism , Neuroblastoma/mortality , Prognosis
16.
IEEE J Biomed Health Inform ; 21(4): 1027-1038, 2017 07.
Article in English | MEDLINE | ID: mdl-28113734

ABSTRACT

Histopathologic features, particularly the Gleason grading system, have contributed significantly to the diagnosis, treatment, and prognosis of prostate cancer for decades. However, prostate cancer demonstrates enormous heterogeneity in biological behavior, so establishing improved prognostic and predictive markers is particularly important to personalize therapy for men with clinically localized and newly diagnosed malignancy. Many automated systems have been developed for Gleason grading, but acceptance in the medical community has been lacking due to poor interpretability. To overcome this problem, we developed a set of visually meaningful features to differentiate between low- and high-grade prostate cancer. The visually meaningful feature set consists of luminal and architectural features. For luminal features, we compute: 1) the shortest path from each nucleus to its closest luminal space; 2) the ratio of epithelial nuclei to the total number of nuclei, where a nucleus is considered epithelial if the shortest path between it and the luminal space does not contain any other nucleus; and 3) the average shortest distance of all nuclei to their closest luminal spaces. For architectural features, we compute directional changes in stroma and nuclei using directional filter banks. These features are used to create two subspaces: one for prostate images histopathologically assessed as low grade and the other for high grade. The grade associated with the subspace that yields the minimum reconstruction error is the prediction for the test image. For training, we utilized 43 region-of-interest (ROI) images extracted from 25 prostate whole slide images in The Cancer Genome Atlas (TCGA) database. For testing, we utilized an independent dataset of 88 ROIs extracted from 30 prostate whole slide images. The method resulted in 93.0% and 97.6% training and testing accuracies, respectively, for the spectrum of cases considered.
The application of visually meaningful features provided promising levels of accuracy and consistency for grading prostate cancer.
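Simplified versions of the luminal distance features can be sketched in NumPy. This computes each nucleus's straight-line distance to its closest luminal space and the average of those distances; the paper's epithelial-nucleus test (checking that no other nucleus lies on the shortest path) is omitted here, and the coordinates are illustrative:

```python
import numpy as np

def luminal_features(nuclei, lumens):
    """For each nucleus, find the distance to its closest luminal space,
    then average those distances across all nuclei (a simplified,
    straight-line version of the luminal features described above)."""
    d = np.linalg.norm(nuclei[:, None, :] - lumens[None, :, :], axis=2)
    nearest = d.min(axis=1)          # shortest nucleus-to-lumen distance
    return nearest, float(nearest.mean())

# toy 2D centroids of segmented nuclei and luminal spaces
nuclei = np.array([[0.0, 0.0], [3.0, 4.0], [6.0, 8.0]])
lumens = np.array([[0.0, 1.0], [6.0, 6.0]])
nearest, avg = luminal_features(nuclei, lumens)
```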


Subject(s)
Histocytochemistry/methods , Image Interpretation, Computer-Assisted/methods , Prostatic Neoplasms/diagnostic imaging , Humans , Male , Neoplasm Grading , Prostate/diagnostic imaging
17.
Article in English | MEDLINE | ID: mdl-38347946

ABSTRACT

Accurate detection and quantification of normal lung tissue in the context of Mycobacterium tuberculosis infection is of interest from a biological perspective. Automatic detection and quantification of normal lung will allow biologists to focus more intensely on regions of interest within normal and infected tissues. We present a computational framework to extract individual tissue sections from whole slide images containing multiple tissue sections. It automatically detects the background, red blood cells, and handwritten digits, bringing both efficiency and accuracy to the quantification of tissue sections. For efficiency, we build our framework on logical and morphological operations, as they can be performed in linear time. We further divide these individual tissue sections into normal and infected areas using a deep neural network. The computational framework was trained on 60 whole slide images. It achieved an overall accuracy of 99.2% when extracting individual tissue sections from 120 whole slide images in the test dataset and a higher accuracy (99.7%) when classifying individual lung sections into normal and infected areas. Our preliminary findings suggest that the proposed framework agrees well with biologists on how to define normal and infected lung areas.

18.
Cytometry A ; 85(2): 151-61, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24339210

ABSTRACT

Infection with Mycobacterium tuberculosis (M.tb) results in immune cell recruitment to the lungs, forming macrophage-rich regions (granulomas) and lymphocyte-rich regions (lymphocytic cuffs). The objective of this study was to accurately identify and characterize these regions on hematoxylin and eosin (H&E)-stained tissue slides. The two target regions (granulomas and lymphocytic cuffs) can be identified by their morphological characteristics; their most differentiating characteristic on H&E slides is cell density. We developed a computational framework, called DeHiDe, to detect and classify high cell-density regions in histology slides. DeHiDe employs a novel internuclei geodesic distance calculation and the Dulmage-Mendelsohn permutation to detect and classify high cell-density regions. Lung tissue slides of mice experimentally infected with M.tb were stained with H&E and digitized. A total of 21 digital slides were used to develop and train the computational framework. The performance of the framework was evaluated using two main outcome measures: correct detection of potential regions, and correct classification of potential regions into granulomas and lymphocytic cuffs. DeHiDe provided a detection accuracy of 99.39% and correctly classified 90.87% of the detected regions for the images where the expert pathologist produced the same ground truth during the first and second rounds of annotation. We showed that DeHiDe can detect high cell-density regions in a heterogeneous cell environment with non-convex tissue shapes.
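The role of internuclei distance as a density cue can be illustrated with a Euclidean simplification: the mean nearest-neighbor distance between nuclei is small in dense regions and large in sparse ones. The paper uses a geodesic distance instead, and the point sets below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(6)

def mean_nn_distance(points):
    """Mean nearest-neighbor distance between nuclei: a simple Euclidean
    stand-in for the internuclei distance used to flag high cell-density
    regions such as granulomas and lymphocytic cuffs."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)   # ignore each nucleus's distance to itself
    return float(d.min(axis=1).mean())

dense = rng.random((100, 2))           # 100 nuclei packed into a 1 x 1 region
sparse = rng.random((100, 2)) * 10.0   # the same count spread over 10 x 10
```

Thresholding such a statistic over candidate regions is one way to separate high cell-density regions from the surrounding tissue.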


Subject(s)
Cell Nucleus/microbiology , Granuloma/microbiology , Lung/microbiology , Lymphocytes/microbiology , Mycobacterium tuberculosis/physiology , Software , Algorithms , Animals , Cell Count , Cell Nucleus/ultrastructure , Eosine Yellowish-(YS) , Granuloma/pathology , Hematoxylin , Host-Pathogen Interactions , Image Processing, Computer-Assisted , Lung/pathology , Lymphocytes/ultrastructure , Mice , Microscopy , Mycobacterium tuberculosis/pathogenicity
19.
Immun Ageing ; 11(1): 24, 2014.
Article in English | MEDLINE | ID: mdl-25606048

ABSTRACT

BACKGROUND: Tuberculosis, the disease due to Mycobacterium tuberculosis, is an important cause of morbidity and mortality in the elderly. Use of mouse models may accelerate insight into the disease and tests of therapies since mice age thirty times faster than humans. However, the majority of TB research relies on inbred mouse strains, and these results might not extrapolate well to the genetically diverse human population. We report here the first tests of M. tuberculosis infection in genetically heterogeneous aging mice, testing if old mice benefit from rapamycin. FINDINGS: We find that genetically diverse aging mice are much more susceptible than young mice to M. tuberculosis, as are aging human beings. We also find that rapamycin boosts immune responses during primary infection but fails to increase survival. CONCLUSIONS: Genetically diverse mouse models provide a valuable resource to study how age influences responses and susceptibility to pathogens and to test interventions. Additionally, surrogate markers such as immune measures may not predict whether interventions improve survival.

20.
Cytometry A ; 85(3): 242-55, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24376080

ABSTRACT

We present two novel automated image analysis methods to differentiate centroblast (CB) cells from non-centroblast (non-CB) cells in digital images of H&E-stained tissues of follicular lymphoma. CB cells are often confused with similar-looking cells within the tissue, so a system to aid their classification is necessary. Our methods extract the discriminatory features of cells by approximating the intrinsic dimensionality of the subspace spanned by CB and non-CB cells. In the first method, discriminatory features are approximated with the help of singular value decomposition (SVD), whereas in the second method they are extracted using Laplacian Eigenmaps. Five hundred high-power field images were extracted from 17 slides and used to compose a database of 213 CB and 234 non-CB region-of-interest images. The recall, precision, and overall accuracy rates of the developed methods were measured and compared with existing classification methods, and the reproducibility of both classification methods was also examined. The average overall accuracy was 99.22% ± 0.75% and 99.07% ± 1.53% for COB and CLEM, respectively. The experimental results demonstrate that both proposed methods provide better CB/non-CB classification accuracy than state-of-the-art methods.
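The minimum-reconstruction-error principle behind the SVD-based method can be sketched with synthetic data: build one subspace per class from training samples, then assign a test vector to the class whose subspace reconstructs it best. The feature dimensions and rank below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

def subspace_basis(samples, rank):
    """Left singular vectors of the class's sample matrix span its subspace."""
    U, _, _ = np.linalg.svd(samples.T, full_matrices=False)
    return U[:, :rank]

def classify(x, bases):
    """Assign x to the class whose subspace reconstructs it with the
    smallest error (projection residual)."""
    errors = [np.linalg.norm(x - B @ (B.T @ x)) for B in bases]
    return int(np.argmin(errors))

# synthetic "CB" / "non-CB" feature vectors living near different subspaces
A = rng.standard_normal((30, 10)) @ rng.standard_normal((10, 50))
B = rng.standard_normal((30, 10)) @ rng.standard_normal((10, 50))
bases = [subspace_basis(A, 10), subspace_basis(B, 10)]
```

Because each class's samples lie in a low-dimensional subspace, a sample is reconstructed almost perfectly by its own class basis and poorly by the other, which is what drives the classification.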


Subject(s)
Image Interpretation, Computer-Assisted , Lymphoma, Follicular/pathology , Pattern Recognition, Automated , Algorithms , Artificial Intelligence , Humans , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Lymphoma, Follicular/diagnosis , Pattern Recognition, Automated/methods , Statistics as Topic