Results 1 - 20 of 57
1.
Article in English | MEDLINE | ID: mdl-38765185

ABSTRACT

Colorectal cancer (CRC) is the third most common cancer in the United States. Tumor budding (TB) detection and quantification are crucial yet labor-intensive steps in determining the CRC stage through the analysis of histopathology images. To help with this process, we adapt the Segment Anything Model (SAM) to CRC histopathology images to segment TBs using SAM-Adapter. In this approach, we automatically derive task-specific prompts from CRC images and train the SAM model in a parameter-efficient way. We compare our model's predictions with those of a model trained from scratch, using a pathologist's annotations as the reference. Our model achieves an intersection over union (IoU) of 0.65 and an instance-level Dice score of 0.75, indicating promising agreement with the pathologist's TB annotations. We believe our study offers a novel solution for identifying TBs on H&E-stained histopathology images and demonstrates the value of adapting a foundation model for pathology image segmentation tasks.
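As an aside on the parameter-efficient training mentioned above, the sketch below shows the general adapter pattern in PyTorch: freeze a pretrained backbone and train only small bottleneck modules. It is a generic illustration under assumed dimensions, not the SAM-Adapter implementation.

```python
# Minimal sketch of adapter-style parameter-efficient fine-tuning (PyTorch).
# Not the SAM-Adapter code; it only illustrates the general pattern:
# freeze a pretrained backbone and train small bottleneck "adapter" modules.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter with a residual connection (hypothetical sizes)."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))  # residual keeps backbone features

class AdaptedBlock(nn.Module):
    """Wraps a frozen encoder block and inserts a trainable adapter after it."""
    def __init__(self, block: nn.Module, dim: int):
        super().__init__()
        self.block = block
        for p in self.block.parameters():
            p.requires_grad = False          # backbone stays frozen
        self.adapter = Adapter(dim)          # only this part is trained

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.adapter(self.block(x))

# Usage sketch: wrap each encoder block of a pretrained ViT-style backbone,
# then optimize only the parameters that remain trainable.
dim = 256
backbone_blocks = nn.ModuleList(
    [nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True) for _ in range(4)]
)  # stand-in for a real pretrained backbone
adapted = nn.ModuleList([AdaptedBlock(b, dim) for b in backbone_blocks])
trainable = [p for p in adapted.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
```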

2.
Article in English | MEDLINE | ID: mdl-38752165

ABSTRACT

Tumor budding refers to a cluster of one to four tumor cells located at the tumor-invasive front. While tumor budding is a prognostic factor for colorectal cancer, counting and grading tumor buds is time-consuming and not highly reproducible, and inter- and intra-reader disagreement on H&E evaluation can be high. This leads to noisy training (imperfect ground truth) of deep learning algorithms, resulting in high variability and a loss of their ability to generalize to unseen datasets. Pan-cytokeratin staining is one potential way to improve agreement, but it is not routinely used to identify tumor buds and can lead to false positives. We therefore aim to develop a weakly supervised deep learning method for tumor bud detection from routine H&E-stained images that does not require strict tissue-level annotations. We also propose Bayesian Multiple Instance Learning (BMIL), which combines multiple annotated regions during training to further enhance generalizability and stability in tumor bud detection. Our dataset consists of 29 colorectal cancer H&E-stained images containing 115 tumor buds per slide on average. In six-fold cross-validation, our method demonstrated an average precision and recall of 0.94 and 0.86, respectively. These results provide preliminary evidence that our approach can improve the generalizability of tumor budding detection on H&E images while avoiding the need for non-routine immunohistochemical staining.
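Because several of the entries below rely on multiple instance learning, a minimal attention-based MIL pooling head (in the style of Ilse et al.) is sketched here in PyTorch. It is a generic illustration with assumed feature sizes, not the Bayesian MIL variant proposed in this paper.

```python
# Generic attention-based MIL pooling head: a weak (bag-level) label
# supervises a whole bag of patch embeddings.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim: int = 512, hidden: int = 128, n_classes: int = 1):
        super().__init__()
        self.attn_V = nn.Linear(feat_dim, hidden)
        self.attn_w = nn.Linear(hidden, 1)
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, bag: torch.Tensor):
        # bag: (n_instances, feat_dim) -- patch embeddings from one image/region
        scores = self.attn_w(torch.tanh(self.attn_V(bag)))   # (n, 1) attention logits
        weights = torch.softmax(scores, dim=0)                # attention over instances
        bag_emb = (weights * bag).sum(dim=0)                  # weighted pooling
        return self.classifier(bag_emb), weights

# Usage with a hypothetical bag of 200 patch embeddings:
model = AttentionMIL()
bag = torch.randn(200, 512)
logit, attn = model(bag)
loss = nn.BCEWithLogitsLoss()(logit, torch.tensor([1.0]))  # bag labeled "contains buds"
```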

3.
Article in English | MEDLINE | ID: mdl-38756441

ABSTRACT

Current deep learning methods in histopathology are limited by the small amount of available data and the time required to label it. Colorectal cancer (CRC) tumor budding quantification on H&E-stained slides is crucial for cancer staging and prognosis but is subject to labor-intensive annotation and human bias. Thus, acquiring a large-scale, fully annotated dataset for training a tumor budding (TB) segmentation/detection system is difficult. Here, we present a DatasetGAN-based approach that can generate an essentially unlimited number of images with TB masks from a moderate number of unlabeled images and a few annotated images. The images generated by our model closely resemble real colon tissue on H&E-stained slides. We test the model by training a downstream segmentation network, UNet++, on the generated images and masks. Our results show that the trained UNet++ model achieves reasonable TB segmentation performance, especially at the instance level. This study demonstrates the potential of developing an annotation-efficient segmentation model for automatic TB detection and quantification.
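The downstream step described above, training a segmentation network on generated image/mask pairs, might look roughly like the sketch below. The dataset layout, placeholder model, and hyperparameters are assumptions; the actual DatasetGAN pipeline and UNet++ configuration are not reproduced.

```python
# Sketch: train a downstream segmentation model on synthetic image/mask pairs.
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

class SyntheticTBDataset(Dataset):
    """Assumes GAN-generated images and masks are available as paired tensors."""
    def __init__(self, images: torch.Tensor, masks: torch.Tensor):
        self.images, self.masks = images, masks
    def __len__(self):
        return len(self.images)
    def __getitem__(self, i):
        return self.images[i], self.masks[i]

# Placeholder model; in practice one could use, e.g.,
# segmentation_models_pytorch.UnetPlusPlus(encoder_name="resnet34", classes=1).
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 1, 1))

images = torch.rand(32, 3, 256, 256)                     # stand-ins for generated tiles
masks = (torch.rand(32, 1, 256, 256) > 0.95).float()     # stand-ins for TB masks
loader = DataLoader(SyntheticTBDataset(images, masks), batch_size=8, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCEWithLogitsLoss()
for epoch in range(2):                                    # toy training loop
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```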

4.
Diagn Pathol ; 19(1): 17, 2024 Jan 19.
Article in English | MEDLINE | ID: mdl-38243330

ABSTRACT

BACKGROUND: c-MYC and BCL2 positivity are important prognostic factors for diffuse large B-cell lymphoma. However, manual quantification is subject to significant intra- and inter-observer variability. We developed an automated method for quantification in whole-slide images of tissue sections, where manual quantification requires evaluating large areas of tissue with possibly heterogeneous staining. We train this method using annotations of tumor positivity in smaller tissue microarray cores, where expression and staining are more homogeneous, and then translate the model to whole-slide images. METHODS: Our method applies attention-based multiple instance learning to regress the proportion of c-MYC-positive and BCL2-positive tumor cells from pathologist-scored tissue microarray cores. This technique does not require annotation of individual cell nuclei and is instead trained on core-level annotations of percent tumor positivity. We translate this model to scoring of whole-slide images by tessellating the slide into smaller, core-sized tissue regions and calculating an aggregate score. Our method was trained on a public tissue microarray dataset from Stanford and applied to whole-slide images from a geographically diverse, multi-center cohort produced by the Lymphoma Epidemiology of Outcomes study. RESULTS: In tissue microarrays, the automated method had Pearson correlations of 0.843 and 0.919 with pathologist scores for c-MYC and BCL2, respectively. Using standard clinical thresholds, the sensitivity/specificity of our method was 0.743/0.963 for c-MYC and 0.938/0.951 for BCL2. For double-expressors, sensitivity and specificity were 0.720 and 0.974. When translated to the external WSI dataset scored by two pathologists, Pearson correlations were 0.753 and 0.883 for c-MYC and 0.749 and 0.765 for BCL2, and sensitivity/specificity was 0.857/0.991 and 0.706/0.930 for c-MYC, 0.856/0.719 and 0.855/0.690 for BCL2, and 0.890/1.00 and 0.598/0.952 for double-expressors. Survival analysis demonstrates that for progression-free survival, model-predicted TMA scores significantly stratify double-expressors and non-double-expressors (p = 0.0345), whereas pathologist scores do not (p = 0.128). CONCLUSIONS: We conclude that the proportion of positively stained tumor cells can be regressed using attention-based multiple instance learning, that these models generalize well to whole-slide images, and that our models can provide non-inferior stratification of progression-free survival outcomes.
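The tessellate-and-aggregate step could be sketched as follows, assuming OpenSlide is available for region-wise reading. Tile size, background filtering, and the per-tile scoring function are illustrative assumptions, not the study's settings.

```python
# Sketch of tiling a whole-slide image into core-sized regions, scoring each
# with a trained regressor, and averaging into a slide-level score.
import numpy as np
import openslide  # requires the OpenSlide library

def score_wsi(path: str, score_tile, tile: int = 2048, level: int = 0) -> float:
    slide = openslide.OpenSlide(path)
    w, h = slide.level_dimensions[level]       # coordinates below assume level 0
    scores = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            region = slide.read_region((x, y), level, (tile, tile)).convert("RGB")
            arr = np.asarray(region)
            if arr.mean() > 230:                # crude background check (assumption)
                continue
            scores.append(score_tile(arr))      # e.g., predicted % positive tumor cells
    slide.close()
    return float(np.mean(scores)) if scores else 0.0
```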


Subject(s)
Deep Learning , Lymphoma, Large B-Cell, Diffuse , Humans , Prognosis , Proto-Oncogene Proteins c-myc/metabolism , Proto-Oncogene Proteins c-bcl-2/metabolism , Antineoplastic Combined Chemotherapy Protocols
5.
Semin Cancer Biol ; 97: 70-85, 2023 12.
Article in English | MEDLINE | ID: mdl-37832751

ABSTRACT

Artificial Intelligence (AI)-enhanced histopathology presents unprecedented opportunities to benefit oncology through interpretable methods that require only one overall label per hematoxylin and eosin (H&E) slide, with no tissue-level annotations. We present a structured review of these methods, organized by their degree of verifiability and by commonly recurring application areas in oncological characterization. First, we discuss morphological markers (tumor presence/absence, metastases, subtypes, grades) for which AI-identified regions of interest (ROIs) within whole slide images (WSIs) verifiably overlap with pathologist-identified ROIs. Second, we discuss molecular markers (gene expression, molecular subtyping) that are not verified on H&E itself but rather through overlap with positive regions on adjacent tissue. Third, we discuss genetic markers (mutations, mutational burden, microsatellite instability, chromosomal instability), for which current technologies cannot verify whether AI methods spatially resolve the specific genetic alterations. Fourth, we discuss direct prediction of survival, with which AI-identified histopathological features quantitatively correlate but which is nonetheless not mechanistically verifiable. Finally, we discuss in detail several opportunities and challenges for these one-label-per-slide methods within oncology. Opportunities include reducing the cost of research and clinical care, reducing the workload of clinicians, enabling personalized medicine, and unlocking the full potential of histopathology through new imaging-based biomarkers. Current challenges include explainability and interpretability, validation via adjacent tissue sections, reproducibility, data availability, computational needs, data requirements, domain adaptability, external validation, dataset imbalances, and, finally, commercialization and clinical potential. Ultimately, the relative ease and minimal upfront cost with which relevant data can be collected, together with the plethora of available AI methods for outcome-driven analysis, stand to surmount these limitations and realize the many opportunities of AI-driven histopathology for the benefit of oncology.


Subject(s)
Artificial Intelligence , Chromosomal Instability , Humans , Reproducibility of Results , Eosine Yellowish-(YS) , Medical Oncology
6.
Article in English | MEDLINE | ID: mdl-37538448

ABSTRACT

Obstructive sleep apnea (OSA) is a prevalent disease affecting 10-15% of Americans and nearly one billion people worldwide. It leads to multiple symptoms, including daytime sleepiness; snoring, choking, or gasping during sleep; fatigue; headaches; non-restorative sleep; and insomnia due to frequent arousals. Although polysomnography (PSG) is the gold standard for OSA diagnosis, it is expensive, not universally available, and time-consuming, so many patients go undiagnosed due to lack of access to the test. Given the incomplete access to and high cost of PSG, many studies are seeking alternative diagnostic approaches based on other data modalities. Here, we propose a machine learning model to predict OSA severity from 2D frontal-view craniofacial images. In a cross-validation study of 280 patients, our method achieves an average AUC of 0.780. In comparison, the craniofacial analysis model proposed in a recent study achieves an AUC of only 0.638 on our dataset. The proposed model also outperforms the widely used STOP-BANG OSA screening questionnaire, which achieves an AUC of 0.52 on our dataset. Our findings indicate that deep learning has the potential to significantly reduce the cost of OSA diagnosis.
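A generic baseline along these lines, not the authors' architecture, could pair pretrained CNN features with a simple classifier and cross-validated AUC, as sketched below; the image paths, labels, and choice of ResNet-18 are assumptions (torchvision >= 0.13).

```python
# Generic transfer-learning sketch: CNN features from frontal face photos,
# then a cross-validated AUC estimate for a binary severity label.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()          # use penultimate-layer features
resnet.eval()
prep = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                  T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

def embed(paths):
    with torch.no_grad():
        return np.stack([resnet(prep(Image.open(p).convert("RGB")).unsqueeze(0))
                         .squeeze(0).numpy() for p in paths])

# image_paths and labels (0 = no/mild OSA, 1 = moderate/severe) are placeholders:
# X = embed(image_paths)
# auc = cross_val_score(LogisticRegression(max_iter=1000), X, labels,
#                       cv=5, scoring="roc_auc").mean()
```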

7.
Cancers (Basel) ; 15(13)2023 Jun 30.
Article in English | MEDLINE | ID: mdl-37444538

ABSTRACT

The early diagnosis of lymph node metastasis in breast cancer is essential for improving treatment outcomes and overall prognosis. Unfortunately, pathologists often fail to identify small or subtle metastatic deposits, leading them to rely on cytokeratin stains for improved detection, although this approach is not without its flaws. To address the need for early detection, multiple-instance learning (MIL) has emerged as the preferred deep learning method for automatic tumor detection on whole slide images (WSIs). However, existing methods often fail to identify some small lesions because they pay insufficient attention to small regions. Attention-based multiple-instance learning (ABMIL)-based methods can be particularly problematic because they may focus too much on normal regions, leaving insufficient attention for small tumor lesions. In this paper, we propose a new ABMIL-based model called normal representative keyset ABMIL (NRK-ABMIL), which addresses this issue by adjusting the attention mechanism to give more attention to lesions. To accomplish this, NRK-ABMIL creates an optimal keyset of normal patch embeddings called the normal representative keyset (NRK). The NRK roughly represents the underlying distribution of all normal patch embeddings and is used to modify the attention mechanism of the ABMIL. We evaluated NRK-ABMIL on the publicly available Camelyon16 and Camelyon17 datasets and found that it outperformed existing state-of-the-art methods in accurately identifying small tumor lesions that may spread over only a few patches. NRK-ABMIL also performed exceptionally well in identifying medium and large tumor lesions.
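A loose, hypothetical illustration of the general idea, damping attention for patches that resemble a set of "normal" reference embeddings, is sketched below. It is not the NRK-ABMIL mechanism from the paper; the similarity measure and damping strength are invented for illustration.

```python
# Loose illustration only -- NOT the NRK-ABMIL mechanism from the paper.
# Idea sketched: penalize the attention logits of patches that look similar to
# a set of representative "normal" embeddings, so lesion-like patches keep
# proportionally more attention.
import torch
import torch.nn.functional as F

def damped_attention(bag: torch.Tensor, raw_scores: torch.Tensor,
                     normal_keyset: torch.Tensor, strength: float = 4.0):
    """bag: (n, d) patch embeddings; raw_scores: (n,) attention logits;
    normal_keyset: (k, d) embeddings of representative normal patches."""
    sim = F.cosine_similarity(bag.unsqueeze(1), normal_keyset.unsqueeze(0), dim=-1)
    max_sim = sim.max(dim=1).values             # how "normal" each patch looks
    adjusted = raw_scores - strength * max_sim  # penalize normal-looking patches
    return torch.softmax(adjusted, dim=0)

weights = damped_attention(torch.randn(100, 512), torch.randn(100),
                           torch.randn(32, 512))
```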

8.
Cancers (Basel) ; 14(23)2022 Nov 24.
Article in English | MEDLINE | ID: mdl-36497258

ABSTRACT

Recent methods in computational pathology have trended towards semi- and weakly-supervised approaches that require only slide-level labels. Yet even slide-level labels may be absent or irrelevant to the application of interest, such as in clinical trials. Hence, we present a fully unsupervised method to learn meaningful, compact representations of WSIs. Our method first trains a tile-wise encoder using SimCLR; subsets of tile-wise embeddings are then extracted and fused via an attention-based multiple-instance learning framework to yield slide-level representations. Intra-slide and inter-slide embeddings are respectively attracted and repelled via a contrastive loss, producing self-supervised slide-level representations. We applied our method to two tasks, (1) non-small cell lung cancer (NSCLC) subtyping as a classification prototype and (2) breast cancer proliferation scoring (TUPAC16) as a regression prototype, and achieved an AUC of 0.8641 ± 0.0115 and a correlation (R2) of 0.5740 ± 0.0970, respectively. Ablation experiments demonstrate that the resulting unsupervised slide-level feature space can be fine-tuned with small datasets for both tasks. Overall, our method approaches computational pathology in a novel manner in which meaningful features are learned from whole-slide images without slide-level labels or annotations, theoretically enabling researchers to benefit from completely unlabeled whole-slide images.
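For reference, the tile-level SimCLR pretraining stage uses a contrastive (NT-Xent) objective; a minimal version is sketched below. The slide-level contrastive setup described above is not reproduced, and the batch and embedding sizes are arbitrary.

```python
# Minimal NT-Xent (SimCLR-style) contrastive loss.
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, d) embeddings of two augmented views of the same N items."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d), unit-norm
    sim = z @ z.t() / tau                                 # cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim.masked_fill_(mask, float("-inf"))                 # remove self-pairs
    # the positive for row i is the other view of the same item
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(64, 128), torch.randn(64, 128))
```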

9.
Med Image Anal ; 79: 102462, 2022 07.
Article in English | MEDLINE | ID: mdl-35512532

ABSTRACT

Deep learning consistently demonstrates high performance in classifying and segmenting medical images such as CT, PET, and MRI. However, compared to these modalities, whole slide images (WSIs) of stained tissue sections are huge and thus much less efficient to process, especially for deep learning algorithms. To overcome these challenges, we present attention2majority, a weakly supervised multiple instance learning model to automatically and efficiently process WSIs for classification. Our method first assigns each exhaustively sampled, label-free patch the label of its parent WSI and trains a convolutional neural network to perform patch-wise classification. Then, an intelligent sampling step collects high-confidence patches to form weak representations of the WSIs. Lastly, a multi-head attention-based multiple instance learning model performs slide-level classification based on these intelligently sampled, high-confidence patches. Attention2majority was trained and tested on classifying the quality of 127 WSIs (of regenerated kidney sections) into three categories. On average, attention2majority achieved an AUC of 97.4% ± 2.4% in four-fold cross-validation. We demonstrate that the intelligent sampling module within attention2majority is superior to the current state-of-the-art random sampling method: replacing random sampling with intelligent sampling boosts performance from 94.9% ± 3.1% to 97.4% ± 2.4% average AUC in four-fold cross-validation. We also tested a variation of attention2majority on the well-known Camelyon16 dataset, which resulted in an AUC of 89.1% ± 0.8%. Attention2majority also demonstrated excellent slide-level interpretability and provides an efficient framework for multi-class slide-level prediction.
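The "intelligent sampling" idea, keeping only patches whose patch-level predictions are confident, might be sketched as below. The confidence threshold and top-k values are illustrative assumptions rather than the paper's settings.

```python
# Sketch: keep only high-confidence patches to form a compact slide
# representation for the downstream MIL aggregator.
import numpy as np

def intelligent_sample(patch_probs: np.ndarray, patch_embeddings: np.ndarray,
                       top_k: int = 200, min_conf: float = 0.9) -> np.ndarray:
    """patch_probs: (n, n_classes) softmax outputs; patch_embeddings: (n, d)."""
    conf = patch_probs.max(axis=1)                    # confidence per patch
    keep = np.where(conf >= min_conf)[0]
    if keep.size == 0:                                # fall back to the most confident
        keep = np.argsort(conf)[-top_k:]
    elif keep.size > top_k:
        keep = keep[np.argsort(conf[keep])[-top_k:]]
    return patch_embeddings[keep]                     # fed to the MIL aggregator

bag = intelligent_sample(np.random.dirichlet([1, 1, 1], size=5000),
                         np.random.randn(5000, 256))
```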


Subject(s)
Algorithms , Neural Networks, Computer , Humans , Kidney/diagnostic imaging
10.
Comput Biol Med ; 136: 104737, 2021 09.
Article in English | MEDLINE | ID: mdl-34391000

ABSTRACT

Failure to identify difficult intubation is the leading cause of anesthesia-related death and morbidity. Despite preoperative airway assessment, 75-93% of difficult intubations are unanticipated, and airway examination methods underperform, with sensitivities of 20-62% and specificities of 82-97%. To overcome these impediments, we aim to develop a deep learning model that identifies difficult-to-intubate patients from frontal face images. We propose an ensemble of convolutional neural networks that leverages a database of celebrity facial images to learn robust features of multiple face regions. This ensemble extracts features from patient images (n = 152), which are subsequently classified by a corresponding ensemble of attention-based multiple instance learning models. Through majority voting, a patient is classified as difficult or easy to intubate. Whereas two conventional bedside tests resulted in AUCs of 0.6042 and 0.4661, the proposed method achieved an AUC of 0.7105 on a cohort of 76 difficult- and 76 easy-to-intubate patients; generic features yielded AUCs of 0.4654-0.6278. The proposed model can operate at high sensitivity and low specificity (0.9079 and 0.4474) or low sensitivity and high specificity (0.3684 and 0.9605). The proposed ensemble model thus surpasses conventional bedside tests, generic features, and prior deep learning methods, and incorporating side (profile) facial images may further improve its performance. We expect our model will play an important role in developing deep learning methods in which frontal face features are important.
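The majority-voting step over region-wise predictions is a simple pattern; a sketch is given below with hypothetical face regions and probabilities, not the authors' exact ensemble.

```python
# Simple majority-voting sketch over per-region classifier outputs.
def majority_vote(region_probs: dict, threshold: float = 0.5) -> int:
    """region_probs: probability of 'difficult intubation' from each face-region model."""
    votes = [int(p >= threshold) for p in region_probs.values()]
    return int(sum(votes) > len(votes) / 2)   # 1 = predicted difficult to intubate

# Hypothetical per-region probabilities for one patient:
pred = majority_vote({"eyes": 0.62, "nose": 0.41, "mouth": 0.77,
                      "jaw": 0.58, "full_face": 0.49})
print(pred)
```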


Subject(s)
Deep Learning , Databases, Factual , Face/diagnostic imaging , Humans , Neural Networks, Computer
11.
Chemosphere ; 285: 131382, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34329141

ABSTRACT

Agro-wastes can be converted economically into valuable organic biochar fertilizer products while also managing the waste itself. Biochar (BC) produced from agricultural waste helps improve soil because of its near-neutral pH, its addition of organic carbon to the soil, and its low salt index. This study focused on developing nano-biochar into an enhanced biochar product, examining whether biochar derived from wheat straw can absorb nutrients and then act as a support matrix that releases micro- and macro-nutrients to plants on a slow-release basis. Wheat biochar (WBC) and wheat nano-biochar (WBNC) were synthesized by pyrolysis at two different temperatures, and nutrients were loaded into the WBC via an impregnation technique. Physical parameters, including proximate and ultimate analyses, were also measured and checked by standard control procedures. Water retention (WR), water absorbance (WA), swelling ratio (SR), and equilibrium water content (EWC) were determined for all samples, and the data were compared to identify the better-performing sample. Slow-release studies showed nutrient release over prolonged periods, which is important for plant growth, yield, and productivity. Overall, the experimental results showed that the nano-biochar produced at 350 °C had promising properties (SI: 0.05, SR: 3.67, WA: 64%, EWC: 78.6%, FC: 53.05%, pH: 7.22); it is environmentally friendly and could be used as a slow-release fertilizer for sustainable, green agriculture.
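For readers unfamiliar with the hydration metrics listed above, the sketch below computes them from dry and wet sample weights using common textbook definitions; the paper's exact formulas and measured weights may differ.

```python
# Illustrative calculation of hydration metrics from gravimetric data.
# Definitions here are common conventions, not necessarily the paper's.
def hydration_metrics(dry_g: float, wet_g: float) -> dict:
    water = wet_g - dry_g
    return {
        "water_absorbance_pct": 100.0 * water / dry_g,        # (wet - dry) / dry
        "swelling_ratio": water / dry_g,                       # sometimes wet / dry instead
        "equilibrium_water_content_pct": 100.0 * water / wet_g,
    }

print(hydration_metrics(dry_g=1.00, wet_g=4.67))  # hypothetical 1 g sample
```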


Subject(s)
Fertilizers , Triticum , Agriculture , Charcoal
12.
EBioMedicine ; 67: 103388, 2021 May.
Article in English | MEDLINE | ID: mdl-34000621

ABSTRACT

BACKGROUND: Machine learning has been applied successfully to many diagnostic and prognostic problems in computational histopathology. Yet few efforts have been made to model gene expression from histopathology. This study proposes a methodology that predicts selected gene expression values (microarray) from haematoxylin and eosin whole-slide images, as an intermediate data modality, to identify fulminant-like pulmonary tuberculosis ('supersusceptible') in an experimentally infected cohort of Diversity Outbred mice (n = 77). METHODS: Gradient-boosted trees were utilized as a novel feature selector to identify gene transcripts predictive of fulminant-like pulmonary tuberculosis. A novel attention-based multiple instance learning model for regression was used to predict the selected genes' expression from whole-slide images. The predicted gene expression values were shown to replicate the ground truth sufficiently well to identify supersusceptible mice using gradient-boosted trees trained on ground-truth gene expression data. FINDINGS: The model was accurate, showing high positive correlations with ground-truth gene expression in both cross-validation (n = 77, 0.63 ≤ ρ ≤ 0.84) and external testing sets (n = 33, 0.65 ≤ ρ ≤ 0.84). The sensitivity and specificity of the gene expression predictions for identifying supersusceptible mice were 0.88 and 0.95, respectively (n = 77), and 0.88 and 0.93 for an external set of mice (n = 33). IMPLICATIONS: Our methodology maps histopathology to gene expression with sufficient accuracy to predict a clinical outcome. It exemplifies a computational template for gene expression panels, in which relatively inexpensive and widely available tissue histopathology may be mapped to specific genes' expression to serve as a diagnostic or prognostic tool. FUNDING: National Institutes of Health and American Lung Association.
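The feature-selection step, ranking gene transcripts with gradient-boosted trees, follows a standard pattern; a scikit-learn sketch on stand-in data is given below. It is not the study's exact pipeline or gene panel.

```python
# Sketch: rank transcripts by gradient-boosted-tree feature importance.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(77, 2000))          # 77 mice x 2000 transcripts (stand-in data)
y = rng.integers(0, 2, size=77)          # 1 = supersusceptible (stand-in labels)

gbt = GradientBoostingClassifier(n_estimators=200, max_depth=2).fit(X, y)
top_genes = np.argsort(gbt.feature_importances_)[::-1][:20]   # 20 most predictive
print(top_genes)
```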


Subject(s)
Genetic Predisposition to Disease , Machine Learning , Transcriptome , Tuberculosis/genetics , Animals , Female , Hybridization, Genetic , Mice , Tuberculosis/metabolism , Tuberculosis/pathology
13.
EBioMedicine ; 62: 103094, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33166789

ABSTRACT

BACKGROUND: Identifying which individuals will develop tuberculosis (TB) remains an unresolved problem, owing to the scarcity of animal models and computational approaches that effectively address its heterogeneity. To address these shortcomings, we show that Diversity Outbred (DO) mice reflect human-like genetic diversity and develop human-like lung granulomas when infected with Mycobacterium tuberculosis (M.tb). METHODS: Following M.tb infection, a "supersusceptible" phenotype develops in approximately one-third of DO mice, characterized by rapid morbidity and mortality within 8 weeks. These supersusceptible DO mice develop lung granuloma patterns akin to those in humans. This led us to use deep learning to identify supersusceptibility from hematoxylin and eosin (H&E) lung tissue sections, using only clinical outcomes (supersusceptible or not-supersusceptible) as labels. FINDINGS: The proposed machine learning model diagnosed supersusceptibility with high accuracy (91.50 ± 4.68%) compared to two expert pathologists using H&E-stained lung sections (94.95% and 94.58%). Two non-experts used the imaging biomarker to diagnose supersusceptibility with high accuracy (88.25% and 87.95%) and agreement (96.00%). A board-certified veterinary pathologist (GB) examined the imaging biomarker and determined that the model was making diagnostic decisions using a form of granuloma necrosis (karyorrhectic and pyknotic nuclear debris); this was corroborated by another board-certified veterinary pathologist. Finally, the imaging biomarker was quantified, providing a novel means to convert visual patterns within granulomas into data suitable for statistical analyses. IMPLICATIONS: Overall, our results have translatable implications for improving our understanding of TB and for the broader field of computational pathology, in which clinical outcomes alone can drive automatic identification of interpretable imaging biomarkers, knowledge discovery, and validation of existing clinical biomarkers. FUNDING: National Institutes of Health and American Lung Association.


Subject(s)
Biomarkers , Deep Learning , Molecular Imaging , Mycobacterium tuberculosis , Tuberculosis/diagnosis , Tuberculosis/etiology , Algorithms , Animals , Computational Biology/methods , Disease Models, Animal , Disease Susceptibility , Female , Humans , Image Processing, Computer-Assisted , Immunohistochemistry/methods , Machine Learning , Male , Molecular Imaging/methods , Prognosis , Reproducibility of Results
14.
Sci Rep ; 10(1): 2398, 2020 Feb 06.
Article in English | MEDLINE | ID: mdl-32024961

ABSTRACT

An amendment to this paper has been published and can be accessed via a link at the top of the paper.

15.
Sci Rep ; 9(1): 18969, 2019 12 12.
Article in English | MEDLINE | ID: mdl-31831792

ABSTRACT

Automatic identification of tissue structures in the analysis of digital tissue biopsies remains an ongoing problem in digital pathology. Common barriers include a lack of reliable ground truth due to inter- and intra-reader variability, class imbalances, and the inflexibility of discriminative models. To overcome these barriers, we are developing a framework that benefits from a reliable immunohistochemistry ground truth during labeling, overcomes class imbalances through single-task learning, and accommodates any number of classes through a minimally supervised, modular model-per-class paradigm. This study explores an initial application of the framework, based on conditional generative adversarial networks, to automatically identify tumor from non-tumor regions in colorectal H&E slides. The average precision, sensitivity, and F1 score were 95.13 ± 4.44%, 93.05 ± 3.46%, and 94.02 ± 3.23% during validation and 98.75 ± 2.43%, 88.53 ± 5.39%, and 93.31 ± 3.07% on an external test dataset, respectively. With accurate identification of tumor regions, we plan to further develop the framework to establish a tumor front, from which tumor buds can be detected within a restricted region. This model will be integrated into a larger system that will quantitatively determine the prognostic significance of tumor budding.
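For orientation, a generic conditional-GAN objective (pix2pix-style adversarial term plus an L1 reconstruction term) is sketched below; the framework's actual architecture and loss weighting may differ.

```python
# Generic conditional-GAN loss terms (pix2pix-style); not the paper's exact losses.
import torch
import torch.nn as nn

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def generator_loss(disc_fake_logits, fake_mask, real_mask, lam: float = 100.0):
    adv = bce(disc_fake_logits, torch.ones_like(disc_fake_logits))  # fool the critic
    return adv + lam * l1(fake_mask, real_mask)                     # stay close to GT

def discriminator_loss(disc_real_logits, disc_fake_logits):
    real = bce(disc_real_logits, torch.ones_like(disc_real_logits))
    fake = bce(disc_fake_logits, torch.zeros_like(disc_fake_logits))
    return 0.5 * (real + fake)
```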


Subject(s)
Colorectal Neoplasms/diagnosis , Colorectal Neoplasms/metabolism , Colorectal Neoplasms/pathology , Image Processing, Computer-Assisted , Neural Networks, Computer , Female , Humans , Immunohistochemistry , Male
16.
Ann Burns Fire Disasters ; 32(2): 147-152, 2019 Jun 30.
Article in English | MEDLINE | ID: mdl-31528156

ABSTRACT

One of the main goals in the rehabilitation of patients with burns to their hands is their return to society and to their professional occupation, which has a direct positive influence on quality of life. The goal of this research project was to investigate the effect of early intervention with occupational therapy in patients with burns to their hands. The study included 30 patients with second- or third-degree hand burns, who were enrolled 12 days after their burn wounds and grafted areas had healed. They had 3 sessions of occupational therapy per week for 8 weeks, comprising active and passive range-of-motion exercises, active resistive exercises, stretching exercises, and practice of activities of daily living. Hand function was assessed before and after the 8 weeks of occupational therapy using the DASH questionnaire. The average DASH score was 60.9 before the intervention and 33.9 after 8 weeks of occupational therapy (an average difference of 27 points, p < 0.001). After 8 weeks of occupational therapy, patients performed activities of daily living with far less difficulty, and an increase in hand function was observed. This study suggests that early intervention with rehabilitative therapies is advantageous and may result in improved hand function.



17.
Waste Manag ; 85: 131-140, 2019 Feb 15.
Article in English | MEDLINE | ID: mdl-30803566

ABSTRACT

This study investigates the thermal decomposition, thermodynamic, and kinetic behavior of rice husk (R), sewage sludge (S), and their blends during co-pyrolysis using thermogravimetric analysis at a constant heating rate of 20 °C/min. The Coats-Redfern integral method is applied to the mass-loss data using seventeen models from five major reaction-mechanism families to calculate the kinetic and thermodynamic parameters. Two temperature regions, I (200-400 °C) and II (400-600 °C), are identified and best fitted with different models. Among all models, diffusion models show high activation energies with high R² (0.99): rice husk (66.27-82.77 kJ/mol), sewage sludge (52.01-68.01 kJ/mol), and their blends (45.10-65.81 kJ/mol) in region I, and rice husk (7.31-25.84 kJ/mol), sewage sludge (1.85-16.23 kJ/mol), and blends (4.95-16.32 kJ/mol) in region II. Thermodynamic parameters are calculated from the kinetic data to assess the enthalpy, Gibbs free energy, and entropy change of the co-pyrolysis process. Artificial neural network (ANN) models are developed and applied to the co-pyrolysis thermal decomposition data to study the reaction mechanism, evaluated by mean absolute error (MAE), root mean square error (RMSE), and coefficient of determination (R²). From a thermal-behavior and kinetics perspective, the co-pyrolysis results are promising, and the process is a viable route to recover organic materials more efficiently.
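For reference, the Coats-Redfern linearization fits ln[g(α)/T²] against 1/T, so the slope yields −Ea/R and the intercept relates to the pre-exponential factor. The sketch below applies it for a single assumed first-order model, g(α) = −ln(1 − α), on made-up data; the study screens seventeen g(α) forms against real TGA curves.

```python
# Coats-Redfern fit for one assumed reaction model (F1), on hypothetical data.
import numpy as np

R = 8.314  # J/(mol K)

def coats_redfern_f1(T_kelvin: np.ndarray, alpha: np.ndarray, beta_K_per_min: float):
    g = -np.log(1.0 - alpha)                   # F1 (first-order) reaction model
    y = np.log(g / T_kelvin**2)                # ln[g(alpha)/T^2]
    x = 1.0 / T_kelvin
    slope, intercept = np.polyfit(x, y, 1)     # linear fit vs 1/T
    Ea = -slope * R                            # activation energy, J/mol
    A = beta_K_per_min * Ea / R * np.exp(intercept)   # pre-exponential factor
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    return Ea / 1000.0, A, r2                  # Ea reported in kJ/mol

# Hypothetical conversion data over region I (200-400 C ~ 473-673 K):
T = np.linspace(473.0, 673.0, 50)
alpha = np.linspace(0.05, 0.85, 50)
print(coats_redfern_f1(T, alpha, beta_K_per_min=20.0))
```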


Subject(s)
Oryza , Sewage , Kinetics , Pyrolysis , Thermodynamics , Thermogravimetry
18.
Artif Intell Med ; 95: 82-87, 2019 04.
Article in English | MEDLINE | ID: mdl-30266546

ABSTRACT

In this paper, we propose a pathological image compression framework to address the needs of Big Data image analysis in digital pathology. Big Data image analytics require the analysis of large databases of high-resolution images using distributed storage and computing resources, along with the transmission of large amounts of data between storage and computing nodes, which can create a major processing bottleneck. The proposed image compression framework is based on the JPEG2000 Interactive Protocol and aims to minimize the amount of data transferred between storage and computing nodes as well as to considerably reduce the computational demands of the decompression engine. The proposed framework was integrated into hotspot detection from images of breast biopsies, yielding a considerable reduction in data and computing requirements.
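As an illustration of region-wise JPEG2000 access (the underlying idea of decoding and transferring only what is needed), the sketch below uses the glymur library, assuming it is available; the JPIP-based client/server framework described in the paper is not reproduced.

```python
# Hedged illustration: decode a coarse overview first, then only the region of
# interest at full resolution, instead of the entire JPEG2000 image.
import glymur

jp2 = glymur.Jp2k("slide.jp2")               # hypothetical path to a JP2 slide
overview = jp2[::16, ::16]                   # coarse resolution: cheap to decode
# Suppose a hotspot detector flags a candidate region; decode only that region
# at full resolution (placeholder pixel bounds).
r0, c0, r1, c1 = 1024, 2048, 3072, 4096
hotspot = jp2[r0:r1, c0:c1]
print(overview.shape, hotspot.shape)
```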


Subject(s)
Big Data , Breast Neoplasms/diagnosis , Data Compression/methods , Female , Humans , Image Processing, Computer-Assisted/methods , Information Storage and Retrieval
19.
PLoS One ; 13(10): e0205387, 2018.
Article in English | MEDLINE | ID: mdl-30359393

ABSTRACT

The development of whole slide scanners has revolutionized the field of digital pathology. Unfortunately, whole slide scanners often produce images with out-of-focus/blurry areas that limit the amount of tissue available for a pathologist to make an accurate diagnosis/prognosis. Moreover, these artifacts hamper the performance of computerized image analysis systems. Such areas are typically identified by visual inspection, a subjective evaluation with high intra- and inter-observer variability that is also tedious and time-consuming. The aim of this study is to develop a deep learning-based software tool, DeepFocus, which can automatically detect and segment blurry areas in digital whole slide images to address these problems. DeepFocus is built on TensorFlow, an open-source library that exploits data-flow graphs for efficient numerical computation. DeepFocus was trained using 16 different H&E- and IHC-stained slides that were systematically scanned on nine different focal planes, generating 216,000 samples with varying amounts of blurriness. When trained and tested on two independent datasets, DeepFocus achieved an average accuracy of 93.2% (± 9.6%), a 23.8% improvement over an existing method. DeepFocus has the potential to be integrated with whole slide scanners to automatically re-scan problematic areas, improving the overall image quality for pathologists and image analysis algorithms.
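A tiny illustrative patch-level blur classifier in tf.keras (the abstract notes the tool is built on TensorFlow) is sketched below; DeepFocus's actual architecture, patch size, and training data are not reproduced.

```python
# Illustrative blurry-vs-sharp patch classifier; sizes and layers are assumptions.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(patch is blurry)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(patches, blur_labels, epochs=10)   # patches: (N, 64, 64, 3) scaled to [0, 1]
```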


Subject(s)
Deep Learning , Neural Networks, Computer , Algorithms , Area Under Curve , Image Processing, Computer-Assisted , ROC Curve
20.
AJNR Am J Neuroradiol ; 39(9): 1622-1628, 2018 09.
Article in English | MEDLINE | ID: mdl-30093484

ABSTRACT

BACKGROUND AND PURPOSE: The limitations inherent in current methods of diagnosing mild cognitive impairment have constrained the use of early therapeutic interventions to delay the progression of mild cognitive impairment to dementia. This study evaluated whether quantifying enlarged perivascular spaces observed on MR imaging can help differentiate those with mild cognitive impairment from cognitively healthy controls and, thus, have an application in the diagnosis of mild cognitive impairment. MATERIALS AND METHODS: We automated the identification of enlarged perivascular spaces in brain MR images using a custom quantitative program written in MATLAB. We then quantified the densities of enlarged perivascular spaces for patients with mild cognitive impairment (n = 14) and age-matched, cognitively healthy controls (n = 15) and compared them to determine whether the density of enlarged perivascular spaces can serve as an imaging surrogate for the diagnosis of mild cognitive impairment. RESULTS: Quantified as a percentage of volume fraction (v/v%), densities of enlarged perivascular spaces were 2.82 ± 0.40 v/v% for controls and 4.17 ± 0.57 v/v% for the mild cognitive impairment group in the subcortical brain (P < .001), and 2.74 ± 0.57 v/v% for controls and 3.90 ± 0.62 v/v% for the mild cognitive impairment cohort in the basal ganglia (P < .001). Maximum-intensity projections exhibited a visually conspicuous difference in the distributions of enlarged perivascular spaces between a patient with mild cognitive impairment and a control subject. Using receiver operating characteristic curve analysis, we determined the sensitivity and specificity of enlarged perivascular spaces as a biomarker differentiating mild cognitive impairment from controls to be 92.86% and 93.33%, respectively. CONCLUSIONS: The density of enlarged perivascular spaces was significantly higher in those with mild cognitive impairment than in age-matched healthy control subjects. The density of enlarged perivascular spaces, therefore, may be a useful imaging biomarker for the diagnosis of mild cognitive impairment.
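The volume-fraction and ROC components could be sketched as below, with per-subject densities invented for illustration; the MATLAB segmentation step itself is not shown.

```python
# Sketch: EPVS density as a volume fraction within a region mask, then ROC
# analysis of per-subject densities (all numbers below are hypothetical).
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def epvs_density(epvs_mask: np.ndarray, region_mask: np.ndarray) -> float:
    """Both inputs are boolean 3-D arrays; returns v/v% within the region."""
    return 100.0 * np.logical_and(epvs_mask, region_mask).sum() / region_mask.sum()

densities = np.array([2.8, 2.6, 3.1, 2.9, 4.2, 4.0, 3.9, 4.5])  # controls then MCI
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
fpr, tpr, thresholds = roc_curve(labels, densities)
best = np.argmax(tpr - fpr)                     # Youden's J picks a cutoff
print(roc_auc_score(labels, densities), thresholds[best])
```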


Subject(s)
Cognitive Dysfunction/diagnostic imaging , Early Diagnosis , Glymphatic System/diagnostic imaging , Magnetic Resonance Imaging/methods , Aged , Aged, 80 and over , Female , Humans , Male , Middle Aged , Retrospective Studies