Results 1 - 10 of 10
1.
Commun Med (Lond) ; 4(1): 84, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38724730

ABSTRACT

BACKGROUND: Artificial Intelligence (AI)-based solutions for Gleason grading hold promise for pathologists, but image quality inconsistency, continuous data integration needs, and limited generalizability hinder their adoption and scalability. METHODS: We present a comprehensive digital pathology workflow for AI-assisted Gleason grading. It incorporates A!MagQC (image quality control), A!HistoClouds (cloud-based annotation), and Pathologist-AI Interaction (PAI) for continuous model improvement. Trained on Akoya-scanned images only, the model uses color augmentation and image appearance migration to address scanner variations. We evaluate it on whole slide images (WSIs) from five other scanners and conduct validations with pathologists to assess AI efficacy and PAI. RESULTS: Our model achieves an average F1 score of 0.80 on annotations and a Quadratic Weighted Kappa of 0.71 on WSIs for Akoya-scanned images. Applying our generalization solution increases the average F1 score for Gleason pattern detection from 0.73 to 0.88 on images from other scanners. The model accelerates Gleason scoring by 43% while maintaining accuracy. Additionally, PAI improves annotation efficiency by 2.5 times and leads to further improvements in model performance. CONCLUSIONS: This pipeline represents a notable advancement in AI-assisted Gleason grading toward improved consistency, accuracy, and efficiency. Unlike previous methods limited by scanner specificity, our model performs strongly across diverse scanners, paving the way for its seamless integration into clinical workflows.


Gleason grading is a well-accepted diagnostic standard to assess the severity of prostate cancer in patients' tissue samples, based on how abnormal the cells in their prostate tumor look under a microscope. This process can be complex and time-consuming. We explore how artificial intelligence (AI) can help pathologists perform Gleason grading more efficiently and consistently. We build an AI-based system which automatically checks image quality, standardizes the appearance of images from different equipment, learns from pathologists' feedback, and constantly improves model performance. Testing shows that our approach achieves consistent results across different equipment and improves efficiency of the grading process. With further testing and implementation in the clinic, our approach could potentially improve prostate cancer diagnosis and management.
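As a point of reference, the slide-level agreement metric quoted above (Quadratic Weighted Kappa) and the annotation-level F1 score are standard classification metrics; the sketch below shows how they are typically computed with scikit-learn on hypothetical grade lists, and is not taken from the paper's code.

```python
from sklearn.metrics import cohen_kappa_score, f1_score

# Hypothetical per-slide Gleason grade groups (1-5) from a pathologist and the model.
pathologist_grades = [1, 2, 3, 3, 4, 5, 2, 3]
model_grades       = [1, 2, 3, 4, 4, 5, 2, 2]

# Quadratic Weighted Kappa: disagreements are penalized by the squared distance
# between grade groups, so confusing 3 with 4 costs less than confusing 1 with 5.
qwk = cohen_kappa_score(pathologist_grades, model_grades, weights="quadratic")

# Macro-averaged F1 across grades, analogous to an annotation-level evaluation.
f1 = f1_score(pathologist_grades, model_grades, average="macro")

print(f"Quadratic Weighted Kappa: {qwk:.2f}, macro F1: {f1:.2f}")
```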

2.
Bioinformatics ; 40(1)2024 01 02.
Article in English | MEDLINE | ID: mdl-38058211

ABSTRACT

MOTIVATION: Pediatric kidney disease is a widespread, progressive condition that severely impacts the growth and development of children. Chronic kidney disease is often more insidious in children than in adults, usually requiring a renal biopsy for diagnosis. Biopsy evaluation requires extensive examination by trained pathologists, which can be tedious and prone to human error. In this study, we propose an artificial intelligence (AI) method to assist pathologists in accurate segmentation and classification of pediatric kidney structures, named AI-based Pediatric Kidney Diagnosis (APKD). RESULTS: We collected data from 2935 pediatric patients diagnosed with kidney disease for the development of APKD. The dataset comprised 93 932 histological structures annotated manually by three skilled nephropathologists. APKD achieved an average accuracy of 94% across kidney structure categories, including 99% for the glomerulus. We found a strong correlation between model-based and manual glomerulus detection (Spearman correlation coefficient r = 0.98, P < .001; intraclass correlation coefficient ICC = 0.98, 95% CI = 0.96-0.98). Compared to manual detection, APKD was approximately 5.5 times faster in segmenting glomeruli. Finally, we show how the pathological features extracted by APKD can identify focal abnormalities of the glomerular capillary wall to aid in the early diagnosis of pediatric kidney disease. AVAILABILITY AND IMPLEMENTATION: https://github.com/ChunyueFeng/Kidney-DataSet.
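For illustration only, the agreement statistic quoted above (Spearman r between automated and manual glomerulus detection) can be reproduced with SciPy; the per-biopsy counts below are invented and are not from the APKD code.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-biopsy glomerulus counts: manual annotation vs. APKD-style detection.
manual_counts = np.array([12, 8, 15, 20, 5, 9, 18, 11])
model_counts  = np.array([11, 8, 16, 19, 5, 10, 18, 12])

# Spearman correlation measures monotonic agreement between the two count series.
r, p_value = spearmanr(manual_counts, model_counts)
print(f"Spearman r = {r:.2f}, p = {p_value:.3g}")

# The intraclass correlation coefficient (ICC) reported in the abstract would
# additionally require a dedicated routine (e.g. pingouin.intraclass_corr).
```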


Subject(s)
Artificial Intelligence , Renal Insufficiency, Chronic , Adult , Humans , Child , Kidney/diagnostic imaging , Kidney/pathology , Renal Insufficiency, Chronic/pathology
3.
Bioinformatics ; 38(23): 5307-5314, 2022 11 30.
Article in English | MEDLINE | ID: mdl-36264128

ABSTRACT

MOTIVATION: Differentiating the 12 stages of the mouse seminiferous epithelial cycle is vital to understanding the dynamic spermatogenesis process. However, it is challenging because adjacent spermatogenic stages are morphologically similar. Distinguishing Stages I-III from Stages IV-V is important for histologists to understand sperm development in wildtype mice and spermatogenic defects in infertile mice. To achieve this, we propose a novel pipeline for computerized spermatogenesis staging (CSS). RESULTS: The CSS pipeline comprises four parts: (i) a seminiferous tubule segmentation model is developed to extract every single tubule; (ii) a multi-scale learning (MSL) model is developed to integrate local and global information of a seminiferous tubule to distinguish Stages I-V from Stages VI-XII; (iii) a multi-task learning (MTL) model is developed to segment the multiple testicular cell types for Stages I-V without an exhaustive requirement for manual annotation; (iv) a set of 204-dimensional image-derived features is developed to discriminate Stages I-III from Stages IV-V by capturing cell-level and image-level representations. Experimental results suggest that the proposed MSL and MTL models outperform classic single-scale and single-task models when manual annotation is limited. In addition, the proposed image-derived features are discriminative between Stages I-III and Stages IV-V. In conclusion, the CSS pipeline not only provides histologists with a solution to facilitate quantitative analysis for spermatogenesis stage identification but also helps them uncover novel computerized image-derived biomarkers. AVAILABILITY AND IMPLEMENTATION: https://github.com/jydada/CSS. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
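The multi-scale learning idea, combining a high-magnification local patch with a downsampled view of the whole tubule, could be sketched roughly as below; this is a minimal PyTorch illustration under assumed input sizes, not the published CSS/MSL architecture.

```python
import torch
import torch.nn as nn

class TwoScaleClassifier(nn.Module):
    """Toy multi-scale model: one branch sees a local patch, the other a global view."""
    def __init__(self, num_classes=2):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.local_branch = branch()    # high-magnification crop inside the tubule
        self.global_branch = branch()   # downsampled image of the whole tubule
        self.head = nn.Linear(64, num_classes)  # fuse the two 32-d descriptors

    def forward(self, local_patch, global_view):
        feats = torch.cat([self.local_branch(local_patch),
                           self.global_branch(global_view)], dim=1)
        return self.head(feats)

# Example: distinguish Stages I-V from Stages VI-XII with dummy tensors.
model = TwoScaleClassifier(num_classes=2)
logits = model(torch.randn(4, 3, 128, 128), torch.randn(4, 3, 128, 128))
print(logits.shape)  # torch.Size([4, 2])
```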


Subject(s)
Semen , Spermatogenesis , Mice , Male , Animals , Seminiferous Tubules , Testis/anatomy & histology
4.
Bioinformatics ; 38(18): 4395-4402, 2022 09 15.
Article in English | MEDLINE | ID: mdl-35881697

ABSTRACT

MOTIVATION: The DNA fibre assay has potential applications in genomic medicine, cancer and stem cell research at the single-molecule level. A major challenge for the clinical and research implementation of DNA fibre assays is the slow speed of manual analysis, which limits clinical actionability. While automatic detection of DNA fibres speeds up this process considerably, currently available public software has limited user-interface features for manual correction of results, which in turn limits its accuracy and its ability to account for atypical structures that may be important in diagnostic or investigative studies. We recognize that core improvements can be made to the GUI to allow direct interaction with automatic results, preserving accuracy and enhancing the versatility of automatic DNA fibre detection for use in a variety of situations. RESULTS: To address the unmet needs of diverse DNA fibre analysis investigations, we propose DNA Stranding, an open-source software package that performs accurate fibre length quantification (13.22% mean relative error) and fibre pattern recognition (R > 0.93), with up to six fibre patterns supported. With the graphical interface we developed, users can conduct semi-automatic analyses that combine the advantages of automatic and manual processes to improve workflow efficiency without compromising accuracy. AVAILABILITY AND IMPLEMENTATION: The software package is available at https://github.com/lgole/DNAStranding. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
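The headline accuracy figure above (13.22% mean relative error on fibre length) corresponds to a simple calculation; the NumPy sketch below uses made-up lengths purely to show the metric, not data from DNA Stranding.

```python
import numpy as np

# Hypothetical fibre lengths in micrometres: ground truth vs. automatic measurement.
true_lengths = np.array([12.0, 25.5, 8.2, 40.1, 17.3])
measured     = np.array([11.1, 27.0, 8.9, 37.5, 18.0])

# Mean relative error: average of |measured - true| / true, expressed as a percentage.
mre = np.mean(np.abs(measured - true_lengths) / true_lengths) * 100
print(f"Mean relative error: {mre:.2f}%")
```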


Subject(s)
DNA , Software , Workflow , DNA Replication
5.
Comput Biol Med ; 146: 105520, 2022 07.
Article in English | MEDLINE | ID: mdl-35537220

ABSTRACT

Intrahepatic cholangiocarcinoma (ICC) is a cancer that originates from the liver's secondary ductal epithelium or its branches. Because of the lack of early-stage clinical symptoms and very high mortality, the 5-year postoperative survival rate is only about 35%. A critical step toward improving patients' survival is accurately predicting their survival status and giving appropriate treatment. The tumor microenvironment of ICC is the immediate environment on which tumor cell growth depends. The differentiation of tumor glands, the stroma status, and the tumor-infiltrating lymphocytes in this environment are closely related to tumor progression, so it is crucial to develop a computerized system for characterizing the tumor environment. This work develops quantitative histomorphological features that describe lymphocyte density distribution at the cell level and the different components at the tumor's tissue level in H&E-stained whole slide images (WSIs), and explores whether these features can stratify patients' survival. The study comprised 127 patients diagnosed with ICC after surgery; 78 cases were randomly chosen as the modeling set and the remaining 49 cases served as the testing set. Deep learning-based models were developed for tissue segmentation and lymphocyte detection in the WSIs. A total of 107 features, including different types of graph features on the WSIs, were extracted by exploring the histomorphological patterns of the identified tumor tissue and lymphocytes. The top 3 discriminative features were chosen with the mRMR algorithm via 5-fold cross-validation to predict patient survival. The model's performance was evaluated on the independent testing set, achieving an AUC of 0.6818 and a log-rank test p-value of 0.03. A multivariable Cox analysis controlling for TNM staging, γ-glutamyltransferase, and peritumoral Glisson's sheath invasion showed that our model could independently predict survival risk, with a p-value of 0.048 and an HR (95% confidence interval) of 2.90 (1.01-8.32). These results indicate that tissue-level composition and the cell-level global arrangement of lymphocytes can distinguish ICC patients' survival risk.
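The survival-analysis steps described above (a log-rank comparison of risk groups and a multivariable Cox model yielding a hazard ratio) can be prototyped with the lifelines package; the sketch below uses a synthetic DataFrame and illustrative column names, not the authors' data or covariates.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

# Synthetic cohort: follow-up time (months), death event flag, image-derived risk score,
# and a clinical covariate standing in for TNM stage.
df = pd.DataFrame({
    "time":       [12, 30, 7, 45, 22, 18, 60, 9, 33, 27],
    "event":      [1, 0, 1, 0, 1, 1, 0, 1, 0, 1],
    "risk_score": [0.9, 0.2, 0.8, 0.1, 0.7, 0.6, 0.3, 0.95, 0.25, 0.5],
    "tnm_stage":  [3, 1, 3, 1, 2, 2, 1, 3, 2, 2],
})

# Multivariable Cox model: hazard ratio for the image-derived score controlling for stage.
cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()

# Log-rank test between high- and low-risk groups defined by the median score.
high = df["risk_score"] >= df["risk_score"].median()
result = logrank_test(df.loc[high, "time"], df.loc[~high, "time"],
                      event_observed_A=df.loc[high, "event"],
                      event_observed_B=df.loc[~high, "event"])
print(f"log-rank p-value: {result.p_value:.3f}")
```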


Subject(s)
Bile Duct Neoplasms , Cholangiocarcinoma , Bile Duct Neoplasms/diagnostic imaging , Bile Duct Neoplasms/pathology , Bile Ducts, Intrahepatic/pathology , Bile Ducts, Intrahepatic/surgery , Cholangiocarcinoma/diagnostic imaging , Cholangiocarcinoma/pathology , Humans , Neoplasm Staging , Tumor Microenvironment
6.
Cytometry A ; 101(8): 658-674, 2022 08.
Article in English | MEDLINE | ID: mdl-35388957

ABSTRACT

The development of mouse spermatozoa is a continuous process from spermatogonia, through spermatocytes and spermatids, to mature sperm. These developing germ cells (spermatogonia, spermatocytes, and spermatids), together with supporting Sertoli cells, are all enclosed inside the seminiferous tubules of the testis; their identification is key to testis histology and pathology analysis. Automated segmentation of all these cells is a challenging task because of their dynamic changes across stages, and accurate segmentation of testicular cells is critical for developing computerized spermatogenesis staging. In this paper, we present a novel segmentation model, SED-Net, which incorporates a squeeze-and-excitation (SE) module and a dense unit. The SE module optimizes and obtains features from different channels, whereas the dense unit uses fewer parameters to enhance feature reuse. A human-in-the-loop strategy, named deep interactive learning, is developed to achieve better segmentation performance while reducing the workload and time consumption of manual annotation. Across a cohort of 274 seminiferous tubules from stages VI to VIII, SED-Net achieved a pixel accuracy of 0.930, a mean pixel accuracy of 0.866, a mean intersection over union of 0.710, and a frequency-weighted intersection over union of 0.878 for the four types of testicular cell segmentation. There is no significant difference between manually annotated tubules and SED-Net segmentation results in cell composition analysis for tubules from stages VI to VIII. In addition, we performed cell composition analysis on 2346 segmented seminiferous tubule images from 12 segmented testicular section results. The results provide quantitation of the various testicular cell types across all 12 stages and reflect the cell variation tendency during the development of mouse spermatozoa. The method not only enables analysis of cell morphology and staging during the development of mouse spermatozoa but could also potentially be applied to the study of reproductive diseases such as infertility.
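A squeeze-and-excitation (SE) module of the kind SED-Net incorporates is a small, well-known building block; the PyTorch sketch below shows its generic form, not the exact SED-Net implementation.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Generic squeeze-and-excitation block: reweights channels using global context."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global average per channel
        self.fc = nn.Sequential(                       # excitation: channel attention weights
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                   # rescale each feature channel

# Example: apply channel attention to a 64-channel feature map.
features = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(features).shape)  # torch.Size([2, 64, 32, 32])
```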


Subject(s)
Simulation Training , Testis , Animals , Humans , Male , Mice , Semen , Seminiferous Tubules/anatomy & histology , Seminiferous Tubules/metabolism , Sertoli Cells/metabolism , Spermatids , Spermatogenesis , Spermatozoa
7.
Magn Reson Med ; 87(1): 431-445, 2022 01.
Article in English | MEDLINE | ID: mdl-34337773

ABSTRACT

PURPOSE: MRI of organs and musculoskeletal structures in the female pelvis presents a unique display of pelvic anatomy. Automated segmentation of pelvic structures plays an important role in personalized diagnosis and treatment of pelvic structure diseases. Pelvic organ systems are very complicated, making 3D segmentation of the many pelvic structures on MRI a challenging task. METHODS: A new Scale- and Slice-aware Net (S2aNet) is presented for 3D dense segmentation of 54 organs and musculoskeletal structures in female pelvic MR images. A Scale-aware module is designed to capture the spatial and semantic information of different-scale structures. A Slice-aware module is introduced to model the similar spatial relationships of consecutive slices in 3D data. Moreover, S2aNet leverages a weight-adaptive loss optimization strategy to reinforce supervision with more discriminative capability on hard samples and categories. RESULTS: Experiments were performed on a pelvic MRI cohort of 27 MR images from 27 patient cases. Across the cohort and the 54 manually delineated categories of organs and musculoskeletal structures, S2aNet was shown to outperform the UNet framework and other state-of-the-art fully convolutional networks in terms of sensitivity, Dice similarity coefficient and relative volume difference. CONCLUSION: The experimental results on the pelvic 3D MR dataset show that the proposed S2aNet achieves excellent segmentation results compared to other state-of-the-art models. To our knowledge, S2aNet is the first model to achieve 3D dense segmentation of 54 musculoskeletal structures on pelvic MRI, and it will be extended toward clinical application as more cases become available in the future.
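The evaluation metrics named above (Dice similarity coefficient and relative volume difference) are straightforward to compute per structure; the NumPy sketch below operates on hypothetical binary 3D masks and is independent of the S2aNet code.

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary 3D masks."""
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum())

def relative_volume_difference(pred, gt):
    """Signed relative volume difference of the prediction with respect to ground truth."""
    return (pred.sum() - gt.sum()) / gt.sum()

# Hypothetical segmentation of one pelvic structure on a small 3D volume.
gt = np.zeros((16, 64, 64), dtype=bool);  gt[4:12, 20:40, 20:40] = True
pred = np.zeros_like(gt);                 pred[5:12, 22:42, 20:40] = True

print(f"Dice: {dice_coefficient(pred, gt):.3f}, RVD: {relative_volume_difference(pred, gt):+.3f}")
```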


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Female , Humans , Magnetic Resonance Imaging , Pelvis/diagnostic imaging , Tomography, X-Ray Computed
8.
Med Image Anal ; 70: 101835, 2021 05.
Article in English | MEDLINE | ID: mdl-33676102

ABSTRACT

Spermatogenesis in mammals is a cyclic process of spermatogenic cell development in the seminiferous epithelium that can be subdivided into 12 sequential stages. Histological staging analysis of testis sections, specifically of seminiferous tubule cross-sections, is the only effective method to evaluate the quality of the spermatogenic process and to determine developmental defects leading to infertility. Such staging analysis, however, is tedious and time-consuming, and it may take a long time to become proficient. We have now developed a Computerized Staging system of Spermatogenesis (CSS) for mouse testis sections by learning from an expert with decades of experience in mouse testis staging. The development of the CSS system comprised three major parts: 1) developing computational image analysis models for mouse testis sections; 2) automated classification of each seminiferous tubule cross-section into three stage groups: Early Stages (ES: stages I-V), Middle Stages (MS: stages VI-VIII), and Late Stages (LS: stages IX-XII); 3) automated classification of MS into distinct stages VI, VII-mVIII, and late VIII based on newly developed histomorphological features. A cohort of 40 H&E-stained normal mouse testis sections was assembled; 28 cross-sections were used to develop the tubule region segmentation, spermatogenic cell type and multi-concentric-layer segmentation models. The remaining 12 testis cross-sections, containing approximately 2314 tubules whose stages were manually annotated by two expert testis histologists, served as the basis for developing the CSS system. In identifying ES, MS, and LS, the mean ± standard deviation (MSD) accuracies were 0.93 ± 0.03, 0.94 ± 0.11, and 0.89 ± 0.05 for the CSS system, versus 0.85 ± 0.12, 0.88 ± 0.07, and 0.96 ± 0.04 for a histologist with 5 years of experience. In identifying stages VI, VII-mVIII, and late VIII, the corresponding accuracies were 0.74 ± 0.03, 0.85 ± 0.04, and 0.78 ± 0.06 for the CSS system, versus 0.34 ± 0.18, 0.78 ± 0.16, and 0.44 ± 0.25 for the same histologist. In terms of time, it takes on average 3 hours for a histologist and 1.87 hours for the CSS system to finish evaluating an entire testis section (computed with a PC (i7-6800K 4.0 GHz with 32 GB of RAM & 256 GB SSD) and a Titan 1080Ti GPU). Therefore, the CSS system is more accurate and faster than a human histologist at staging, and further optimization and development will not only lead to complete staging of all 12 stages of mouse spermatogenesis but could also aid in the future diagnosis of human infertility. Moreover, the top-ranking histomorphological features identified by the CSS classifier are consistent with the primary features used by histologists in discriminating stages VI, VII-mVIII, and late VIII.
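The first classification step, mapping each tubule's stage (I-XII) to the ES/MS/LS groups and scoring per-group accuracy, is shown in the small sketch below with invented labels; it is not the CSS code.

```python
import numpy as np

# Map the 12 seminiferous epithelial cycle stages to the three stage groups.
GROUP = {**{s: "ES" for s in range(1, 6)},    # stages I-V
         **{s: "MS" for s in range(6, 9)},    # stages VI-VIII
         **{s: "LS" for s in range(9, 13)}}   # stages IX-XII

# Hypothetical per-tubule annotations (1-12) and predicted stage groups.
true_stages = np.array([2, 7, 10, 5, 8, 12, 1, 6, 9, 3])
pred_groups = np.array(["ES", "MS", "LS", "ES", "MS", "LS", "MS", "MS", "LS", "ES"])

true_groups = np.array([GROUP[s] for s in true_stages])
for g in ("ES", "MS", "LS"):
    mask = true_groups == g
    acc = (pred_groups[mask] == g).mean()
    print(f"{g}: accuracy {acc:.2f} over {mask.sum()} tubules")
```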


Subject(s)
Spermatogenesis , Testis , Animals , Male , Mice , Seminiferous Epithelium , Seminiferous Tubules
9.
Comput Methods Programs Biomed ; 194: 105528, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32470903

ABSTRACT

BACKGROUND AND OBJECTIVE: The Gleason grading system is currently the clinical gold standard for determining prostate cancer aggressiveness. Prostate cancer is typically classified into one of 5 grades, with 1 representing the most indolent disease and 5 reflecting the most aggressive disease. Grades 3 and 4 are the most common and the most difficult patterns to discriminate in clinical practice. Even though the degree of gland differentiation is the strongest determinant of Gleason grade, manual grading is subjective and is hampered by substantial inter-reader disagreement, especially for intermediate grade groups. METHODS: To capture the topological characteristics and the degree of connectivity between nuclei around the gland, the concept of the Homology Profile (HP) for prostate cancer grading is presented in this paper. HP is an algebraic tool whereby certain algebraic invariants are computed based on the structure of a topological space. We utilized Statistical Representation of Homology Profile (SRHP) features to quantify the extent of glandular differentiation. The quantitative characteristics representing each image patch are fed into a supervised classifier for discrimination of grade patterns 3 and 4. RESULTS: On the basis of the novel homology profile, we evaluated 43 digitized prostate biopsy slides annotated for regions corresponding to Grades 3 and 4. The quantitative patch-level evaluation showed that our approach achieved an Area Under the Curve (AUC) of 0.96 and an accuracy of 0.89 in discriminating Grade 3 and Grade 4 patches. Our approach was superior to comparative methods, including handcrafted cellular features, the Stacked Sparse Autoencoder (SSAE) algorithm and an end-to-end supervised learning method (DLGg). Slide-level quantitative and qualitative evaluations also reflect the ability of our approach to discriminate Gleason Grade 3 from Grade 4 patterns on H&E tissue images. CONCLUSIONS: We presented a novel Statistical Representation of Homology Profile (SRHP) approach for automated Gleason grading on prostate biopsy slides. The most discriminating topological descriptions of cancerous regions for Grades 3 and 4 in prostate cancer were identified. Moreover, these homology profile characteristics are interpretable, visually meaningful and highly consistent with the rubric employed by pathologists for Gleason grading.
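To make the homology-profile idea concrete: persistence diagrams can be computed from nuclei centroid coordinates with an off-the-shelf topology library, and simple statistics over the resulting lifetimes act as SRHP-style features. The sketch below assumes the ripser package is installed and uses random points; it illustrates the general technique, not the authors' pipeline.

```python
import numpy as np
from ripser import ripser  # assumed available: pip install ripser

# Hypothetical nuclei centroids (x, y) detected around a gland in an image patch.
rng = np.random.default_rng(0)
nuclei_xy = rng.uniform(0, 100, size=(80, 2))

# Persistence diagrams for connected components (H0) and loops (H1),
# built from the Vietoris-Rips filtration on the point cloud.
diagrams = ripser(nuclei_xy, maxdim=1)["dgms"]

def lifetime_stats(dgm):
    """Summary statistics over finite persistence lifetimes (a simple SRHP-style feature set)."""
    lifetimes = dgm[:, 1] - dgm[:, 0]
    lifetimes = lifetimes[np.isfinite(lifetimes)]
    return [lifetimes.mean(), lifetimes.std(), lifetimes.max(), len(lifetimes)]

features = lifetime_stats(diagrams[0]) + lifetime_stats(diagrams[1])
print(features)  # such a vector would be fed to a supervised classifier for Grade 3 vs. Grade 4
```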


Subject(s)
Image Interpretation, Computer-Assisted , Prostatic Neoplasms , Biopsy , Humans , Male , Neoplasm Grading , Prostatic Neoplasms/diagnostic imaging
10.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 37(1): 10-18, 2020 Feb 25.
Article in Chinese | MEDLINE | ID: mdl-32096372

ABSTRACT

Lung cancer is the most common malignant tumor of the lung and the cancer with the highest morbidity and mortality worldwide. For patients with advanced non-small cell lung cancer harboring epidermal growth factor receptor (EGFR) gene mutations, targeted drugs can be used for targeted therapy. There are many methods for detecting EGFR gene mutations, but each has its own advantages and disadvantages. This study aims to predict the risk of EGFR gene mutation by exploring the association between the histomorphological features of hematoxylin-eosin (HE)-stained whole-slide pathological images of non-small cell lung cancer and the patient's EGFR mutation status. Experimental results show that the EGFR mutation risk prediction model proposed in this paper reached an area under the curve (AUC) of 72.4% on the test set, with an accuracy of 70.8%, revealing a close relationship between histomorphological features in whole-slide pathological images of non-small cell lung cancer and EGFR gene mutations. In this work, molecular phenotypes were analyzed at the scale of whole-slide pathological images, and pathology was combined with molecular omics to establish the EGFR mutation risk prediction model, revealing the correlation between whole-slide pathological images and EGFR mutation risk and providing a promising research direction for this field.
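The reported test-set figures (AUC and accuracy) correspond to standard binary classification metrics; the scikit-learn sketch below uses fabricated slide-level predictions purely to show how such numbers are obtained, and is unrelated to the authors' model.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score

# Hypothetical slide-level labels (1 = EGFR-mutant) and predicted mutation probabilities.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_prob = np.array([0.81, 0.35, 0.64, 0.72, 0.48, 0.22, 0.55, 0.41, 0.90, 0.30])

auc = roc_auc_score(y_true, y_prob)                         # area under the ROC curve
acc = accuracy_score(y_true, (y_prob >= 0.5).astype(int))   # accuracy at a 0.5 threshold
print(f"AUC = {auc:.3f}, accuracy = {acc:.3f}")
```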


Subject(s)
Carcinoma, Non-Small-Cell Lung/genetics , Deep Learning , Lung Neoplasms/genetics , ErbB Receptors/genetics , Humans , Mutation