Results 1 - 5 of 5
1.
Sci Rep ; 12(1): 22430, 2022 12 27.
Article in English | MEDLINE | ID: mdl-36575209

ABSTRACT

Automatic diagnosis of malignant prostate cancer from mpMRI has been studied heavily in recent years, with model interpretation and domain drift remaining the main roadblocks to clinical use. As an extension of our previous work, we trained on a public cohort of 201 patients, using cropped 2.5D slices of the prostate gland as input and searching the model space for the optimal model with autoKeras. As an innovation, separate models were trained and tested for the peripheral zone (PZ) and central gland (CG); the resulting PZ and CG detectors proved effective at highlighting the most suspicious slices in a sequence, which should greatly ease the workload for physicians.
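A minimal sketch of what an AutoKeras model-space search over cropped slices might look like, assuming hypothetical NumPy arrays of 2.5D patches (adjacent slices stacked as channels) and binary labels; this is illustrative only, not the authors' exact pipeline.

```python
# Illustrative sketch: searching a model space with AutoKeras for
# slice-level malignancy classification (data shapes/paths are assumptions).
import numpy as np
import autokeras as ak

# Assumed inputs: cropped 2.5D slices with 3 adjacent slices per sample,
# and binary labels (1 = clinically significant lesion).
x_train = np.load("pz_slices_train.npy")   # shape (N, 64, 64, 3) -- assumption
y_train = np.load("pz_labels_train.npy")   # shape (N,)

# AutoKeras searches candidate CNN architectures and hyperparameters.
clf = ak.ImageClassifier(max_trials=10, overwrite=True, seed=42)
clf.fit(x_train, y_train, validation_split=0.2, epochs=20)

# Export the best model found during the search, e.g. for ranking
# slices of a new study by predicted suspicion.
best_model = clf.export_model()
best_model.summary()
```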


Subject(s)
Deep Learning , Multiparametric Magnetic Resonance Imaging , Prostatic Neoplasms , Male , Humans , Magnetic Resonance Imaging , Prostatic Neoplasms/diagnosis , Prostatic Neoplasms/pathology , Prostate/pathology
2.
Breast ; 64: 143-150, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35691250

ABSTRACT

BACKGROUND: As a special reproductive hormone and ovarian reserve indicator, the role of anti-Müllerian hormone (AMH) in premenopausal women with breast cancer deserves further study. METHODS: We conducted an in-depth analysis of the data from the EGOFACT study (NCT02518191), a phase III, randomized, controlled trial involving premenopausal female breast cancer patients in two parallel groups: chemotherapy with or without gonadotropin-releasing hormone analogs (GnRHa). Three hundred thirty premenopausal women aged 25-49 years with operable stage I to III breast cancer were included in this study. The characteristics of ovarian reserve changes marked by AMH in the EGOFACT study and the factors affecting ovarian function in premenopausal women with breast cancer were analyzed. RESULTS: The AMH level of the chemotherapy alone group decreased gradually within one year, while the AMH level of the GnRHa group was significantly higher as early as 6 months after chemotherapy and recovered to close to the baseline level 12 months after chemotherapy (F = 34.991, P < 0.001). Correlation analysis showed that the factors affecting AMH levels mainly included age, menarche age, body mass index (BMI), reproductive history, baseline follicle stimulating hormone (FSH) level, pathological stage and GnRHa application, but they had different effects on the incidence of premature ovarian insufficiency (POI) at different periods. Multivariate logistic regression analysis showed that menarche age younger than 14 years (OR 0.470 [0.259, 0.852], P = 0.013), baseline AMH level higher than 0.5 ng/mL (OR 9.590 [3.366, 27.320], P < 0.001), pathological stage I (OR 0.315 [0.124, 0.798], P = 0.015) and GnRHa application (OR 0.090 [0.045, 0.183], P < 0.001) were independent factors conducive to protection of ovarian reserve, as well as to recovery of ovarian reserve. CONCLUSIONS: Age, menarche age, baseline AMH level, and GnRHa application are the most important influencing factors for ovarian reserve in premenopausal women with breast cancer. TRIAL REGISTRATION: ClinicalTrials.gov, NCT02518191, registered on Aug 5, 2015.
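A hedged sketch of the kind of multivariate logistic regression used to derive odds ratios such as those reported above, using statsmodels; the CSV file and column names are hypothetical stand-ins, not the EGOFACT analysis code.

```python
# Minimal sketch of a multivariate logistic regression estimating odds
# ratios (ORs) for premature ovarian insufficiency (POI).
# The DataFrame and variable names are assumptions for illustration only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("egofact_cohort.csv")  # hypothetical analysis table

# Binary predictors coded 0/1; outcome poi = 1 if POI occurred.
X = df[["menarche_lt_14", "baseline_amh_gt_0_5", "stage_I", "gnrha_used"]]
X = sm.add_constant(X)
y = df["poi"]

model = sm.Logit(y, X).fit(disp=0)

# Odds ratios with 95% confidence intervals, analogous to the values
# reported in the abstract (e.g. OR 0.090 [0.045, 0.183] for GnRHa use).
or_table = pd.DataFrame({
    "OR": np.exp(model.params),
    "CI_low": np.exp(model.conf_int()[0]),
    "CI_high": np.exp(model.conf_int()[1]),
    "p": model.pvalues,
})
print(or_table)
```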


Subject(s)
Breast Neoplasms , Ovarian Reserve , Primary Ovarian Insufficiency , Anti-Mullerian Hormone , Breast Neoplasms/pathology , Female , Humans , Premenopause
3.
Adv Radiat Oncol ; 5(3): 473-481, 2020.
Article in English | MEDLINE | ID: mdl-32529143

ABSTRACT

PURPOSE: Accurate delineation of the prostate gland and intraprostatic lesions (ILs) is essential for prostate cancer dose-escalated radiation therapy. The aim of this study was to develop a deep neural network approach to magnetic resonance image analysis that helps clinicians detect and delineate ILs. METHODS AND MATERIALS: We trained and evaluated mask region-based convolutional neural networks (Mask R-CNNs) to perform prostate gland and IL segmentation. There were 2 cohorts in this study: 78 public patients (cohort 1) and 42 private patients from our institution (cohort 2). Prostate gland segmentation was performed using T2-weighted images (T2WIs), whereas IL segmentation was performed using T2WIs and coregistered apparent diffusion coefficient maps with prostate patches cropped out. The IL segmentation model was extended to select 5 highly suspicious volumetric lesions within the entire prostate. RESULTS: The Mask R-CNN model segmented the prostate with Dice similarity coefficients (DSCs) of 0.88 ± 0.04, 0.86 ± 0.04, and 0.82 ± 0.05; sensitivities of 0.93, 0.95, and 0.95; and specificities of 0.98, 0.85, and 0.90 in the public validation, public testing, and private testing sets, respectively. ILs were segmented with DSCs of 0.62 ± 0.17, 0.59 ± 0.14, and 0.38 ± 0.19; sensitivities of 0.55 ± 0.30, 0.63 ± 0.28, and 0.22 ± 0.24; and specificities of 0.974 ± 0.010, 0.964 ± 0.015, and 0.972 ± 0.015 in the same three sets when trained with patients from cohort 1 only. When trained with patients from both cohorts, the corresponding values were DSCs of 0.64 ± 0.11, 0.56 ± 0.15, and 0.46 ± 0.15; sensitivities of 0.57 ± 0.23, 0.50 ± 0.28, and 0.33 ± 0.17; and specificities of 0.980 ± 0.009, 0.969 ± 0.016, and 0.977 ± 0.013. CONCLUSIONS: Our framework operates as an end-to-end system that automatically segments the prostate gland and identifies and delineates highly suspicious ILs within the entire prostate, demonstrating its potential to assist clinicians in tumor delineation.
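For reference, the Dice similarity coefficient reported above can be computed from binary segmentation masks as in this short sketch; the mask shapes and example regions are illustrative.

```python
# Dice similarity coefficient (DSC) between binary masks: DSC = 2|A∩B| / (|A|+|B|).
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Compute DSC for two binary masks of the same shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps))

# Example: DSC between a predicted lesion mask and a reference contour.
pred_mask = np.zeros((256, 256), dtype=np.uint8); pred_mask[100:150, 100:150] = 1
true_mask = np.zeros((256, 256), dtype=np.uint8); true_mask[110:160, 105:155] = 1
print(f"DSC = {dice_coefficient(pred_mask, true_mask):.3f}")
```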

4.
Med Phys ; 47(9): 4077-4086, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32449176

ABSTRACT

PURPOSE: Deep learning models have had great success in disease classification using large data pools of skin cancer images or lung X-rays. However, data scarcity has been the roadblock to applying deep learning models directly to prostate multiparametric MRI (mpMRI). Although model interpretation has been studied heavily for natural images in the past few years, interpretation of deep learning models trained on medical images has been lacking. In this paper, an efficient convolutional neural network (CNN) was developed, and model interpretation at various convolutional layers was systematically analyzed to improve the understanding of how a CNN interprets multimodality medical images and of the predictive power of the features at each layer. The small-sample-size problem was addressed by feeding the intermediate features into a traditional classification algorithm, the weighted extreme learning machine (wELM), with the imbalanced distribution among output categories taken into consideration. METHODS: The training data came from a retrospective set of prostate MR studies from the SPIE-AAPM-NCI PROSTATEx Challenge held in 2017. Three hundred twenty biopsied lesions from 201 prostate cancer patients were diagnosed and labeled as clinically significant (malignant) or not significant (benign). All studies included T2-weighted (T2W), proton density-weighted (PD-W), dynamic contrast-enhanced (DCE), and diffusion-weighted (DW) imaging. After registration and lesion-based normalization, a CNN with four convolutional layers was developed and trained with tenfold cross-validation. The features from intermediate layers were then extracted as input to the wELM to test the discriminative power of each individual layer. The best-performing model from the ten folds was chosen to be tested on a holdout cohort from two sources. Feature maps after each convolutional layer were then visualized to monitor the trend as the layers deepened. Scatter plots were used to visualize the transformation of the data distribution. Finally, a class activation map was generated to highlight the region of interest from the model's perspective. RESULTS: Experimental trials indicated that the best input for the CNN was the modality combination of T2W, apparent diffusion coefficient (ADC), and DWIb50. The convolutional features from the CNN paired with a weighted extreme learning classifier showed substantial performance gains compared with an end-to-end trained CNN. The feature-map visualization reveals findings similar to those on natural images: lower layers tend to learn low-level features such as edges and intensity changes, while higher layers learn more abstract, task-related concepts such as the lesion region. The generated saliency map showed that the model was able to focus on the region of interest where the lesion resided and to filter out background information such as the prostate boundary and rectum. CONCLUSIONS: This work designs a customized workflow for the small, imbalanced dataset of prostate mpMRI in which features are extracted from a deep learning model and then analyzed by a traditional machine learning classifier. In addition, this work contributes to revealing how deep learning models interpret mpMRI for prostate cancer patient stratification.
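A hedged NumPy sketch of a weighted extreme learning machine fit on features taken from an intermediate CNN layer, with per-sample weights inversely proportional to class frequency; the hidden-layer size, activation, and regularization constant are assumptions, not the paper's exact configuration.

```python
# Sketch of a weighted ELM (wELM) classifier on extracted CNN features.
# All shapes and hyperparameters are illustrative assumptions.
import numpy as np

def train_welm(feats, labels, n_hidden=512, C=1.0, seed=0):
    """feats: (N, D) intermediate CNN features; labels: (N,) in {0, 1}."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((feats.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(feats @ W + b)                # random-projection hidden layer

    # Per-sample weights inversely proportional to class frequency, which is
    # how wELM compensates for the imbalanced benign/malignant split.
    counts = np.bincount(labels, minlength=2)
    w = 1.0 / counts[labels]
    T = np.where(labels == 1, 1.0, -1.0)

    # Closed-form regularized, weighted least squares for the output weights.
    HW = H * w[:, None]
    beta = np.linalg.solve(H.T @ HW + np.eye(n_hidden) / C, H.T @ (w * T))
    return W, b, beta

def predict_welm(feats, W, b, beta):
    return (np.tanh(feats @ W + b) @ beta > 0).astype(int)
```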


Subject(s)
Multiparametric Magnetic Resonance Imaging , Diffusion Magnetic Resonance Imaging , Humans , Male , Neural Networks, Computer , Prostate/diagnostic imaging , Retrospective Studies
5.
Article in English | MEDLINE | ID: mdl-24110524

ABSTRACT

This paper presents an approach to the detection and segmentation of liver tumors in 3D computed tomography (CT) images. Automatic tumor detection can be formulated as either a novelty-detection or a two-class classification problem. The method can also be used for tumor segmentation, where each voxel is assigned a label, either tumor or non-tumor. Each voxel is represented by a rich feature vector that distinguishes it from voxels of other classes, and a fast learning algorithm, the extreme learning machine (ELM), is trained as a voxel classifier. For automatic liver tumor detection, we propose and show that the ELM can be trained as a one-class classifier using only healthy liver samples, which yields a tumor detector based on novelty detection; we compare it with a two-class ELM. To extract the tumor boundary, we adopt a semi-automatic approach in which training samples are randomly selected in 3D space within a limited region of interest (ROI). The approach is validated on CT data from a group of patients, and the experiments show good detection and encouraging segmentation results.
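A hedged sketch of one-class, healthy-only training for novelty detection in the ELM style described above; the voxel feature vectors, constant regression target, and residual-percentile threshold are illustrative assumptions rather than the paper's exact formulation.

```python
# One-class novelty detection with an ELM-style random-projection regressor:
# train on healthy voxels only, flag voxels whose output deviates from the
# training target. Features and threshold rule are assumptions.
import numpy as np

rng = np.random.default_rng(1)

def fit_one_class_elm(healthy_feats, n_hidden=256, C=1.0):
    W = rng.standard_normal((healthy_feats.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(healthy_feats @ W + b)
    t = np.ones(len(healthy_feats))            # all healthy voxels map to 1
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ t)
    # Threshold from training residuals: large deviation from 1 => novelty.
    resid = np.abs(H @ beta - 1.0)
    return W, b, beta, np.percentile(resid, 99)

def detect_novel(feats, W, b, beta, thresh):
    return np.abs(np.tanh(feats @ W + b) @ beta - 1.0) > thresh

# Example with random stand-in voxel feature vectors.
healthy = rng.normal(size=(5000, 16))
params = fit_one_class_elm(healthy)
tumor_candidates = detect_novel(rng.normal(size=(100, 16)), *params)
```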


Subject(s)
Artificial Intelligence , Image Processing, Computer-Assisted/methods , Liver Neoplasms/diagnostic imaging , Algorithms , Humans , Software , Tomography, X-Ray Computed