1.
J Magn Reson Imaging ; 2024 May 10.
Article in English | MEDLINE | ID: mdl-38726477

ABSTRACT

BACKGROUND: Accurate determination of human epidermal growth factor receptor 2 (HER2) status is important for choosing optimal HER2-targeted treatment strategies. HER2-low is currently considered HER2-negative, but these patients may be eligible to receive new anti-HER2 antibody-drug conjugates. PURPOSE: To use breast MRI BI-RADS features to classify three HER2 levels: first to distinguish HER2-zero from HER2-low/positive (Task-1), and then to distinguish HER2-low from HER2-positive (Task-2). STUDY TYPE: Retrospective. POPULATION: 621 invasive ductal cancers: 245 HER2-zero, 191 HER2-low, and 185 HER2-positive. For Task-1, 488 cases were used for training and 133 for testing; for Task-2, 294 for training and 82 for testing. FIELD STRENGTH/SEQUENCE: 3.0 T; 3D T1-weighted DCE, short-tau inversion recovery (STIR) T2, and single-shot EPI DWI. ASSESSMENT: Pathological information and BI-RADS features were compared. Random forest was used to select MRI features, and four machine learning (ML) algorithms, decision tree (DT), support vector machine (SVM), k-nearest neighbors (k-NN), and artificial neural network (ANN), were applied to build models. STATISTICAL TESTS: Chi-square test, one-way analysis of variance, and Kruskal-Wallis test were performed, with P values <0.05 considered statistically significant. For the ML models, the generated probability was used to construct ROC curves. RESULTS: Peritumoral edema, the presence of multiple lesions, and non-mass enhancement (NME) showed significant differences. For distinguishing HER2-zero from non-zero (low + positive), multiple lesions, edema, margin, and tumor size were selected, and the k-NN model achieved the highest AUC: 0.86 in the training set and 0.79 in the testing set. For differentiating HER2-low from HER2-positive, multiple lesions, edema, and margin were selected, and the DT model achieved the highest AUC: 0.79 in the training set and 0.69 in the testing set. DATA CONCLUSION: BI-RADS features read by radiologists from preoperative MRI can be combined with feature selection and ML algorithms to build models that classify HER2 status and identify HER2-low cases. TECHNICAL EFFICACY: Stage 2.
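
As a concrete illustration of this pipeline (random-forest feature selection followed by a k-NN classifier evaluated with AUC), the sketch below shows one way it could be set up in scikit-learn. The CSV file, column names, number of selected features, and hyperparameters are hypothetical placeholders, not the study's actual data or settings.

```python
# Hedged sketch of random-forest feature selection + k-NN classification.
# "birads_features.csv" and its columns are hypothetical stand-ins.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("birads_features.csv")      # BI-RADS features read by radiologists
X = df.drop(columns=["her2_nonzero"])
y = df["her2_nonzero"]                       # 1 = HER2-low/positive, 0 = HER2-zero

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Rank features by random-forest importance and keep the top four,
# analogous to selecting multiple lesions, edema, margin, and tumor size.
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
top = X.columns[np.argsort(rf.feature_importances_)[::-1][:4]]

# Train k-NN on the selected features and report the test AUC.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train[top], y_train)
auc = roc_auc_score(y_test, knn.predict_proba(X_test[top])[:, 1])
print(f"Selected features: {list(top)}; test AUC = {auc:.2f}")
```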

2.
Phys Med Biol ; 69(5)2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38406849

ABSTRACT

MRI image segmentation is widely used in clinical practice as a prerequisite and key step in diagnosing brain tumors. Accurate automated segmentation of brain tumor images, which would ease clinicians' workload, has become a major research focus. Despite the success of fully supervised methods in brain tumor segmentation, challenges remain. Because annotating medical images is costly, the data available for training fully supervised methods are very limited. In addition, medical images are prone to noise and motion artifacts, which degrade image quality. In this work, we propose MAPSS, a motion-artifact-augmented pseudo-label network for semi-supervised segmentation. Our method combines motion-artifact data augmentation with a pseudo-label semi-supervised training framework. We conduct several experiments under different semi-supervised settings on the publicly available BraTS2020 brain tumor segmentation dataset. The experimental results show that MAPSS achieves accurate brain tumor segmentation with only a small amount of labeled data and remains robust on motion-artifact-influenced images. We also assess the generalization performance of MAPSS on the Left Atrium dataset. The algorithm can assist doctors in formulating treatment plans and improving treatment quality.


Subject(s)
Artifacts , Brain Neoplasms , Humans , Brain Neoplasms/diagnostic imaging , Algorithms , Heart Atria , Motion , Image Processing, Computer-Assisted
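
The distinctive ingredient here, motion-artifact data augmentation, can be illustrated with a short sketch. Below is a minimal, assumed implementation (not the MAPSS code) that simulates motion by replacing random k-space lines of an MR slice with lines from a slightly shifted copy, a common way to synthesize such artifacts; the input image is a random stand-in for a BraTS slice.

```python
# Hedged sketch: simulate an MRI motion artifact by mixing k-space lines
# from a shifted copy of the image, mimicking movement mid-acquisition.
import numpy as np

def motion_artifact(img, n_lines=12, shift=4, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    k = np.fft.fftshift(np.fft.fft2(img))                    # k-space of still image
    k_moved = np.fft.fftshift(np.fft.fft2(np.roll(img, shift, axis=0)))
    rows = rng.choice(img.shape[0], size=n_lines, replace=False)
    k[rows, :] = k_moved[rows, :]                            # "motion" during those lines
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))

slice_ = np.random.rand(240, 240)        # stand-in for an MR slice
augmented = motion_artifact(slice_)      # artifact-augmented training sample
```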
3.
Comput Biol Med ; 159: 106884, 2023 06.
Article in English | MEDLINE | ID: mdl-37071938

ABSTRACT

Breast cancer is the most common cancer in women. Ultrasound is a widely used screening tool because of its portability and ease of operation, while DCE-MRI highlights lesions more clearly and reveals tumor characteristics; both are noninvasive, radiation-free modalities for assessing breast cancer. Doctors make diagnoses and plan further treatment based on the sizes, shapes, and textures of breast masses shown on medical images, so automatic tumor segmentation via deep neural networks can assist doctors to some extent. To address challenges faced by popular deep neural networks, such as large parameter counts, lack of interpretability, and overfitting, we propose a segmentation network named Att-U-Node that uses attention modules to guide a neural ODE-based framework. Specifically, the network uses ODE blocks to build an encoder-decoder structure, with feature modeling by a neural ODE completed at each level. In addition, we propose an attention module that calculates coefficients and generates refined attention features for the skip connections. Three publicly available breast ultrasound image datasets (BUSI, BUS, and OASBUD) and a private breast DCE-MRI dataset are used to assess the efficiency of the proposed model; we also extend the model to 3D for tumor segmentation using data selected from the public QIN Breast DCE-MRI collection. Experiments show that the proposed model achieves competitive results compared with related methods while mitigating the common problems of deep neural networks.


Subject(s)
Breast Neoplasms , Mammary Neoplasms, Animal , Female , Humans , Animals , Breast Neoplasms/diagnostic imaging , Breast , Neural Networks, Computer , Image Processing, Computer-Assisted
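
To make the two building blocks concrete, here is a minimal PyTorch sketch, not the Att-U-Node code; the fixed-step Euler integration, shapes, and layer choices are all assumptions. It shows an ODE block that models features as the solution of dh/dt = f(h), and an additive attention gate that computes a coefficient map to refine the skip-connection feature.

```python
# Hedged sketch of a neural-ODE feature block and an attention-gated skip.
import torch
import torch.nn as nn

class ODEBlock(nn.Module):
    """dh/dt = f(h), integrated with a few explicit Euler steps."""
    def __init__(self, ch, steps=4):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.GroupNorm(8, ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1))
        self.steps = steps

    def forward(self, h):
        dt = 1.0 / self.steps
        for _ in range(self.steps):
            h = h + dt * self.f(h)          # Euler update of the feature ODE
        return h

class AttentionGate(nn.Module):
    """Additive attention producing a coefficient map for the skip feature."""
    def __init__(self, ch):
        super().__init__()
        self.score = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 1), nn.ReLU(),
            nn.Conv2d(ch, 1, 1), nn.Sigmoid())

    def forward(self, skip, gating):
        alpha = self.score(torch.cat([skip, gating], dim=1))
        return skip * alpha                 # refined attention feature

x = torch.randn(1, 32, 64, 64)              # toy encoder feature map
feat = ODEBlock(32)(x)
refined = AttentionGate(32)(x, feat)         # feature passed over the skip connection
```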
4.
Neuroimage ; 244: 118568, 2021 12 01.
Article in English | MEDLINE | ID: mdl-34508895

ABSTRACT

The annotation of brain lesion images is a key step in the clinical diagnosis and treatment of a wide spectrum of brain diseases. In recent years, segmentation methods based on deep learning have gained unprecedented popularity, leveraging large amounts of data with high-quality voxel-level annotations. However, because clinicians have limited time for the cumbersome task of manual image segmentation, semi-supervised medical image segmentation methods present an alternative solution, as they require only a few labeled samples for training. In this paper, we propose a novel semi-supervised segmentation framework that combines an improved mean teacher with an adversarial network. Specifically, our framework consists of (i) a student model and a teacher model for segmenting the target and generating the signed distance maps of object surfaces, and (ii) a discriminator network for extracting hierarchical features and distinguishing the signed distance maps of labeled and unlabeled data. In addition, based on two different adversarial learning processes, we propose a multi-scale feature consistency loss derived from the student and teacher models and integrate a shape-aware embedding scheme into the framework. We evaluated the proposed method on the public brain lesion datasets from ISBI 2015, ISLES 2015, and BraTS 2018 for multiple sclerosis lesion, ischemic stroke lesion, and brain tumor segmentation, respectively. Experiments demonstrate that our method can effectively leverage unlabeled data, outperforming the supervised baseline and other state-of-the-art semi-supervised methods trained with the same labeled data. The proposed framework is suitable for joint training on limited labeled data and additional unlabeled data, which is expected to reduce the effort of obtaining annotated images.


Subject(s)
Brain Neoplasms/diagnostic imaging , Brain/diagnostic imaging , Deep Learning , Multiple Sclerosis/diagnostic imaging , Stroke/diagnostic imaging , Datasets as Topic , Humans , Magnetic Resonance Imaging , Research Design , Students
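
A minimal sketch of the mean-teacher component described above, written from standard practice rather than the paper's code: the teacher's weights are an exponential moving average (EMA) of the student's, and unlabeled images contribute a consistency loss between the two models' predictions. The toy single-layer models and the noise level are assumptions.

```python
# Hedged sketch of mean-teacher training: EMA weight update + consistency loss.
import torch

def update_teacher(student, teacher, alpha=0.99):
    """EMA update: teacher <- alpha * teacher + (1 - alpha) * student."""
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(alpha).add_(s, alpha=1 - alpha)

def consistency_loss(student, teacher, unlabeled, noise_std=0.05):
    """MSE between student and (perturbed-input) teacher predictions."""
    noisy = unlabeled + noise_std * torch.randn_like(unlabeled)
    with torch.no_grad():
        target = teacher(noisy)             # no gradient through the teacher
    return torch.nn.functional.mse_loss(student(unlabeled), target)

# Toy usage with stand-in one-layer "segmentation" models.
student = torch.nn.Conv2d(1, 1, 3, padding=1)
teacher = torch.nn.Conv2d(1, 1, 3, padding=1)
loss = consistency_loss(student, teacher, torch.randn(4, 1, 64, 64))
update_teacher(student, teacher)
```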
5.
Eur Radiol ; 31(4): 2559-2567, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33001309

ABSTRACT

OBJECTIVES: To apply deep learning algorithms, using a conventional convolutional neural network (CNN) and a recurrent CNN, to differentiate three breast cancer molecular subtypes on MRI. METHODS: A total of 244 patients were analyzed: 99 in the training dataset scanned at 1.5 T, and 83 in testing-1 and 62 in testing-2 scanned at 3 T. Patients were classified into three subtypes based on hormone receptor (HR) and HER2 status: (HR+/HER2-), HER2+, and triple negative (TN). Only images acquired in the DCE sequence were used in the analysis. The smallest bounding box covering the tumor ROI was used as the input for deep learning to develop the model in the training dataset, using a conventional CNN and the convolutional long short-term memory (CLSTM) network. Transfer learning was then applied to re-tune the model using testing-1 (or testing-2) and evaluate it in testing-2 (or testing-1). RESULTS: In the training dataset, the mean accuracy evaluated using tenfold cross-validation was higher with CLSTM (0.91) than with CNN (0.79). When the developed model was applied to the independent testing datasets, the accuracy was 0.4-0.5. With transfer learning by re-tuning parameters in testing-1, the mean accuracy reached 0.91 by CNN and 0.83 by CLSTM, and accuracy in testing-2 improved from 0.47 to 0.78 by CNN and from 0.39 to 0.74 by CLSTM. Overall, transfer learning improved classification accuracy by more than 30%. CONCLUSIONS: The recurrent network using CLSTM could track changes in signal intensity during the DCE acquisition and achieved higher accuracy than the conventional CNN during training. For datasets acquired using different settings, transfer learning can be applied to re-tune the model and improve accuracy. KEY POINTS: • Deep learning can be applied to differentiate breast cancer molecular subtypes. • The recurrent neural network using CLSTM could track the change of signal intensity in DCE images and achieved higher accuracy than the conventional CNN during training. • For datasets acquired using different scanners with different imaging protocols, transfer learning provides an efficient method to re-tune the classification model and improve accuracy.


Subject(s)
Breast Neoplasms , Algorithms , Breast Neoplasms/diagnostic imaging , Humans , Machine Learning , Magnetic Resonance Imaging , Neural Networks, Computer
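
To illustrate the recurrent idea, the sketch below approximates the CLSTM with a per-frame CNN encoder followed by an ordinary LSTM over the DCE time points. The study used a true convolutional LSTM, so this stand-in and all of its settings (layer sizes, six time points, the fine-tuning learning rate) are assumptions chosen to keep the example short.

```python
# Hedged sketch: recurrent classification over DCE time points (CNN + LSTM
# stand-in for a convolutional LSTM), plus a transfer-learning re-tune step.
import torch
import torch.nn as nn

class RecurrentDCEClassifier(nn.Module):
    def __init__(self, n_classes=3, feat=64):
        super().__init__()
        self.encoder = nn.Sequential(           # shared across time points
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.lstm = nn.LSTM(32, feat, batch_first=True)
        self.head = nn.Linear(feat, n_classes)  # HR+/HER2-, HER2+, TN

    def forward(self, x):                       # x: (batch, time, 1, H, W)
        b, t = x.shape[:2]
        f = self.encoder(x.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(f)                   # track intensity over time
        return self.head(out[:, -1])            # classify from the last state

model = RecurrentDCEClassifier()
logits = model(torch.randn(2, 6, 1, 64, 64))    # batch of 2, six DCE frames

# Transfer learning as described: re-tune the trained model on a dataset
# from a different scanner with a small learning rate (value assumed).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
```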
6.
J Magn Reson Imaging ; 51(3): 798-809, 2020 03.
Article in English | MEDLINE | ID: mdl-31675151

ABSTRACT

BACKGROUND: Computer-aided methods have been widely applied to diagnose lesions detected on breast MRI, but fully automatic diagnosis using deep learning is rarely reported. PURPOSE: To evaluate the diagnostic accuracy of mass lesions using region of interest (ROI)-based, radiomics, and deep-learning methods, taking peritumoral tissue into consideration. STUDY TYPE: Retrospective. POPULATION: In all, 133 patients with histologically confirmed mass lesions (91 malignant, 62 benign) for training, and 74 patients (48 malignant, 26 benign lesions) for testing. FIELD STRENGTH/SEQUENCE: 3 T, using the volume imaging for breast assessment (VIBRANT) dynamic contrast-enhanced (DCE) sequence. ASSESSMENT: 3D tumor segmentation was performed automatically using the fuzzy C-means algorithm with connected-component labeling. A total of 99 texture and histogram parameters were calculated for each case, and 15 were selected using random forest to build a radiomics model. Deep learning was implemented using ResNet50, evaluated with 10-fold cross-validation. The tumor alone, the smallest bounding box, and boxes enlarged 1.2, 1.5, and 2.0 times were used as inputs. STATISTICAL TESTS: The malignancy probability was calculated using each model, and a threshold of 0.5 was used to make the diagnosis. RESULTS: In the training dataset, the diagnostic accuracy was 76% using three ROI-based parameters, 84% using the radiomics model, and 86% using the ROI + radiomics model. In per-slice deep learning, the area under the receiver operating characteristic (ROC) curve was comparable for the tumor alone, the smallest box, and the 1.2-times box (AUC = 0.97-0.99), all significantly higher than the 1.5- and 2.0-times boxes (AUC = 0.86 and 0.71, respectively). For per-lesion diagnosis, the highest accuracy of 91% was achieved with the smallest bounding box; accuracy decreased to 84% for the tumor alone and the 1.2-times box, and further to 73% for the 1.5-times box and 69% for the 2.0-times box. In the independent testing dataset, the per-lesion diagnostic accuracy was also highest with the smallest bounding box, at 89%. DATA CONCLUSION: Deep learning using ResNet50 achieved high diagnostic accuracy. Using the smallest bounding box, which contains proximal peritumoral tissue, as input yielded higher accuracy than using the tumor alone or larger boxes. LEVEL OF EVIDENCE: 3. TECHNICAL EFFICACY: Stage 2.


Subject(s)
Breast Neoplasms , Deep Learning , Breast/diagnostic imaging , Breast Neoplasms/diagnostic imaging , Contrast Media , Humans , Magnetic Resonance Imaging , Retrospective Studies
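
As an illustration of the smallest-bounding-box input described above, the following hedged sketch (not the study's pipeline) crops the tightest box around a segmented tumor and feeds it to a ResNet50 whose final layer is replaced for the benign/malignant task. The mask and image are random placeholders, and the `weights=None` argument assumes a recent torchvision.

```python
# Hedged sketch: smallest-bounding-box crop -> ResNet50 malignancy classifier.
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import resnet50

def smallest_bbox(mask):
    """Return (r0, r1, c0, c1) of the tightest box containing the mask."""
    rows, cols = np.any(mask, axis=1), np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return r0, r1 + 1, c0, c1 + 1

# Placeholder segmentation mask and image slice.
mask = np.zeros((256, 256), dtype=bool); mask[100:140, 90:150] = True
img = np.random.rand(256, 256).astype(np.float32)

r0, r1, c0, c1 = smallest_bbox(mask)
crop = torch.from_numpy(img[r0:r1, c0:c1]).expand(3, -1, -1).contiguous()
crop = nn.functional.interpolate(crop.unsqueeze(0), size=224)  # ResNet input size

net = resnet50(weights=None)
net.fc = nn.Linear(net.fc.in_features, 2)      # benign vs. malignant head
prob_malignant = torch.softmax(net(crop), dim=1)[0, 1]  # threshold at 0.5
```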