Results 1 - 9 of 9
1.
Diagnostics (Basel) ; 14(10)2024 May 15.
Article in English | MEDLINE | ID: mdl-38786313

ABSTRACT

Breast cancer is a major health concern worldwide. Mammography, a cost-effective and accurate tool, is crucial in combating this issue. However, low contrast, noise, and artifacts can limit the diagnostic capabilities of radiologists. Computer-Aided Diagnosis (CAD) systems have been developed to overcome these challenges, with accurate outlining of the breast being a critical step for further analysis. This study introduces the SAM-breast model, an adaptation of the Segment Anything Model (SAM) for segmenting the breast region in mammograms. The method enhances the delineation of the breast and the exclusion of the pectoral muscle in both mediolateral oblique (MLO) and craniocaudal (CC) views. We trained the models using a large, multi-center proprietary dataset of 2492 mammograms. The proposed SAM-breast model achieved the highest overall Dice Similarity Coefficient (DSC) of 99.22% ± 1.13 and Intersection over Union (IoU) of 98.48% ± 2.10 on independent test images from five different datasets (two proprietary and three publicly available). The results are consistent across the different datasets, regardless of vendor or image resolution. Compared with other baseline and deep learning-based methods, the proposed method exhibits enhanced performance. The SAM-breast model demonstrates the ability of SAM to adapt when tailored to a specific task, in this case the delineation of the breast in mammograms. Comprehensive evaluations across diverse datasets, both private and public, attest to the method's robustness, flexibility, and generalization capabilities.
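
The reported DSC and IoU are standard overlap metrics between a predicted breast mask and a reference annotation. As a minimal illustration only (not the authors' implementation, and with hypothetical toy masks), both can be computed on binary NumPy arrays as follows:

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8):
    """Dice Similarity Coefficient and Intersection over Union for two
    binary segmentation masks of the same shape."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    dice = 2.0 * intersection / (pred.sum() + ref.sum() + eps)
    iou = intersection / (union + eps)
    return dice, iou

# Toy 4x4 masks standing in for a predicted and a reference breast outline
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
ref = np.array([[0, 1, 1, 0],
                [0, 1, 1, 1],
                [0, 1, 1, 0],
                [0, 0, 0, 0]])
print(dice_and_iou(pred, ref))  # Dice ~0.923, IoU ~0.857
```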

2.
Diagnostics (Basel) ; 12(8)2022 Jul 28.
Article in English | MEDLINE | ID: mdl-36010173

ABSTRACT

Breast density assessed from digital mammograms is a known biomarker related to a higher risk of developing breast cancer. Supervised learning algorithms have been implemented to determine it. However, the performance of these algorithms depends on the quality of the ground-truth information, which is usually provided by expert readers. These expert labels are noisy approximations of the ground truth, as there is both intra- and inter-observer variability among them. Thus, it is crucial to provide a reliable method to measure breast density from mammograms. This paper presents a fully automated method based on deep learning to estimate breast density, including breast detection, pectoral muscle exclusion, and dense tissue segmentation. We propose a novel confusion matrix (CM)-YNet model for the segmentation step. This architecture includes networks that model each radiologist's noisy labels and provides the estimated ground-truth segmentation as well as two parameters that allow interaction with a threshold-based labeling tool. A multi-center study was performed involving 1785 women whose "for presentation" mammograms were obtained from 11 different medical facilities. A total of 2496 mammograms were used as the training corpus, and 844 formed the testing corpus. Additionally, we included a totally independent dataset from a different center, composed of 381 women with one image per patient. Each mammogram was labeled independently by two expert radiologists using a threshold-based tool. The implemented CM-YNet model achieved the highest Dice score averaged over both test datasets (0.82 ± 0.14) when compared to the closest dense-tissue segmentation assessment from the two radiologists. The level of concordance between the two radiologists showed a Dice score of 0.76 ± 0.17. An automatic breast density estimator based on deep learning thus exhibited higher performance than two experienced radiologists. This suggests that modeling each radiologist's labels allows for a better estimation of the unknown ground-truth segmentation. An additional advantage of the proposed model is that it also provides the threshold parameters that enable user interaction with a threshold-based tool.
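
The key idea above is that each radiologist's annotations are treated as noisy observations of an unknown ground truth. The sketch below (plain NumPy with hypothetical values; it is not the CM-YNet architecture itself) illustrates the basic mechanism: a per-reader confusion matrix maps an estimated ground-truth dense-tissue probability map to the labels that reader would be expected to produce, which is what such a model compares against the observed annotations during training:

```python
import numpy as np

# Hypothetical per-reader confusion matrices: rows = true class (0 = fatty,
# 1 = dense), columns = label the reader assigns. In a CM-style model these
# would be learned jointly with the segmentation network; here they are fixed.
cm_reader_a = np.array([[0.95, 0.05],
                        [0.10, 0.90]])
cm_reader_b = np.array([[0.90, 0.10],
                        [0.20, 0.80]])

def expected_reader_probs(gt_prob_dense: np.ndarray, cm: np.ndarray) -> np.ndarray:
    """Given per-pixel probabilities of dense tissue under the estimated ground
    truth, return the probability that this reader labels each pixel as dense."""
    p_true = np.stack([1.0 - gt_prob_dense, gt_prob_dense], axis=-1)  # (..., 2)
    return p_true @ cm[:, 1]  # probability the reader outputs class 1 (dense)

gt = np.array([[0.1, 0.8],
               [0.6, 0.3]])  # toy estimated ground-truth probability map
print(expected_reader_probs(gt, cm_reader_a))
print(expected_reader_probs(gt, cm_reader_b))
```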

3.
Comput Methods Programs Biomed ; 221: 106885, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35594581

ABSTRACT

BACKGROUND AND OBJECTIVE: Breast density assessed from digital mammograms is a biomarker for a higher risk of developing breast cancer. Experienced radiologists assess breast density using the Breast Imaging Reporting and Data System (BI-RADS) categories. Supervised learning algorithms have been developed with this objective in mind; however, the performance of these algorithms depends on the quality of the ground-truth information, which is usually labeled by expert readers. These labels are noisy approximations of the ground truth, as there is often intra- and inter-reader variability among labels. Thus, it is crucial to provide a reliable method for assigning BI-RADS categories to digital mammograms. This paper presents RegL (Labels Regularizer), a methodology that includes different image pre-processing steps to allow both a correct breast segmentation and the enhancement of image quality through an intensity adjustment, thus allowing the use of deep learning to classify the mammograms into BI-RADS categories. The Confusion Matrix CNN (CM-CNN) network used implements an architecture that models each radiologist's noisy labels. The final methodology pipeline was determined after comparing the performance of the image pre-processing steps combined with different DL architectures. METHODS: A multi-center study composed of 1395 women whose mammograms were classified into the four BI-RADS categories by three experienced radiologists is presented. A total of 892 mammograms were used as the training corpus, 224 formed the validation corpus, and 279 the test corpus. RESULTS: The combination of five networks implementing the RegL methodology achieved the best results among all the models on the test set. The ensemble model obtained an accuracy of 0.85 and a kappa index of 0.71. CONCLUSIONS: The proposed methodology performs similarly to experienced radiologists in the classification of digital mammograms into BI-RADS categories. This suggests that the pre-processing steps and the modeling of each radiologist's labels allow for a better estimation of the unknown ground-truth labels.
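
The reported accuracy and kappa index measure agreement between predicted and reference BI-RADS categories. Purely as an illustration (assuming scikit-learn and hypothetical labels, not the authors' data or pipeline), they can be computed as follows:

```python
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Hypothetical BI-RADS density categories (a-d encoded as 0-3) for a handful of
# mammograms: reference labels from a radiologist vs. model predictions.
y_ref  = np.array([0, 1, 1, 2, 2, 3, 3, 1, 2, 0])
y_pred = np.array([0, 1, 2, 2, 2, 3, 2, 1, 2, 0])

print("accuracy:", accuracy_score(y_ref, y_pred))   # fraction of exact matches
print("kappa:", cohen_kappa_score(y_ref, y_pred))   # chance-corrected agreement
# A quadratically weighted kappa penalizes distant BI-RADS disagreements more:
print("weighted kappa:", cohen_kappa_score(y_ref, y_pred, weights="quadratic"))
```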


Subject(s)
Breast Neoplasms , Deep Learning , Breast/diagnostic imaging , Breast Density , Breast Neoplasms/diagnostic imaging , Female , Humans , Mammography/methods
4.
Eur Radiol ; 28(11): 4514-4523, 2018 Nov.
Article in English | MEDLINE | ID: mdl-29761357

ABSTRACT

OBJECTIVE: To examine the capability of MRI texture analysis to differentiate the primary site of origin of brain metastases following a radiomics approach. METHODS: Sixty-seven untreated brain metastases (BM) were found on 3D T1-weighted MRI of 38 patients with cancer: 27 from lung cancer, 23 from melanoma, and 17 from breast cancer. These lesions were segmented in 2D and 3D to compare the discriminative power of 2D and 3D texture features. The images were quantized using different numbers of gray levels to test the influence of quantization. Forty-three rotation-invariant texture features were examined. Feature selection and random forest classification were implemented within a nested cross-validation structure. Classification was evaluated with the area under the receiver operating characteristic curve (AUC) considering two strategies: multiclass and one-versus-one. RESULTS: In the multiclass approach, 3D texture features were more discriminative than 2D features. The best results were achieved for images quantized with 32 gray levels (AUC = 0.873 ± 0.064) using the top four features provided by the feature selection method based on the p-value. In the one-versus-one approach, high accuracy was obtained when differentiating lung cancer BM from breast cancer BM (four features, AUC = 0.963 ± 0.054) and from melanoma BM (eight features, AUC = 0.936 ± 0.070) using the optimal dataset (3D features, 32 gray levels). Classification of breast cancer and melanoma BM was unsatisfactory (AUC = 0.607 ± 0.180). CONCLUSION: Volumetric MRI texture features can be useful to differentiate brain metastases from different primary cancers after quantizing the images with the proper number of gray levels. KEY POINTS: • Texture analysis is a promising source of biomarkers for classifying brain neoplasms. • MRI texture features of brain metastases could help identify the primary cancer. • Volumetric texture features are more discriminative than traditional 2D texture features.
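
As a rough sketch of the evaluation strategy described above, nested cross-validation wrapping p-value-based feature selection and a random forest, scored with a multiclass AUC, the following uses scikit-learn on synthetic data; the feature count and sample size merely echo the abstract and are otherwise hypothetical:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline

# Synthetic stand-in for 43 rotation-invariant texture features of 67 lesions
# with 3 primary sites (lung, melanoma, breast).
X, y = make_classification(n_samples=67, n_features=43, n_informative=8,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

pipe = Pipeline([
    ("select", SelectKBest(score_func=f_classif)),   # p-value based feature ranking
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
])

# Inner loop tunes the number of selected features; outer loop estimates the AUC.
inner = GridSearchCV(pipe, {"select__k": [2, 4, 8, 16]},
                     scoring="roc_auc_ovo", cv=StratifiedKFold(3))
outer_auc = cross_val_score(inner, X, y, scoring="roc_auc_ovo",
                            cv=StratifiedKFold(5))
print("nested-CV multiclass AUC: %.3f ± %.3f" % (outer_auc.mean(), outer_auc.std()))
```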


Subject(s)
Brain Neoplasms/classification , Brain Neoplasms/secondary , Breast Neoplasms/diagnostic imaging , Lung Neoplasms/diagnostic imaging , Magnetic Resonance Imaging/methods , Melanoma/diagnostic imaging , Adult , Aged , Analysis of Variance , Diagnosis, Differential , Feasibility Studies , Female , Humans , Male , Middle Aged , ROC Curve , Retrospective Studies , Young Adult
5.
Med Phys ; 45(4): 1471-1480, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29389013

ABSTRACT

PURPOSE: To investigate the ability of texture analysis to differentiate between infarcted nonviable, viable, and remote segments on cardiac cine magnetic resonance imaging (MRI). METHODS: This retrospective study included 50 patients with chronic myocardial infarction. The data were randomly split into training (30 patients) and testing (20 patients) sets. The left ventricular myocardium was segmented according to the 17-segment model in both cine and late gadolinium enhancement (LGE) MRI. Infarcted myocardium regions were identified on LGE in short-axis views. Nonviable segments were identified as those showing LGE ≥ 50%, and viable segments as those showing 0 < LGE < 50% transmural extension. Features derived from five texture analysis methods were extracted from the segments on cine images. A support vector machine (SVM) classifier was trained with different combinations of texture features to obtain a model providing optimal classification performance. RESULTS: The best classification on the testing set was achieved with local binary pattern features using a 2D + t approach, in which the features are computed by including the time dimension available in cine sequences. The best model achieved an overall area under the receiver operating characteristic curve (AUC) of 0.849, with a sensitivity of 92% for detecting nonviable segments, 72% for viable segments, and 85% for remote segments. CONCLUSION: Nonviable segments can be detected on cine MRI using texture analysis, and this may serve as a hypothesis for future research aiming to detect the infarcted myocardium by means of a gadolinium-free approach.
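
A minimal sketch of a 2D + t local binary pattern descriptor, assuming scikit-image and scikit-learn and random toy data rather than the authors' cine sequences: an LBP histogram is computed on each frame of a segment and the per-frame histograms are concatenated before feeding an SVM.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_2d_plus_t(frames: np.ndarray, p: int = 8, r: float = 1.0) -> np.ndarray:
    """Concatenate uniform-LBP histograms computed on each cine frame (time axis
    first), giving a simple 2D + t texture descriptor for one myocardial segment."""
    n_bins = p + 2  # number of codes produced by the 'uniform' LBP variant
    hists = []
    for frame in frames:
        codes = local_binary_pattern(frame, P=p, R=r, method="uniform")
        hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
        hists.append(hist)
    return np.concatenate(hists)

# Toy data: 40 segments, each a short cine sequence of 5 frames of 32x32 pixels
rng = np.random.default_rng(0)
segments = rng.integers(0, 256, size=(40, 5, 32, 32), dtype=np.uint8)
labels = rng.integers(0, 2, size=40)          # 0 = remote, 1 = nonviable (toy)
X = np.stack([lbp_2d_plus_t(s) for s in segments])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.score(X, labels))
```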


Subject(s)
Heart/diagnostic imaging , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging, Cine , Myocardial Infarction/diagnostic imaging , Myocardial Infarction/pathology , Tissue Survival , Chronic Disease , Female , Humans , Male , Middle Aged , Retrospective Studies
6.
MAGMA ; 31(2): 285-294, 2018 Apr.
Article in English | MEDLINE | ID: mdl-28939952

ABSTRACT

OBJECTIVE: To find structural differences between brain metastases of lung and breast cancer by computing heterogeneity parameters by means of both 2D and 3D texture analysis (TA). MATERIALS AND METHODS: Patients with 58 brain metastases, from breast cancer (26) and lung cancer (32), were examined by MR imaging. Brain lesions were manually delineated with 2D ROIs on the slices of contrast-enhanced T1-weighted (CET1) images, and local binary pattern (LBP) maps were created from each region. Histogram-based features (minimum, maximum, mean, standard deviation, and variance) and co-occurrence matrix-based features (contrast, correlation, energy, entropy, and homogeneity) were obtained from the CET1 images and LBP maps using 2D TA, a weighted average of the 2D slices, and true 3D TA. RESULTS: For the LBP maps, 2D TA identified contrast, correlation, energy, and homogeneity as statistically different heterogeneity parameters (SDHPs) between lung and breast metastases. The weighted 3D TA identified entropy as an additional SDHP. Only two texture indices (TIs) were significantly different with true 3D TA: entropy and energy. All these TIs discriminated significantly between the two tumor types by ROC analysis. For the CET1 images, there were no SDHPs at all by 3D TA. CONCLUSION: Our results indicate that the texture analysis methods used may help discriminate between brain metastases of different primary tumors.
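
The co-occurrence matrix-based parameters listed above (contrast, correlation, energy, homogeneity, entropy) are standard GLCM statistics. As an illustration only, assuming scikit-image ≥ 0.19 and a random toy slice rather than the study data, they can be computed like this:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # skimage >= 0.19

def glcm_features(image_u8: np.ndarray, levels: int = 32) -> dict:
    """Co-occurrence matrix features on one 2D slice (or LBP map) quantized to
    `levels` gray levels, averaged over four in-plane directions."""
    quant = (image_u8.astype(float) / 256.0 * levels).astype(np.uint8)
    glcm = graycomatrix(quant, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    feats = {name: float(graycoprops(glcm, name).mean())
             for name in ("contrast", "correlation", "energy", "homogeneity")}
    p = glcm.mean(axis=(2, 3))            # average matrix over distance/angle
    p = p / p.sum()
    feats["entropy"] = float(-(p[p > 0] * np.log2(p[p > 0])).sum())
    return feats

rng = np.random.default_rng(1)
slice_2d = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # toy 2D ROI
print(glcm_features(slice_2d))
```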


Subject(s)
Brain Neoplasms/diagnostic imaging , Brain Neoplasms/secondary , Imaging, Three-Dimensional , Magnetic Resonance Imaging , Neoplasm Metastasis , Brain/diagnostic imaging , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Contrast Media/chemistry , Female , Humans , Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Models, Statistical , ROC Curve , Retrospective Studies , Sensitivity and Specificity
7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2017: 493-496, 2017 Jul.
Article in English | MEDLINE | ID: mdl-29059917

ABSTRACT

Brain metastases are occasionally detected before their primary site of origin is diagnosed. In these cases, simple visual examination of medical images of the metastases is not enough to identify the primary cancer, so an extensive evaluation is needed. To avoid this procedure, a radiomics approach on magnetic resonance (MR) images of the metastatic lesions is proposed to discriminate between two of the most frequent origins (lung cancer and melanoma). In this study, 50 T1-weighted MR images of brain metastases from 30 patients were analyzed: 27 of lung cancer and 23 of melanoma origin. A total of 43 statistical texture features were extracted from the segmented lesions in 2D and 3D. Five predictive models were evaluated using a nested cross-validation scheme. The best classification results were achieved using 3D texture features for all the models, obtaining an average AUC > 0.9 in all cases and an AUC = 0.947 ± 0.067 with the best model (naïve Bayes).
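
For the best-performing model reported here, a cross-validated AUC for a Gaussian naïve Bayes classifier can be estimated along the lines below. This is a generic scikit-learn sketch on synthetic features that merely mimic the abstract's sample size; it is not the authors' pipeline:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for 43 3D texture features of 50 lesions
# (lung cancer vs. melanoma origin, roughly 27/23 split).
X, y = make_classification(n_samples=50, n_features=43, n_informative=6,
                           weights=[0.54, 0.46], random_state=0)

model = make_pipeline(StandardScaler(), GaussianNB())
auc = cross_val_score(model, X, y, scoring="roc_auc", cv=StratifiedKFold(5))
print("cross-validated AUC: %.3f ± %.3f" % (auc.mean(), auc.std()))
```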


Subject(s)
Brain Neoplasms/diagnostic imaging , Bayes Theorem , Brain Neoplasms/secondary , Humans , Lung Neoplasms , Magnetic Resonance Imaging , Melanoma
8.
Eur J Radiol ; 92: 78-83, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28624024

ABSTRACT

The purpose of this study was to differentiate acute from chronic myocardial infarction using machine learning techniques and texture features extracted from cardiac magnetic resonance imaging (MRI). The study group comprised 22 cases with acute myocardial infarction (AMI) and 22 cases with chronic myocardial infarction (CMI). Cine and late gadolinium enhancement (LGE) MRI were analyzed independently to differentiate AMI from CMI. A total of 279 texture features were extracted from predefined regions of interest (ROIs): the infarcted area on LGE MRI, and the entire myocardium on cine MRI. Classification performance was evaluated with a nested cross-validation approach combining a feature selection technique with three predictive models: random forest, support vector machine (SVM) with Gaussian kernel, and SVM with polynomial kernel. The polynomial SVM yielded the best classification performance. Receiver operating characteristic curves provided an area under the curve (AUC) (mean ± standard deviation) of 0.86 ± 0.06 on LGE MRI using 72 features, with AMI sensitivity = 0.81 ± 0.08 and specificity = 0.84 ± 0.09. On cine MRI, the AUC was 0.82 ± 0.06 using 75 features, with AMI sensitivity = 0.79 ± 0.10 and specificity = 0.80 ± 0.10. We concluded that texture analysis can be used to differentiate AMI from CMI on cardiac LGE MRI, and also on standard cine sequences in which the infarction is visually imperceptible in most cases.
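
A minimal end-to-end sketch of the kind of classifier reported here, a polynomial-kernel SVM evaluated with AUC, sensitivity, and specificity, assuming scikit-learn and synthetic features sized to match the abstract (44 cases, 279 features); it is not the authors' nested cross-validation pipeline:

```python
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 44 cases (22 AMI = 1, 22 CMI = 0) with 279 texture features
X, y = make_classification(n_samples=44, n_features=279, n_informative=10,
                           weights=[0.5, 0.5], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

clf = make_pipeline(StandardScaler(),
                    SVC(kernel="poly", degree=3, probability=True))
clf.fit(X_tr, y_tr)

y_hat = clf.predict(X_te)
tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
print("AMI sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```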


Subject(s)
Contrast Media , Gadolinium , Myocardial Infarction/pathology , Acute Disease , Algorithms , Area Under Curve , Chronic Disease , Diagnosis, Differential , Female , Humans , Magnetic Resonance Angiography/methods , Magnetic Resonance Imaging, Cine/methods , Male , Middle Aged , Myocardium/pathology , ROC Curve , Reproducibility of Results , Sensitivity and Specificity , Support Vector Machine
9.
J Magn Reson Imaging ; 42(5): 1362-8, 2015 Nov.
Article in English | MEDLINE | ID: mdl-25865833

ABSTRACT

PURPOSE: To develop a classification model using texture features and a support vector machine in contrast-enhanced T1-weighted images to differentiate between brain metastasis and radiation necrosis. METHODS: Texture features were extracted from 115 lesions: 32 previously diagnosed as radiation necrosis, 23 as radiation-treated metastases, and 60 as untreated metastases. A total of 179 features derived from six texture analysis methods were considered. A feature selection technique based on support vector machines was used to obtain a subset of features providing optimal performance. RESULTS: The highest classification accuracy evaluated over the test sets was achieved with a subset of ten features when the untreated metastases were not considered, and with a subset of seven features when the classifier was trained with untreated metastases and tested on treated ones. Receiver operating characteristic curves provided an area under the curve (mean ± standard deviation) of 0.94 ± 0.07 in the first case and 0.93 ± 0.02 in the second. CONCLUSION: High classification accuracy (AUC > 0.9) was obtained using texture features and a support vector machine classifier in an approach based on conventional MRI to differentiate between brain metastasis and radiation necrosis.
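
SVM-based feature selection of the kind described here is commonly realized with recursive feature elimination (RFE) driven by a linear SVM. The following is a generic scikit-learn sketch on synthetic data (the sample and feature counts merely echo the abstract); it is not the authors' exact selection procedure:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for 179 texture features of 55 lesions
# (radiation necrosis vs. radiation-treated metastasis).
X, y = make_classification(n_samples=55, n_features=179, n_informative=12,
                           random_state=0)

# Recursive feature elimination driven by a linear SVM keeps the 10 features
# whose weights contribute most to the decision function.
selector = RFE(SVC(kernel="linear"), n_features_to_select=10, step=10)
model = make_pipeline(StandardScaler(), selector, SVC(kernel="linear"))

auc = cross_val_score(model, X, y, scoring="roc_auc", cv=5)
print("cross-validated AUC: %.3f ± %.3f" % (auc.mean(), auc.std()))
```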


Subject(s)
Brain Neoplasms/diagnosis , Brain Neoplasms/secondary , Brain/pathology , Magnetic Resonance Imaging , Radiation Injuries/pathology , Support Vector Machine , Area Under Curve , Contrast Media , Diagnosis, Differential , Female , Humans , Image Enhancement , Male , Middle Aged , Necrosis , Reproducibility of Results , Retrospective Studies