Results 1 - 20 of 24
1.
J Imaging Inform Med ; 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39227537

ABSTRACT

Thermography is a non-invasive, non-contact method for detecting cancer in its initial stages by examining the temperature variation between the two breasts. Preprocessing methods such as resizing, region-of-interest (ROI) segmentation, and augmentation are frequently used to enhance the accuracy of breast thermogram analysis. In this study, a modified U-Net architecture (DTCWAU-Net) that uses the dual-tree complex wavelet transform (DTCWT) and attention gates was proposed for segmenting frontal- and lateral-view breast thermograms, aiming to outline the ROI for potential tumor detection. The proposed approach achieved an average Dice coefficient of 93.03% and a sensitivity of 94.82%, showcasing its potential for accurate breast thermogram segmentation. Segmented thermograms were then classified as healthy or cancerous by extracting texture-based, histogram-based, and deep features. Feature selection was performed using neighborhood component analysis (NCA), followed by the application of machine learning classifiers. Compared with other state-of-the-art approaches for detecting breast cancer from thermograms, the proposed methodology showed a higher accuracy of 99.90% using VGG16 deep features with NCA and a random forest classifier. The results demonstrate that the proposed method can be used in breast cancer screening, facilitating early detection and enhancing treatment outcomes.
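The segmentation metrics reported above can be made concrete with a small sketch: the Dice coefficient and sensitivity between a predicted ROI mask and a ground-truth mask, here represented as flat lists of 0/1 pixels (a real pipeline would operate on image arrays; the example masks are invented).

```python
def dice_coefficient(pred, truth):
    """Dice = 2*|P intersect T| / (|P| + |T|) for binary masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

def sensitivity(pred, truth):
    """Sensitivity (recall) = TP / (TP + FN)."""
    tp = sum(p & t for p, t in zip(pred, truth))
    fn = sum((1 - p) & t for p, t in zip(pred, truth))
    return tp / (tp + fn) if (tp + fn) else 1.0

# Toy 8-pixel masks: 3 overlapping positives, 1 false positive, 1 false negative.
pred  = [1, 1, 1, 0, 0, 1, 0, 0]
truth = [1, 1, 0, 0, 1, 1, 0, 0]
print(dice_coefficient(pred, truth))  # 0.75
print(sensitivity(pred, truth))       # 0.75
```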

2.
Radiol Med ; 129(6): 864-878, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38755477

ABSTRACT

OBJECTIVE: To evaluate the performance of radiomic analysis of contrast-enhanced mammography images in identifying breast cancer histotypes, specifically to predict grading, identify hormone receptor (HR) status, discriminate human epidermal growth factor receptor 2 (HER2) status, and identify the luminal histotype. METHODS: A total of 180 malignant lesions and 68 benign lesions were recruited from four Italian centers; only the malignant lesions were considered for the analysis. All patients underwent contrast-enhanced mammography in craniocaudal (CC) and mediolateral oblique (MLO) views. With histological findings as the ground truth, four outcomes were considered: (1) G1 + G2 vs. G3; (2) HER2+ vs. HER2-; (3) HR+ vs. HR-; and (4) non-luminal vs. luminal A (HR+/HER2-) or luminal B (HR+/HER2+). For the multivariate analysis, feature selection, balancing techniques, and pattern recognition approaches were considered. RESULTS: The univariate analysis showed low diagnostic performance for each outcome, while the multivariate analysis achieved better performance. For HER2+ detection, the best performance (73% accuracy, AUC = 0.77) was obtained using a linear regression model (LRM) with 12 features extracted from the MLO view. For HR+ detection, the best performance (77% accuracy, AUC = 0.80) was obtained using an LRM with 14 features extracted from the MLO view. In grading classification, the best performance was obtained by a decision tree trained with three predictors extracted from the MLO view, reaching 82% accuracy on the validation set. In luminal versus non-luminal histotype classification, the best performance was obtained by a bagged tree trained with 15 predictors extracted from the CC view, reaching 94% accuracy on the validation set.
CONCLUSIONS: The results suggest that radiomic analysis can be effectively applied to design a tool that supports physician decision making in breast cancer classification. In particular, luminal versus non-luminal histotype classification can be performed with high accuracy.


Subject(s)
Artificial Intelligence , Breast Neoplasms , Contrast Media , Mammography , Humans , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Female , Middle Aged , Mammography/methods , Aged , Italy , Adult , Neoplasm Grading , Radiographic Image Interpretation, Computer-Assisted/methods , Receptor, ErbB-2 , Sensitivity and Specificity , Radiomics
3.
Heliyon ; 10(2): e24094, 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38293493

ABSTRACT

Breast cancer, a significant threat to women's health, demands early detection. Automating histopathological image analysis offers a promising solution to enhance efficiency and accuracy in diagnosis. This study addresses the challenge of breast cancer histopathological image classification by leveraging the ResNet architecture, known for its depth and skip connections. Two distinct approaches were pursued, each driven by a unique motivation. The first aimed to improve the learning process through self-supervised contrastive learning: a small subset of the training data is used for initial model training, and the training set is progressively expanded by incorporating confidently labeled data from the unlabeled pool, ultimately achieving a reliable model with limited training data. The second approach focused on optimizing the architecture by combining ResNet50 with an Inception module to obtain a lightweight and efficient classifier. The dataset comprises histopathological images categorized into benign and malignant classes at varying magnification levels (40X, 100X, 200X, 400X), all originating from the same source image. The results demonstrate state-of-the-art performance: 98% accuracy for images magnified at 40X and 200X, and 94% for 100X and 400X. Notably, the proposed architecture has a substantially reduced parameter count of approximately 3.6 million, whereas existing leading architectures have at least twice as many parameters.
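The first approach's progressive expansion of the training set can be sketched as a generic self-training loop. The 1-D nearest-centroid "model" and margin-based confidence below are illustrative stand-ins, not the paper's ResNet-based contrastive model.

```python
def centroid_fit(samples):
    """samples: list of (feature, label) pairs with labels 0/1."""
    centroids = {}
    for lbl in (0, 1):
        vals = [x for x, y in samples if y == lbl]
        centroids[lbl] = sum(vals) / len(vals)
    return centroids

def predict_with_confidence(centroids, x):
    d0, d1 = abs(x - centroids[0]), abs(x - centroids[1])
    label = 0 if d0 < d1 else 1
    confidence = abs(d0 - d1)          # margin between the two centroids
    return label, confidence

def self_train(labeled, unlabeled, threshold=1.0, rounds=5):
    """Repeatedly move confidently pseudo-labeled samples into the training set."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    for _ in range(rounds):
        centroids = centroid_fit(labeled)
        keep = []
        for x in unlabeled:
            lbl, conf = predict_with_confidence(centroids, x)
            if conf >= threshold:
                labeled.append((x, lbl))   # confident: pseudo-label and absorb
            else:
                keep.append(x)             # uncertain: leave in the pool
        unlabeled = keep
    return labeled, unlabeled
```

With two well-separated classes, samples near a centroid are absorbed while ambiguous ones stay in the pool.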

4.
Front Oncol ; 13: 1179025, 2023.
Article in English | MEDLINE | ID: mdl-37397361

ABSTRACT

Background: Breast-conserving surgery aims to remove all cancerous cells while minimizing the loss of healthy tissue. To ensure a balance between complete resection of the cancer and preservation of healthy tissue, it is necessary to assess the margins of the removed specimen during the operation. Deep ultraviolet (DUV) fluorescence scanning microscopy provides rapid whole-surface imaging (WSI) of resected tissues with significant contrast between malignant and normal/benign tissue. Intra-operative margin assessment with DUV images would benefit from an automated breast cancer classification method. Methods: Deep learning has shown promising results in breast cancer classification, but the limited DUV image dataset presents the challenge of overfitting when training a robust network. To overcome this challenge, the DUV-WSI images are split into small patches, and features are extracted using a pre-trained convolutional neural network; afterward, a gradient-boosting tree is trained on these features for patch-level classification. An ensemble learning approach merges the patch-level classification results with regional importance values, calculated by an explainable artificial intelligence method, to determine the margin status. Results: The proposed method determined the margin status of DUV WSIs with 95% accuracy, and its 100% sensitivity shows that it detects malignant cases reliably. The method could also accurately localize areas containing malignant or normal/benign tissue. Conclusion: The proposed method outperforms standard deep learning classification methods on the DUV breast surgical samples. The results suggest that it can improve classification performance and identify cancerous regions more effectively.
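The ensemble step, merging patch-level results with regional importance, can be sketched as a weighted vote. The probabilities, weights, and threshold below are invented for illustration; the paper derives its importance values with an explainable-AI method.

```python
def margin_status(patch_probs, importances, threshold=0.5):
    """Weighted average of patch-level malignancy probabilities -> margin call."""
    total_weight = sum(importances)
    score = sum(p * w for p, w in zip(patch_probs, importances)) / total_weight
    return ("malignant" if score >= threshold else "benign"), score

# Three patches; the first two look malignant and sit in important regions.
label, score = margin_status([0.9, 0.8, 0.1], [2, 2, 1])
print(label, score)  # malignant 0.7
```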

5.
Cancers (Basel) ; 15(3)2023 Jan 31.
Article in English | MEDLINE | ID: mdl-36765839

ABSTRACT

Histopathological images are a commonly used imaging modality for breast cancer. As manual analysis of histopathological images is difficult, automated tools utilizing artificial intelligence (AI) and deep learning (DL) methods are needed. Recent advances in DL approaches have enabled strong image classification performance across numerous application areas. This study develops an arithmetic optimization algorithm with deep-learning-based histopathological breast cancer classification (AOADL-HBCC) technique for healthcare decision making. The AOADL-HBCC technique employs noise removal based on median filtering (MF) and a contrast enhancement process. In addition, it applies an AOA with a SqueezeNet model to derive feature vectors. Finally, a deep belief network (DBN) classifier with an Adamax hyperparameter optimizer is applied for breast cancer classification. A comparative study showed that the AOADL-HBCC technique performs better than other recent methodologies, with a maximum accuracy of 96.77%.
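The median-filtering (MF) noise-removal stage can be illustrated with a minimal pure-Python 3x3 median filter over a 2-D grid (borders handled by clamping); a real system would use an image-processing library.

```python
def median_filter(img):
    """3x3 median filter over a 2-D list of pixel values."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            window = []
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    ni = min(max(i + di, 0), h - 1)  # clamp at borders
                    nj = min(max(j + dj, 0), w - 1)
                    window.append(img[ni][nj])
            window.sort()
            out[i][j] = window[4]                    # middle of 9 values
    return out

# A single salt-noise pixel is removed entirely.
noisy = [[0, 0, 0], [0, 255, 0], [0, 0, 0]]
print(median_filter(noisy))  # [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```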

6.
Diagnostics (Basel) ; 12(12)2022 Dec 12.
Article in English | MEDLINE | ID: mdl-36553140

ABSTRACT

In computer-aided diagnosis of breast cancer, deep learning has been shown to be effective at distinguishing whether lesions are present in tissue. However, traditional methods only classify masses as benign or malignant, without considering the contextual features between them and their adjacent tissues. Furthermore, for contrast-enhanced spectral mammography (CESM), existing studies have performed feature extraction on only a single image per breast. In this paper, we propose a multi-input deep learning network for automatic breast cancer classification. Specifically, we simultaneously input four images of each breast, carrying different feature information, into the network. We then process the feature maps in both horizontal and vertical directions, preserving the pixel-level contextual information within the neighborhood of the tumor during the pooling operation. Furthermore, we designed a novel loss function based on information bottleneck theory to optimize our multi-input network and ensure that the information common to the multiple input images is fully utilized. Our experiments on 488 images (256 benign and 232 malignant) from 122 patients show that the method's accuracy, precision, sensitivity, specificity, and F1-score are 0.8806, 0.8803, 0.8810, 0.8801, and 0.8806, respectively. The qualitative, quantitative, and ablation experiments show that our method significantly improves the accuracy of breast cancer classification and reduces the false positive rate. It can reduce misdiagnosis rates and unnecessary biopsies, helping doctors determine accurate clinical diagnoses of breast cancer from multiple CESM images.

7.
Healthcare (Basel) ; 10(12)2022 Nov 25.
Article in English | MEDLINE | ID: mdl-36553891

ABSTRACT

Breast cancer is one of the most widely recognized cancers after skin cancer. Though it can occur in anyone, it is far more common in women. Several imaging techniques, such as breast MRI, X-ray, thermography, mammography, and ultrasound, are used to identify it. In this study, artificial intelligence was used to rapidly detect breast cancer by analyzing ultrasound images from the Breast Ultrasound Images Dataset (BUSI), which consists of three categories: benign, malignant, and normal. The dataset comprises grayscale and masked ultrasound images of diagnosed patients. Validation tests were performed for quantitative outcomes using the performance measures of each procedure. The proposed framework proved effective: evaluation on raw images alone gave 78.97% test accuracy, and evaluation on masked images gave 81.02%, which could decrease human error in the diagnostic process. Moreover, the described framework achieved higher accuracy using a multi-headed CNN with two processed datasets based on the masked and original images, where the accuracy rose to 92.31% (±2) with a Mean Squared Error (MSE) loss of 0.05. This work primarily contributes to identifying the usefulness of a multi-headed CNN when working with two different types of data input. Finally, a web interface was built to make the model usable by non-technical personnel.

8.
Comput Biol Med ; 150: 106155, 2022 11.
Article in English | MEDLINE | ID: mdl-36240595

ABSTRACT

Histopathological image classification has become one of the most challenging tasks for researchers due to the fine-grained variability of the disease. However, the rapid development of deep learning-based models such as the convolutional neural network (CNN) has drawn much attention to the classification of complex biomedical images. In this work, we propose a novel end-to-end deep learning model, named the Multi-scale Dual Residual Recurrent Network (MTRRE-Net), for breast cancer classification from histopathological images. This model introduces a contrasting approach of a dual residual block combined with a recurrent network to overcome the vanishing gradient problem even when the network is significantly deep. The proposed model has been evaluated on a publicly available standard dataset, BreaKHis, and achieved impressive accuracy, outperforming state-of-the-art models on all images considered at various magnification levels.
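The residual idea behind the dual residual block can be sketched at scalar level: each unit computes f(x) + x, so an identity path always survives and gradients can bypass the transformation. This is an assumed simplified form for illustration, not the exact MTRRE-Net block.

```python
def relu(x):
    return x if x > 0 else 0.0

def residual_unit(x, weight):
    """Transformation plus identity shortcut: f(x) + x."""
    return relu(weight * x) + x

def dual_residual_block(x, w1, w2):
    """Two stacked residual units; the identity path survives both."""
    return residual_unit(residual_unit(x, w1), w2)

# Even with zero weights (a "dead" transformation), the input passes through.
print(dual_residual_block(1.0, 0.0, 0.0))  # 1.0
```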


Subject(s)
Breast Neoplasms , Deep Learning , Humans , Female , Breast Neoplasms/diagnostic imaging , Breast Neoplasms/pathology , Neural Networks, Computer , Breast/pathology
9.
Neural Comput Appl ; 34(20): 18015-18033, 2022.
Article in English | MEDLINE | ID: mdl-35698722

ABSTRACT

Breast cancer is the second leading cause of death in women; effective early detection can therefore reduce its mortality rate. Breast cancer detection and classification in the early phases of development may allow for optimal therapy. Convolutional neural networks (CNNs) have enhanced tumor detection and classification efficiency in medical imaging compared with traditional approaches. This paper proposes a novel classification model for breast cancer diagnosis based on a hybridized CNN and an improved optimization algorithm, along with transfer learning, to help radiologists detect abnormalities efficiently. The marine predators algorithm (MPA) is the optimization algorithm used; we improve it with an opposition-based learning strategy to cope with the weaknesses of the original MPA. The improved marine predators algorithm (IMPA) is used to find the best values for the hyperparameters of the CNN architecture. The proposed method uses a pretrained CNN model called ResNet50 (residual network), hybridized with the IMPA to produce an architecture called IMPA-ResNet50. Our evaluation is performed on two mammographic datasets: the Mammographic Image Analysis Society (MIAS) dataset and the Curated Breast Imaging Subset of DDSM (CBIS-DDSM). The proposed model outperformed the compared state-of-the-art approaches, achieving 98.32% accuracy, 98.56% sensitivity, and 98.68% specificity on the CBIS-DDSM dataset and 98.88% accuracy, 97.61% sensitivity, and 98.40% specificity on the MIAS dataset.
To evaluate the performance of the IMPA in finding optimal values for the hyperparameters of the ResNet50 architecture, it was compared with four other optimization algorithms: the gravitational search algorithm (GSA), Harris hawks optimization (HHO), the whale optimization algorithm (WOA), and the original MPA. These counterpart algorithms were also hybridized with the ResNet50 architecture, producing models named GSA-ResNet50, HHO-ResNet50, WOA-ResNet50, and MPA-ResNet50, respectively. The results indicated that the proposed IMPA-ResNet50 achieved better performance than its counterparts.
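The hyperparameter-search machinery shared by the IMPA and its counterparts can be sketched with a simple exhaustive search: evaluate candidate configurations against a scoring function and keep the best. The toy objective below is invented; the metaheuristics above differ in how they propose candidates, not in this surrounding loop.

```python
import itertools

def toy_score(lr, batch_size):
    """Hypothetical validation accuracy, peaking at lr=0.01, batch_size=32."""
    return 1.0 - abs(lr - 0.01) * 10 - abs(batch_size - 32) / 100.0

def grid_search(lrs, batch_sizes, score):
    """Evaluate every (lr, batch_size) pair and return the best with its score."""
    best = max(itertools.product(lrs, batch_sizes),
               key=lambda cfg: score(*cfg))
    return best, score(*best)

best, s = grid_search([0.001, 0.01, 0.1], [16, 32, 64], toy_score)
print(best, s)  # (0.01, 32) 1.0
```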

10.
J Colloid Interface Sci ; 611: 287-293, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34953461

ABSTRACT

Breast cancer seriously threatens women's health worldwide. Breast cancer classification may provide accurate molecular diagnostic information and prediction of tumor behavior to facilitate oncologic decision making. Here, we designed a dual-aptamer-functionalized gold nanoprobe (DA-GNP) for breast cancer classification based on Förster resonance energy transfer (FRET). Fluorescently labelled aptamers specific to ER and HER2 (typical biomarkers for breast cancer classification) are attached to the surface of gold nanoparticles (GNPs), where their fluorescence is quenched. The subtype-specific fluorescence is recovered when a labelled aptamer binds its biomarker protein, which is potentially useful for quantitative classification of different breast cancer subtypes.


Subject(s)
Aptamers, Nucleotide , Biosensing Techniques , Breast Neoplasms , Metal Nanoparticles , Female , Fluorescence Resonance Energy Transfer , Gold , Humans
11.
Med Image Anal ; 75: 102264, 2022 01.
Article in English | MEDLINE | ID: mdl-34781160

ABSTRACT

Cancer diagnosis, prognosis, and therapy response predictions from tissue specimens depend highly on the phenotype and topological distribution of the constituting histological entities. Thus, adequate tissue representations encoding histological entities are imperative for computer-aided cancer patient care. To this end, several approaches have leveraged cell-graphs, capturing the cell microenvironment, to depict the tissue. These allow graph theory and machine learning to map the tissue representation to tissue functionality and to quantify their relationship. Though cellular information is crucial, it alone is insufficient to comprehensively characterize complex tissue structure. We herein treat the tissue as a hierarchical composition of multiple types of histological entities from fine to coarse level, capturing multivariate tissue information at multiple levels. We propose a novel multi-level hierarchical entity-graph representation of tissue specimens to model the hierarchical compositions that encode histological entities as well as their intra- and inter-entity-level interactions. Subsequently, a hierarchical graph neural network is proposed to operate on the hierarchical entity-graph and map tissue structure to tissue functionality. Specifically, for input histology images, we utilize well-defined cells and tissue regions to build HierArchical Cell-to-Tissue (HACT) graph representations, and devise HACT-Net, a message-passing graph neural network, to classify the HACT representations. As part of this work, we introduce the BReAst Carcinoma Subtyping (BRACS) dataset, a large cohort of Haematoxylin & Eosin stained breast tumor regions of interest, to evaluate and benchmark our proposed methodology against pathologists and state-of-the-art computer-aided diagnostic approaches.
Through comparative assessment and ablation studies, our proposed method is demonstrated to yield superior classification results compared to alternative methods as well as individual pathologists. The code, data, and models can be accessed at https://github.com/histocartography/hact-net.
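The hierarchical entity-graph can be sketched as two graphs plus an inter-level assignment: a cell-level graph, a tissue-region graph, and a mapping from each cell to its enclosing region. Node names and edges below are invented for illustration; HACT-Net would pass messages within and across these levels.

```python
# Cell-level and region-level graphs as edge lists (invented toy example).
cell_edges = [("c1", "c2"), ("c2", "c3"), ("c4", "c5")]   # cell-to-cell
region_edges = [("r1", "r2")]                              # region-to-region

# Inter-level assignment: each cell belongs to one tissue region.
cell_to_region = {"c1": "r1", "c2": "r1", "c3": "r1",
                  "c4": "r2", "c5": "r2"}

def region_members(assignment):
    """Invert the cell->region mapping to list each region's cells."""
    members = {}
    for cell, region in assignment.items():
        members.setdefault(region, []).append(cell)
    return members

print(region_members(cell_to_region))
```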


Subject(s)
Histological Techniques , Neural Networks, Computer , Benchmarking , Humans , Prognosis
12.
Curr Med Imaging ; 18(4): 409-416, 2022.
Article in English | MEDLINE | ID: mdl-33602102

ABSTRACT

AIMS: Early detection of breast cancer has reduced many deaths. Early CAD systems served as a second opinion for radiologists and clinicians, and machine learning and deep learning have since brought tremendous changes to medical diagnosis and imaging. BACKGROUND: Breast cancer is the most commonly occurring cancer in women and the second most common cancer overall. According to 2018 statistics, there were over 2 million cases worldwide; Belgium and Luxembourg have the highest rates. OBJECTIVE: A method for breast cancer detection is proposed using ensemble learning, with both 2-class and 8-class classification performed. METHODS: To deal with imbalanced classification, the authors propose an ensemble of pretrained models. RESULTS: 98.5% training accuracy and 89% test accuracy are achieved on 8-class classification; 99.1% training and 98% test accuracy are achieved on 2-class classification. CONCLUSION: Misclassifications are high in class DC compared with the other classes, due to the imbalance in the dataset. In the future, the dataset size could be increased or different methods used. To implement this work, the authors used two Nvidia Tesla V100 GPUs on the Google Cloud Platform.
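An ensemble of pretrained models is commonly combined by majority voting over per-model class predictions. A generic sketch (not necessarily the authors' exact combination rule), with ties broken in favour of the earliest-voting model:

```python
from collections import Counter

def majority_vote(predictions):
    """predictions: one class label per model; returns the winning label."""
    counts = Counter(predictions)
    top = max(counts.values())
    for label in predictions:        # tie-break: earliest model's vote wins
        if counts[label] == top:
            return label

# Three models vote on one histology image's class.
print(majority_vote(["DC", "DC", "LC"]))  # DC
```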


Subject(s)
Breast Neoplasms , Breast , Breast Neoplasms/diagnostic imaging , Female , Humans , Machine Learning , Neural Networks, Computer
13.
Biology (Basel) ; 10(12)2021 Dec 17.
Article in English | MEDLINE | ID: mdl-34943262

ABSTRACT

BACKGROUND: Identification and treatment of breast cancer at an early stage can reduce mortality. Currently, mammography is the most widely used effective imaging technique in breast cancer detection. However, erroneous mammogram interpretation may result in a false diagnosis, as distinguishing cancerous masses from adjacent tissue is often complex and error-prone. METHODS: Six pre-trained and fine-tuned deep CNN architectures (VGG16, VGG19, MobileNetV2, ResNet50, DenseNet201, and InceptionV3) are evaluated to determine which model yields the best performance. We propose a BreastNet18 model using VGG16 as the foundational base, since VGG16 performs with the highest accuracy. An ablation study is performed on BreastNet18 to evaluate its robustness and achieve the highest possible accuracy. Various image processing techniques with suitable parameter values are employed to remove artefacts and increase image quality. A dataset of 1442 preprocessed mammograms was augmented using seven augmentation techniques, resulting in a dataset of 11,536 images. To investigate possible overfitting issues, k-fold cross-validation is carried out. The model was then tested on noisy mammograms to evaluate its robustness, and results were compared with previous studies. RESULTS: The proposed BreastNet18 model performed best, with a training accuracy of 96.72%, a validation accuracy of 97.91%, and a test accuracy of 98.02%. In contrast, VGG19 yielded a test accuracy of 96.24%, MobileNetV2 77.84%, ResNet50 79.98%, DenseNet201 86.92%, and InceptionV3 76.87%. CONCLUSIONS: Our proposed approach, based on image processing, transfer learning, fine-tuning, and an ablation study, demonstrated highly accurate breast cancer classification while dealing with a limited number of complex medical images.
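The dataset arithmetic above is simple to verify: keeping each of the 1442 preprocessed mammograms and adding seven augmented variants yields 1442 × 8 = 11,536 images.

```python
def augmented_count(n_images, n_augmentations, keep_original=True):
    """Total images after augmentation (original optionally retained)."""
    per_image = n_augmentations + (1 if keep_original else 0)
    return n_images * per_image

print(augmented_count(1442, 7))  # 11536
```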

14.
PeerJ Comput Sci ; 7: e344, 2021.
Article in English | MEDLINE | ID: mdl-33816995

ABSTRACT

Artificial neural networks (ANNs) perform well in real-world classification problems. In this paper, a robust classification model using an ANN was constructed to enhance the accuracy of breast cancer classification. The Taguchi method was used to determine a suitable number of neurons in the single hidden layer of the ANN; choosing a suitable number of neurons helps avoid overfitting, which would otherwise degrade the classification performance of an ANN. Based on the Taguchi method results, 15 neurons were selected for the hidden layer and used for training the proposed ANN model. The developed model was benchmarked on the Wisconsin Diagnostic Breast Cancer dataset, popularly known as the UCI dataset. Finally, the proposed model was compared with seven other existing classification models, and it was confirmed that the model in this study had the best breast cancer classification accuracy, at 98.8%. This confirmed that the proposed model significantly improved performance.

15.
BMC Med Inform Decis Mak ; 21(Suppl 1): 134, 2021 04 22.
Article in English | MEDLINE | ID: mdl-33888098

ABSTRACT

BACKGROUND: Deep learning algorithms significantly improve the accuracy of pathological image classification, but the accuracy of breast cancer classification using only single-mode pathological images still cannot meet the needs of clinical practice. Inspired by the real scenario of pathologists reading pathological images for diagnosis, we integrate pathological images with structured data extracted from the clinical electronic medical record (EMR) to further improve the accuracy of breast cancer classification. METHODS: In this paper, we propose a new richer fusion network for the classification of benign and malignant breast cancer based on multimodal data. So that pathological images can be integrated more fully with structured EMR data, we propose a method to extract a richer multilevel feature representation of the pathological image from multiple convolutional layers. Meanwhile, to minimize the information loss of each modality before fusion, we use a denoising autoencoder to lift the low-dimensional structured EMR data to a high-dimensional representation, instead of reducing the high-dimensional image data to a low-dimensional one before fusion. In addition, the denoising autoencoder naturally generalizes our method to make accurate predictions even with partially missing structured EMR data. RESULTS: The experimental results show that the proposed method is superior to the most advanced methods in terms of average classification accuracy (92.9%). In addition, we have released a dataset containing structured data from 185 patients extracted from EMRs and 3764 paired pathological images of breast cancer, which can be publicly downloaded from http://ear.ict.ac.cn/?page_id=1663 . CONCLUSIONS: We utilized a new richer fusion network to integrate highly heterogeneous data, leveraging the structured EMR data to improve the accuracy of pathological image classification. The application of automatic breast cancer classification algorithms in clinical practice thus becomes possible. Owing to the generality of the proposed fusion method, it can be straightforwardly extended to the fusion of other structured and unstructured data.


Subject(s)
Breast Neoplasms , Algorithms , Breast , Breast Neoplasms/diagnostic imaging , Electronic Health Records , Humans , Neural Networks, Computer
16.
Comput Methods Programs Biomed ; 203: 106018, 2021 May.
Article in English | MEDLINE | ID: mdl-33714900

ABSTRACT

BACKGROUND AND OBJECTIVE: The capability of deep learning radiomics (DLR) to extract high-level medical imaging features has promoted computer-aided diagnosis of breast masses detected on ultrasound. Recently, generative adversarial networks (GANs) have aided in tackling a general issue in DLR, i.e., obtaining a sufficient number of medical images. However, GAN methods require paired input and labeled images, which demand an exhaustive, very time-consuming human annotation process. The aim of this paper is to develop a radiomics model based on a semi-supervised GAN method to perform data augmentation in breast ultrasound images. METHODS: A total of 1447 ultrasound images, including 767 benign masses and 680 malignant masses, were acquired from a tertiary hospital. A semi-supervised GAN model was developed to augment the breast ultrasound images. The synthesized images were subsequently used to classify breast masses using a convolutional neural network (CNN). The model was validated using 5-fold cross-validation. RESULTS: The proposed GAN architecture generated high-quality breast ultrasound images, verified by two experienced radiologists. The improved performance of semi-supervised learning increased the quality of the synthetic data produced in comparison with the baseline method. We achieved more accurate breast mass classification results (accuracy 90.41%, sensitivity 87.94%, specificity 85.86%) with our synthetic data augmentation than with other state-of-the-art methods. CONCLUSION: The proposed radiomics model has demonstrated promising potential to synthesize and classify breast masses on ultrasound in a semi-supervised manner.
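The 5-fold cross-validation partitions the 1447 images into five folds, each held out once. A minimal index-splitting sketch (contiguous folds for simplicity; real pipelines shuffle and usually stratify by class):

```python
def kfold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds, spreading the remainder."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

folds = kfold_indices(1447, 5)
print([len(f) for f in folds])  # [290, 290, 289, 289, 289]
```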


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Breast/diagnostic imaging , Female , Humans , Ultrasonography , Ultrasonography, Mammary
17.
Med Phys ; 48(6): 2827-2837, 2021 Jun.
Article in English | MEDLINE | ID: mdl-33368376

ABSTRACT

PURPOSE: Transfer learning is commonly used in deep learning for medical imaging to alleviate the problem of limited available data. In this work, we studied the risk of feature leakage and its dependence on sample size when using a pretrained deep convolutional neural network (DCNN) as a feature extractor for classifying breast masses in mammography. METHODS: Feature leakage occurs when the training set is used for feature selection and classifier modeling while the cost function is guided by the validation performance or informed by the test performance. The high-dimensional feature space extracted from a pretrained DCNN suffers from the curse of dimensionality: feature subsets that provide excessively optimistic performance can be found for the validation set or test set if the latter is allowed unlimited reuse during algorithm development. We designed a simulation study to examine feature leakage when using a DCNN as a feature extractor for mass classification in mammography. A total of 4577 unique mass lesions were partitioned by patient into three sets: 3222 for training, 508 for validation, and 847 for independent testing. Three pretrained DCNNs (AlexNet, GoogLeNet, and VGG16) were first compared using the training set in fourfold cross-validation, and one was selected as the feature extractor. To assess generalization errors, the independent test set was sequestered as truly unseen cases. Training sets ranging in size from 10% to 75% of the available training set were simulated by random drawing, in addition to the full (100%) training set. Three commonly used feature classifiers (the linear discriminant, the support vector machine, and the random forest) were evaluated. A sequential feature selection method was used to find feature subsets that could achieve high classification performance, in terms of the area under the receiver operating characteristic curve (AUC), on the validation set.
The extent of feature leakage and the impact of training set size were analyzed by comparison to the performance in the unseen test set. RESULTS: All three classifiers showed large generalization error between the validation set and the independent sequestered test set at all sample sizes. The generalization error decreased as the sample size increased. At 100% of the sample size, one classifier achieved an AUC as high as 0.91 on the validation set while the corresponding performance on the unseen test set only reached an AUC of 0.72. CONCLUSIONS: Our results demonstrate that large generalization errors can occur in AI tools due to feature leakage. Without evaluation on unseen test cases, optimistically biased performance may be reported inadvertently, and can lead to unrealistic expectations and reduce confidence for clinical implementation.


Subject(s)
Mammography , Neural Networks, Computer , Algorithms , Breast/diagnostic imaging , Humans , Sample Size
18.
Sensors (Basel) ; 20(17)2020 Aug 22.
Article in English | MEDLINE | ID: mdl-32842640

ABSTRACT

Cancer identification and classification from histopathological images of the breast depends greatly on experts, and computer-aided diagnosis can play an important role when experts disagree. This automatic process increases classification accuracy at a reduced cost. Advances in convolutional neural network (CNN) structures have outperformed traditional approaches in biomedical imaging applications. One limiting factor of CNNs is that they use only spatial image features for classification, while spectral features from the transform domain are of comparable importance in complex image classification algorithms. This paper proposes a new CNN structure that classifies histopathological cancer images by integrating spectral features, obtained using a multi-resolution wavelet transform, with the spatial features of the CNN. In addition, batch normalization is used after every layer in the convolutional network to improve the poor convergence of CNNs, and the deep layers of the CNN are trained with spectral-spatial features. The proposed structure is tested on malignant histology images of the breast for both binary and multi-class tissue classification, using the BreaKHis Dataset and the Breast Cancer Classification Challenge 2015 Dataset. Experimental results show that the combination of spectral and spatial features improves the classification accuracy of the CNN and requires fewer training parameters than well-known models (i.e., VGG16 and AlexNet). The proposed structure achieves average accuracies of 97.58% and 97.45%, with 7.6 million training parameters, on the two datasets, respectively.
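The spectral features come from a multi-resolution wavelet transform. A single-level 1-D Haar transform, pairwise averages (approximation) and differences (detail), shows the idea; the paper applies a 2-D multi-resolution version to image patches.

```python
def haar_1d(signal):
    """One level of the (unnormalized) Haar transform; length must be even."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

# Smooth trend goes to the approximation band, local contrast to the detail band.
approx, detail = haar_1d([4, 2, 6, 8])
print(approx, detail)  # [3.0, 7.0] [1.0, -1.0]
```

Applying the transform again to the approximation band gives the next resolution level, which is what "multi-resolution" refers to.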


Subject(s)
Breast Neoplasms , Neural Networks, Computer , Algorithms , Breast , Breast Neoplasms/classification , Breast Neoplasms/diagnostic imaging , Female , Humans , Wavelet Analysis
19.
Ultrasound Med Biol ; 46(5): 1119-1132, 2020 05.
Article in English | MEDLINE | ID: mdl-32059918

ABSTRACT

To assist radiologists in breast cancer classification in automated breast ultrasound (ABUS) imaging, we propose a computer-aided diagnosis system based on a convolutional neural network (CNN) that classifies breast lesions as benign or malignant. The proposed CNN adopts a modified Inception-v3 architecture to provide efficient feature extraction in ABUS imaging. Because ABUS images can be visualized in transverse and coronal views, the proposed CNN provides an efficient way to extract multiview features from both views. The proposed CNN was trained and evaluated on 316 breast lesions (135 malignant and 181 benign). An observer performance test was conducted to compare five human reviewers' diagnostic performance before and after referring to the predictions of the proposed CNN. Our method achieved an area under the curve (AUC) of 0.9468 with five-fold cross-validation, with a sensitivity of 0.886 and a specificity of 0.876. Compared with conventional machine learning-based feature extraction schemes, particularly principal component analysis (PCA) and histogram of oriented gradients (HOG), our method achieved a significant improvement in classification performance, with an AUC more than 10% higher than those of PCA and HOG. During the observer performance test, all human reviewers' AUC values and sensitivities increased after referring to the classification results of the proposed CNN, and four of the five reviewers' AUCs improved significantly. The proposed CNN employing a multiview strategy shows promise for the diagnosis of breast cancer and could serve as a second reviewer to increase diagnostic reliability.


Subject(s)
Breast Neoplasms/classification , Breast Neoplasms/diagnostic imaging , Deep Learning , Image Interpretation, Computer-Assisted/methods , Neural Networks, Computer , Ultrasonography, Mammary/methods , Adult , Aged , Area Under Curve , Breast Diseases/classification , Breast Diseases/diagnostic imaging , Breast Diseases/pathology , Breast Neoplasms/pathology , Female , Humans , Middle Aged , Principal Component Analysis , Retrospective Studies
20.
Adv Exp Med Biol ; 1152: 75-104, 2019.
Article in English | MEDLINE | ID: mdl-31456181

ABSTRACT

Breast cancer encompasses a heterogeneous collection of neoplasms with diverse morphologies, molecular phenotypes, responses to therapy, probabilities of relapse and overall survival. Traditional histopathological classification aims to categorise tumours into subgroups to inform clinical management decisions, but the diversity within these subgroups remains considerable. Application of massively parallel sequencing technologies in breast cancer research has revealed the true depth of variability in terms of the genetic, phenotypic, cellular and microenvironmental constitution of individual tumours, with the realisation that each tumour is exquisitely unique. This poses great challenges in predicting the development of drug resistance, and treating metastatic disease. Central to achieving fully personalised clinical management is translating new insights on breast cancer heterogeneity into the clinical setting, to evolve the taxonomy of breast cancer and improve risk stratification.


Subject(s)
Breast Neoplasms/pathology , Neoplasms, Second Primary/pathology , Drug Resistance, Neoplasm , Female , High-Throughput Nucleotide Sequencing , Humans , Neoplasm Recurrence, Local