1.
Sci Rep ; 14(1): 10753, 2024 05 10.
Article in English | MEDLINE | ID: mdl-38730248

ABSTRACT

This paper proposes an approach to enhance the differentiation of benign and malignant Breast Tumors (BT) using histopathology images from the BreakHis dataset. The main stages involve preprocessing, which encompasses image resizing and data partitioning into training and testing sets, followed by data augmentation techniques. Both feature extraction and classification are performed by a Custom CNN. The experimental results show that the proposed approach using the Custom CNN model achieves better performance, with an accuracy of 84%, than the same approach applied with pretrained models (MobileNetV3, EfficientNetB0, VGG16, and ResNet50V2), which yield relatively lower accuracies ranging from 74% to 82%; these four models serve as both feature extractors and classifiers. To further improve accuracy and other performance metrics, the Grey Wolf Optimization (GWO) and Modified Gorilla Troops Optimization (MGTO) metaheuristic optimizers are applied to each model separately for hyperparameter tuning. In this case, the experimental results show that the Custom CNN model, refined with MGTO optimization, reaches an exceptional accuracy of 93.13% in just 10 iterations on the BreakHis dataset, outperforming both the other state-of-the-art methods and the four pretrained models.
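The hyperparameter search described above can be illustrated with a minimal Grey Wolf Optimization loop. This is a sketch of standard GWO applied to a toy objective, not the paper's MGTO variant or its actual CNN-tuning setup; the objective function, bounds, and parameter values here are assumptions for demonstration only.

```python
import numpy as np

def gwo_minimize(f, lo, hi, dim, n_wolves=8, n_iter=40, seed=0):
    """Minimal Grey Wolf Optimization: candidate solutions (wolves) move
    toward the three best solutions found so far (alpha, beta, delta),
    with an exploration factor `a` decaying linearly from 2 to 0."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(n_wolves, dim))
    for t in range(n_iter):
        fit = np.array([f(x) for x in X])
        alpha, beta, delta = X[np.argsort(fit)[:3]]
        a = 2.0 * (1 - t / n_iter)
        moves = []
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(X.shape), rng.random(X.shape)
            A = 2 * a * r1 - a              # step scale, shrinks over time
            C = 2 * r2                      # random emphasis on the leader
            moves.append(leader - A * np.abs(C * leader - X))
        # each wolf averages the three leader-guided candidate positions
        X = np.clip(np.mean(moves, axis=0), lo, hi)
    fit = np.array([f(x) for x in X])
    return X[fit.argmin()], fit.min()

# toy objective standing in for a validation-loss surface over hyperparameters
best_x, best_f = gwo_minimize(lambda x: float(np.sum((x - 0.5) ** 2)),
                              0.0, 1.0, dim=3)
```

In the paper's setting, `f` would evaluate a trained model's validation error for a given hyperparameter vector, which makes each fitness call expensive; that is why the reported 10-iteration convergence of the MGTO-tuned model matters.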


Subject(s)
Breast Neoplasms , Deep Learning , Humans , Breast Neoplasms/classification , Breast Neoplasms/pathology , Breast Neoplasms/diagnosis , Female , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Algorithms
2.
Sci Rep ; 14(1): 2702, 2024 02 01.
Article in English | MEDLINE | ID: mdl-38302545

ABSTRACT

In the healthcare sector, a patient's health status and biological and physical activity are monitored by different sensors that collect the required information about these activities using a Wireless Body Area Network (WBAN) architecture. Sensor-based human activity recognition (HAR), which offers remarkable ease of use and privacy, has drawn increasing attention from researchers with the growth of the Internet of Things (IoT) and wearable technology. Deep learning can extract high-dimensional information automatically, enabling end-to-end learning. The most significant obstacles to computer vision approaches, particularly convolutional neural networks (CNNs), are the effect of the environment background, camera shielding, and other variables. This paper proposes and develops a new HAR system in WBAN based on the Gramian Angular Field (GAF) and DenseNet. Once the necessary signals are obtained, the input signals are pre-processed through artifact removal and median filtering. In the initial stage, the time-series data captured by the sensors are converted into 2-dimensional images using the GAF algorithm. DenseNet then automatically processes and integrates the data collected from the diverse sensors. The experimental results show that the proposed method achieves the best outcomes: 97.83% accuracy, a 97.83% F-measure, and a 97.64% Matthews correlation coefficient (MCC).
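The time-series-to-image conversion step can be sketched with the Gramian Angular Summation Field, one common form of the GAF transform. The input series below is illustrative; the paper's exact GAF variant and normalization are not specified in the abstract.

```python
import numpy as np

def gramian_angular_field(series):
    """Gramian Angular Summation Field: rescale a 1-D series to [-1, 1],
    map each value to an angle phi = arccos(x), and form the matrix
    G[i, j] = cos(phi_i + phi_j), turning the series into a 2-D image."""
    x = np.asarray(series, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1   # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))            # polar encoding
    return np.cos(phi[:, None] + phi[None, :])

g = gramian_angular_field([0.0, 0.5, 1.0, 0.5, 0.0])
# g is a 5x5 symmetric matrix; stacking such images per sensor channel
# yields the 2-D inputs that DenseNet consumes
```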


Subject(s)
Deep Learning , Wearable Electronic Devices , Humans , Neural Networks, Computer , Algorithms , Human Activities
3.
Sci Rep ; 14(1): 851, 2024 01 08.
Article in English | MEDLINE | ID: mdl-38191606

ABSTRACT

The proposed AI-based diagnostic system aims to predict the respiratory support required for COVID-19 patients by analyzing the correlation between COVID-19 lesions and the level of respiratory support provided to the patients. Computed tomography (CT) imaging is used to analyze the three levels of respiratory support received by the patient: Level 0 (minimum support), Level 1 (non-invasive support such as soft oxygen), and Level 2 (invasive support such as mechanical ventilation). The system begins by segmenting the COVID-19 lesions from the CT images and creating an appearance model for each lesion using a 2D, rotation-invariant, Markov-Gibbs random field (MGRF) model. Three MGRF-based models are created, one for each level of respiratory support, allowing the system to differentiate between different levels of severity in COVID-19 patients. The system reaches a decision for each patient using a neural-network-based fusion system that combines the Gibbs energy estimates from the three MGRF-based models. The proposed system was assessed on 307 COVID-19-infected patients, achieving an accuracy of [Formula: see text], a sensitivity of [Formula: see text], and a specificity of [Formula: see text], indicating a high level of prediction accuracy.


Subject(s)
COVID-19 , Humans , COVID-19/diagnostic imaging , Tomography, X-Ray Computed , Neural Networks, Computer , Oxygen , Patients
4.
Cancers (Basel) ; 15(21)2023 Oct 30.
Article in English | MEDLINE | ID: mdl-37958390

ABSTRACT

Breast cancer stands out as the most frequently identified malignancy, ranking as the fifth leading cause of global cancer-related deaths. The American College of Radiology (ACR) introduced the Breast Imaging Reporting and Data System (BI-RADS) as a standard terminology facilitating communication between radiologists and clinicians; however, an update is now imperative to encompass the latest imaging modalities developed subsequent to the 5th edition of BI-RADS. Within this review article, we provide a concise history of BI-RADS, delve into advanced mammography techniques, ultrasonography (US), magnetic resonance imaging (MRI), PET/CT images, and microwave breast imaging, and subsequently furnish comprehensive, updated insights into Molecular Breast Imaging (MBI), diagnostic imaging biomarkers, and the assessment of treatment responses. This endeavor aims to enhance radiologists' proficiency in catering to the personalized needs of breast cancer patients. Lastly, we explore the augmented benefits of artificial intelligence (AI), machine learning (ML), and deep learning (DL) applications in segmenting, detecting, and diagnosing breast cancer, as well as the early prediction of the response of tumors to neoadjuvant chemotherapy (NAC). By assimilating state-of-the-art computer algorithms capable of deciphering intricate imaging data and aiding radiologists in rendering precise and effective diagnoses, AI has profoundly revolutionized the landscape of breast cancer radiology. Its vast potential holds the promise of bolstering radiologists' capabilities and ameliorating patient outcomes in the realm of breast cancer management.

5.
Sensors (Basel) ; 23(12)2023 Jun 07.
Article in English | MEDLINE | ID: mdl-37420558

ABSTRACT

Retinal optical coherence tomography (OCT) imaging is a valuable tool for assessing the condition of the posterior part of the eye. It has a great effect on the specificity of diagnosis, the monitoring of many physiological and pathological processes, and the evaluation of therapeutic effectiveness in various fields of clinical practice, including primary eye diseases and systemic diseases such as diabetes. Therefore, precise diagnosis, classification, and automated image-analysis models are crucial. In this paper, we propose an enhanced optical coherence tomography (EOCT) model to classify retinal OCT images based on modified ResNet(50) and random forest algorithms, which are used in the proposed study's training strategy to enhance performance. The Adam optimizer is applied during training to increase the efficiency of the ResNet(50) model compared with common pre-trained models, such as spatially separable convolutions and Visual Geometry Group (VGG)(16). The experimental results show that the sensitivity, specificity, precision, negative predictive value, false discovery rate, false negative rate, accuracy, and Matthews correlation coefficient are 0.9836, 0.9615, 0.9740, 0.9756, 0.0385, 0.0260, 0.0164, 0.9747, 0.9788, and 0.9474, respectively.
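The metrics reported above are all derived from confusion-matrix counts. A small sketch with illustrative counts (not the paper's actual confusion matrix) shows the relationships among them, e.g. that the false discovery rate is the complement of precision:

```python
def classification_metrics(tp, fp, tn, fn):
    """Derive the standard reported metrics from raw confusion-matrix counts."""
    sens = tp / (tp + fn)                  # sensitivity / recall
    spec = tn / (tn + fp)                  # specificity
    prec = tp / (tp + fp)                  # precision (PPV)
    npv = tn / (tn + fn)                   # negative predictive value
    fdr = fp / (fp + tp)                   # false discovery rate = 1 - precision
    fnr = fn / (fn + tp)                   # false negative rate = 1 - sensitivity
    acc = (tp + tn) / (tp + fp + tn + fn)
    mcc_num = tp * tn - fp * fn            # Matthews correlation coefficient
    mcc_den = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return dict(sensitivity=sens, specificity=spec, precision=prec,
                npv=npv, fdr=fdr, fnr=fnr, accuracy=acc,
                mcc=mcc_num / mcc_den)

# illustrative counts only
m = classification_metrics(tp=60, fp=2, tn=50, fn=1)
```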


Subject(s)
Deep Learning , Neural Networks, Computer , Tomography, Optical Coherence/methods , Retina/diagnostic imaging , Predictive Value of Tests
6.
Sci Rep ; 13(1): 8814, 2023 May 31.
Article in English | MEDLINE | ID: mdl-37258633

ABSTRACT

Several methods have been discovered to improve the performance of Deep Learning (DL). Many of them reach the best performance of their models by applying techniques such as transfer learning, data augmentation, dropout, and batch normalization, while others select the best optimizer and the best architecture for their model. This paper is mainly concerned with optimization algorithms in DL. It proposes a modified version of the Root Mean Squared Propagation (RMSProp) algorithm, called NRMSProp, to improve the speed of convergence and find the minimum of the loss function more quickly than the original RMSProp optimizer. NRMSProp takes the original algorithm a step further by exploiting the advantages of Nesterov Accelerated Gradient (NAG): it takes into consideration the direction of the gradient at the next step, with respect to the history of previous gradients, and adapts the value of the learning rate. As a result, this modification helps NRMSProp converge more quickly than the original RMSProp, without any increase in complexity. In this work, many experiments were conducted to evaluate the performance of NRMSProp by running several tests with deep Convolutional Neural Networks (CNNs) on different datasets using the RMSProp, Adam, and NRMSProp optimizers. The experimental results showed that NRMSProp achieves effective performance, with accuracy up to 0.97 in most cases, in comparison to the RMSProp and Adam optimizers, without any increase in the complexity of the algorithm and with a modest amount of memory and time.
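A plausible form of the described update, combining RMSProp's running-RMS step scaling with a Nesterov-style look-ahead gradient, can be sketched as follows. This is an assumed reconstruction from the abstract's description; the exact update rule and hyperparameters in the paper may differ.

```python
import numpy as np

def nrmsprop_minimize(grad, theta, lr=0.001, rho=0.9, mom=0.9,
                      eps=1e-8, n_steps=2000):
    """Sketch of an NRMSProp-style optimizer (assumed form): evaluate the
    gradient at a Nesterov look-ahead point, then scale the step by the
    running RMS of past gradients as in RMSProp."""
    v = np.zeros_like(theta)   # momentum / velocity buffer
    s = np.zeros_like(theta)   # running mean of squared gradients
    for _ in range(n_steps):
        g = grad(theta + mom * v)           # look-ahead gradient (NAG idea)
        s = rho * s + (1 - rho) * g * g     # RMSProp accumulator
        v = mom * v - lr * g / (np.sqrt(s) + eps)
        theta = theta + v
    return theta

# minimize f(x) = ||x - 1||^2, whose gradient is 2(x - 1)
theta = nrmsprop_minimize(lambda x: 2 * (x - 1.0), np.zeros(4))
```

The look-ahead gradient is what distinguishes this from plain RMSProp: the accumulator and step are driven by where the momentum is about to take the parameters, not where they currently are.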

7.
Sci Rep ; 13(1): 166, 2023 Jan 04.
Article in English | MEDLINE | ID: mdl-36599906

ABSTRACT

Counting the number of triangles in a graph is a major task in many large-scale graph analytics problems, such as computing the clustering coefficient, transitivity ratio, and trusses. In recent years, MapReduce has become one of the most popular and powerful frameworks for analyzing large-scale graphs on clusters of machines. In this paper, we propose two new MapReduce algorithms based on graph partitioning. The two algorithms avoid the duplicate counting of triangles that other algorithms suffer from. The experimental results show the high efficiency of the two algorithms in comparison with an existing algorithm, outperforming it in execution time, especially on very large graphs.
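The underlying triangle check can be sketched serially with the classic node-iterator approach; this is not the paper's partitioned MapReduce implementation, only the per-node counting idea it distributes, and it naively illustrates the duplicate-counting issue the paper's algorithms avoid (each triangle is seen three times and must be divided out).

```python
from itertools import combinations

def count_triangles(edges):
    """Node-iterator triangle counting: for each node, test every pair of
    its neighbors for the closing edge. Each triangle is discovered once
    at each of its three vertices, so the raw total is divided by 3."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    total = 0
    for u, nbrs in adj.items():
        for v, w in combinations(nbrs, 2):
            if w in adj[v]:                 # closing edge exists
                total += 1
    return total // 3

# K4 (complete graph on 4 nodes) contains exactly 4 triangles
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
# count_triangles(edges) -> 4
```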

8.
Multimed Tools Appl ; 82(11): 16591-16633, 2023.
Article in English | MEDLINE | ID: mdl-36185324

ABSTRACT

Optimization algorithms are used to improve model accuracy. The optimization process undergoes multiple cycles until convergence. A variety of optimization strategies have been developed to overcome the obstacles involved in the learning process, and some of them are considered in this study to learn more about their complexities. It is crucial to analyse and summarise optimization techniques methodically from a machine learning standpoint, since this can provide direction for future work in both machine learning and optimization. The approaches under consideration include Stochastic Gradient Descent (SGD), Stochastic Gradient Descent with Momentum, Runge-Kutta, Adaptive Learning Rate, Root Mean Square Propagation, Adaptive Moment Estimation, Deep Ensembles, Feedback Alignment, Direct Feedback Alignment, Adafactor, AMSGrad, and Gravity. Experiments demonstrate the ability of each optimizer when applied to machine learning models. Firstly, tests on skin cancer detection using the standard ISIC dataset were run with three common optimizers (Adaptive Moment Estimation, SGD, and Root Mean Square Propagation) to explore the effect of the algorithms on the skin images. The training results indicate that performance is enhanced using the Adam optimizer, which achieved 97.30% accuracy. The second dataset is COVIDx CT images, on which 99.07% accuracy was achieved with the Adam optimizer. The results indicate that the use of optimizers such as SGD and Adam improves accuracy in the training, testing, and validation stages.
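Three of the surveyed update rules (SGD, SGD with momentum, and Adam) can be sketched side by side on a toy one-dimensional objective. The hyperparameter values below are common defaults, not those used in the study.

```python
import numpy as np

def step(name, theta, g, state, lr=0.05):
    """One parameter update for three of the surveyed optimizers."""
    if name == "sgd":
        return theta - lr * g
    if name == "momentum":
        state["v"] = 0.9 * state.get("v", 0.0) + g     # velocity accumulation
        return theta - lr * state["v"]
    if name == "adam":
        t = state["t"] = state.get("t", 0) + 1
        state["m"] = 0.9 * state.get("m", 0.0) + 0.1 * g        # 1st moment
        state["s"] = 0.999 * state.get("s", 0.0) + 0.001 * g * g  # 2nd moment
        m_hat = state["m"] / (1 - 0.9 ** t)            # bias correction
        s_hat = state["s"] / (1 - 0.999 ** t)
        return theta - lr * m_hat / (np.sqrt(s_hat) + 1e-8)
    raise ValueError(name)

# minimize f(x) = (x - 3)^2, gradient 2(x - 3), with each optimizer
results = {}
for name in ("sgd", "momentum", "adam"):
    x, state = 0.0, {}
    for _ in range(200):
        x = step(name, x, 2 * (x - 3.0), state)
    results[name] = x
```

All three reach the minimum on this convex toy problem; the study's point is that their behavior diverges on the non-convex, noisy loss surfaces of real image datasets.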

9.
Diagnostics (Basel) ; 12(3)2022 Mar 12.
Article in English | MEDLINE | ID: mdl-35328249

ABSTRACT

Early grading of coronavirus disease 2019 (COVID-19), as well as ventilator support machines, are prime ways to help the world fight this virus and reduce the mortality rate. To reduce the burden on physicians, we developed an automatic Computer-Aided Diagnostic (CAD) system to grade COVID-19 from Computed Tomography (CT) images. This system segments the lung region from chest CT scans using an unsupervised approach based on an appearance model, followed by 3D rotation-invariant Markov-Gibbs Random Field (MGRF)-based morphological constraints. The system analyzes the segmented lung and generates precise, analytical imaging markers by estimating the MGRF-based analytical potentials. Three Gibbs energy markers were extracted from each CT scan by tuning the MGRF parameters on each lesion type separately, namely healthy/mild, moderate, and severe lesions. To represent these markers more reliably, a Cumulative Distribution Function (CDF) was generated, and statistical markers were extracted from it, namely the 10th through 90th CDF percentiles in 10% increments. Subsequently, the three extracted markers were combined and fed into a backpropagation neural network to make the diagnosis. The developed system was assessed on 76 COVID-19-infected patients using two metrics, namely accuracy and kappa. In this paper, the proposed system was trained and tested using three approaches. In the first approach, the MGRF model was trained and tested on the lungs; this approach achieved 95.83% accuracy and 93.39% kappa. In the second approach, we trained the MGRF model on the lesions and tested it on the lungs; this approach achieved 91.67% accuracy and 86.67% kappa. Finally, we trained and tested the MGRF model on lesions, achieving 100% accuracy and 100% kappa.
The results reported in this paper show the ability of the developed system to accurately grade COVID-19 lesions compared to other machine learning classifiers, such as k-Nearest Neighbor (KNN), decision tree, naïve Bayes, and random forest.
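The percentile-marker extraction step can be sketched as follows, with random values standing in for the per-scan Gibbs energy estimates (in the paper these come from the tuned MGRF models, not a synthetic distribution):

```python
import numpy as np

rng = np.random.default_rng(1)
# illustrative stand-in for per-voxel Gibbs energy estimates of one scan
energies = rng.gamma(shape=2.0, scale=1.5, size=1000)

# statistical markers: 10th through 90th percentiles in 10% increments,
# i.e. nine points sampled from the empirical CDF of the energies
markers = np.percentile(energies, np.arange(10, 100, 10))
```

Reducing each scan's energy distribution to nine CDF samples gives the backpropagation network a fixed-length, distribution-shaped feature vector regardless of lesion size.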

10.
Med Phys ; 49(2): 988-999, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34890061

ABSTRACT

PURPOSE: To assess whether the integration of (a) functional imaging features extracted from diffusion-weighted imaging (DWI) and (b) shape, texture, and volumetric features extracted from T2-weighted magnetic resonance imaging (MRI) can noninvasively improve the diagnostic accuracy of thyroid nodule classification. PATIENTS AND METHODS: In a retrospective study of 55 patients with pathologically proven thyroid nodules, T2-weighted and diffusion-weighted MRI scans of the thyroid gland were acquired. Spatial maps of the apparent diffusion coefficient (ADC) were reconstructed in all cases. To quantify the nodules' morphology, we used spherical harmonics as a new parametric shape descriptor to describe the complexity of the thyroid nodules, in addition to traditional volumetric descriptors (e.g., tumor volume and cuboidal volume). To capture the inhomogeneity of the texture of the thyroid nodules, we used histogram-based statistics (e.g., kurtosis, entropy, and skewness) of the T2-weighted signal. To achieve the main goal of this paper, a fusion system using an artificial neural network (NN) is proposed to integrate the functional imaging features (ADC) with the structural morphology and texture features. This framework was tested on 55 patients (20 with malignant nodules and 35 with benign nodules) using leave-one-subject-out (LOSO) training/testing validation. RESULTS: The functionality, morphology, and texture imaging features were estimated for all 55 patients. The accuracy of the computer-aided diagnosis (CAD) system steadily improved as the proposed imaging features were integrated.
The fusion system combining all biomarkers achieved a sensitivity, specificity, positive predictive value, negative predictive value, F1-score, and accuracy of 92.9% (confidence interval [CI]: 78.9–99.5%), 95.8% (CI: 87.4–99.7%), 93% (CI: 80.7–99.5%), 96% (CI: 88.8–99.7%), 92.8% (CI: 83.5–98.5%), and 95.5% (CI: 88.8–99.2%), respectively, using the LOSO cross-validation approach. CONCLUSION: The results demonstrated in this paper show the promise that integrating the functional features with the morphology and texture features, using current state-of-the-art machine learning approaches, will be extremely useful for identifying thyroid nodules and diagnosing their malignancy.
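The LOSO protocol can be sketched generically: each subject is held out once while the model trains on all the others, giving one unbiased prediction per subject. The nearest-centroid stand-in classifier and toy data below are illustrative assumptions, not the paper's neural-network fusion model.

```python
import numpy as np

def loso_predictions(features, labels, fit_predict):
    """Leave-one-subject-out validation: train on n-1 subjects, predict
    the held-out one, and repeat so every subject is predicted exactly once."""
    n = len(labels)
    preds = np.empty(n, dtype=labels.dtype)
    for i in range(n):
        train = np.arange(n) != i
        preds[i] = fit_predict(features[train], labels[train], features[i])
    return preds

# toy stand-in classifier: nearest class centroid on a 1-D feature
def nearest_centroid(X_tr, y_tr, x):
    c0, c1 = X_tr[y_tr == 0].mean(), X_tr[y_tr == 1].mean()
    return 0 if abs(x - c0) < abs(x - c1) else 1

X = np.array([0.1, 0.2, 0.15, 0.9, 1.0, 0.95])   # illustrative feature values
y = np.array([0, 0, 0, 1, 1, 1])                  # benign=0 / malignant=1
preds = loso_predictions(X, y, nearest_centroid)
accuracy = (preds == y).mean()
```

With only 55 subjects, LOSO makes the most of the data: every subject contributes to both training and evaluation without its own label ever leaking into its prediction.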


Subject(s)
Thyroid Nodule , Diffusion Magnetic Resonance Imaging , Humans , Machine Learning , Magnetic Resonance Imaging , Retrospective Studies , Thyroid Nodule/diagnostic imaging
11.
Complex Intell Systems ; : 1-12, 2021 Sep 07.
Article in English | MEDLINE | ID: mdl-34777979

ABSTRACT

In recent years, the adoption of machine learning has grown steadily in different fields, affecting the day-to-day decisions of individuals. This paper presents an intelligent system for recognizing humans' daily activities in a complex IoT environment. An enhanced capsule neural network model called 1D-HARCapsNet is proposed. The proposed model consists of a convolution layer, a primary capsule layer, an activity-capsules flat layer, and an output layer. It is validated using the WISDM dataset, collected via smart devices and balanced using the random-SMOTE algorithm to handle the imbalanced behavior of the dataset. The experimental results indicate the potential and strengths of the proposed 1D-HARCapsNet, which achieves enhanced performance with an accuracy of 98.67%, precision of 98.66%, recall of 98.67%, and F1-measure of 0.987, a major improvement over the conventional CapsNet (accuracy 90.11%, precision 91.88%, recall 89.94%, and F1-measure 0.93).
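The class-balancing step can be illustrated with a generic SMOTE-style interpolation: synthetic minority samples are generated between existing minority points. The paper's random-SMOTE variant chooses its reference points differently, so this is only a sketch of the interpolation idea, with illustrative data.

```python
import numpy as np

def smote_like_oversample(X_min, n_new, k=3, seed=0):
    """SMOTE-style oversampling sketch: synthesize minority-class samples
    by interpolating between a random minority point and one of its k
    nearest minority-class neighbors."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]       # skip the point itself (d = 0)
        j = rng.choice(nbrs)
        lam = rng.random()                  # interpolation fraction in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

# illustrative minority-class points (e.g., an under-represented activity)
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
synth = smote_like_oversample(X_min, n_new=8)
```

Balancing before training prevents the capsule network from simply favoring the majority activities, which is what the reported per-class precision and recall gains depend on.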
