1.
Front Plant Sci ; 15: 1352935, 2024.
Article in English | MEDLINE | ID: mdl-38938642

ABSTRACT

Introduction: Precise semantic segmentation of microbial alterations is paramount for their evaluation and treatment. This study focuses on harnessing the SegFormer segmentation model for precise semantic segmentation of strawberry diseases, aiming to improve disease detection accuracy under natural acquisition conditions. Methods: Three distinct Mix Transformer encoders - MiT-B0, MiT-B3, and MiT-B5 - were thoroughly analyzed to enhance disease detection, targeting diseases such as Angular leaf spot, Anthracnose rot, Blossom blight, Gray mold, Leaf spot, Powdery mildew on fruit, and Powdery mildew on leaves. The dataset consisted of 2,450 raw images, expanded to 4,574 augmented images. The Segment Anything Model integrated into the Roboflow annotation tool facilitated efficient annotation and dataset preparation. Results: The results reveal that MiT-B0 demonstrates balanced but slightly overfitting behavior, MiT-B3 adapts rapidly with consistent training and validation performance, and MiT-B5 offers efficient learning with occasional fluctuations, providing robust performance. MiT-B3 and MiT-B5 consistently outperformed MiT-B0 across disease types, with MiT-B5 achieving the most precise segmentation in general. Discussion: The findings provide key insights for researchers to select the most suitable encoder for disease detection applications, propelling the field forward for further investigation. The success in strawberry disease analysis suggests potential for extending this approach to other crops and diseases, paving the way for future research and interdisciplinary collaboration.
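Segmentation quality in studies like this one is typically scored per class with Intersection-over-Union. As a minimal sketch (the flat toy "masks" below are hypothetical, not the strawberry-disease data), per-class IoU and mean IoU can be computed directly from their definitions:

```python
# Per-class Intersection-over-Union (IoU), the standard semantic-segmentation
# metric, on flattened label lists. Toy labels: 0 = background, 1 = diseased.

def class_iou(pred, gt, cls):
    """IoU of one class between flat prediction and ground-truth label lists."""
    inter = sum(1 for p, g in zip(pred, gt) if p == cls and g == cls)
    union = sum(1 for p, g in zip(pred, gt) if p == cls or g == cls)
    return inter / union if union else 0.0

def mean_iou(pred, gt, classes):
    """Mean IoU averaged over all classes."""
    return sum(class_iou(pred, gt, c) for c in classes) / len(classes)

pred = [0, 1, 1, 0, 1, 0]
gt   = [0, 1, 0, 0, 1, 1]
print(class_iou(pred, gt, 1))  # 2 pixels agree out of 4 in the union -> 0.5
```

The same computation extends to 2-D masks by flattening them first.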

2.
Sci Rep ; 14(1): 1507, 2024 01 17.
Article in English | MEDLINE | ID: mdl-38233458

ABSTRACT

This paper investigated the use of language models and deep learning techniques for automating disease prediction from symptoms. Specifically, we explored the use of two Medical Concept Normalization-Bidirectional Encoder Representations from Transformers (MCN-BERT) models and a Bidirectional Long Short-Term Memory (BiLSTM) model, each optimized with a different hyperparameter optimization method, to predict diseases from symptom descriptions. We utilized two distinct datasets, Dataset-1 and Dataset-2. Dataset-1 consists of 1,200 data points, each representing a unique combination of disease labels and symptom descriptions, while Dataset-2 is designed to identify Adverse Drug Reactions (ADRs) from Twitter data, comprising 23,516 rows labeled as ADR (1) or Non-ADR (0) tweets. The results indicate that the MCN-BERT model optimized with AdamP achieved 99.58% accuracy on Dataset-1 and 96.15% on Dataset-2. The MCN-BERT model optimized with AdamW performed well, with 98.33% accuracy on Dataset-1 and 95.15% on Dataset-2, while the BiLSTM model optimized with Hyperopt achieved 97.08% accuracy on Dataset-1 and 94.15% on Dataset-2. Our findings suggest that language models and deep learning techniques hold promise for supporting earlier detection and more prompt treatment of diseases, as well as for expanding remote diagnostic capabilities. The MCN-BERT and BiLSTM models demonstrated robust performance in accurately predicting diseases from symptoms, indicating potential for further related research.
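The core task above is mapping free-text symptom descriptions to disease labels. The paper uses MCN-BERT and BiLSTM models; as a minimal stand-in sketch, a bag-of-words nearest-centroid classifier illustrates the input/output shape of the task (the symptom strings and labels below are entirely hypothetical):

```python
# Toy symptom-to-disease classifier: build one word-frequency "centroid" per
# disease label, then assign new text to the label with the most word overlap.
from collections import Counter

train = [
    ("fever cough fatigue", "flu"),
    ("cough sore throat fever", "flu"),
    ("itchy rash swelling", "allergy"),
    ("rash sneezing itchy eyes", "allergy"),
]

centroids = {}
for text, label in train:
    centroids.setdefault(label, Counter()).update(text.split())

def predict(text):
    """Return the label whose centroid shares the most word mass with the input."""
    words = Counter(text.split())
    score = lambda lab: sum(min(words[w], centroids[lab][w]) for w in words)
    return max(centroids, key=score)

print(predict("fever and cough"))
```

A transformer model replaces the word-overlap score with learned contextual embeddings, but the classification interface is the same.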


Subject(s)
Drug-Related Side Effects and Adverse Reactions , Humans , Electric Power Supplies , Language , Memory, Long-Term , Natural Language Processing
3.
Sci Rep ; 14(1): 2428, 2024 01 29.
Article in English | MEDLINE | ID: mdl-38287066

ABSTRACT

Combination therapy is a fundamental strategy in cancer chemotherapy. It involves administering two or more anti-cancer agents to increase efficacy and overcome multidrug resistance compared to monotherapy. However, drug combinations can exhibit synergy, additivity, or antagonism. This study presents a machine learning framework to classify and predict cancer drug combinations. The framework comprises several key steps: data collection and annotation from the O'Neil drug interaction dataset; data preprocessing; stratified splitting into training and test sets; construction and evaluation of classification models that categorize combinations as synergistic, additive, or antagonistic; application of regression models to predict combination sensitivity scores, improving on prior work; and, finally, examination of drug features and mechanisms of action to understand synergy behaviors for optimal combinations. The models identified combination pairs most likely to synergize against different cancers. Kinase inhibitors combined with mTOR inhibitors, DNA damage-inducing drugs, or HDAC inhibitors showed benefit, particularly for ovarian, melanoma, prostate, lung, and colorectal carcinomas. The analysis highlighted Gemcitabine, MK-8776, and AZD1775 as frequently synergizing across cancer types. This machine learning framework provides a valuable approach to uncover more effective multi-drug regimens.
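The three-way classification target described above amounts to thresholding a continuous synergy score. A minimal sketch, assuming illustrative cutoffs of plus or minus 10 (the paper's actual thresholds and the scores below are not given here and are hypothetical):

```python
# Map a drug-pair synergy score to the three classes used in the study.
# The +/-10 thresholds and the example scores are illustrative assumptions.

def classify_combination(synergy_score, threshold=10.0):
    """Label a drug pair from its combination synergy score."""
    if synergy_score > threshold:
        return "synergistic"
    if synergy_score < -threshold:
        return "antagonistic"
    return "additive"

for drug_pair, score in [("Gemcitabine+MK-8776", 24.3), ("A+B", -15.0), ("C+D", 2.1)]:
    print(drug_pair, classify_combination(score))
```

The regression models in the study predict the score itself, after which this labeling step recovers the class.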


Subject(s)
Antineoplastic Agents , Neoplasms , Humans , Drug Synergism , Antineoplastic Agents/pharmacology , Antineoplastic Agents/therapeutic use , Neoplasms/drug therapy , Antineoplastic Combined Chemotherapy Protocols/pharmacology , Antineoplastic Combined Chemotherapy Protocols/therapeutic use , Drug Combinations , Machine Learning
4.
Diagnostics (Basel) ; 13(22)2023 Nov 13.
Article in English | MEDLINE | ID: mdl-37998575

ABSTRACT

The paper focuses on hepatitis C virus (HCV) infection in Egypt, which has one of the highest rates of HCV in the world. The high prevalence is linked to several factors, including injection drug use, poor sterilization practices in medical facilities, and low public awareness. This paper introduces the hyOPTGB model, which employs an optimized gradient boosting (GB) classifier to predict HCV disease in Egypt. The model's accuracy is enhanced by optimizing hyperparameters with the OPTUNA framework. Min-Max normalization is used as a preprocessing step to scale the dataset values, and the forward selection (FS) wrapper method is used to identify essential features. The dataset used in the study contains 1,385 instances and 29 features and is available at the UCI machine learning repository. The authors compare the performance of five machine learning models, including decision tree (DT), support vector machine (SVM), dummy classifier (DC), ridge classifier (RC), and bagging classifier (BC), with the hyOPTGB model. The system's efficacy is assessed using various metrics, including accuracy, recall, precision, and F1-score. The hyOPTGB model outperformed the other machine learning models, achieving a 95.3% accuracy rate. The authors also compared the hyOPTGB model against models proposed in other studies that used the same dataset.
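The Min-Max normalization step mentioned above rescales each feature column to the range [0, 1]. A minimal sketch from its definition (the sample values are hypothetical):

```python
# Min-Max normalization of one feature column: x' = (x - min) / (max - min).

def min_max_scale(column):
    lo, hi = min(column), max(column)
    if hi == lo:                       # constant column: map everything to 0
        return [0.0] * len(column)
    return [(x - lo) / (hi - lo) for x in column]

print(min_max_scale([10, 20, 40]))
```

In practice the minimum and maximum are computed on the training split only and reused for the test split, so no test information leaks into preprocessing.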

5.
Biomimetics (Basel) ; 8(7)2023 Nov 17.
Article in English | MEDLINE | ID: mdl-37999193

ABSTRACT

The COVID-19 pandemic poses a worldwide threat that transcends national, philosophical, spiritual, political, social, and educational borders. By using a connected network, a healthcare system with Internet of Things (IoT) functionality can effectively monitor COVID-19 cases. IoT helps a COVID-19 patient recognize symptoms and receive better therapy more quickly. Artificial intelligence (AI) is a critical component in measuring, evaluating, and diagnosing the risk of infection; it can be used to anticipate cases and forecast the numbers of new infections, recovered cases, and deaths. In the context of COVID-19, IoT technologies are employed in patient monitoring and diagnostic processes to reduce COVID-19 exposure to others. This work uses an Indian dataset to create an enhanced convolutional neural network with a gated recurrent unit (CNN-GRU) model for COVID-19 death prediction via IoT. The data were also subjected to data normalization and data imputation. The dataset, comprising 4,692 cases and eight characteristics, was utilized in this research. The performance of the CNN-GRU model for COVID-19 death prediction was assessed using five evaluation metrics: median absolute error (MedAE), mean absolute error (MAE), root mean squared error (RMSE), mean square error (MSE), and the coefficient of determination (R2). ANOVA and Wilcoxon signed-rank tests were used to determine the statistical significance of the presented model. The experimental findings showed that the CNN-GRU model outperformed other models in COVID-19 death prediction.
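The five evaluation metrics named above follow directly from their definitions. A minimal sketch computing all of them on hypothetical predictions:

```python
# MedAE, MAE, MSE, RMSE, and R2 for a regression model, from their definitions.
import math
import statistics

def regression_metrics(y_true, y_pred):
    errs = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errs) / len(errs)
    mean_t = statistics.mean(y_true)
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)  # total sum of squares
    return {
        "MedAE": statistics.median(abs(e) for e in errs),
        "MAE": sum(abs(e) for e in errs) / len(errs),
        "MSE": mse,
        "RMSE": math.sqrt(mse),
        "R2": 1 - sum(e * e for e in errs) / ss_tot,
    }

m = regression_metrics([3, 5, 7, 9], [2.5, 5.0, 7.5, 9.0])
print(m)
```

R2 compares the model's squared error to that of a constant mean predictor, so a value near 1 means the model explains most of the variance.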

6.
Sensors (Basel) ; 23(4)2023 Feb 13.
Article in English | MEDLINE | ID: mdl-36850682

ABSTRACT

Parkinson's disease (PD) has become widespread all over the world. PD affects the human nervous system and, through it, many parts of the body connected via nerves. To classify people who suffer from PD and those who do not, an advanced model called Bayesian Optimization-Support Vector Machine (BO-SVM) is presented in this paper. Bayesian Optimization (BO) is a hyperparameter tuning technique for optimizing the hyperparameters of machine learning models in order to obtain better accuracy. In this paper, BO is used to optimize the hyperparameters of six machine learning models, namely, Support Vector Machine (SVM), Random Forest (RF), Logistic Regression (LR), Naive Bayes (NB), Ridge Classifier (RC), and Decision Tree (DT). The dataset used in this study consists of 23 features and 195 instances. The target class label is 1 or 0, where 1 indicates a person suffering from PD and 0 indicates a person who does not. Four evaluation metrics, namely accuracy, F1-score, recall, and precision, were computed to evaluate the performance of the classification models. The performance of the six machine learning models was tested on the dataset before and after hyperparameter tuning. The experimental results demonstrated that the SVM model achieved the best results among the compared models both before and after hyperparameter tuning, with an accuracy of 92.3% obtained using BO.
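The four evaluation metrics used above all derive from the binary confusion matrix. A minimal sketch on hypothetical label vectors, where 1 marks a PD case and 0 a healthy one:

```python
# Accuracy, precision, recall, and F1-score from binary confusion counts.

def classification_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": (tp + tn) / len(y_true),
            "precision": precision, "recall": recall, "f1": f1}

print(classification_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1]))
```

Reporting all four together matters on imbalanced medical datasets, where accuracy alone can look strong while recall on the positive class is poor.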


Subject(s)
Parkinson Disease , Humans , Parkinson Disease/diagnosis , Bayes Theorem , Support Vector Machine , Benchmarking , Machine Learning
7.
Multimed Tools Appl ; 82(11): 16591-16633, 2023.
Article in English | MEDLINE | ID: mdl-36185324

ABSTRACT

Optimization algorithms are used to improve model accuracy. The optimization process undergoes multiple cycles until convergence. A variety of optimization strategies have been developed to overcome the obstacles involved in the learning process, and several of them are considered in this study to learn more about their complexities. It is crucial to analyse and summarise optimization techniques methodically from a machine learning standpoint, since this can provide direction for future work in both machine learning and optimization. The approaches under consideration include Stochastic Gradient Descent (SGD), Stochastic Gradient Descent with Momentum, Runge-Kutta, Adaptive Learning Rate, Root Mean Square Propagation, Adaptive Moment Estimation, Deep Ensembles, Feedback Alignment, Direct Feedback Alignment, Adafactor, AMSGrad, and Gravity. Experiments demonstrate the ability of each optimizer when applied to machine learning models. Firstly, tests on skin cancer detection using the standard ISIC dataset were performed with three common optimizers (Adaptive Moment Estimation, SGD, and Root Mean Square Propagation) to explore the effect of these algorithms on skin images. The optimal training results indicate that performance is enhanced using the Adam optimizer, which achieved 97.30% accuracy. The second dataset is the COVIDx CT image set, on which 99.07% accuracy was achieved with the Adam optimizer. The results indicate that the use of optimizers such as SGD and Adam improved accuracy in the training, testing, and validation stages.
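Of the optimizers surveyed above, Adam (Adaptive Moment Estimation) is the one both experiments favored. A minimal sketch of its update rule, minimizing the toy loss f(x) = x**2; the hyperparameter values are the commonly used defaults, assumed here rather than taken from the paper:

```python
# Adam update rule: exponential moving averages of the gradient (m) and its
# square (v), bias-corrected, then a step scaled by m_hat / sqrt(v_hat).
import math

def adam_minimize(grad, x, steps=2000, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g          # first-moment estimate
        v = b2 * v + (1 - b2) * g * g      # second-moment estimate
        m_hat = m / (1 - b1 ** t)          # bias corrections
        v_hat = v / (1 - b2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

x_min = adam_minimize(grad=lambda x: 2 * x, x=5.0)
print(x_min)  # settles near 0, the minimizer of x**2
```

The per-coordinate scaling by sqrt(v_hat) is what lets Adam take large steps on flat directions and small steps on steep ones, the property the skin-image experiments above benefit from.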

8.
Environ Sci Pollut Res Int ; 29(6): 9318-9340, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34499306

ABSTRACT

To monitor groundwater salinization due to seawater intrusion (SWI) in the aquifer of the eastern Nile Delta, Egypt, we developed a predictive regression model based on an innovative approach using SWI indicators and artificial intelligence (AI) methodologies. Hydrogeological and hydrogeochemical data of the groundwater wells in three periods (1996, 2007, and 2018) were used as input data for the AI methods. All the studied indicators were enrolled in a feature extraction process in which the most significant inputs were determined, including the studied year, the distance from the shoreline, the aquifer type, and the hydraulic head. These inputs were used to build four basic AI models to obtain optimal predictions of the studied indicators (the base exchange index (BEX), the groundwater quality index for seawater intrusion (GQISWI), and water quality). The machine learning models utilized in this study are logistic regression, Gaussian process regression, feedforward backpropagation neural networks (FFBPN), and deep learning-based long short-term memory (LSTM). The FFBPN model achieved better evaluation results than the other models in terms of root mean square error (RMSE) and R2 values in the testing phase, with R2 values of 0.9667, 0.9316, and 0.9259 for BEX, GQISWI, and water quality, respectively. Accordingly, the FFBPN was used to build a predictive model for electrical conductivity for the years 2020 and 2030. Reasonable results were attained despite the imbalanced nature of the dataset for different times and sample sizes. The results show that the 1000 µS/cm boundary is expected to move inland by ~9.5 km (eastern part), ~10 km (western part), and ~12.4 km (central part) between 2018 and 2030. This encroachment would be hazardous to water resources and agriculture unless action plans are taken.
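The best-performing model above, a feedforward backpropagation neural network (FFBPN), can be sketched minimally in pure Python: one hidden tanh layer trained by stochastic gradient descent on hypothetical toy data (here fitting y = x**2, not the Nile Delta measurements):

```python
# Tiny 1-H-1 feedforward net trained by backpropagation of squared error.
import math, random

random.seed(0)
H = 4                                        # hidden units
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(x):
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2, h

data = [(x / 10, (x / 10) ** 2) for x in range(-10, 11)]  # y = x^2 on [-1, 1]

def epoch(lr=0.05):
    global b2
    total = 0.0
    for x, y in data:
        out, h = forward(x)
        err = out - y
        total += err * err
        for j in range(H):
            dh = err * w2[j] * (1 - h[j] ** 2)  # gradient reaching hidden unit j
            w2[j] -= lr * err * h[j]
            w1[j] -= lr * dh * x
            b1[j] -= lr * dh
        b2 -= lr * err
    return total / len(data)

loss_first = epoch()
for _ in range(500):
    loss_last = epoch()
print(loss_first, "->", loss_last)           # training error decreases
```

The real FFBPN takes several well features as input rather than one scalar, but the forward pass and weight-update pattern are the same.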


Subject(s)
Artificial Intelligence , Groundwater , Egypt , Environmental Monitoring , Seawater
9.
Biomed Signal Process Control ; 73: 103441, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34899960

ABSTRACT

The ongoing COVID-19 pandemic motivates scientists and researchers to detect and diagnose infected people. The chest X-ray (CXR) image is a common tool for such detection. Although CXR images offer little informative detail about COVID-19 patches, computer vision helps overcome this limitation through grayscale spatial exploitation analysis. It is therefore highly recommended to acquire more CXR images to increase the capacity and ability to learn for mining grayscale spatial exploitation. In this paper, an efficient Gray-scale Spatial Exploitation Net (GSEN) is designed by employing web-page crawling across cloud computing environments. The motivations of this work are: i) a framework methodology for constructing a consistent dataset by web crawling, with the dataset updated continuously at each crawling iteration; ii) a lightweight, fast-learning gray-scale spatial exploitation deep neural net with comparable accuracy and fine-tuned parameters; and iii) a comprehensive evaluation of the designed net on the collected COVID-19 crawled dataset(s) versus transfer learning with pre-trained nets. Different experiments benchmark both the proposed web crawling framework methodology and the designed gray-scale spatial exploitation net. In terms of accuracy, the proposed net achieves 95.60% for two-class labels and 92.67% for three-class labels, compared with the recent transfer-learning approaches GoogLeNet, VGG-19, ResNet-50, and AlexNet. Furthermore, crawling improves accuracy rates in a positive relationship with the cardinality of the crawled CXR dataset.
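One building block of the dataset-construction idea above is a crawler pass that pulls image URLs out of a fetched page. A minimal sketch using the standard-library HTML parser; network access is omitted, and the HTML snippet and file names are hypothetical stand-ins for crawled CXR pages:

```python
# Extract candidate image URLs from one fetched HTML page.
from html.parser import HTMLParser

class ImageLinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        # Keep only <img> sources with raster-image extensions.
        if tag == "img":
            src = dict(attrs).get("src")
            if src and src.lower().endswith((".png", ".jpg", ".jpeg")):
                self.images.append(src)

page = ('<html><body><img src="cxr_001.png">'
        '<img src="logo.svg"><img src="cxr_002.jpg"></body></html>')
parser = ImageLinkParser()
parser.feed(page)
print(parser.images)
```

In a full crawler this step runs once per fetched page, and the collected URLs are downloaded, deduplicated, and filtered before being appended to the training set.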

10.
Comput Biol Med ; 135: 104606, 2021 08.
Article in English | MEDLINE | ID: mdl-34247134

ABSTRACT

BACKGROUND AND OBJECTIVE: The impact of diet on COVID-19 patients has been a global concern since the pandemic began. Choosing different types of food affects people's mental and physical health, and with persistent consumption of certain types of food and frequent eating, there may be an increased likelihood of death. In this paper, a regression system is employed to evaluate the prediction of death status based on food categories. METHODS: A Healthy Artificial Nutrition Analysis (HANA) model is proposed. The proposed model is used to generate a food recommendation system and track individual habits during the COVID-19 pandemic to ensure that healthy foods are recommended. To collect information about the different types of foods that most of the world's population eats, the COVID-19 Healthy Diet Dataset was used. This dataset includes different types of foods from 170 countries around the world, as well as obesity, undernutrition, death, and COVID-19 data as percentages of the total population. The dataset was used to predict death status using different machine learning regression models, i.e., linear regression (ridge regression, simple linear regularization, and elastic net regression) and AdaBoost models. RESULTS: The death status was predicted with high accuracy, and the food categories related to death were identified with promising accuracy. The Mean Square Error (MSE), Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and R2 metrics and 20-fold cross-validation were used to evaluate the accuracy of the prediction models on the COVID-19 Healthy Diet Dataset. The evaluations demonstrated that elastic net regression was the most efficient prediction model. Based on an in-depth analysis of recent nutrition recommendations by the WHO, we confirm the same advice already introduced in the WHO report.
Overall, the outcomes also indicate that recovery among COVID-19 patients is strongest for people who eat more vegetal products, oilcrops, grains, beverages, and cereals (excluding beer). Moreover, higher consumption of animal products, animal fats, meat, milk, sugar and sweetened foods, and sugar crops was associated with a higher number of deaths and fewer patient recoveries. Sugar consumption was an important factor, and the rates of death and recovery were influenced by obesity. CONCLUSIONS: Based on the evaluation metrics, the proposed HANA model may outperform other algorithms used to predict death status. The results of this study may direct patients to eat particular types of food to reduce the possibility of becoming infected with the COVID-19 virus.
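The 20-fold cross-validation used above partitions the sample indices into k disjoint folds, each serving once as the test set. A minimal sketch of the index splitting (shown with k=5 on a hypothetical 10-sample set for brevity):

```python
# K-fold cross-validation index splitting: every sample appears in exactly
# one test fold, and fold sizes differ by at most one.

def k_fold_indices(n_samples, k):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    indices = list(range(n_samples))
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        yield train, test
        start += size

folds = list(k_fold_indices(10, 5))
print(folds[0])  # first fold: samples 0-1 held out for testing
```

Each metric (MSE, RMSE, MAE, R2) is then computed per fold and averaged, which is what makes the reported scores robust to any single lucky split.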


Subject(s)
COVID-19 , Pandemics , Animals , Diet , Diet, Healthy , Humans , SARS-CoV-2
11.
Comput Intell Neurosci ; 2020: 8821868, 2020.
Article in English | MEDLINE | ID: mdl-33029115

ABSTRACT

Multipose face recognition is one of the recent challenges faced by researchers interested in security applications. Several studies have discussed improving the accuracy of multipose face recognition by enhancing face detectors such as Viola-Jones, Real AdaBoost, and the Cascade Object Detector, while others have concentrated on recognition systems such as support vector machines and deep convolutional neural networks. In this paper, a combined adaptive deep learning vector quantization (CADLVQ) classifier is proposed. The proposed classifier addresses the weaknesses of adaptive deep learning vector quantization classifiers by using a majority voting algorithm with the Speeded-Up Robust Features (SURF) extractor. Experimental results indicate that the proposed classifier provides promising results in terms of sensitivity, specificity, precision, and accuracy compared to recent approaches based on deep learning, statistical methods, and classical neural networks. Finally, the comparison is performed empirically using a confusion matrix to ensure the reliability and robustness of the proposed system compared to the state of the art.
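The majority voting step used by the proposed classifier is simple to state: each base classifier votes a label, and the most frequent label wins. A minimal sketch (the classifier outputs below are hypothetical):

```python
# Majority voting over the predictions of several base classifiers.
from collections import Counter

def majority_vote(predictions):
    """Return the most common predicted label."""
    return Counter(predictions).most_common(1)[0][0]

votes = ["person_A", "person_B", "person_A"]   # three base classifiers
print(majority_vote(votes))
```

With an odd number of voters a strict majority is unambiguous; with ties, `Counter.most_common` falls back on the order in which labels were first counted, so a real system would add an explicit tie-breaking rule such as preferring the highest-confidence classifier.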


Subject(s)
Deep Learning , Facial Recognition , Neural Networks, Computer , Reproducibility of Results , Support Vector Machine