Results 1 - 7 of 7
1.
Sustainability ; 15(11):8924, 2023.
Article in English | ProQuest Central | ID: covidwho-20245432

ABSTRACT

Assessing e-learning readiness is crucial for educational institutions to identify areas of their e-learning systems that need improvement and to develop strategies that enhance students' readiness. This paper presents an effective approach for assessing e-learning readiness by combining the ADKAR model with machine learning-based feature importance identification methods. The motivation for using machine learning lies in its ability to capture nonlinearity in the data and its flexibility as a data-driven modeling approach. This study surveyed faculty members and students in the Faculty of Economics at Tlemcen University, Algeria, to gather data on the ADKAR model's five dimensions: awareness, desire, knowledge, ability, and reinforcement. Correlation analysis revealed a significant relationship between all dimensions; specifically, the pairwise correlation coefficients between readiness and awareness, desire, knowledge, ability, and reinforcement are 0.5233, 0.5983, 0.6374, 0.6645, and 0.3693, respectively. Two machine learning algorithms, random forest (RF) and decision tree (DT), were used to identify the most important ADKAR factors influencing e-learning readiness. Both algorithms consistently identified ability and knowledge as the most significant factors, with importance scores of 0.565 and 0.514 for ability and 0.170 and 0.251 for knowledge under RF and DT, respectively. Additionally, SHapley Additive exPlanations (SHAP) values were used to further explore the impact of each variable on the final prediction, highlighting ability as the most influential factor. These findings suggest that universities should focus on enhancing students' abilities and providing them with the necessary knowledge to increase their readiness for e-learning. This study provides valuable insights into the factors influencing university students' e-learning readiness.
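The RF/DT feature-importance step this abstract describes can be sketched with scikit-learn. The data below are synthetic stand-ins for the survey responses: the coefficients making ability and knowledge dominant are illustrative assumptions chosen to mimic the reported ranking, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
# Synthetic stand-in for the survey: five ADKAR dimensions on a 1-5 scale,
# with a readiness score driven mostly by ability, then knowledge (assumed).
names = ["awareness", "desire", "knowledge", "ability", "reinforcement"]
X = rng.uniform(1, 5, size=(200, 5))
y = 0.6 * X[:, 3] + 0.25 * X[:, 2] + 0.1 * rng.normal(size=200)

for model in (RandomForestRegressor(random_state=0),
              DecisionTreeRegressor(random_state=0)):
    model.fit(X, y)
    # Impurity-based importances; the paper additionally uses SHAP values.
    ranking = sorted(zip(names, model.feature_importances_),
                     key=lambda t: -t[1])
    print(type(model).__name__, ranking[:2])
```

Both models recover "ability" as the top factor on this toy data, mirroring the qualitative finding reported in the abstract.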

2.
Molecules ; 28(8)2023 Apr 17.
Article in English | MEDLINE | ID: covidwho-2298470

ABSTRACT

Favipiravir (FP) and Ebselen (EB) belong to a broad range of antiviral drugs that have shown potential as medications against many viruses. Employing molecular dynamics simulations and machine learning (ML) combined with van der Waals density functional theory, we have uncovered the binding characteristics of these two antiviral drugs on a phosphorene nanocarrier. Four machine learning models (Bagged Trees (BT), Gaussian Process Regression (GPR), Support Vector Regression (SVR), and Regression Trees (RT)) were trained to predict the Hamiltonian and the interaction energy of the antiviral molecules on a phosphorene monolayer. Training efficient and accurate models that approximate density functional theory (DFT) is the final step in using ML to aid the design of new drugs. To improve prediction accuracy, Bayesian optimization was employed to tune the GPR, SVR, RT, and BT models. Results revealed that the GPR model achieved superior prediction performance, with an R2 of 0.9649, indicating that it can explain 96.49% of the variability in the data. Then, by means of DFT calculations, we examined the interaction characteristics and thermodynamic properties in a vacuum and at a continuum-solvent interface. These results illustrate that the drug-loaded system forms a stable, functionalized 2D complex with robust thermal stability. The change in Gibbs free energy at different surface charges and temperatures implies that the FP and EB molecules can adsorb from the gas phase onto the 2D monolayer under different pH conditions and at high temperatures. The results point to a valuable antiviral drug therapy loaded on 2D biomaterials that may open a new route for treating diseases such as those caused by SARS-CoV.


Subject(s)
Antiviral Agents , Molecular Dynamics Simulation , Antiviral Agents/pharmacology , Antiviral Agents/chemistry , Bayes Theorem , Machine Learning , Density Functional Theory
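The surrogate-modeling step described in this abstract, fitting a GPR to reproduce energies that would otherwise require DFT, can be sketched as follows. The two descriptors and the target function are illustrative assumptions standing in for the real DFT inputs and interaction energies; scikit-learn's GPR tunes its kernel hyperparameters by maximizing the marginal likelihood, whereas the paper uses Bayesian optimization.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
# Hypothetical descriptors (e.g., drug-monolayer distance, orientation angle)
# standing in for DFT inputs; the target mimics an interaction-energy well.
X = rng.uniform(2.0, 6.0, size=(120, 2))
y = -1.5 * np.exp(-(X[:, 0] - 3.0) ** 2) + 0.1 * np.sin(X[:, 1])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), random_state=1)
gpr.fit(X_tr, y_tr)
r2 = r2_score(y_te, gpr.predict(X_te))
print("R2:", round(r2, 4))
```

On this smooth toy surface the GPR reaches a high R2, which is the same figure of merit (coefficient of determination) the abstract reports for its best model.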
3.
Diagnostics (Basel) ; 13(8)2023 Apr 18.
Article in English | MEDLINE | ID: covidwho-2296206

ABSTRACT

This study introduces a new method for identifying COVID-19 infections from blood test data, framed as an anomaly detection problem combining kernel principal component analysis (KPCA) and the one-class support vector machine (OCSVM). The approach aims to differentiate healthy individuals from those infected with COVID-19 using blood test samples. The KPCA model is used to identify nonlinear patterns in the data, and the OCSVM is used to detect abnormal features. The approach is semi-supervised: it uses unlabeled data during training and only requires data from healthy cases. The method's performance was tested using two sets of blood test samples from hospitals in Brazil and Italy. Compared to other semi-supervised models, such as KPCA-based isolation forest (iForest), local outlier factor (LOF), and elliptical envelope (EE) schemes, independent component analysis (ICA), and PCA-based OCSVM, the proposed KPCA-OCSVM approach achieved enhanced discrimination performance for detecting potential COVID-19 infections. For the two COVID-19 blood test datasets considered, the proposed approach attained an AUC (area under the receiver operating characteristic curve) of 0.99, indicating high accuracy in distinguishing between positive and negative samples. The study suggests that this approach is a promising solution for detecting COVID-19 infections without labeled data.
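The KPCA-OCSVM pipeline can be sketched with scikit-learn. The feature values below are synthetic stand-ins for blood-test markers (the shift separating the two groups is an assumption for illustration); only "healthy" samples are seen during training, matching the one-class setting the abstract describes.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)
# Synthetic stand-ins for routine blood-test markers: training uses
# "healthy" samples only (semi-supervised, one-class setting).
healthy = rng.normal(0.0, 1.0, size=(300, 8))
suspect = rng.normal(3.0, 1.5, size=(50, 8))  # shifted, wider distribution

# KPCA captures nonlinear structure; OCSVM then learns the healthy region.
kpca = KernelPCA(n_components=4, kernel="rbf", gamma=0.01).fit(healthy)
ocsvm = OneClassSVM(nu=0.05).fit(kpca.transform(healthy))

pred_healthy = ocsvm.predict(kpca.transform(healthy))  # +1 inlier, -1 anomaly
pred_suspect = ocsvm.predict(kpca.transform(suspect))
print("false-alarm rate:", np.mean(pred_healthy == -1))
print("detection rate:  ", np.mean(pred_suspect == -1))
```

The `nu` parameter roughly bounds the fraction of training (healthy) samples flagged as anomalies, which controls the false-alarm rate of the detector.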

4.
Sci Rep ; 12(1): 2467, 2022 02 14.
Article in English | MEDLINE | ID: covidwho-1684106

ABSTRACT

This study aims to develop an assumption-free, data-driven model to accurately forecast COVID-19 spread. To this end, we first employed Bayesian optimization to tune the Gaussian process regression (GPR) hyperparameters and develop an efficient GPR-based model for forecasting recovered and confirmed COVID-19 cases in two highly impacted countries, India and Brazil. However, standard machine learning models do not consider the time dependency in the COVID-19 data series. To alleviate this limitation, dynamic information was taken into account by introducing lagged measurements into the investigated machine learning models. Additionally, we assessed the contribution of the incorporated features to the COVID-19 prediction using the random forest algorithm. Results reveal that significant improvement can be obtained using the proposed dynamic machine learning models. In addition, the results highlight the superior performance of the dynamic GPR compared to the other models (i.e., support vector regression, boosted trees, bagged trees, decision tree, random forest, and XGBoost), achieving an average mean absolute percentage error of around 0.1%. Finally, we provide the confidence level of the predictions from the dynamic GPR model and show that they fall within the 95% confidence interval. This study presents a promising, shallow, and simple approach for predicting COVID-19 spread.


Subject(s)
Algorithms , COVID-19/transmission , Forecasting/methods , Machine Learning , Neural Networks, Computer , Bayes Theorem , Brazil/epidemiology , COVID-19/epidemiology , COVID-19/virology , Humans , India/epidemiology , Pandemics/prevention & control , Reproducibility of Results , SARS-CoV-2/physiology
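The lagged-measurement construction and GPR forecast described above can be sketched as follows. The logistic toy series and all parameter values are illustrative assumptions, and for brevity a random train/test split is used rather than a true chronological forecast; the 95% interval comes from the GPR's predictive standard deviation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split

# Toy cumulative-case series standing in for confirmed COVID-19 counts.
t = np.arange(120, dtype=float)
series = 100.0 + 1000.0 / (1.0 + np.exp(-(t - 60.0) / 10.0))  # logistic growth

def make_lagged(y, n_lags):
    """Turn a series into (lagged inputs, next value) pairs."""
    X = np.column_stack([y[i : len(y) - n_lags + i] for i in range(n_lags)])
    return X, y[n_lags:]

X, y = make_lagged(series, n_lags=5)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=4)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(X_tr, y_tr)
mean, std = gpr.predict(X_te, return_std=True)

mape = np.mean(np.abs((y_te - mean) / y_te)) * 100.0
coverage = np.mean(np.abs(y_te - mean) <= 1.96 * std)  # 95% interval hit rate
print(f"MAPE: {mape:.3f}%  95% CI coverage: {coverage:.2f}")
```

Feeding the previous `n_lags` observations as inputs is what makes the model "dynamic": the GPR learns the next value as a function of recent history rather than of time alone.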
5.
IEEE Trans Instrum Meas ; 71: 2500211, 2022.
Article in English | MEDLINE | ID: covidwho-1566252

ABSTRACT

A blood test has recently become an important tool to help identify false-positive/false-negative real-time reverse transcription polymerase chain reaction (rRT-PCR) results, mainly because it is an inexpensive and handy option for detecting potential COVID-19 patients. However, the rRT-PCR test requires certified laboratories, expensive equipment, and trained personnel, and 3-4 h are needed to deliver results. Furthermore, it has relatively high false-negative rates, around 15%-20%. Consequently, an alternative, more accessible, quicker, and less costly solution is needed. This article introduces flexible and unsupervised data-driven approaches to detect COVID-19 infection based on blood test samples. In other words, we address COVID-19 infection detection from blood tests as an anomaly detection problem solved with an unsupervised deep hybrid model. Essentially, we amalgamate the feature-extraction capability of the variational autoencoder (VAE) and the detection sensitivity of the one-class support vector machine (1SVM) algorithm. Two sets of routine blood test samples, from the Albert Einstein Hospital, São Paulo, Brazil, and the San Raffaele Hospital, Milan, Italy, are used to assess the performance of the investigated deep learning models. Missing values were imputed using a random forest regressor. Compared to generative adversarial network (GAN)-, deep belief network (DBN)-, and restricted Boltzmann machine (RBM)-based 1SVM detectors, the traditional VAE, GAN, DBN, and RBM with a softmax layer as the discriminator, and the standalone 1SVM, the proposed VAE-based 1SVM detector offers superior discrimination of potential COVID-19 infections. Results also revealed that the deep learning-driven 1SVM detection approaches provide promising detection performance compared to the conventional deep learning models.
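One concrete preprocessing step the abstract mentions, imputing missing blood-test values with a random forest regressor, can be sketched with scikit-learn's iterative imputer. The correlated synthetic columns below are stand-ins for real markers; the abstract does not specify the exact imputation setup, so this is one plausible realization.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
# Correlated synthetic "blood markers": column 1 depends strongly on column 0.
n = 200
c0 = rng.normal(size=n)
data = np.column_stack([c0, 2.0 * c0 + 0.1 * rng.normal(size=n),
                        rng.normal(size=n)])

# Knock out 10% of column 1, then impute it with a random-forest regressor.
missing = data.copy()
idx = rng.choice(n, size=20, replace=False)
missing[idx, 1] = np.nan

imputer = IterativeImputer(
    estimator=RandomForestRegressor(n_estimators=50, random_state=3),
    random_state=3)
filled = imputer.fit_transform(missing)

err = np.mean(np.abs(filled[idx, 1] - data[idx, 1]))
print("mean imputation error:", round(err, 3))
```

Because column 1 is nearly a deterministic function of column 0, the forest recovers the missing entries with small error; on real blood panels the gain depends on how correlated the markers actually are.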

6.
J Biomed Inform ; 118: 103791, 2021 06.
Article in English | MEDLINE | ID: covidwho-1201224

ABSTRACT

During the recent pandemic, scientists and clinicians have been seeking new technology to stop or slow down the COVID-19 pandemic. The success of machine learning, an essential aspect of artificial intelligence, in past epidemics offers a new line of attack against the novel coronavirus outbreak. Accurate short-term forecasting of COVID-19 spread plays an essential role in managing the overcrowding problem in hospitals and enables appropriate optimization of the available resources (i.e., materials and staff). This paper presents a comparative study of machine learning methods for COVID-19 transmission forecasting. We investigated the performance of deep learning methods, including the hybrid convolutional neural network-long short-term memory (LSTM-CNN), the hybrid generative adversarial network-gated recurrent unit (GAN-GRU), GAN, CNN, LSTM, and restricted Boltzmann machine (RBM) models, as well as baseline machine learning methods, namely logistic regression (LR) and support vector regression (SVR). The hybrid models (i.e., LSTM-CNN and GAN-GRU) are expected to improve the accuracy of forecasting future COVID-19 trends. The performance of the investigated deep learning and machine learning models was tested using time-series data of confirmed and recovered COVID-19 cases from seven impacted countries: Brazil, France, India, Mexico, Russia, Saudi Arabia, and the US. The results reveal that hybrid deep learning models can efficiently forecast COVID-19 cases and confirm the superior performance of deep learning models over the two baseline machine learning models. Furthermore, the LSTM-CNN model achieved the best performance, with an average mean absolute percentage error of 3.718%.


Subject(s)
COVID-19/transmission , Machine Learning , Pandemics , Artificial Intelligence , Brazil , Deep Learning , Forecasting , France , Humans , India , Mexico , Neural Networks, Computer , Russia , Saudi Arabia , United States
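The metric used above to rank the forecasters, mean absolute percentage error (MAPE), is simple to state and compute. The case counts and forecasts below are made-up numbers for illustration only.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

# Hypothetical daily case counts and two competing forecasts.
actual  = [120, 135, 150, 160, 172]
model_a = [118, 138, 149, 163, 170]  # close forecast
model_b = [110, 120, 160, 150, 190]  # rougher forecast
print(mape(actual, model_a), mape(actual, model_b))
```

Because MAPE normalizes each error by the actual value, it is comparable across countries with very different case magnitudes, which is why it suits a seven-country comparison; it does, however, blow up when actual values approach zero.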
7.
Chaos Solitons Fractals ; 140: 110121, 2020 Nov.
Article in English | MEDLINE | ID: covidwho-651706

ABSTRACT

The novel coronavirus (COVID-19) has spread widely across the world and poses new challenges to the research community. Although governments have imposed numerous containment and social distancing measures, the demand on healthcare systems has dramatically increased, and the effective management of infected patients has become a challenging problem for hospitals. Thus, accurate short-term forecasting of the number of newly contaminated and recovered cases is crucial for optimizing the available resources and arresting or slowing down the progression of such diseases. Recently, deep learning models have demonstrated important improvements when handling time-series data in different applications. This paper presents a comparative study of five deep learning methods for forecasting the number of new and recovered cases. Specifically, simple recurrent neural network (RNN), long short-term memory (LSTM), bidirectional LSTM (BiLSTM), gated recurrent unit (GRU), and variational autoencoder (VAE) algorithms were applied to global forecasting of COVID-19 cases based on a small volume of data. The study is based on daily confirmed and recovered cases collected from six countries, namely Italy, Spain, France, China, the USA, and Australia. Results demonstrate the promising potential of deep learning models for forecasting COVID-19 cases and highlight the superior performance of the VAE compared to the other algorithms.
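All five recurrent forecasters compared above consume the daily series as overlapping fixed-length windows. A minimal sketch of that windowing step (the case numbers are illustrative, and the `(samples, timesteps, features)` shape is the convention recurrent layers expect, not a detail stated in the abstract):

```python
import numpy as np

def windows(series, length):
    """Slice a 1-D series into overlapping input windows and next-step
    targets, shaped (samples, timesteps, 1) for a recurrent model."""
    X = np.stack([series[i : i + length] for i in range(len(series) - length)])
    y = series[length:]
    return X[..., None], y

# Small illustrative daily-cases series (not real data).
daily = np.array([5, 8, 13, 21, 34, 55, 89, 144, 233, 377], dtype=float)
X, y = windows(daily, length=3)
print(X.shape, y.shape)  # (7, 3, 1) (7,)
```

Each row of `X` holds three consecutive days and the matching entry of `y` is the following day, so the model learns next-day cases from recent history; this is also why a "small volume of data" suffices, since every day after the first window yields one training pair.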
