Results 1 - 9 of 9
1.
Brain Sci ; 13(7)2023 Jun 25.
Article in English | MEDLINE | ID: mdl-37508926

ABSTRACT

In today's world, stress is a major contributor to various diseases in modern societies and affects the day-to-day activities of human beings. Measuring stress matters to governments and societies because it impacts the quality of daily life. A stress monitoring system requires an accurate stress classification technique, where stress is identified via the body's mental and emotional responses as it regulates itself to changes in the environment. Therefore, this research proposes a novel deep learning approach for stress classification. In this paper, we present an Enhanced Long Short-Term Memory (E-LSTM) network based on a feature attention mechanism that focuses on determining and categorizing stress polarity using sequential modeling and word-feature capture. The proposed approach integrates pre-feature attention in the E-LSTM to identify complicated relationships and extract keywords through an attention layer for stress classification. The approach was evaluated on health-related stress data from the sixth Korea National Health and Nutrition Examination Survey (KNHANES VI), conducted from 2013 to 2015. Statistical performance was analyzed on the nine stress detection features, and the approach was compared against other stress classification approaches. The experimental results show that the developed approach obtained an accuracy, precision, recall, and F1-score of 75.54%, 74.26%, 72.99%, and 74.58%, respectively. The feature attention mechanism-based E-LSTM demonstrated superior performance in stress detection classification compared to other methods, including naïve Bayes, SVM, deep belief network, and standard LSTM.
The results of this study demonstrate the efficiency of the proposed approach in accurately classifying stress, particularly in stress monitoring systems, where it is expected to be effective for stress prediction.
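The abstract does not give the E-LSTM equations; as a rough illustration of the attention idea it describes (the shapes, weights, and pooling scheme below are assumptions for illustration, not the authors' implementation), attention-weighted pooling over a sequence of LSTM hidden states can be sketched as:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(H, w):
    """Weight a sequence of hidden states H (T x d) by learned attention
    scores and return one context vector (d,) for the classifier.
    H would come from an LSTM; w is the attention parameter vector."""
    scores = np.tanh(H @ w)   # (T,) unnormalized relevance per time step
    alpha = softmax(scores)   # (T,) attention weights, sum to 1
    return alpha @ H, alpha   # weighted sum of hidden states

rng = np.random.default_rng(0)
H = rng.normal(size=(12, 8))  # 12 time steps, 8 hidden units (made up)
w = rng.normal(size=8)
context, alpha = attention_pool(H, w)
```

The context vector would then feed a dense classification layer; the attention weights indicate which steps (words/features) drove the decision.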

2.
PLoS One ; 17(10): e0275022, 2022.
Article in English | MEDLINE | ID: mdl-36264851

ABSTRACT

A stock market collapse occurs when stock prices drop by more than 10% across all main indexes. Predicting a stock market crisis is difficult because of the increased volatility in the stock market. Stock price drops can be triggered by a variety of factors, including corporate results, geopolitical tensions, financial crises, and pandemic events. For scholars and investors alike, predicting a crisis is a difficult endeavor. We developed a model for stock crisis prediction using a Hybridized Feature Selection (HFS) approach. First, the HFS method is applied to remove a stock's unnecessary financial attributes. Second, the naïve Bayes approach is used to classify stocks with strong fundamentals. Third, the Stochastic Relative Strength Index (StochRSI) is employed to identify a stock price bubble. Fourth, the stock market crisis point in stock prices is identified through moving average statistics. Fifth, stock crises are predicted using deep learning algorithms, namely the Gated Recurrent Unit (GRU) and Long Short-Term Memory (LSTM). Root Mean Square Error (RMSE), Mean Squared Error (MSE), and Mean Absolute Error (MAE) are used to assess the performance of the models. In experiments on Pakistani stock market datasets, the HFS-based GRU technique outperformed the HFS-based LSTM method in anticipating the stock crisis. In future work, additional technical factors could be examined to forecast when a crisis will occur, and the GRU approach could be further improved and fine-tuned with a new optimizer.
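The three evaluation metrics named above have standard definitions; a minimal sketch (the price and forecast values are made up for illustration):

```python
import numpy as np

def mse(y, y_hat):
    """Mean Squared Error: average of squared residuals."""
    return np.mean((np.asarray(y) - np.asarray(y_hat)) ** 2)

def rmse(y, y_hat):
    """Root Mean Square Error: square root of the MSE."""
    return np.sqrt(mse(y, y_hat))

def mae(y, y_hat):
    """Mean Absolute Error: average of absolute residuals."""
    return np.mean(np.abs(np.asarray(y) - np.asarray(y_hat)))

y_true = [100.0, 102.0, 98.0, 95.0]  # hypothetical closing prices
y_pred = [101.0, 101.0, 99.0, 97.0]  # hypothetical model forecasts
```

Since RMSE squares residuals before averaging, it penalizes large single-step misses (like a crisis point) more heavily than MAE does.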


Subject(s)
Algorithms , Investments , Pakistan , Bayes Theorem , Forecasting
3.
PLoS One ; 17(8): e0273486, 2022.
Article in English | MEDLINE | ID: mdl-36007091

ABSTRACT

Recommender systems (RSs) have become increasingly vital in the modern information era and connected economy. They play a key role in business operations by generating personalized suggestions and minimizing information overload. However, the performance of traditional RSs is limited by data sparseness and cold-start issues. Though deep learning-based recommender systems (DLRSs) are very popular, they underperform on rating matrices with sparse entries. Despite their performance improvements, DLRSs also suffer from data sparsity, cold-start, serendipity, and generalizability issues. We propose a multistage model that uses multimodal data embedding and deep transfer learning for effective and personalized product recommendations, designed to overcome data sparsity and cold-start issues. The proposed model comprises two phases. In the first (offline) phase, a deep learning technique learns hidden features from a large image dataset (targeting the new-item cold start), and a multimodal data embedding produces dense user and item feature vectors (targeting the user cold start). This phase produces three different similarity matrices that are used as inputs for the second (online) phase, which generates a list of the top-n relevant items for a target user. We analyzed the accuracy and effectiveness of the proposed model against existing baseline RSs using a Brazilian e-commerce dataset. The results show that our model scored 0.5882 for MAE and 0.4011 for RMSE, both lower than the baseline RSs, indicating that the model achieved improved accuracy and was able to minimize the typical cold-start and data sparseness issues during the recommendation process.
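The abstract describes building similarity matrices offline and ranking the top-n items online. A generic sketch of that online ranking step (cosine similarity over dense item vectors; the vectors and sizes here are illustrative assumptions, not the paper's actual embeddings):

```python
import numpy as np

def cosine_sim(A):
    """Pairwise cosine similarity matrix for row vectors in A (n_items x d)."""
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    A_unit = A / np.clip(norms, 1e-12, None)  # guard against zero vectors
    return A_unit @ A_unit.T

def top_n(sim, item_idx, n):
    """Indices of the n items most similar to item_idx (excluding itself)."""
    order = np.argsort(-sim[item_idx])
    return [i for i in order if i != item_idx][:n]

rng = np.random.default_rng(1)
item_vecs = rng.normal(size=(6, 4))  # 6 hypothetical item embeddings
sim = cosine_sim(item_vecs)
recs = top_n(sim, item_idx=0, n=3)
```

In a cold-start setting, the benefit of dense learned vectors is that a brand-new item still gets a usable row in the similarity matrix from its image/text features alone, before any ratings exist.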


Subject(s)
Algorithms , Commerce , Brazil , Machine Learning
4.
PLoS One ; 17(5): e0265190, 2022.
Article in English | MEDLINE | ID: mdl-35559954

ABSTRACT

MOTIVATION: Many real applications, such as business and health, generate large categorical datasets with uncertainty. A fundamental task is to efficiently discover hidden and non-trivial patterns from such large uncertain categorical datasets. Since the exact value of an attribute is often unknown in uncertain categorical datasets, conventional clustering algorithms do not provide a suitable means of dealing with categorical data, uncertainty, and stability. PROBLEM STATEMENT: Decision making in the presence of vagueness and uncertainty in data can be handled using Rough Set Theory. However, recent categorical clustering techniques based on Rough Set Theory suffer from low accuracy, high computational complexity, and poor generalizability, especially on data sets where they sometimes fail to select, or have difficulty selecting, their best clustering attribute. OBJECTIVES: The main objective of this research is to propose a new information-theoretic Rough Purity Approach (RPA). A further objective is to address the problems of traditional Rough Set Theory-based categorical clustering techniques. Hence, the ultimate goal is to cluster uncertain categorical datasets efficiently in terms of performance, generalizability, and computational complexity. METHODS: The RPA takes into consideration the information-theoretic attribute purity of categorical-valued information systems. Extensive experiments were conducted to evaluate the efficiency of RPA using a real Supplier Base Management (SBM) dataset and six benchmark UCI datasets. The proposed RPA was also compared with several recent categorical data clustering techniques. RESULTS: The experimental results show that RPA outperforms the baseline algorithms. The significant percentage improvements with respect to time (66.70%), iterations (83.13%), purity (10.53%), entropy (14%), and accuracy (12.15%), as well as the Rough Accuracy of clusters, show that RPA is suitable for practical use.
CONCLUSION: We conclude that, compared to other techniques, the attribute purity of categorical-valued information systems can better cluster the data. Hence, the RPA technique can be recommended for large-scale clustering in multiple domains, and its performance can be enhanced in further research.
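The abstract does not define RPA's purity measure; as a hypothetical illustration of scoring a categorical attribute information-theoretically (the entropy-based formula and toy table below are assumptions, not the paper's actual definition):

```python
import math
from collections import Counter

def attribute_purity(column):
    """Score a categorical attribute by how concentrated its value
    distribution is: 1 minus normalized Shannon entropy. 1.0 means a
    single value dominates; near 0 means the values are maximally mixed."""
    counts = Counter(column)
    n = len(column)
    if len(counts) <= 1:
        return 1.0
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return 1.0 - entropy / math.log2(len(counts))

# Hypothetical categorical table: pick the purest attribute for clustering.
table = {
    "color": ["red", "red", "red", "blue"],
    "shape": ["round", "square", "round", "square"],
}
best = max(table, key=lambda a: attribute_purity(table[a]))
```

A clustering-attribute selector of this flavor would then split the data on the chosen attribute's equivalence classes.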


Subject(s)
Algorithms , Cluster Analysis , Entropy , Uncertainty
5.
PLoS One ; 16(8): e0255269, 2021.
Article in English | MEDLINE | ID: mdl-34358237

ABSTRACT

The Sine-Cosine algorithm (SCA) is a population-based metaheuristic that uses sine and cosine functions to perform search. To enable the search process, SCA incorporates several search parameters, but these parameters can sometimes leave the search vulnerable to local minima/maxima. To overcome this problem, a new Multi Sine-Cosine algorithm (MSCA) is proposed in this paper. First, MSCA utilizes multiple swarm clusters to diversify and intensify the search in order to avoid the local minima/maxima problem. Second, during each update, MSCA also checks for better search clusters that offer effective convergence to the global minimum. To assess its performance, we tested MSCA on unimodal, multimodal, and composite benchmark functions taken from the literature. Experimental results reveal that MSCA is statistically superior in terms of convergence to recent state-of-the-art metaheuristic algorithms, including the original SCA.
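MSCA builds on the standard SCA position update; a minimal single-swarm sketch of that baseline rule (following the commonly published form of SCA, minimizing a sphere function as a stand-in benchmark; parameter values are illustrative):

```python
import numpy as np

def sca_minimize(f, dim, n_agents=20, iters=200, lb=-10.0, ub=10.0, a=2.0, seed=0):
    """Baseline Sine-Cosine algorithm: each agent moves toward (or around)
    the best-so-far solution P using sine or cosine steps whose radius
    r1 shrinks linearly over the iterations."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_agents, dim))
    fitness = np.apply_along_axis(f, 1, X)
    P = X[fitness.argmin()].copy()          # destination: best so far
    for t in range(iters):
        r1 = a - t * a / iters              # shrinking exploration radius
        for i in range(n_agents):
            r2 = rng.uniform(0, 2 * np.pi, dim)
            r3 = rng.uniform(0, 2, dim)
            r4 = rng.uniform(size=dim)      # sine-or-cosine switch
            step = np.where(r4 < 0.5,
                            r1 * np.sin(r2) * np.abs(r3 * P - X[i]),
                            r1 * np.cos(r2) * np.abs(r3 * P - X[i]))
            X[i] = np.clip(X[i] + step, lb, ub)
        fitness = np.apply_along_axis(f, 1, X)
        if fitness.min() < f(P):
            P = X[fitness.argmin()].copy()
    return P, f(P)

best_x, best_f = sca_minimize(lambda x: np.sum(x ** 2), dim=5)
```

MSCA's contribution, per the abstract, is running several such swarms in parallel clusters and reassigning effort to the clusters converging best, rather than relying on one destination P.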


Subject(s)
Biometry , Algorithms , Benchmarking , Computer Simulation
6.
PeerJ Comput Sci ; 7: e570, 2021.
Article in English | MEDLINE | ID: mdl-34435091

ABSTRACT

Question classification is one of the essential tasks for implementing automatic question answering in natural language processing (NLP). Recently, several text-mining tasks such as text classification, document categorization, web mining, sentiment analysis, and spam filtering have been successfully tackled with deep learning approaches. In this study, we investigated deep learning approaches for the question classification task in Turkish, a highly inflected language. We trained and tested the deep learning architectures on a Turkish question dataset. We used three main deep learning approaches (Gated Recurrent Unit (GRU), Long Short-Term Memory (LSTM), and Convolutional Neural Networks (CNN)) and also applied two combined architectures, CNN-GRU and CNN-LSTM. Furthermore, we applied the Word2vec technique with both skip-gram and CBOW methods for word embedding, with various vector sizes, on a large corpus composed of user questions. For comparative analysis, we evaluated the deep learning architectures on test accuracy and 10-fold cross-validation accuracy. The experimental results illustrate that the choice of Word2vec technique has a considerable impact on the accuracy achieved by the different deep learning approaches. Using these techniques, we attained an accuracy of 93.7% on the question dataset.
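As a small sketch of how skip-gram and CBOW differ in the training pairs they build from a token window (illustrative only; the study itself trained full Word2vec models on a Turkish question corpus):

```python
def skipgram_pairs(tokens, window=2):
    """Skip-gram: predict each context word from the center word,
    yielding (center, context) pairs."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

def cbow_pairs(tokens, window=2):
    """CBOW: predict the center word from its (averaged) context,
    yielding (context_list, center) pairs."""
    pairs = []
    for i, center in enumerate(tokens):
        context = [tokens[j]
                   for j in range(max(0, i - window), min(len(tokens), i + window + 1))
                   if j != i]
        if context:
            pairs.append((context, center))
    return pairs

sentence = "how do i reset my password".split()
```

For a heavily inflected language like Turkish, vocabulary size explodes with surface forms, which is one reason the embedding method and vector size matter so much here.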

7.
PLoS One ; 12(1): e0164803, 2017.
Article in English | MEDLINE | ID: mdl-28068344

ABSTRACT

Clustering a set of objects into homogeneous groups is a fundamental operation in data mining. Recently, much attention has been paid to categorical data clustering, where data objects are made up of non-numerical attributes. For categorical data clustering, rough set based approaches such as Maximum Dependency Attribute (MDA) and Maximum Significance Attribute (MSA) have outperformed their predecessors, such as Bi-Clustering (BC), Total Roughness (TR), and Min-Min Roughness (MMR). This paper presents the limitations and issues of the MDA and MSA techniques on special types of data sets where both techniques fail to select, or have difficulty selecting, their best clustering attribute. This analysis motivates the need for a better and more generalizable rough set theory approach that can cope with the issues of MDA and MSA. Hence, an alternative technique, named Maximum Indiscernible Attribute (MIA), is proposed for clustering categorical data using rough set indiscernibility relations. The novelty of the proposed approach is that, unlike other rough set theory techniques, it uses the domain knowledge of the data set. It is based on the concept of the indiscernibility relation combined with the number of clusters. To show the significance of the proposed approach, the effect of the number of clusters on rough accuracy, purity, and entropy is described in the form of propositions. Moreover, ten data sets from previously published research cases and the UCI repository are used in the experiments. The results, presented in tabular and graphical form, show that the proposed MIA technique provides better performance in selecting the clustering attribute in terms of purity, entropy, iterations, time, accuracy, and rough accuracy.
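The indiscernibility relation at the core of the approach partitions objects that agree on a chosen attribute set; a minimal sketch over a toy categorical table (the table and attribute names are made up for illustration):

```python
from collections import defaultdict

def indiscernibility_classes(rows, attrs):
    """Partition row indices into equivalence classes: two rows are
    indiscernible w.r.t. attrs if they agree on every attribute in attrs."""
    classes = defaultdict(list)
    for idx, row in enumerate(rows):
        key = tuple(row[a] for a in attrs)
        classes[key].append(idx)
    return list(classes.values())

rows = [
    {"size": "big",   "color": "red"},
    {"size": "big",   "color": "blue"},
    {"size": "small", "color": "red"},
    {"size": "big",   "color": "red"},
]
by_size = indiscernibility_classes(rows, ["size"])
```

Adding attributes refines the partition (e.g. splitting on both `size` and `color` separates rows 0/3 from row 1); a clustering-attribute selector compares how well each attribute's partition matches the desired number of clusters.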


Subject(s)
Cluster Analysis , Data Mining/methods , Models, Theoretical , Algorithms
8.
PLoS One ; 11(12): e0167248, 2016.
Article in English | MEDLINE | ID: mdl-27959927

ABSTRACT

Time series forecasting has gained much attention due to its many practical applications. Higher-order neural networks with recurrent feedback are a powerful technique that has been used successfully for time series forecasting: they learn quickly and can capture the dynamics of a time series over time. Network output feedback is the most common recurrent feedback in recurrent neural network models; however, not much attention has been paid to using network error feedback instead. In this study, we propose a novel model, called the Ridge Polynomial Neural Network with Error Feedback (RPNN-EF), that incorporates higher-order terms, recurrence, and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, the daily Euro/Dollar exchange rate, and the Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN, indicating that feeding network errors back during training helps enhance overall forecasting performance.
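The key idea, feeding the previous forecast error back as an input, can be sketched generically (a plain linear one-step forecaster trained online with the LMS rule, used here only to show the error-feedback wiring; it is not the authors' ridge polynomial architecture, which the abstract does not specify):

```python
import numpy as np

def error_feedback_forecast(series, lr=0.01, seed=0):
    """One-step-ahead forecaster whose input at time t is
    [previous value, previous forecast error, bias]. Weights are
    updated online with the LMS rule; the error computed at step t
    becomes part of the input at step t+1."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=3)
    prev_error = 0.0
    preds = []
    for t in range(1, len(series)):
        x = np.array([series[t - 1], prev_error, 1.0])
        y_hat = w @ x
        prev_error = series[t] - y_hat   # fed back at the next step
        w += lr * prev_error * x         # LMS weight update
        preds.append(y_hat)
    return np.array(preds)

t = np.arange(300)
series = np.sin(0.1 * t)                 # toy signal
preds = error_feedback_forecast(series)
```

Contrast with output feedback: there the previous prediction `y_hat` would be fed back instead of `prev_error`, so the network never directly sees how wrong it just was.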


Subject(s)
Forecasting/methods , Models, Statistical , Neural Networks, Computer , Feedback , Time
9.
PLoS One ; 9(8): e105766, 2014.
Article in English | MEDLINE | ID: mdl-25157950

ABSTRACT

Forecasting naturally occurring phenomena is a common problem in many domains of science, and it has been addressed and investigated by many scientists. The importance of time series prediction stems from its wide range of applications, including control systems, engineering processes, environmental systems, and economics. Given knowledge of some aspects of the system's previous behaviour, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher-order polynomial neural network architecture, the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher-order and recurrent neural networks, to the prediction of physical time series. Four types of signals are used: the Lorenz attractor, the mean value of the AE index, sunspot numbers, and heat wave temperature. The simulation results showed good improvements in terms of the signal-to-noise ratio in comparison to a number of benchmarked higher-order and feedforward neural networks.
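The signal-to-noise ratio used to compare forecasters can be computed generically as follows (a standard decibel definition over the prediction residual; the abstract does not give the paper's exact formula, and the signals below are toy stand-ins):

```python
import numpy as np

def snr_db(signal, prediction):
    """Signal-to-noise ratio in dB: power of the target signal over
    the power of the prediction error (higher is better)."""
    signal = np.asarray(signal, dtype=float)
    noise = signal - np.asarray(prediction, dtype=float)
    return 10.0 * np.log10(np.sum(signal ** 2) / np.sum(noise ** 2))

t = np.linspace(0, 4 * np.pi, 200)
target = np.sin(t)
good = target + 0.01 * np.cos(3 * t)  # small residual -> high SNR
bad = target + 0.5 * np.cos(3 * t)    # large residual -> low SNR
```

Because the measure is relative to signal power, it lets forecasts of very differently scaled physical series (star brightness vs. exchange rates) be compared on one axis.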


Subject(s)
Neural Networks, Computer , Algorithms , Data Interpretation, Statistical , Forecasting , Weather