1.
MethodsX ; 12: 102790, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38966714

ABSTRACT

Stochastic Calculus-guided Reinforcement Learning (SCRL) is a new approach to decision-making under uncertainty. It uses mathematical principles from stochastic calculus to make better choices in complex situations, and it outperforms traditional Stochastic Reinforcement Learning (SRL) methods. In tests, SCRL adapted and performed well. It achieved a lower dispersion value of 63.49 compared to SRL's 65.96, meaning less variation in its results. SCRL also carried lower short- and long-term risk: its short-term risk value was 0.64 and its long-term risk value 0.78, whereas SRL's were much higher at 18.64 and 10.41, respectively. Lower risk values are better because they indicate a smaller chance of adverse outcomes. Further metrics, viz. training rewards, learning progress, and rolling averages, were assessed for both methods, and the study found that SCRL outperforms SRL on these as well. Overall, SCRL makes smarter, lower-risk decisions under uncertainty, which makes it well suited to real-world situations where decisions must be made carefully.•By leveraging mathematical principles derived from stochastic calculus, SCRL offers a robust framework for making informed choices and enhancing performance in complex scenarios.•In comparison to traditional SRL methods, SCRL demonstrates superior adaptability and efficacy, as evidenced by empirical tests.
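The dispersion and rolling-average metrics mentioned in the abstract are standard statistics over a training-reward trace. The abstract does not give exact formulas, so the sketch below assumes common definitions: sample standard deviation for dispersion and a simple moving average for the rolling average. The reward traces are synthetic and purely illustrative.

```python
import numpy as np

def dispersion(rewards):
    """Dispersion of a reward trace, taken here as the sample
    standard deviation (an assumed definition, not from the paper)."""
    return float(np.std(rewards, ddof=1))

def rolling_average(rewards, window=10):
    """Simple moving average over a training-reward series."""
    r = np.asarray(rewards, dtype=float)
    kernel = np.ones(window) / window
    return np.convolve(r, kernel, mode="valid")

# Hypothetical reward traces for two agents; the agent with lower
# dispersion has the more stable (less variable) learning curve.
rng = np.random.default_rng(0)
rewards_a = rng.normal(loc=1.0, scale=0.5, size=200)
rewards_b = rng.normal(loc=1.0, scale=0.9, size=200)
print(dispersion(rewards_a) < dispersion(rewards_b))
```

A lower dispersion over the same number of episodes is what the abstract's 63.49-vs-65.96 comparison expresses, and plotting `rolling_average` of the two traces is the usual way to compare learning progress.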

2.
MethodsX ; 12: 102659, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38550761

ABSTRACT

The objective of the study is to enhance uncertainty prediction in regression problems by introducing a novel Bayesian Neural Network (BNN) model. Experimental results reveal significant improvements in both uncertainty prediction and point forecasts with the integrated BNN model compared to the plain BNN. Performance metrics, including mean squared error (MSE), mean absolute error (MAE), and R-squared (R²), demonstrate superior results for the proposed BNN: for the plain BNN, MSE is 87.3, MAE is 6.62, and R² is -0.0492, whereas for the proposed model, MSE is 44.64, MAE is 4.4, and R² is 0.46. The research brings a fresh approach to Bayesian Neural Networks by incorporating both dropout and KL regularization, resulting in a powerful tool for regression tasks with reliable uncertainty quantification. Combining these techniques enhances model stability, avoids overfitting, and yields more dependable uncertainty estimation. The study adds to our knowledge of uncertainty-aware machine learning models and offers a valuable solution for accurately assessing uncertainty in various applications.•The innovative BNN model merges the power of Bayesian principles with the effectiveness of dropout and KL regularization.•To test and refine the model, the study uses the Boston Housing dataset for both training and evaluation.
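The reported MSE, MAE, and R² are standard regression metrics, and dropout-based uncertainty can be illustrated by keeping dropout active at prediction time and averaging multiple stochastic forward passes (Monte Carlo dropout). The NumPy sketch below is a minimal illustration under assumed definitions; the weights are random placeholders (13 inputs, matching the Boston Housing feature count), and the paper's actual architecture, training procedure, and KL term are not reproduced here.

```python
import numpy as np

def mse(y, yhat):
    return float(np.mean((np.asarray(y) - np.asarray(yhat)) ** 2))

def mae(y, yhat):
    return float(np.mean(np.abs(np.asarray(y) - np.asarray(yhat))))

def r2(y, yhat):
    y, yhat = np.asarray(y), np.asarray(yhat)
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1.0 - ss_res / ss_tot)

rng = np.random.default_rng(1)
W1 = rng.normal(size=(13, 32))  # hypothetical hidden-layer weights
W2 = rng.normal(size=(32, 1))   # hypothetical output weights

def forward(x, drop=0.1):
    """One stochastic forward pass with dropout left on at test time."""
    h = np.maximum(x @ W1, 0.0)            # ReLU hidden layer
    mask = rng.random(h.shape) >= drop     # random dropout mask
    h = h * mask / (1.0 - drop)            # inverted dropout scaling
    return h @ W2

x = rng.normal(size=(5, 13))
samples = np.stack([forward(x) for _ in range(100)])  # (100, 5, 1)
pred_mean = samples.mean(axis=0)  # point forecast per input
pred_std = samples.std(axis=0)    # per-input predictive uncertainty
```

The spread of the sampled predictions (`pred_std`) is the uncertainty estimate, while `pred_mean` is the point forecast scored with `mse`, `mae`, and `r2` as in the abstract's comparison.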
