Results 1 - 20 of 182
1.
Sci Rep ; 14(1): 15051, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38951605

ABSTRACT

Electrical conductivity (EC) is widely recognized as one of the most essential water quality metrics for predicting salinity and mineralization. In the current research, the EC of two Australian rivers (Albert River and Barratta Creek) was forecasted for up to 10 days using a novel deep learning algorithm (a Convolutional Neural Network combined with a Long Short-Term Memory model, CNN-LSTM). The Boruta-XGBoost feature selection method was used to determine the significant inputs (time-series lagged data) to the model. To benchmark the Boruta-XGB-CNN-LSTM model, three machine learning approaches were used: a multi-layer perceptron neural network (MLP), K-nearest neighbour (KNN), and extreme gradient boosting (XGBoost). Statistical metrics such as the correlation coefficient (R), root mean square error (RMSE), and mean absolute percentage error (MAPE) were used to assess the models' performance. From 10 years of data in both rivers, 7 years (2012-2018) were used as a training set and 3 years (2019-2021) for testing the models. In one-day-ahead forecasting, Boruta-XGB-CNN-LSTM forecast the EC parameter better than the other machine learning models at both stations on the test dataset (R = 0.9429, RMSE = 45.6896, MAPE = 5.9749 for Albert River; R = 0.9215, RMSE = 43.8315, MAPE = 7.6029 for Barratta Creek). Given this better performance in both rivers, the model was then used to forecast EC 3-10 days ahead. The results showed that the Boruta-XGB-CNN-LSTM model is well capable of forecasting EC for the next 10 days, with performance decreasing only slightly as the forecasting horizon grows from 3 to 10 days. These findings indicate that the Boruta-XGB-CNN-LSTM model can serve as a reliable soft computing method for accurately predicting how EC changes in rivers.
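The lagged-input setup and error metrics this abstract describes can be sketched in a few lines. This is only an illustration of the general technique, not the authors' code; the function names are assumptions:

```python
import numpy as np

def make_lagged(series, lags):
    """Build a supervised matrix X of lagged values and a one-step-ahead target y."""
    m = max(lags)
    X = np.column_stack([series[m - l : len(series) - l] for l in lags])
    y = series[m:]
    return X, y

def rmse(y_true, y_pred):
    """Root mean square error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)
```

A feature selector such as Boruta-XGBoost would then score the columns of X and keep only the informative lags before they are fed to the forecasting network.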

2.
Front Public Health ; 12: 1359368, 2024.
Article in English | MEDLINE | ID: mdl-38989122

ABSTRACT

Accurate predictive modeling of pandemics is essential for optimally distributing biomedical resources and setting policy. Dozens of case prediction models have been proposed, but their accuracy over time and by model type remains unclear. In this study, we systematically analyze all US CDC COVID-19 forecasting models by first categorizing them and then calculating their mean absolute percent error, both wave-wise and over the complete timeline. We compare their estimates to government-reported case numbers, to one another, and to two baseline models in which case counts remain static or follow a simple linear trend. The comparison reveals that around two-thirds of models fail to outperform the static case baseline and one-third fail to outperform the simple linear trend forecast. A wave-by-wave comparison revealed that no overall modeling approach, including ensembles, was superior to the others, and that modeling errors have increased over the course of the pandemic. These findings raise concerns about hosting such models on the official public platforms of health organizations, including the US CDC, because doing so risks giving them an official imprimatur when they are used to formulate policy. By offering a universal evaluation method for pandemic forecasting models, we expect this study to serve as a starting point for the development of more accurate models.
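The two baselines used in this comparison are simple to state precisely. A minimal sketch (function names and the 7-point fitting window are assumptions, not the study's exact settings):

```python
import numpy as np

def static_baseline(history, horizon):
    """Static baseline: repeat the last observed case count for every future step."""
    return np.full(horizon, history[-1], dtype=float)

def linear_trend_baseline(history, horizon, window=7):
    """Linear-trend baseline: extrapolate a least-squares line fitted
    to the last `window` observations."""
    t = np.arange(window)
    slope, intercept = np.polyfit(t, history[-window:], 1)
    future_t = np.arange(window, window + horizon)
    return intercept + slope * future_t

def mape(actual, forecast):
    """Mean absolute percent error against reported case counts."""
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100)
```

A forecasting model that cannot beat these two functions on MAPE adds no information over "nothing will change" or "the current trend continues".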


Subject(s)
COVID-19 , Centers for Disease Control and Prevention, U.S. , Forecasting , Models, Statistical , United States/epidemiology , Humans , COVID-19/epidemiology , SARS-CoV-2 , Pandemics
3.
Int J Pharm ; 662: 124509, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39048040

ABSTRACT

Due to the continuously increasing Cost of Goods Sold, the pharmaceutical industry faces several challenges, and the Right-First-Time principle with data-driven decision-making has become more pressing for sustaining competitiveness. In this work, three types of artificial neural network (ANN) models were therefore developed, compared, and interpreted by analyzing an open-access dataset from a real pharmaceutical tableting production process. First, a multilayer perceptron (MLP) model was used to describe the total waste based on 20 raw material properties and 25 statistical descriptors of the time series data collected throughout tableting (e.g., tableting speed and compression force). Then, using 10 process time series in addition to the raw material properties, the cumulative waste during manufacturing was also predicted by long short-term memory (LSTM) and bidirectional LSTM (biLSTM) recurrent neural networks (RNNs). The LSTM network was used to forecast the waste production profile so that preventive actions can be taken. The results showed that the RNNs were able to predict the waste trajectory, with the best model achieving root mean squared errors of 1096 tablets on the training set and 2174 tablets on the test set. To better understand the process and the models, and to support decision-support systems and control strategies, interpretation methods were applied to all ANNs; these increased process understanding by identifying the most influential material attributes and process parameters. The presented methodology is applicable to various critical quality attributes in several fields of pharmaceutics and is therefore a useful tool for realizing the Pharma 4.0 concept.

4.
Comput Biol Med ; 178: 108707, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38870726

ABSTRACT

This article introduces a novel mathematical model analyzing the recent dynamics of dengue, focusing on the 2023 outbreak, and explores the patterns and behavior of dengue fever in Bangladesh. Incorporating a sinusoidal function into the model yields outbreak predictions from mid-May to late October that align, in our simulations, with the data published by the government. For amplitudes in the sequence A = 0.1 to 0.5, the highest number of infected mosquitoes occurs in July; however, simulations project that when βM = 0.5 and A = 0.1, the peak of human infections occurs in late September. In addition to the next-generation matrix approach and the stability analysis of the disease-free and endemic equilibrium points, a machine learning (ML) approach, the Prophet model, is explored for forecasting future dengue outbreaks in Bangladesh. Remarkably, our solution curve of infections fits the data reported by the government of Bangladesh. Based on the ML Prophet model, we predict that the dengue situation in 2024 will be detrimental, with cases increasing by about 25% compared to 2023. Finally, the study marks a significant milestone in understanding and managing dengue outbreaks in Bangladesh.
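A sinusoidal seasonal forcing of the transmission rate is commonly written as β(t) = βM(1 + A sin(2πt/T)). The abstract does not give the exact functional form used, so the following is only an assumed illustration of how amplitude A shifts and scales the forcing over a year:

```python
import numpy as np

def seasonal_beta(t_days, beta_m=0.5, amplitude=0.1, period=365.0):
    """Seasonally forced transmission rate: beta(t) = beta_m * (1 + A*sin(2*pi*t/T))."""
    return beta_m * (1.0 + amplitude * np.sin(2.0 * np.pi * np.asarray(t_days) / period))

t = np.arange(365)
beta = seasonal_beta(t)
peak_day = int(t[np.argmax(beta)])  # day of year at which the forcing peaks
```

With this form, the forcing peaks a quarter of the way through the cycle; phase shifts would be needed to place the peak in a specific month, as the paper does for July mosquito infections.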


Subject(s)
Dengue , Disease Outbreaks , Machine Learning , Dengue/epidemiology , Humans , Bangladesh/epidemiology , Animals , Epidemiological Models
5.
J Appl Stat ; 51(9): 1818-1841, 2024.
Article in English | MEDLINE | ID: mdl-38933138

ABSTRACT

Modeling and accurately forecasting the trend and seasonal patterns of a time series is a crucial activity in economics. The main purpose of this study is to evaluate and compare the performance of three traditional forecasting methods, namely the ARIMA models and their extensions, classical time series decomposition combined with multiple linear regression models with correlated errors, and the Holt-Winters method. These methodologies are applied to retail time series from seven different European countries that present strong trend and seasonal fluctuations. In general, the results indicate that all the forecasting models broadly follow the seasonal pattern exhibited in the data. Based on mean squared error (MSE), root mean squared error (RMSE), mean absolute percentage error (MAPE), mean absolute scaled error (MASE), and Theil's U statistic, the results demonstrate the superiority of the ARIMA model over the other two forecasting approaches. The Holt-Winters method also produces accurate forecasts, so it is considered a viable alternative to ARIMA. The performance of the forecasting methods in terms of coverage rates matches the results for the accuracy measures.
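Of the three approaches, Holt-Winters is compact enough to sketch directly. A minimal additive variant (the initialization from the first two seasons and the smoothing constants are assumptions, not the study's settings):

```python
def holt_winters_additive(y, season_len, alpha=0.3, beta=0.1, gamma=0.2, horizon=4):
    """Additive Holt-Winters smoothing; returns point forecasts for `horizon` steps.
    Requires at least two full seasons of data for initialization."""
    # Initialize level, trend, and seasonal indices from the first two seasons.
    level = sum(y[:season_len]) / season_len
    trend = (sum(y[season_len:2 * season_len]) - sum(y[:season_len])) / season_len ** 2
    season = [y[i] - level for i in range(season_len)]
    for t in range(season_len, len(y)):
        last_level = level
        level = alpha * (y[t] - season[t % season_len]) + (1 - alpha) * (level + trend)
        trend = beta * (level - last_level) + (1 - beta) * trend
        season[t % season_len] = gamma * (y[t] - level) + (1 - gamma) * season[t % season_len]
    return [level + (h + 1) * trend + season[(len(y) + h) % season_len]
            for h in range(horizon)]
```

On a perfectly periodic series the recursion reaches a fixed point and the forecasts simply continue the seasonal pattern, which is the behavior the retail series in this study reward.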

6.
Entropy (Basel) ; 26(6)2024 May 29.
Article in English | MEDLINE | ID: mdl-38920477

ABSTRACT

The applications of deep learning and artificial intelligence have permeated daily life, with time series prediction emerging as a focal area of research due to its significance in data analysis. The evolution of deep learning methods for time series prediction has progressed from the Convolutional Neural Network (CNN) and the Recurrent Neural Network (RNN) to the recently popularized Transformer network. However, each of these methods has encountered specific issues. Recent studies have questioned the effectiveness of the self-attention mechanism in Transformers for time series prediction, prompting a reevaluation of approaches to long-term time series forecasting (LTSF) problems. To circumvent the limitations present in current models, this paper introduces a novel hybrid network, Temporal Convolutional Network-Linear (TCN-Linear), which leverages the temporal prediction capabilities of the Temporal Convolutional Network (TCN) to enhance the capacity of LTSF-Linear. Time series from three classical chaotic systems (Lorenz, Mackey-Glass, and Rössler) and real-world stock data serve as experimental datasets. Numerical simulation results indicate that, compared to classical networks and novel hybrid models, our model achieves the lowest RMSE, MAE, and MSE with the fewest training parameters, and its R2 value is the closest to 1.

7.
Front Big Data ; 7: 1376023, 2024.
Article in English | MEDLINE | ID: mdl-38903951

ABSTRACT

Time series forecasting is an essential tool across numerous domains, yet traditional models often falter when faced with unilateral boundary conditions, where data is systematically overestimated or underestimated. This paper introduces a novel approach to the task of unilateral boundary time series forecasting. Our research bridges the gap in existing methods by proposing a specialized framework to accurately forecast within these skewed datasets. The cornerstone of our approach is the unilateral mean square error (UMSE), an asymmetric loss function that strategically addresses underestimation biases in training data, improving the precision of forecasts. We further enhance model performance through the implementation of a dual model structure that processes underestimated and accurately estimated data points separately, allowing for a nuanced analysis of the data trends. Additionally, feature reconstruction is employed to recapture obscured dynamics, ensuring a comprehensive understanding of the data. We demonstrate the effectiveness of our methods through extensive experimentation with LightGBM and GRU models across diverse datasets, showcasing superior accuracy and robustness in comparison to traditional models and existing methods. Our findings not only validate the efficacy of our approach but also reveal its model-independence and broad applicability. This work lays the groundwork for future research in this domain, opening new avenues for sophisticated analytical models in various industries where precise time series forecasting is crucial.
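The paper's exact UMSE definition is not reproduced in this abstract; one plausible asymmetric form, which weights underestimated points more heavily than the rest, looks like the following (the weight parameter and names are assumptions):

```python
import numpy as np

def unilateral_mse(y_true, y_pred, under_weight=2.0):
    """Asymmetric squared-error loss: residuals where the model predicts
    below the target (underestimation) are weighted more heavily than
    overestimation, pushing forecasts upward on skewed data."""
    err = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    w = np.where(err > 0, under_weight, 1.0)   # err > 0 means underestimate
    return float(np.mean(w * err ** 2))
```

With `under_weight=1.0` this reduces to ordinary MSE, so the asymmetry is an explicit knob rather than a different metric altogether.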

8.
Sci Total Environ ; 943: 173958, 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-38871320

ABSTRACT

Accurately and precisely estimating global horizontal irradiance (GHI) poses significant challenges due to the unpredictable nature of climate parameters and geographical limitations. To address this challenge, this study proposes a forecasting framework using an integrated model of the convolutional neural network (CNN), long short-term memory (LSTM), and gated recurrent unit (GRU). The proposed model uses a dataset of four different districts in Rajasthan, each with unique solar irradiance patterns. First, the data were preprocessed; then the standalone and hybrid models were trained with optimized parameters and compared. The proposed hybrid model (CNN-LSTM-GRU) consistently outperformed all other models in terms of mean absolute error (MAE) and root mean squared error (RMSE). The experimental results demonstrate that the proposed method forecasts GHI accurately, with an RMSE of 0.00731, 0.00730, 0.00775, and 0.00810 and an MAE of 0.00516, 0.00524, 0.00552, and 0.00592 for Barmer, Jaisalmer, Jodhpur, and Bikaner, respectively. This indicates that the model is better at minimizing prediction errors and providing more accurate GHI estimates. Additionally, the proposed model achieved a higher coefficient of determination (R2), suggesting that it best fits the dataset: a higher R2 value signifies that the model can explain a significant portion of the variance in the GHI dataset, further emphasizing its predictive capabilities. In conclusion, this work demonstrates the effectiveness of the hybrid algorithm in improving adaptability and enhancing prediction accuracy for GHI estimation.

9.
Heliyon ; 10(7): e27860, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38689959

ABSTRACT

Time series forecasting across different domains has received massive attention as it eases intelligent decision-making activities. Recurrent neural networks and various deep learning algorithms have been applied to modeling and forecasting multivariate time series data. Due to intricate non-linear patterns and significant variations in the randomness of characteristics across various categories of real-world time series data, achieving effectiveness and robustness simultaneously poses a considerable challenge for specific deep-learning models. We have proposed a novel prediction framework with a multi-phase feature selection technique, a long short-term memory-based autoencoder, and a temporal convolution-based autoencoder to fill this gap. The multi-phase feature selection is applied to retrieve the optimal feature selection and optimal lag window length for different features. Moreover, the customized stacked autoencoder strategy is employed in the model. The first autoencoder is used to resolve the random weight initialization problem. Additionally, the second autoencoder models the temporal relation between non-linear correlated features with convolution networks and recurrent neural networks. Finally, the model's ability to generalize, predict accurately, and perform effectively is validated through experimentation with three distinct real-world time series datasets. In this study, we conducted experiments on three real-world datasets: Energy Appliances, Beijing PM2.5 Concentration, and Solar Radiation. The Energy Appliances dataset consists of 29 attributes with a training size of 15,464 instances and a testing size of 4239 instances. For the Beijing PM2.5 Concentration dataset, there are 18 attributes, with 34,952 instances in the training set and 8760 instances in the testing set. The Solar Radiation dataset comprises 11 attributes, with 22,857 instances in the training set and 9797 instances in the testing set. 
The experimental setup evaluated the forecasting models using two error measures: root mean square error and mean absolute error, calculated on the identical scale of the data to ensure robust evaluation. The results demonstrate the superiority of the proposed model over existing models, with significant advantages in both metrics. For the PM2.5 air quality data, the proposed model's mean absolute error is 7.51 versus 12.45, an improvement of about 40%; similarly, the mean square error improves from 23.75 to 11.62, an improvement of about 51%. For the solar radiation dataset, the proposed model yields a ∼34.7% improvement in mean squared error and ∼75% in mean absolute error. The recommended framework demonstrates outstanding generalization capabilities and performs well across datasets spanning multiple domains.

10.
J Med Syst ; 48(1): 53, 2024 May 22.
Article in English | MEDLINE | ID: mdl-38775899

ABSTRACT

Myocardial infarction (MI), commonly referred to as a heart attack, results from the abrupt obstruction of blood supply to a section of the heart muscle, leading to the deterioration or death of the affected tissue due to a lack of oxygen. MI poses a significant public health concern worldwide, particularly affecting the citizens of the Chittagong Metropolitan Area. The challenges lie in both prevention and treatment, as the emergence of MI has inflicted considerable suffering among residents. Early warning systems are crucial for managing epidemics promptly, especially given the escalating disease burden in older populations and the complexities of assessing present and future demands. The primary objective of this study is to forecast MI incidence early using a deep learning model that predicts the prevalence of heart attacks in patients. Our approach involves a novel dataset of daily heart attack incidence, collected as time series patient data spanning January 1, 2020, to December 31, 2021, in the Chittagong Metropolitan Area. Initially, we applied various advanced models, including Autoregressive Integrated Moving Average (ARIMA), Error-Trend-Seasonal (ETS), TBATS (Trigonometric seasonality, Box-Cox transformation, ARMA errors, Trend and Seasonal components), and Long Short-Term Memory (LSTM). To enhance prediction accuracy, we propose a novel Myocardial Sequence Classification (MSC)-LSTM method tailored to forecast heart attack occurrences using the newly collected data from the Chittagong Metropolitan Area. Comprehensive comparisons reveal that the novel MSC-LSTM model outperforms the other applied models, achieving a minimum Mean Percentage Error (MPE) score of 1.6477. This research aids in predicting the likely future course of heart attack occurrences, facilitating the development of thorough plans for future preventive measures.
The forecasting of MI occurrences contributes to effective resource allocation, capacity planning, policy creation, budgeting, public awareness, research identification, quality improvement, and disaster preparedness.
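The MPE score reported above differs from the more common MAPE in that it is signed, so over- and under-forecasts can cancel and the sign reveals the bias direction. A minimal sketch of the metric (the function name is an assumption):

```python
import numpy as np

def mean_percentage_error(actual, forecast):
    """Mean Percentage Error, in percent. Unlike MAPE this is signed:
    a positive value means the model under-forecasts on average,
    a negative value means it over-forecasts."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return float(np.mean((actual - forecast) / actual) * 100.0)
```

Because errors of opposite sign cancel, a low MPE should be read alongside an absolute-error metric before concluding a model is accurate.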


Subject(s)
Deep Learning , Forecasting , Myocardial Infarction , Humans , Myocardial Infarction/epidemiology , Myocardial Infarction/diagnosis , Forecasting/methods , Incidence , Seasons
11.
Front Big Data ; 7: 1308236, 2024.
Article in English | MEDLINE | ID: mdl-38562648

ABSTRACT

With the increasing utilization of data in various industries and applications, constructing an efficient data pipeline has become crucial. In this study, we propose a machine learning operations-centric data pipeline specifically designed for an energy consumption management system. This pipeline seamlessly integrates the machine learning model with real-time data management and prediction capabilities. The overall architecture of our proposed pipeline comprises several key components, including Kafka, InfluxDB, Telegraf, Zookeeper, and Grafana. To enable accurate energy consumption predictions, we adopt two time-series prediction models, long short-term memory (LSTM), and seasonal autoregressive integrated moving average (SARIMA). Our analysis reveals a clear trade-off between speed and accuracy, where SARIMA exhibits faster model learning time while LSTM outperforms SARIMA in prediction accuracy. To validate the effectiveness of our pipeline, we measure the overall processing time by optimizing the configuration of Telegraf, which directly impacts the load in the pipeline. The results are promising, as our pipeline achieves an average end-to-end processing time of only 0.39 s for handling 10,000 data records and an impressive 1.26 s when scaling up to 100,000 records. This indicates 30.69-90.88 times faster processing compared to the existing Python-based approach. Additionally, when the number of records increases by ten times, the increased overhead is reduced by 3.07 times. This verifies that the proposed pipeline exhibits an efficient and scalable structure suitable for real-time environments.

12.
Neural Netw ; 176: 106334, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38688070

ABSTRACT

In order to enhance the performance of Transformer models for long-term multivariate forecasting while minimizing computational demands, this paper introduces the Joint Time-Frequency Domain Transformer (JTFT). JTFT combines time and frequency domain representations to make predictions. The frequency domain representation efficiently extracts multi-scale dependencies while maintaining sparsity by utilizing a small number of learnable frequencies. Simultaneously, the time domain (TD) representation is derived from a fixed number of the most recent data points, strengthening the modeling of local relationships and mitigating the effects of non-stationarity. Importantly, the length of the representation remains independent of the input sequence length, enabling JTFT to achieve linear computational complexity. Furthermore, a low-rank attention layer is proposed to efficiently capture cross-dimensional dependencies, thus preventing performance degradation resulting from the entanglement of temporal and channel-wise modeling. Experimental results on eight real-world datasets demonstrate that JTFT outperforms state-of-the-art baselines in predictive performance.
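JTFT's frequencies are learnable; as a fixed stand-in for that idea, keeping only the k largest-magnitude coefficients of a real FFT gives a sparse frequency-domain representation of the same flavor. This is an illustrative sketch, not the paper's method:

```python
import numpy as np

def topk_frequency_repr(x, k):
    """Compress a series by keeping only the k largest-magnitude rFFT
    coefficients (a fixed-size frequency-domain representation), then
    reconstruct an approximation via the inverse transform."""
    spec = np.fft.rfft(x)
    drop = np.argsort(np.abs(spec))[:-k]   # indices of all but the k largest
    spec[drop] = 0.0
    return np.fft.irfft(spec, n=len(x))
```

Because k is fixed, the size of this representation is independent of the input length, which is the property the abstract credits for JTFT's linear computational complexity.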


Subject(s)
Forecasting , Time Factors , Neural Networks, Computer , Algorithms , Multivariate Analysis , Humans
13.
JMIR Med Inform ; 12: e53400, 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38513229

ABSTRACT

BACKGROUND: Predicting the bed occupancy rate (BOR) is essential for efficient hospital resource management, long-term budget planning, and patient care planning. Although macro-level BOR prediction for the entire hospital is crucial, predicting occupancy at a detailed level, such as specific wards and rooms, is more practical and useful for hospital scheduling. OBJECTIVE: The aim of this study was to develop a web-based support tool that allows hospital administrators to grasp the BOR for each ward and room according to different time periods. METHODS: We trained time-series models based on long short-term memory (LSTM) using individual bed data aggregated hourly each day to predict the BOR for each ward and room in the hospital. Ward training involved 2 models with 7- and 30-day time windows, and room training involved models with 3- and 7-day time windows for shorter-term planning. To further improve prediction performance, we added 2 models trained by concatenating dynamic data with static data representing room-specific details. RESULTS: We confirmed the results of a total of 12 models using bidirectional long short-term memory (Bi-LSTM) and LSTM, and the model based on Bi-LSTM showed better performance. The ward-level prediction model had a mean absolute error (MAE) of 0.067, mean square error (MSE) of 0.009, root mean square error (RMSE) of 0.094, and R2 score of 0.544. Among the room-level prediction models, the model that combined static data exhibited superior performance, with a MAE of 0.129, MSE of 0.050, RMSE of 0.227, and R2 score of 0.600. Model results can be displayed on an electronic dashboard for easy access via the web. CONCLUSIONS: We have proposed predictive BOR models for individual wards and rooms that demonstrate high performance. The results can be visualized through a web-based dashboard, aiding hospital administrators in bed operation planning. This contributes to resource optimization and the reduction of hospital resource use.
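The 7- and 30-day ward windows and 3- and 7-day room windows described in METHODS amount to slicing the hourly-aggregated BOR series into (input window, target) pairs. A minimal sketch of that windowing step (names are assumptions, not the study's code):

```python
import numpy as np

def make_windows(series, window, horizon=1):
    """Slice a BOR series into supervised pairs: each input is `window`
    consecutive values, the target is the value `horizon` steps later."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window + horizon - 1])
    return np.array(X), np.array(y)
```

The static room attributes mentioned in the abstract would then be concatenated to each window before it is fed to the (Bi-)LSTM.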

14.
Sensors (Basel) ; 24(5)2024 Feb 23.
Article in English | MEDLINE | ID: mdl-38474990

ABSTRACT

The modeling and forecasting of cerebral pressure-flow dynamics in the time-frequency domain have promising implications for veterinary and human life sciences research, enhancing clinical care by predicting cerebral blood flow (CBF)/perfusion, nutrient delivery, and intracranial pressure (ICP)/compliance behavior in advance. Despite its potential, the literature lacks coherence regarding the optimal model type, structure, data streams, and performance. This systematic scoping review comprehensively examines the current landscape of cerebral physiological time-series modeling and forecasting. It focuses on temporally resolved cerebral pressure-flow and oxygen delivery data streams obtained from invasive/non-invasive cerebral sensors. A thorough search of databases identified 88 studies for evaluation, covering diverse cerebral physiologic signals from healthy volunteers, patients with various conditions, and animal subjects. Methodologies range from traditional statistical time-series analysis to innovative machine learning algorithms. A total of 30 studies in healthy cohorts and 23 studies in patient cohorts with traumatic brain injury (TBI) concentrated on modeling CBFv and predicting ICP, respectively. Animal studies exclusively analyzed CBF/CBFv. Of the 88 studies, 65 predominantly used traditional statistical time-series analysis, with transfer function analysis (TFA), wavelet analysis, and autoregressive (AR) models being prominent. Among machine learning algorithms, support vector machine (SVM) was widely utilized, and decision trees showed promise, especially in ICP prediction. Nonlinear models and multi-input models were prevalent, emphasizing the significance of multivariate modeling and forecasting. This review clarifies knowledge gaps and sets the stage for future research to advance cerebral physiologic signal analysis, benefiting neurocritical care applications.


Subject(s)
Brain Injuries, Traumatic , Animals , Humans
15.
Sci Rep ; 14(1): 6653, 2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38509162

ABSTRACT

Integrating renewable energy sources into current power generation systems necessitates accurate forecasting to optimize and preserve supply-demand restrictions in the electrical grids. Due to the highly random nature of environmental conditions, accurate prediction of PV power is difficult, particularly over short and long horizons. Thus, this research provides a new hybrid model for short-term PV power forecasting based on fusing the multi-frequency information of different decomposition techniques, allowing a forecaster to provide reliable forecasts. We evaluate and provide insights into the performance of five multi-scale decomposition algorithms combined with a deep convolutional neural network (CNN). Additionally, we compare the suggested combination approach's performance to that of existing forecast models. An exhaustive assessment is carried out using three grid-connected PV power plants in Algeria with a total installed capacity of 73.1 MW. The developed fusing strategy displayed outstanding forecasting performance. The comparative analysis of the proposed combination method against the stand-alone forecast model and other hybridization techniques proves its superiority in terms of forecasting precision, with an RMSE varying in the range of [0.454-1.54] for the three studied PV stations.
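The simplest member of the multi-scale decomposition family is a moving-average split into a smooth component and a residual detail; the five algorithms evaluated in the paper are more elaborate, so treat this only as a sketch of the shared idea:

```python
import numpy as np

def two_scale_decompose(x, window):
    """Split a series into a smooth trend (centered moving average with
    edge padding) and a residual detail; summing the two parts recovers
    the original series."""
    kernel = np.ones(window) / window
    pad = window // 2
    padded = np.pad(x, pad, mode="edge")
    trend = np.convolve(padded, kernel, mode="valid")[:len(x)]
    detail = x - trend
    return trend, detail
```

Each scale can then be forecast by its own CNN and the per-scale predictions fused, which is the structure the abstract's "fusing strategy" refers to.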

16.
Heliyon ; 10(6): e27795, 2024 Mar 30.
Article in English | MEDLINE | ID: mdl-38496905

ABSTRACT

Bangladesh's subtropical climate, with an abundance of sunlight throughout the greater portion of the year, results in increased effectiveness of solar panels. Accurate solar irradiance forecasting is an essential aspect of grid-connected photovoltaic systems: it helps manage solar power's variation and uncertainty and assists in balancing power supply and demand. Many meteorological factors influence solar irradiation, which therefore shows a high degree of fluctuation and uncertainty, and predicting solar irradiance multiple steps ahead makes it difficult for forecasting models to capture long-term sequential relationships. Attention-based models are widely used in Natural Language Processing for their ability to learn long-term dependencies within sequential data. In this paper, our aim is to present an attention-based model framework for multivariate time series forecasting. Using data from two different locations in Bangladesh with a resolution of 30 min, attention-based encoder-decoder, Transformer, and Temporal Fusion Transformer (TFT) models are trained and tested to predict over 24 steps ahead and compared with other forecasting models. According to our findings, adding the attention mechanism significantly increased prediction accuracy, and TFT proved more precise than the rest of the algorithms in terms of accuracy and robustness. The obtained mean square error (MSE), mean absolute error (MAE), and coefficient of determination (R2) values for TFT are 0.151, 0.212, and 0.815, respectively. In comparison to the benchmark and sequential models (including the Naive, MLP, and Encoder-Decoder models), TFT reduces MSE by 8.4-47.9% and MAE by 6.1-22.3%, while R2 is raised by 2.13-26.16%. The ability to incorporate long-distance dependency increases the predictive power of attention models.
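The attention mechanism these models share reduces to scaled dot-product attention, which lets every forecast step weight any past input step regardless of distance. A minimal numpy sketch (single head, no masking; illustrative only):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.
    Returns the attended values and the attention weight matrix."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights
```

Because the weights are computed between all pairs of time steps, long-distance dependencies cost no more to represent than adjacent ones, which is the property the abstract credits for the accuracy gains.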

17.
Comput Methods Programs Biomed ; 246: 108060, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38350189

ABSTRACT

BACKGROUND AND OBJECTIVE: Vital sign monitoring in the Intensive Care Unit (ICU) is crucial for enabling prompt interventions for patients. This underscores the need for an accurate predictive system. Therefore, this study proposes a novel deep learning approach for forecasting Heart Rate (HR), Systolic Blood Pressure (SBP), and Diastolic Blood Pressure (DBP) in the ICU. METHODS: We extracted 24,886 ICU stays from the MIMIC-III database, which contains data from over 46,000 patients, to train and test the model. The model proposed in this study, the Transformer-based Diffusion Probabilistic Model for Sparse Time Series Forecasting (TDSTF), merges Transformer and diffusion models to forecast vital signs. The code is available at https://github.com/PingChang818/TDSTF. RESULTS: TDSTF showed state-of-the-art performance in predicting vital signs in the ICU, outperforming other models' ability to predict distributions of vital signs while being more computationally efficient: it achieved a Standardized Average Continuous Ranked Probability Score (SACRPS) of 0.4438 and a Mean Squared Error (MSE) of 0.4168, improvements of 18.9% and 34.3% over the best baseline model, respectively. The inference speed of TDSTF is more than 17 times faster than the best baseline model. CONCLUSION: TDSTF is an effective and efficient solution for forecasting vital signs in the ICU, and it shows a significant improvement compared to other models in the field.
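The CRPS underlying the SACRPS metric scores an entire forecast distribution against a single observation. For a sample (ensemble) forecast it has the standard estimator CRPS ≈ E|X − y| − ½ E|X − X′|; a sketch of that estimator (the standardization step of SACRPS is not shown):

```python
import numpy as np

def crps_ensemble(samples, obs):
    """Sample-based CRPS estimate for one observation:
    E|X - y| - 0.5 * E|X - X'|, where X, X' are forecast samples.
    Lower is better; 0 means a point mass exactly on the observation."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - obs))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return float(term1 - term2)
```

Unlike MSE, this rewards a model for spreading its samples honestly: an overconfident ensemble far from the observation is penalized more than a well-calibrated one.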


Subject(s)
Intensive Care Units , Vital Signs , Humans , Blood Pressure , Heart Rate , Vital Signs/physiology , Models, Statistical
18.
BMC Psychiatry ; 24(1): 112, 2024 Feb 09.
Article in English | MEDLINE | ID: mdl-38336744

ABSTRACT

BACKGROUND: Although the COVID-19 pandemic and its implications have been associated with mental health services utilization and medication consumption, there is no longitudinal study on the long-term impact on ADHD medication use trends. METHODS: This study examines the European ADHD medication consumption in 2020 to 2022 compared to the predicted consumption assuming the persistence of pre-pandemic trends. Predictions are calculated using Seasonal Autoregressive Integrated Moving Average (SARIMA) models. RESULTS: While European ADHD medication sales recorded a drop in 2020, they returned to the predicted level in 2021, even slightly exceeding it. In 2022, we found a clear exceedance of the predicted level by 16.4% on average at country level. Furthermore, the increase in consumption growth in the post-pandemic period (2021-2022) compared to the pre-pandemic period (2014-2019) was significant in 26 of the 28 European countries under consideration. CONCLUSION: There is strong evidence of a trend change in the ADHD medicine consumption growth throughout Europe after the COVID-19 pandemic.


Subject(s)
Attention Deficit Disorder with Hyperactivity , COVID-19 , Mental Health Services , Humans , Attention Deficit Disorder with Hyperactivity/drug therapy , Attention Deficit Disorder with Hyperactivity/epidemiology , Pandemics , Europe/epidemiology
19.
Heliyon ; 10(3): e25034, 2024 Feb 15.
Article in English | MEDLINE | ID: mdl-38317988

ABSTRACT

The emerging e-commerce field has attracted businesses of all sizes, retailers, and individuals. Consequently, there is an ongoing need for applications that can predict trending products and optimal selling times. This research aims to help businesses forecast demand for various product categories by applying data mining algorithms to multivariate time series data. To ensure up-to-date information, real-time data was gathered through APIs as the first building block of this research: search volume was derived from the Keywords Everywhere tool, Amazon search volume from the Helium 10 tool, and external features from actual purchase data. The harvested raw data went through multiple processing steps to generate and validate the final dataset. XGBoost, Linear Regression, Random Forest, Long Short-Term Memory (LSTM), and K-Nearest Neighbor (KNN) models were employed to predict the trends, and their performance is reported using the evaluation metrics Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and Coefficient of Determination (R2). Overall, Linear Regression performed best, especially at a correlation coefficient of 0.9, with R2 = 90.688, MAE = 0.038, MSE = 0.003, and RMSE = 0.057. KNN performed best at a correlation coefficient of 0.7, with R2 = 85.129, MAE = 0.045, MSE = 0.005, and RMSE = 0.068. XGBoost produced its best results at a correlation coefficient of 0.9, yielding R2 = 85.89, MAE = 0.042, MSE = 0.004, and RMSE = 0.062. Random Forest, on the other hand, achieved its peak metrics at a correlation coefficient of 0.6, with R2 = 84.854, MAE = 0.041, MSE = 0.004, and RMSE = 0.066.
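This kind of multi-model comparison on the same train/test split, scored with MSE, RMSE, MAE, and R2, can be sketched with scikit-learn. The synthetic features and targets below are hypothetical stand-ins for the search-volume and purchase data; XGBoost and LSTM are omitted to keep the sketch self-contained:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

# Hypothetical multivariate data: lagged search-volume features -> demand.
rng = np.random.default_rng(7)
X = rng.normal(size=(200, 4))
y = X @ np.array([0.6, -0.2, 0.3, 0.1]) + rng.normal(0, 0.1, 200)
X_train, X_test, y_train, y_test = X[:150], X[150:], y[:150], y[150:]

# Fit each candidate model and report the four metrics named in the abstract.
for name, model in [("Linear Regression", LinearRegression()),
                    ("KNN", KNeighborsRegressor(n_neighbors=5)),
                    ("Random Forest", RandomForestRegressor(random_state=0))]:
    pred = model.fit(X_train, y_train).predict(X_test)
    mse = mean_squared_error(y_test, pred)
    print(f"{name}: MSE={mse:.3f}, RMSE={np.sqrt(mse):.3f}, "
          f"MAE={mean_absolute_error(y_test, pred):.3f}, "
          f"R2={r2_score(y_test, pred):.3f}")
```

Keeping the split and metrics identical across models, as above, is what makes the per-model scores in the abstract directly comparable.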

20.
Heliyon ; 10(4): e25821, 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38375305

ABSTRACT

The global surge in energy demand, driven by technological advances and population growth, underscores the critical need for effective management of electricity supply and demand. In some developing nations, such as Iraq, a significant challenge arises because the population's energy demand exceeds generation capacity. This study focuses on energy forecasting in Iraq, using a previously unstudied dataset covering 2019 to 2021, sourced from the Iraqi Ministry of Electricity. It employs a diverse set of advanced forecasting models, including Linear Regression, XGBoost, Random Forest, Long Short-Term Memory, Temporal Convolutional Networks, and Multi-Layer Perceptron, evaluating their performance across four distinct forecast horizons (24, 48, 72, and 168 hours ahead). Key findings reveal that Linear Regression is a consistent top performer in demand forecasting, while XGBoost excels in supply forecasting. Statistical analysis detects differences among the models' performances on both datasets, although no significant differences are found in pairwise comparisons for the supply dataset. The study emphasizes the importance of accurate energy forecasting for energy security, resource allocation, and policy-making in Iraq. It provides tools for decision-makers to address energy challenges, mitigate power shortages, and stimulate economic growth, and it encourages innovative forecasting methods, the use of external variables such as weather and economic data, and region-specific models tailored to Iraq's energy landscape. The research contributes valuable insights into the dynamics of electricity supply and demand in Iraq and offers performance evaluations for better energy planning and management, ultimately promoting sustainable development and improving quality of life for the Iraqi population.
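Evaluating a model at several horizons (24, 48, 72, and 168 hours ahead), as in this study, is commonly done by training one direct-forecasting model per horizon on lagged values. The synthetic hourly load series, lag count, and split below are illustrative assumptions, not the study's dataset or pipeline:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

# Hypothetical hourly load series with daily seasonality plus noise.
rng = np.random.default_rng(1)
hours = np.arange(24 * 120)
load = 50 + 10 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)

def make_xy(series, horizon, n_lags=168):
    """Direct forecasting: predict series[t + horizon] from the last n_lags values."""
    X, y = [], []
    for t in range(n_lags, series.size - horizon):
        X.append(series[t - n_lags:t])
        y.append(series[t + horizon])
    return np.array(X), np.array(y)

# One model per horizon, evaluated on a held-out tail of the series.
for horizon in (24, 48, 72, 168):
    X, y = make_xy(load, horizon)
    split = int(0.8 * len(X))
    model = LinearRegression().fit(X[:split], y[:split])
    mae = mean_absolute_error(y[split:], model.predict(X[split:]))
    print(f"{horizon:3d} h ahead: MAE={mae:.2f}")
```

Reporting the same error metric at each horizon makes it visible how accuracy degrades as the forecast window lengthens, which is the comparison the study runs across its six model families.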
