Results 1 - 9 of 9
1.
Sensors (Basel) ; 23(22)2023 Nov 13.
Article in English | MEDLINE | ID: mdl-38005540

ABSTRACT

In wireless communication, multiple signals are used to send and receive information simultaneously. These signals consume little power, are usually inexpensive, and support a high data rate during transmission. A Multiple-Input Multiple-Output (MIMO) system uses numerous antennas to enhance the functionality of the system. However, managing system complexity and power consumption in the Analog-to-Digital Converter (ADC) at the receiver side is a difficult and highly complicated task. A large number of MIMO channels are used in wireless networks to improve efficiency with Cross-Entropy Optimization (CEO). The ADC is a serious issue because data in the received signal can be completely lost; ADCs are used in the MIMO channels to overcome the above issues, but they are very hard to design and implement. This paper therefore proposes an efficient way to enhance channel estimation in the MIMO system using a heuristic optimization technique. The main task of the implemented channel prediction framework is to predict the channel coefficients of the MIMO system at the transmitter side, based on the receiver-side error ratio obtained from feedback information, using a Hybrid Serial Cascaded Network (HSCN). This multi-scale cascaded autoencoder is then combined with a Long Short-Term Memory (LSTM) network with an attention mechanism. The parameters of the developed Hybrid Serial Cascaded Multi-scale Autoencoder and Attention LSTM are optimized using the developed Hybrid Revised Position-based Wild Horse and Energy Valley Optimizer (RP-WHEVO) algorithm to minimize the Root Mean Square Error (RMSE), Bit Error Rate (BER) and Mean Square Error (MSE) of the estimated channel. Various experiments were carried out to analyze the performance of the developed MIMO model. The tests showed that the developed model improved the convergence rate and prediction performance while reducing computational costs.
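The three evaluation metrics named in this abstract are standard. As a point of reference only (this is not the paper's code, and it assumes real-valued channel estimates and bit sequences of equal length), a minimal sketch looks like:

```python
import math

def mse(y_true, y_pred):
    """Mean Square Error between true and estimated channel coefficients."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root Mean Square Error: square root of the MSE."""
    return math.sqrt(mse(y_true, y_pred))

def ber(tx_bits, rx_bits):
    """Bit Error Rate: fraction of received bits that differ from those sent."""
    errors = sum(1 for t, r in zip(tx_bits, rx_bits) if t != r)
    return errors / len(tx_bits)
```

These are the quantities the RP-WHEVO optimizer is reported to minimize jointly.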

2.
iScience ; 26(10): 107896, 2023 Oct 20.
Article in English | MEDLINE | ID: mdl-37860760

ABSTRACT

An improved whale optimization algorithm (SWEWOA) is presented for global optimization problems. First, a sine-map initialization strategy (SS) is used to generate the population. Second, escape energy (EE) is introduced to balance the exploration and exploitation of WOA. Finally, a wormhole search (WS) strengthens the capacity for exploitation. This hybrid design effectively reinforces the optimization capability of SWEWOA. To prove the effectiveness of the design, SWEWOA is evaluated on two test sets, CEC 2017 and CEC 2022, and its advantage is demonstrated in comparisons against 26 other algorithms. A new feature selection method called BSWEWOA-KELM is then developed based on the binary SWEWOA and the kernel extreme learning machine (KELM). To verify its performance, 8 high-performance algorithms are selected and studied experimentally on 16 public datasets of varying difficulty. The test results demonstrate that SWEWOA performs excellently at selecting the most valuable features for classification problems.
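The abstract does not give the exact form of the sine-map initialization (SS), but chaotic-map initializers of this family typically iterate x_{k+1} = sin(pi * x_k) and scale the iterates into the search bounds. A minimal sketch under that assumption (the function name and seed value are illustrative):

```python
import math

def sine_map_population(pop_size, dim, lb, ub, x0=0.7):
    """Generate an initial population from the sine chaotic map x_{k+1} = sin(pi * x_k)."""
    population = []
    x = x0
    for _ in range(pop_size):
        individual = []
        for _ in range(dim):
            x = math.sin(math.pi * x)              # chaotic iterate, stays in (0, 1]
            individual.append(lb + x * (ub - lb))  # scale into [lb, ub]
        population.append(individual)
    return population
```

Compared with uniform random initialization, the chaotic sequence is deterministic yet non-repeating, which is the usual argument for better coverage of the search space.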

3.
Sensors (Basel) ; 23(10)2023 May 10.
Article in English | MEDLINE | ID: mdl-37430549

ABSTRACT

The intrinsic and liveness-detection properties of electrocardiogram (ECG) signals have made them an emerging biometric modality for researchers, with several applications including forensics, surveillance and security. The main challenge is low recognition performance on datasets drawn from large populations, including both healthy individuals and heart-disease patients, when only a short interval of the ECG signal is available. This research proposes a novel method based on feature-level fusion of the discrete wavelet transform and a one-dimensional convolutional recurrent neural network (1D-CRNN). ECG signals were preprocessed by removing high-frequency powerline interference, followed by a low-pass filter with a cutoff frequency of 1.5 Hz for physiological noise and by baseline-drift removal. The preprocessed signal is segmented around the PQRST peaks, and the segments are passed through a Coiflet-5 discrete wavelet transform for conventional feature extraction. A 1D-CRNN with two long short-term memory (LSTM) layers followed by three 1D convolutional layers was applied for deep-learning-based feature extraction. These combinations of features yield biometric recognition accuracies of 80.64%, 98.81% and 99.62% for the ECG-ID, MIT-BIH and NSR-DB datasets, respectively, and 98.24% when all of these datasets are combined. This research also compares conventional feature extraction, deep-learning-based feature extraction and their combination for performance enhancement against transfer-learning approaches such as VGG-19, ResNet-152 and Inception-v3 on short segments of ECG data.
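The Coiflet-5 decomposition in this pipeline would normally come from a wavelet library such as PyWavelets. To illustrate the underlying operation only, here is a minimal one-level DWT with the much simpler Haar wavelet (an illustration, not the Coiflet-5 filter the authors use):

```python
def haar_dwt(signal):
    """One-level Haar DWT: scaled pairwise sums form the approximation band,
    scaled pairwise differences form the detail band."""
    if len(signal) % 2:
        signal = list(signal) + [signal[-1]]  # pad odd-length input
    s = 2 ** -0.5
    approx = [(signal[i] + signal[i + 1]) * s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) * s for i in range(0, len(signal), 2)]
    return approx, detail
```

In the fusion scheme described above, coefficients like these form the "conventional" feature vector that is concatenated with the 1D-CRNN features.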


Subject(s)
Arrhythmias, Cardiac , Electrocardiography , Humans , Biometry , Deep Learning , Wavelet Analysis , Arrhythmias, Cardiac/diagnosis
4.
Sensors (Basel) ; 23(13)2023 Jul 03.
Article in English | MEDLINE | ID: mdl-37447966

ABSTRACT

Cloud computing plays an important role in every IT sector. Many tech giants such as Google, Microsoft, and Facebook are deploying data centres around the world to provide computation and storage services. Customers either submit their jobs directly or rely on brokers to submit them to the cloud centres. A primary aim is to reduce overall power consumption, which was ignored in the early days of cloud development because of the performance expected from cloud servers, which were supposed to provide all services through the IaaS, PaaS, and SaaS layers. As time passed, researchers introduced new terminology and algorithmic architectures for reducing power consumption and improving sustainability, including statistically oriented learning and bioinspired algorithms. In this paper, an in-depth study of multiple approaches to migration among virtual machines is carried out, and various issues in the existing approaches are identified. The proposed work utilizes elastic scheduling inspired by the smart elastic scheduling algorithm (SESA) to develop a more energy-efficient VM allocation and migration algorithm, using cosine similarity and bandwidth utilization as additional utilities to improve performance in terms of QoS. The proposed work is evaluated on overall power consumption and service-level-agreement violation (SLA-V) and compared with related state-of-the-art techniques. An algorithm is also proposed to address the problems found during the survey.
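The abstract does not specify how cosine similarity enters the placement decision. A common use in VM scheduling, assumed here for illustration, is to compare a VM's resource-demand vector (e.g. CPU, memory, bandwidth) with each host's free-capacity vector and prefer the host whose spare capacity has the most similar shape:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two resource vectors (both must be non-zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def best_host(vm_demand, host_capacities):
    """Pick the index of the host whose free capacity best matches the VM's demand profile."""
    return max(range(len(host_capacities)),
               key=lambda i: cosine_similarity(vm_demand, host_capacities[i]))
```

Because cosine similarity compares direction rather than magnitude, it matches the shape of the demand to the shape of the spare capacity; an absolute capacity check would still be needed alongside it.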


Subject(s)
Algorithms , Cloud Computing , Humans
5.
Sensors (Basel) ; 23(11)2023 May 26.
Article in English | MEDLINE | ID: mdl-37299834

ABSTRACT

This research focuses on the reflection of acoustic waves from fluid-solid surfaces, and aims to measure the effect of a material's physical properties on oblique-incidence acoustic attenuation across a large frequency range. To construct the extensive comparison shown in the supporting documentation, reflection coefficient curves were generated by carefully adjusting the porosity and permeability of the poroelastic solid. The next stage in determining its acoustic response is to determine the pseudo-Brewster angle shift and the minimum dip of the reflection coefficient for the attenuation permutations indicated above. This is made possible by modeling and studying the reflection and absorption of acoustic plane waves incident on half-space and two-layer surfaces, taking both viscous and thermal losses into account. According to the findings, the propagation medium has a significant impact on the shape of the reflection coefficient curve, whereas permeability, porosity, and driving frequency have relatively smaller effects on the pseudo-Brewster angle and the curve minima, respectively. The research additionally found that as permeability and porosity increase, the pseudo-Brewster angle shifts to the left (in proportion to the porosity increase) until it reaches a limiting value of 73.4 degrees, and that the reflection coefficient curves for each level of porosity exhibit a greater angular dependence, with an overall decrease in magnitude at all incident angles. The study concluded that as permeability declines, the angular dependence of the frequency-dependent attenuation is reduced, resulting in iso-porous curves, and that matrix porosity largely determines the angular dependence of the viscous losses around a permeability of 1.4 × 10^-14 m^2.
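For orientation, the textbook oblique-incidence pressure reflection coefficient at a lossless fluid-fluid interface (a much simpler setting than the poroelastic half-space studied here, so purely illustrative) is R = (Z2 cos(theta_i) - Z1 cos(theta_t)) / (Z2 cos(theta_i) + Z1 cos(theta_t)), with theta_t given by Snell's law:

```python
import cmath
import math

def reflection_coefficient(rho1, c1, rho2, c2, theta_i):
    """Plane-wave pressure reflection coefficient at a lossless fluid-fluid interface.

    rho1/c1 and rho2/c2 are density and sound speed of the incident and
    transmitting media; theta_i is the incidence angle in radians. cos(theta_t)
    is taken as a complex square root so the same formula covers total internal
    reflection beyond the critical angle.
    """
    z1, z2 = rho1 * c1, rho2 * c2
    sin_t = (c2 / c1) * math.sin(theta_i)  # Snell's law
    cos_t = cmath.sqrt(1.0 - sin_t ** 2)
    cos_i = math.cos(theta_i)
    return (z2 * cos_i - z1 * cos_t) / (z2 * cos_i + z1 * cos_t)
```

The pseudo-Brewster angle discussed in the abstract is the incidence angle at which |R| reaches its minimum; in the poroelastic case this minimum shifts with porosity and permeability, which is what the study quantifies.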


Subject(s)
Models, Theoretical , Water , Sound , Acoustics , Permeability
6.
Sensors (Basel) ; 23(7)2023 Mar 29.
Article in English | MEDLINE | ID: mdl-37050627

ABSTRACT

In recent decades, falls have posed multiple critical health issues, especially for the older population, which continues to grow. Recent research has shown that a wrist-based fall detection system offers an accessory-like, comfortable solution for Internet of Things (IoT)-based monitoring. Nevertheless, an autonomous anywhere-anytime device raises energy-consumption concerns. Hence, this paper proposes a novel energy-aware IoT-based architecture for Message Queuing Telemetry Transport (MQTT)-based gateway-less monitoring for wearable fall detection. A hybrid double-prediction technique based on Supervised Dictionary Learning was implemented to reinforce the detection efficiency of our previous works. A controlled dataset was collected for training (offline), while a real set of measurements from the proposed system was used for validation (online). The system achieved noteworthy offline and online detection performance of 99.8% and 91%, respectively, surpassing most related works that use only an accelerometer. In the worst case, the system extended battery life to a minimum of 27.32 working hours, significantly longer than other research prototypes. The approach presented here proves promising for real applications that require a reliable, long-term, anywhere-anytime solution.
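The paper's detector is learned (Supervised Dictionary Learning), but the baseline it improves on is easier to sketch: flag a fall when an acceleration-magnitude spike is followed by near-stillness. The thresholds and window below are hypothetical values, in units of g:

```python
def detect_fall(magnitudes, impact_threshold=2.5, post_window=5, still_threshold=1.2):
    """Return True when an impact spike is followed by a quiet post-impact window.

    magnitudes is a list of accelerometer magnitude samples in g; a resting
    wrist reads about 1 g (gravity alone).
    """
    for i, m in enumerate(magnitudes):
        if m >= impact_threshold:
            post = magnitudes[i + 1:i + 1 + post_window]
            if post and max(post) <= still_threshold:
                return True
    return False
```

Threshold rules like this are cheap enough to run continuously on the wearable, which is why hybrid designs gate the heavier learned classifier behind them to save energy.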

7.
Comput Intell Neurosci ; 2022: 9414567, 2022.
Article in English | MEDLINE | ID: mdl-35720905

ABSTRACT

COVID-19 has remained a threat to life worldwide despite a recent reduction in cases, and there is still a possibility that the virus will evolve and become more contagious. If such a situation occurs, the resulting calamity will be worse than in the past if we act irresponsibly. COVID-19 must be widely screened and recognized early to avert a global epidemic, and positive individuals should be quarantined immediately; no positive case should go unrecognized. However, current COVID-19 detection procedures require a significant amount of time for human examination based on genetic and imaging techniques. Apart from RT-PCR and antigen-based tests, CXR and CT imaging aid in the rapid and cost-effective identification of COVID-19. However, discriminating between diseased and normal X-rays is a time-consuming and challenging task that requires an expert's skill. In such a case, an automatic diagnosis strategy for identifying COVID-19 instances from chest X-ray images is the only practical solution. This article utilized a deep convolutional neural network, ResNet, which has been demonstrated to be highly effective for image classification. The model is trained from ResNet weights pretrained on ImageNet. The ResNet34, ResNet50, and ResNet101 variants were implemented and validated against the dataset. With a more extensive network, the accuracy appeared to improve; nonetheless, our objective was to balance accuracy and training time on a larger dataset. By comparing the prediction outcomes of the three models, we concluded that ResNet34 is the most suitable candidate for COVID-19 detection from chest X-rays. The highest accuracy reached 98.34%, higher than the accuracy achieved by other state-of-the-art approaches examined in earlier studies. Subsequent analysis indicated that the incorrect predictions were made with approximately 100% certainty. This uncovers a severe weakness of CNNs, particularly in the medical domain, where critical decisions are made. It can be addressed in a future study by developing a modified model that incorporates uncertainty into the predictions, allowing medical personnel to manually review the incorrect ones.
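The near-100%-certainty failure mode described above is a property of the softmax output layer: a large logit gap maps to a probability near 1 regardless of whether the argmax class is correct. A standalone sketch of that computation (not the paper's model):

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def confidence(logits):
    """The model's reported certainty: the largest softmax probability."""
    return max(softmax(logits))
```

A logit gap of 10 already yields a confidence above 0.9999, which is why softmax confidence alone cannot flag wrong predictions for manual review.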


Subject(s)
COVID-19 , Deep Learning , Humans , Neural Networks, Computer , SARS-CoV-2 , X-Rays
8.
PeerJ Comput Sci ; 8: e959, 2022.
Article in English | MEDLINE | ID: mdl-35634103

ABSTRACT

The discovery in December 2019 of a new coronavirus, SARS-CoV-2, the cause of the disease commonly named COVID-19, has reshaped the world. With health and economic issues at stake, scientists have been focusing on understanding the dynamics of the disease in order to provide governments with the best policies and strategies for reducing the spread of the virus. The world waited for the vaccine for more than one year, and the World Health Organization (WHO) promotes the vaccine as a safe and effective measure to fight off the virus. Saudi Arabia was the fourth country in the world to start vaccinating its population, and even under the new simplified COVID-19 rules, the third dose is still mandatory. COVID-19 vaccines have raised many questions regarding their efficiency and their role in reducing the number of infections. In this work, we try to answer these questions and propose a new mathematical model with five compartments: susceptible, vaccinated, infectious, asymptomatic and recovered individuals. We provide theoretical results regarding the effective reproduction number and the stability of the endemic and disease-free equilibria, along with a numerical analysis of the model based on the Saudi case. Our model shows that the vaccine reduces the transmission rate and provides an explanation for the rise in the number of new infections immediately after the start of the vaccination campaign in Saudi Arabia.
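The abstract names the five compartments but not the equations, so the following forward-Euler step is a generic sketch of such a model, with every rate parameter hypothetical (beta: transmission, nu: vaccination rate, eps: vaccine efficacy, p: symptomatic fraction, gamma: recovery rate):

```python
def sviar_step(state, params, dt=0.1):
    """One Euler step of a susceptible-vaccinated-infectious-asymptomatic-recovered model."""
    S, V, I, A, R = state
    beta, nu, eps, p, gamma = params
    N = S + V + I + A + R
    foi = beta * (I + A) / N                   # force of infection
    new_inf = foi * S + (1.0 - eps) * foi * V  # vaccine reduces, not blocks, infection
    dS = -foi * S - nu * S
    dV = nu * S - (1.0 - eps) * foi * V
    dI = p * new_inf - gamma * I
    dA = (1.0 - p) * new_inf - gamma * A
    dR = gamma * (I + A)
    return [S + dt * dS, V + dt * dV, I + dt * dI, A + dt * dA, R + dt * dR]
```

The derivatives sum to zero, so the total population is conserved at every step, a basic sanity check for any compartmental model of this kind.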

9.
Sensors (Basel) ; 22(9)2022 May 09.
Article in English | MEDLINE | ID: mdl-35591282

ABSTRACT

Recently, there has been an increasing need for new applications and services such as big data, blockchains, vehicle-to-everything (V2X), the Internet of things, 5G, and beyond. Therefore, to maintain quality of service (QoS), accurate network resource planning and forecasting are essential steps for resource allocation. This study proposes a reliable hybrid dynamic bandwidth slice forecasting framework that combines the long short-term memory (LSTM) neural network and local smoothing methods to improve the network forecasting model. Moreover, the proposed framework can dynamically react to all the changes occurring in the data series. Backbone traffic was used to validate the proposed method. As a result, the forecasting accuracy improved significantly with the proposed framework and with minimal data loss from the smoothing process. The results showed that the hybrid moving average LSTM (MLSTM) achieved the most remarkable improvement in the training and testing forecasts, with 28% and 24% for long-term evolution (LTE) time series and with 35% and 32% for the multiprotocol label switching (MPLS) time series, respectively, while robust locally weighted scatter plot smoothing and LSTM (RLWLSTM) achieved the most significant improvement for upstream traffic with 45%; moreover, the dynamic learning framework achieved improvement percentages that can reach up to 100%.
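Of the smoothing methods paired with the LSTM above, the moving average is simple enough to sketch from the abstract alone (the window length here is a hypothetical choice). Note that the output is shorter than the input by window - 1 points, which is consistent with the abstract's remark about minimal data loss from the smoothing process:

```python
def moving_average(series, window=3):
    """Trailing moving average; output has len(series) - window + 1 points."""
    if not 1 <= window <= len(series):
        raise ValueError("window must be between 1 and len(series)")
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]
```

In the hybrid MLSTM setup, a smoothed series like this would be the input the LSTM is trained on, with the forecast compared against the raw traffic.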


Subject(s)
Machine Learning , Neural Networks, Computer , Big Data , Forecasting , Memory, Long-Term