1.
Sensors (Basel) ; 23(9)2023 Apr 28.
Article in English | MEDLINE | ID: mdl-37177560

ABSTRACT

Fifth Generation (5G) signals using the millimeter wave (mmWave) spectrum are highly vulnerable to blockage due to rapid variations in channel link quality, which can cause devices or User Equipment (UE) to suffer connection failures. In a dual connectivity (DC) network, the channel's intermittency issues are partially solved by maintaining the UE's connections to a primary (LTE Advanced) station and a secondary (5G mmWave) station simultaneously. Although a dual-connected network performs excellently in maintaining connectivity, its performance drops significantly due to inefficient handovers from one 5G mmWave station to another. The situation worsens when the UE travels a long distance in a densely obstructed environment, which triggers multiple ineffective handovers that eventually degrade performance. This research proposes an Adaptive TTT Handover (ATH) mechanism that deals with the unpredictable, highly intermittent behavior of 5G mmWave wireless channels. An adaptive algorithm was developed to automatically adjust handover control parameters, such as Time-to-Trigger (TTT), based on the current channel condition measured by the Signal-to-Interference-plus-Noise Ratio (SINR). The algorithm was tested under a 5G mmWave statistical channel model representing a time-varying channel matrix that includes fading and the Doppler effect. The performance of the proposed handover mechanism was analyzed and evaluated in terms of handover probability, latency, and throughput using the Network Simulator 3 (ns-3) tool. Comparative simulation results show that the proposed adaptive handover mechanism performs excellently compared with conventional handovers and other enhancement techniques.
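The core idea of SINR-driven TTT adaptation can be sketched as below. This is a minimal illustration, not the paper's ATH algorithm: the SINR operating range, the linear mapping, and the TTT bounds (taken from the 3GPP-defined range of TTT values) are all assumptions for demonstration.

```python
def adaptive_ttt(sinr_db, ttt_min=40, ttt_max=512):
    """Map a measured SINR (dB) to a TTT value (ms).

    Illustrative sketch only: a degrading channel (low SINR) yields a
    short TTT so the handover triggers quickly before the link fails,
    while a strong channel yields a long TTT to avoid ping-pong
    handovers. Thresholds and the linear mapping are assumptions.
    """
    lo, hi = -5.0, 25.0                 # assumed SINR operating range (dB)
    x = min(max(sinr_db, lo), hi)       # clamp to the operating range
    frac = (x - lo) / (hi - lo)         # 0 = poor channel, 1 = strong channel
    return round(ttt_min + frac * (ttt_max - ttt_min))
```

A real controller would also filter SINR samples over time before adapting, since reacting to instantaneous fades would itself cause unnecessary handovers.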

2.
Sensors (Basel) ; 23(4)2023 Feb 13.
Article in English | MEDLINE | ID: mdl-36850701

ABSTRACT

Blockchain introduces challenges related to the reliability of user identity and identity management systems, including detecting unfalsified identities linked to IoT applications. This study focuses on optimizing user identity verification time by employing an efficient encryption algorithm for the user signature in a peer-to-peer decentralized IoT blockchain network. To achieve this, a user-signature-based identity management framework is examined using various encryption techniques and contrasting various hash functions built on top of the Modified Merkle Hash Tree (MMHT) data structure. The paper presents executions over varying dataset sizes of transactions between nodes to test the scalability of the proposed design for secure blockchain communication. The results show that the MMHT data structure with the SHA3 hash function and the AES-128 encryption algorithm gives the lowest execution time, offering at least a 36% gain in time optimization compared to other algorithms. This work shows that using AES-128 with the MMHT structure and the SHA3 hash function not only identifies malicious codes but also improves user integrity check performance in a blockchain network while ensuring network scalability. The study therefore presents a performance evaluation of a blockchain network considering its distinct types, properties, components, and algorithm taxonomy.
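To make the hashing side concrete, the sketch below builds a plain binary Merkle tree over transaction payloads using SHA3-256. It is an assumption-laden illustration: the paper's Modified Merkle Hash Tree (MMHT) differs in structure, and the AES-128 signature encryption step is omitted here.

```python
import hashlib

def sha3(data: bytes) -> bytes:
    """SHA3-256 digest of a byte string."""
    return hashlib.sha3_256(data).digest()

def merkle_root(leaves):
    """Compute the root of a plain binary Merkle tree with SHA3-256.

    leaves: list of transaction payloads as bytes. A level with an odd
    number of nodes duplicates its last node, a common convention.
    """
    level = [sha3(leaf) for leaf in leaves]
    if not level:
        return sha3(b"")                # root of an empty tree, by convention
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])     # duplicate last node on odd levels
        level = [sha3(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Verifying a single transaction then needs only O(log n) sibling hashes rather than the whole dataset, which is where the execution-time gains in integrity checks come from.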

3.
Sensors (Basel) ; 22(16)2022 Aug 18.
Article in English | MEDLINE | ID: mdl-36015959

ABSTRACT

Mobility management is an essential process in mobile networks to ensure a high quality of service (QoS) for mobile user equipment (UE) during movement. In fifth generation (5G) and beyond-5G (B5G) mobile networks, mobility management becomes more critical due to several key factors, such as the use of Millimeter Wave (mmWave) and Terahertz bands, a higher number of deployed small cells, massive growth in connected devices, the requirement for higher data rates, and the necessity of ultra-low latency with high reliability. Therefore, providing robust mobility techniques that enable seamless connections throughout the UE's mobility has become critical and challenging. One of the crucial handover (HO) techniques is mobility robustness optimization (MRO), which mainly aims to adjust the HO control parameters (HCPs): time-to-trigger (TTT) and handover margin (HOM). Although this function was introduced in 4G and developed further in 5G, it must become more efficient in future mobile networks due to the key challenges illustrated above. This paper proposes a Robust Handover Optimization Technique with a Fuzzy Logic Controller (RHOT-FLC). The proposed technique automatically configures the HCPs by exploiting the Reference Signal Received Power (RSRP), Reference Signal Received Quality (RSRQ), and UE velocity as input parameters. The technique is validated through various mobility scenarios in B5G networks. Additionally, it is evaluated using a number of major HO performance metrics, such as HO probability (HOP), HO failure (HOF), HO ping-pong (HOPP), HO latency (HOL), and HO interruption time (HIT). The obtained results have also been compared with other competitive algorithms from the literature. The results show that RHOT-FLC achieves considerably better performance than the other techniques. Furthermore, RHOT-FLC obtains up to a 95% reduction in HOP, 95.8% in HOF, 97% in HOPP, 94.7% in HOL, and 95% in HIT compared with the competitive algorithms. Overall, RHOT-FLC obtained a substantial improvement of up to 95.5% across the considered HO performance metrics.


Subject(s)
Computer Communication Networks, Fuzzy Logic, Algorithms, Reproducibility of Results, Wireless Technology
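The flavor of such a fuzzy HCP controller can be sketched as below. The membership breakpoints, the single OR-rule, and the output ranges are illustrative assumptions only, not the RHOT-FLC design (which also uses RSRQ as an input and a full fuzzy rule base).

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_hcp(rsrp_dbm, speed_kmh):
    """Toy fuzzy controller: weak signal or high UE speed -> smaller
    TTT and HOM, so the handover triggers sooner.

    All breakpoints and output ranges are hypothetical values chosen
    for illustration, not taken from the paper.
    """
    weak = tri(rsrp_dbm, -120, -110, -90)   # degree 'signal is weak'
    fast = tri(speed_kmh, 40, 120, 200)     # degree 'UE is fast'
    urgency = max(weak, fast)               # fuzzy OR of the antecedents
    ttt = 480 - 440 * urgency               # ms: relaxed 480 down to 40
    hom = 10 - 8 * urgency                  # dB: relaxed 10 down to 2
    return round(ttt), round(hom, 1)
```

A production controller would defuzzify over a full rule base (e.g. centroid defuzzification) rather than this single max-rule shortcut.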
4.
Sensors (Basel) ; 22(3)2022 Jan 20.
Article in English | MEDLINE | ID: mdl-35161509

ABSTRACT

Ever since the introduction of fifth generation (5G) mobile communications, the mobile telecommunications industry has debated whether 5G is an "evolution" or a "revolution" of the previous legacy mobile networks. Now that 5G has been commercially available for a few years, the research direction has shifted towards the upcoming generation of mobile communication systems, known as the sixth generation (6G), which is expected to provide significant evolutionary, if not revolutionary, improvements in mobile networks. The promise of extremely high data rates (in terabits per second), artificial intelligence (AI), ultra-low latency, near-zero/low energy consumption, and massive device connectivity is expected to enhance connectivity, sustainability, and trustworthiness and to provide new services, such as truly immersive extended reality (XR), high-fidelity mobile holograms, and a new generation of entertainment. 6G and its vision are still under research and open for developers and researchers to establish and develop their directions to realize future 6G technology, which is expected to be ready as early as 2028. This paper reviews 6G mobile technology, including its vision, requirements, enabling technologies, and challenges. A total of 11 communication technologies are presented, including terahertz (THz) communication, visible light communication (VLC), multiple access, coding, cell-free massive multiple-input multiple-output (CF-mMIMO), zero-energy interfaces, intelligent reflecting surfaces (IRS), and the infusion of AI/machine learning (ML) into wireless transmission techniques. Moreover, this paper compares 5G and 6G in terms of services, key technologies, and enabling communication techniques. Finally, it discusses crucial future directions and technology developments in 6G.


Subject(s)
Artificial Intelligence, Communication, Machine Learning, Technology, Wireless Technology
5.
J Diabetes Sci Technol ; 16(5): 1208-1219, 2022 09.
Article in English | MEDLINE | ID: mdl-34078114

ABSTRACT

BACKGROUND: Critically ill ICU patients frequently experience acute insulin resistance and increased endogenous glucose production, manifesting as stress-induced hyperglycemia and hyperinsulinemia. STAR (Stochastic TARgeted) is a glycemic control protocol that directly manages inter- and intra-patient variability using model-based insulin sensitivity (SI). The model behind STAR assumes a population constant for endogenous glucose production (EGP), which is not otherwise identifiable. OBJECTIVE: This study analyses the effect of estimating EGP for ICU patients with very low SI (severe insulin resistance) and its impact on model-based insulin sensitivity identification, modeling accuracy, and model-based glycemic control. METHODS: Using clinical data from 717 STAR patients in 3 independent cohorts (Hungary, New Zealand, and Malaysia), insulin sensitivity, duration of insulin resistance, and EGP values are analyzed. A method is presented to estimate EGP in the presence of non-physiologically low SI. Performance is assessed via model accuracy. RESULTS: Results show that 22%-62% of patients experience one or more episodes of severe insulin resistance, representing 0.87%-9.00% of hours. Episodes primarily occur in the first 24 h of ICU stay, matching clinical expectations. The Malaysian cohort is the most affected. In this subset of hours, constant model-based EGP values can bias identified SI and increase blood glucose (BG) fitting error. Using the presented EGP estimation method in these constrained hours significantly reduced BG fitting errors. CONCLUSIONS: Patients early in their ICU stay may have significantly increased EGP. Increasing modeled EGP in model-based glycemic control can improve control accuracy in these hours. The results provide new insight into the frequency and level of significantly increased EGP in critical illness.


Subject(s)
Hyperglycemia, Insulin Resistance, Blood Glucose/analysis, Critical Care/methods, Critical Illness, Glucose, Humans, Insulin, Intensive Care Units
6.
Sensors (Basel) ; 21(22)2021 Nov 17.
Article in English | MEDLINE | ID: mdl-34833705

ABSTRACT

As nuclear technology evolves and continues to be used in various fields since its discovery less than a century ago, radiation safety has become a major concern for humans and the environment. Radiation monitoring plays a significant role in preventive radiological and nuclear detection in nuclear facilities, hospitals, and any activities involving radioactive materials, acting as a tool to measure the risk of radiation exposure while reaping its benefits. Apart from occupational settings, radiation monitoring is required in emergency responses to radiation incidents as well as in outdoor radiation zones. Several radiation sensors have been developed, ranging from the simple Geiger-Müller counter to bulkier systems such as the High-Purity Germanium detector, with different functionality for different settings, but their inability to provide real-time data makes radiation monitoring activities less effective. Deploying manned vehicles equipped with these radiation sensors reduces the scope of radiation monitoring operations significantly, but the safety of monitoring operators remains compromised. Recently, Internet of Things (IoT) technology has offered solutions to these limitations. This review elucidates a systematic understanding of the fundamental use of the Internet of Drones for radiation monitoring purposes. The extension of essential functional blocks in IoT can be expanded across radiation monitoring industries, presenting several emerging research opportunities and challenges. This article offers a comprehensive review of the evolutionary application of IoT technology in nuclear and radiation monitoring. Finally, the security of the nuclear industry is discussed.


Subject(s)
Internet of Things, Radiation Monitoring, Humans, Remote Sensing Technology, Technology, Wireless Technology
7.
Sensors (Basel) ; 22(1)2021 Dec 30.
Article in English | MEDLINE | ID: mdl-35009820

ABSTRACT

The most effective methods of preventing COVID-19 infection include maintaining physical distancing and wearing a face mask while in close contact with people in public places. However, densely populated areas have a greater incidence of COVID-19 transmission, caused by people who do not comply with standard operating procedures (SOPs). This paper presents a prototype called PADDIE-C19 (Physical Distancing Device with Edge Computing for COVID-19) that implements physical distancing monitoring on a low-cost edge computing device. PADDIE-C19 provides real-time results and responses, as well as notifications and warnings to anyone who violates the 1-m physical distancing rule. In addition, PADDIE-C19 includes temperature screening using an MLX90614 thermometer and ultrasonic sensors to restrict the number of people on specified premises. The Neural Network Processor (KPU) in the Grove Artificial Intelligence Hardware Attached on Top (AI HAT), an edge computing unit, is used to accelerate the neural network model for person detection and achieves up to 18 frames per second (FPS). The results show that person detection with the Grove AI HAT achieves 74.65% accuracy, and the average absolute error between measured and actual physical distance is 8.95 cm. Furthermore, the MLX90614 thermometer differs by less than 0.5 °C from the more common Fluke 59 thermometer. Experimental results also show that, compared with cloud computing, the Grove AI HAT achieves an average of 18 FPS for the person detector (kmodel) with an average execution time of 56 ms across different networks, regardless of connection type or speed.


Subject(s)
COVID-19, Physical Distancing, Artificial Intelligence, Humans, Masks, SARS-CoV-2
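The distance-rule check downstream of the person detector can be sketched as below. This is a simplified illustration: the pixel-to-meter calibration constant and the flat-plane centroid distance are assumptions, whereas PADDIE-C19's actual distance estimation on the Grove AI HAT may use camera geometry or homography.

```python
import math

def violations(detections, min_dist_m=1.0, px_per_m=150.0):
    """Flag pairs of detected people closer than the 1-m rule.

    detections: list of (x, y) bounding-box centroids in pixels.
    px_per_m: assumed flat-plane camera calibration (pixels per meter);
    a hypothetical value, not the prototype's actual calibration.
    Returns (i, j, distance_m) for each violating pair.
    """
    flagged = []
    for i in range(len(detections)):
        for j in range(i + 1, len(detections)):
            (x1, y1), (x2, y2) = detections[i], detections[j]
            dist_m = math.hypot(x2 - x1, y2 - y1) / px_per_m
            if dist_m < min_dist_m:
                flagged.append((i, j, round(dist_m, 2)))
    return flagged
```

On each frame, any non-empty result would drive the prototype's notification and warning outputs.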
8.
Med Devices (Auckl) ; 13: 139-149, 2020.
Article in English | MEDLINE | ID: mdl-32607009

ABSTRACT

PURPOSE: This paper presents an assessment of compliance with the automated and personalized stochastic targeted (STAR) glycemic control protocol in Malaysian intensive care unit (ICU) patients to ensure optimized usage. PATIENTS AND METHODS: STAR proposes 1-3 hour treatments based on individual insulin sensitivity variation and the history of blood glucose, insulin, and nutrition. Data recorded from 136 patients in the STAR pilot trial in Malaysia (2017 to the first quarter of 2019) were used to identify the gap between the administered insulin and nutrition interventions recommended by STAR and the interventions actually performed. RESULTS: The results show that insulin compliance increased from 2017 to the first quarter of 2019, while feed administration compliance fluctuated. Overall compliance amounted to 98.8% and 97.7% for administered insulin and feed, respectively. The average of 17 blood glucose measurements per day was higher than at other centres using STAR, but longer intervals were selected when recommended. Control safety and performance were similar across all periods, showing no obvious correlation with compliance. CONCLUSION: The results indicate that STAR, an automated model-based protocol, is positively accepted among Malaysian ICU clinicians for automating glycemic control, and its usage can be extended to other hospitals. Performance could be improved with several propositions.

9.
Med Devices (Auckl) ; 12: 215-226, 2019.
Article in English | MEDLINE | ID: mdl-31239792

ABSTRACT

Background: Stress-induced hyperglycemia is common in critically ill patients. A few forms of model-based glycemic control have been introduced to reduce this phenomenon, among them the automated STAR protocol, which has been used in the Christchurch and Gyula hospitals' intensive care units (ICUs) since 2010. Methods: This article presents the pilot trial assessment of the STAR protocol, which has been implemented in the International Islamic University Malaysia Medical Centre (IIUMMC) Hospital ICU since December 2017. One hundred and forty-two patients who received STAR treatment for more than 20 hours were included in the assessment. The initial results are presented to discuss the ability to adopt and adapt the model-based control framework in a Malaysian environment by analyzing its performance and safety. Results: Overall, 60.7% of blood glucose measurements were in the target band. Only 0.78% and 0.02% of cohort measurements were below 4.0 mmol/L and 2.2 mmol/L (the limits for mild and severe hypoglycemia, respectively). Treatment-wise, the clinical staff favored longer intervention options when available; however, 1-hourly treatments were still used in 73.7% of cases. Conclusion: The protocol succeeded in achieving patient-specific glycemic control while maintaining safety and was trusted by nurses to reduce workload. Its lower performance results, however, indicate that some control settings should be modified to better fit the Malaysian environment.
