Results 1 - 13 of 13
1.
PeerJ Comput Sci ; 9: e1682, 2023.
Article in English | MEDLINE | ID: mdl-38077549

ABSTRACT

The integration of Internet of Things (IoT) technologies, particularly the Internet of Medical Things (IoMT), with wireless sensor networks (WSNs) has revolutionized the healthcare industry. However, despite the undeniable benefits of WSNs, their limited communication capabilities and network congestion have emerged as critical challenges in healthcare applications. This research addresses these challenges with a dynamic, on-demand route-finding protocol called P2P-IoMT, based on LOADng, for point-to-point routing in IoMT. To reduce congestion, dynamic composite routing metrics allow nodes to select the optimal parent according to the application requirements during the route discovery phase. Nodes running the proposed protocol use the multi-criteria decision-making Skyline technique for parent selection. Experimental evaluation shows that the P2P-IoMT protocol outperforms its best rivals in the literature in terms of residual network energy and packet delivery ratio: network lifetime is extended by 4% while achieving a packet delivery ratio and communication delay comparable to LRRE. These gains come in addition to P2P-IoMT's dynamic path selection and configurable route metrics.
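The multi-criteria Skyline step can be illustrated as a Pareto-dominance filter over candidate parents. A minimal sketch, assuming each candidate carries a tuple of metrics to minimise (hop count, delay, and negated residual energy are illustrative choices, not the paper's exact composite metrics):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every metric and strictly
    better in at least one (all metrics are minimised here)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(candidates):
    """Return the Pareto-optimal (non-dominated) candidate parents."""
    return [c for c in candidates
            if not any(dominates(o["metrics"], c["metrics"])
                       for o in candidates if o is not c)]

# Candidate parents; metrics = (hop count, delay in ms, -residual energy)
parents = [
    {"id": "A", "metrics": (2, 10.0, -80)},
    {"id": "B", "metrics": (3, 12.0, -70)},   # dominated by A on all metrics
    {"id": "C", "metrics": (1, 25.0, -60)},   # fewer hops but higher delay
]
best = skyline(parents)
```

A node would then break ties among the skyline set according to the application's configured priorities.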

2.
ISA Trans ; 132: 69-79, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36435643

ABSTRACT

Correct environmental perception of objects on the road is vital for the safety of autonomous driving. Data perturbations and, more recently, adversarial attacks can hinder the autonomous driving algorithm from making appropriate decisions. We propose an uncertainty-based adversarial test input generation approach that makes the machine learning (ML) model more robust against data perturbations and adversarial attacks. Adversarial attacks and uncertain inputs can degrade the ML model's performance, with severe consequences such as the misclassification of objects on the road by autonomous vehicles, leading to incorrect decision-making. We show that more robust ML models for autonomous driving can be obtained by including highly uncertain adversarial test inputs in the dataset used during the re-training phase. We demonstrate an improvement of more than 12% in the accuracy of the robust model, with a notable drop in the uncertainty of the decisions returned by the model. We believe our approach will assist in further developing risk-aware autonomous systems.
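One common way to realise the "highly uncertain input" selection described above is predictive entropy over stochastic forward passes (e.g. Monte Carlo dropout). The sketch below is illustrative and not necessarily the paper's exact uncertainty measure:

```python
import math

def predictive_entropy(probs):
    """Shannon entropy of a mean predicted class distribution;
    higher entropy means a less certain prediction."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def mean_distribution(mc_samples):
    """Average class probabilities over several stochastic forward
    passes of the same input (e.g. with dropout left enabled)."""
    n = len(mc_samples)
    return [sum(s[k] for s in mc_samples) / n
            for k in range(len(mc_samples[0]))]

def select_uncertain(inputs, threshold):
    """Keep inputs whose predictive entropy exceeds the threshold --
    candidates for inclusion in the re-training dataset."""
    return [x for x, samples in inputs
            if predictive_entropy(mean_distribution(samples)) > threshold]
```

Inputs passing the threshold would be perturbed adversarially and added to the re-training set.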

3.
Sensors (Basel) ; 22(21)2022 Oct 28.
Article in English | MEDLINE | ID: mdl-36365978

ABSTRACT

Smart health presents an ever-expanding attack surface due to the continuous adoption of a broad variety of Internet of Medical Things (IoMT) devices and applications. IoMT is a common approach in smart city solutions that deliver long-term benefits to critical infrastructures, such as smart healthcare. Many IoMT devices in smart cities use Bluetooth technology for short-range communication due to its flexibility and low resource consumption. As smart healthcare applications rely on distributed control optimization, artificial intelligence (AI) and deep learning (DL) offer effective approaches to mitigating cyber-attacks. This paper presents a decentralized, predictive, DL-based process to autonomously detect and block malicious traffic, providing an end-to-end defense against network attacks on IoMT devices. Furthermore, we provide the BlueTack dataset for Bluetooth-based attacks against IoMT networks. To the best of our knowledge, this is the first intrusion detection dataset covering both Bluetooth Classic and Bluetooth Low Energy (BLE). Using the BlueTack dataset, we devised a multi-layer intrusion detection method based on deep-learning techniques, and we propose a decentralized architecture for deploying this intrusion detection system on the edge nodes of a smart healthcare system, such as one deployed in a smart city. The presented multi-layer intrusion detection models achieve F1 scores in the range of 97-99.5%.


Subject(s)
Artificial Intelligence , Internet of Things , Delivery of Health Care , Communication
4.
Qatar Med J ; 2022(3): 24, 2022.
Article in English | MEDLINE | ID: mdl-35813704

ABSTRACT

BACKGROUND: It remains unclear whether patients with autoimmune rheumatic diseases (ARDs) are at a higher risk of poor outcomes from SARS-CoV-2 infection. We evaluated whether patients with an ARD infected with SARS-CoV-2 were at a higher risk of a poorer outcome than those without. METHODS: Patients with an ARD infected with SARS-CoV-2 were matched to control patients without a known ARD. Matching was performed by age (±6 years) and sex at a case-to-control ratio of 1:3. Demographic and clinical data were extracted from the databases and compared between the two groups. The primary outcome was severe SARS-CoV-2 infection, defined as the requirement for oxygen therapy support, the need for invasive or noninvasive mechanical ventilation, or the use of glucocorticoids. RESULTS: A total of 141 patients with an ARD were matched to 398 control patients. The mean ages (SD) of the ARD and non-ARD groups were 44.4 (11.4) and 43.4 (12.2) years, respectively. Women accounted for 58.8% of the ARD group and 56.3% of the control group (p = 0.59). Demographics and comorbidities were balanced between the groups. ARDs included connective tissue disease in 43 (30.3%) patients, inflammatory arthritis in 92 (65.2%), and other ARDs in 8 (5.7%). ARD medications included biological/targeted synthetic disease-modifying antirheumatic drugs (b/ts-DMARDs) in 28 (15.6%) patients, conventional synthetic DMARDs in 95 (67.4%), and immunosuppressive antimetabolites in 13 (9.2%). The ARD group had more respiratory and gastrointestinal symptoms related to SARS-CoV-2 infection than the control group (24.8% and 20.6% vs. 10% and 5.3%, respectively; p < 0.001 for both). Severe SARS-CoV-2 infection was more common in the ARD group than in the control group (14.9% vs. 5.8%; p < 0.001).
CONCLUSIONS: In this single-center matched cohort study, patients with an ARD experienced more respiratory and gastrointestinal symptoms related to SARS-CoV-2 infection and had more severe infection than controls. Therefore, patients with an ARD require close observation during the coronavirus disease 2019 pandemic.

5.
Sensors (Basel) ; 22(11)2022 Jun 06.
Article in English | MEDLINE | ID: mdl-35684918

ABSTRACT

Deep learning models have been used in many domains, but applying them in sensitive areas such as medical imaging still requires adaptation; since clinical work is time-constrained, technology is needed in the medical domain, and it is the level of accuracy that assures trustworthiness. Because of privacy concerns, machine learning applications in the medical field often cannot use real medical data. For example, the scarcity of brain MRI images makes it difficult to classify brain tumors using image-based classification. This challenge can be addressed through Generative Adversarial Network (GAN)-based augmentation techniques; Deep Convolutional GAN (DCGAN) and Vanilla GAN are two examples of GAN architectures used for image generation. In this paper, we propose BrainGAN, a framework for generating and classifying brain MRI images using GAN architectures and deep learning models, together with an automatic way to check that the generated images are satisfactory. It uses three models: CNN, MobileNetV2, and ResNet152V2. The deep transfer models are trained on images produced by Vanilla GAN and DCGAN and then evaluated on a test set composed of real brain MRI images. The experimental results show that the ResNet152V2 model outperformed the other two, achieving 99.09% accuracy, 99.12% precision, 99.08% recall, 99.51% area under the curve (AUC), and 0.196 loss on the brain MRI images generated by the DCGAN architecture.


Subject(s)
Brain Neoplasms , Magnetic Resonance Imaging , Brain/diagnostic imaging , Humans , Machine Learning , Magnetic Resonance Imaging/methods , Neuroimaging
6.
Sensors (Basel) ; 21(9)2021 Apr 24.
Article in English | MEDLINE | ID: mdl-33923151

ABSTRACT

Nowadays, hackers take illegal advantage of distributed resources in a network of computing devices (i.e., a botnet) to launch cyberattacks against the Internet of Things (IoT). Recently, diverse Machine Learning (ML) and Deep Learning (DL) methods have been proposed to detect botnet attacks in IoT networks. However, highly imbalanced network traffic data in the training set often degrade the classification performance of state-of-the-art ML and DL models, especially for classes with relatively few samples. In this paper, we propose an efficient DL-based botnet attack detection algorithm that can handle highly imbalanced network traffic data. Specifically, the Synthetic Minority Oversampling Technique (SMOTE) generates additional minority-class samples to achieve class balance, while a Deep Recurrent Neural Network (DRNN) learns hierarchical feature representations from the balanced network traffic data to perform discriminative classification. We develop DRNN and SMOTE-DRNN models with the Bot-IoT dataset, and the simulation results show that high class imbalance in the training data adversely affects the precision, recall, F1 score, area under the receiver operating characteristic curve (AUC), geometric mean (GM) and Matthews correlation coefficient (MCC) of the DRNN model. The SMOTE-DRNN model, on the other hand, achieves better classification performance, with 99.50% precision, 99.75% recall, 99.62% F1 score, 99.87% AUC, 99.74% GM and 99.62% MCC, and outperforms state-of-the-art ML and DL models.
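SMOTE's core idea, interpolating between a minority-class sample and one of its nearest neighbours, can be sketched in a few lines. This is a toy re-implementation for illustration; in practice one would use a library such as imbalanced-learn:

```python
import random

def smote(minority, n_new, k=2, seed=0):
    """Generate n_new synthetic minority samples: pick a sample, pick
    one of its k nearest neighbours, and interpolate a random fraction
    of the way between them."""
    rng = random.Random(seed)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbours = sorted((m for m in minority if m is not x),
                            key=lambda m: dist(x, m))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation fraction in [0, 1)
        synthetic.append(tuple(xi + gap * (ni - xi)
                               for xi, ni in zip(x, nb)))
    return synthetic
```

The synthetic points always lie on segments between existing minority samples, so they stay within the minority region of feature space.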

7.
Big Data ; 9(4): 265-278, 2021 08.
Article in English | MEDLINE | ID: mdl-33656352

ABSTRACT

The Internet of Things (IoT) is permeating our daily lives through continuous environmental monitoring and data collection. The promise of low-latency communication, enhanced security, and efficient bandwidth utilization has led to the shift from mobile cloud computing to mobile edge computing. In this study, we propose an advanced deep reinforcement learning-based resource allocation and security-aware data offloading model that considers the constrained computation and radio resources of industrial IoT devices to guarantee efficient sharing of resources between multiple users. The model is formulated as an optimization problem with the goal of decreasing energy consumption and computation delay. This problem is NP-hard (non-deterministic polynomial-time hard) due to the curse of dimensionality, so a deep learning optimization approach is presented to find an optimal solution. In addition, a 128-bit Advanced Encryption Standard (AES)-based cryptographic approach is proposed to satisfy the data security requirements. Experimental evaluation shows that the proposed model can reduce offloading overhead, in terms of energy and time, by up to 64.7% compared with local execution. It also outperforms the full offloading scenario by up to 13.2%, since it can select some computation tasks to offload while optimally rejecting others. Finally, it is adaptable and scalable to a large number of mobile devices.
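The offloading trade-off such a model optimises can be illustrated with a weighted energy/time cost comparison per task. The formulas and the switched-capacitance coefficient `k` below are standard textbook assumptions, not the paper's exact system model:

```python
def local_cost(cycles, f_local, k=1e-26):
    """Energy (J) and time (s) of executing a task locally; k is an
    assumed effective switched-capacitance coefficient."""
    energy = k * cycles * f_local ** 2
    time = cycles / f_local
    return energy, time

def offload_cost(bits, rate, p_tx, cycles, f_edge):
    """Energy and time when the task is transmitted to an edge server:
    uplink transmission plus remote execution."""
    t_tx = bits / rate
    return p_tx * t_tx, t_tx + cycles / f_edge

def should_offload(task, w_energy=0.5, w_time=0.5):
    """Offload iff the weighted energy/time cost is lower remotely."""
    e_l, t_l = local_cost(task["cycles"], task["f_local"])
    e_o, t_o = offload_cost(task["bits"], task["rate"], task["p_tx"],
                            task["cycles"], task["f_edge"])
    return w_energy * e_o + w_time * t_o < w_energy * e_l + w_time * t_l
```

A reinforcement learning agent would learn this accept/reject decision per task rather than evaluate it in closed form, which is what lets it also reject offloads under congested radio conditions.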


Subject(s)
Deep Learning , Algorithms , Cloud Computing , Computer Security , Resource Allocation
8.
Neural Comput Appl ; : 1-15, 2020 Jul 04.
Article in English | MEDLINE | ID: mdl-32836901

ABSTRACT

Bitcoin is a decentralized cryptocurrency, a type of digital asset that provides the basis for peer-to-peer financial transactions based on blockchain technology. One of the main problems with decentralized cryptocurrencies is price volatility, which indicates the need to study the underlying price model. Moreover, Bitcoin prices exhibit non-stationary behavior, where the statistical distribution of the data changes over time. This paper demonstrates high-performance machine learning-based classification and regression models for predicting Bitcoin price movements and prices in the short and medium term. Previous work has studied machine learning-based classification only for a one-day time frame; this work goes beyond that by using machine learning-based models for horizons of one, seven, thirty and ninety days. The developed models are feasible and perform well, with the classification models scoring up to 65% accuracy for the next-day forecast and 62-64% accuracy for the seven- to ninety-day forecasts. For the daily price forecast, the error percentage is as low as 1.44%, and it varies from 2.88% to 4.10% for horizons of seven to ninety days. These results indicate that the presented models outperform the existing models in the literature.
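The two prediction tasks can be made concrete with a sketch of the movement-labelling target (classification) and a percentage-error metric (regression). Both are generic formulations, not necessarily the paper's exact definitions:

```python
def movement_labels(prices, horizon=1):
    """Classification target: 1 if the price rises over the horizon,
    else 0 -- the 'up/down' movement to be predicted."""
    return [1 if prices[i + horizon] > prices[i] else 0
            for i in range(len(prices) - horizon)]

def mape(actual, predicted):
    """Regression metric: mean absolute percentage error of the
    predicted prices against the realised prices."""
    return 100.0 * sum(abs(a - p) / a
                       for a, p in zip(actual, predicted)) / len(actual)
```

Longer horizons simply shift the label index further ahead, which is why accuracy typically decays from the one-day to the ninety-day setting.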

9.
Sensors (Basel) ; 20(9)2020 Apr 27.
Article in English | MEDLINE | ID: mdl-32349242

ABSTRACT

Over the last few decades, the proliferation of the Internet of Things (IoT) has produced an overwhelming flow of data and services, shifting the access control paradigm from a fixed desktop environment to dynamic cloud environments. Fog computing is associated with a new access control paradigm that reduces overhead costs by moving the execution of application logic from the centre of the cloud data sources to the periphery of the IoT-oriented sensor networks. Indeed, accessing information and data resources from a variety of IoT sources has been plagued by inherent problems such as data heterogeneity, privacy, security and computational overheads. This paper presents an extensive survey of security, privacy and access control research, highlighting several specific concerns in a wide range of contextual conditions (e.g., spatial, temporal and environmental contexts), an area gaining considerable momentum in industrial sensor and cloud networks. We present taxonomies, such as contextual conditions and authorization models, based on the key issues in this area and discuss the existing context-sensitive access control approaches to tackling them. With the aim of reducing administrative and computational overheads in IoT sensor networks, we propose a new-generation Fog-Based Context-Aware Access Control (FB-CAAC) framework that combines the benefits of cloud, IoT and context-aware computing and ensures proper access control and security at the edge of the end-devices. Our goal is not only to control context-sensitive access to data resources in the cloud, but also to move the execution of application logic from the cloud level to an intermediary level where necessary, by adding computational nodes at the edge of the IoT sensor network. We discuss some open research issues pertaining to context-sensitive access control to data resources, including several real-world case studies, and conclude with an in-depth analysis of the research challenges that have not been adequately addressed in the literature, highlighting directions for future work that are not yet well covered by the available research.
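A context-sensitive access decision of the kind FB-CAAC targets can be sketched as a policy check that combines a role with spatial and temporal context. The policy shape below is purely illustrative:

```python
def allowed(policy, request):
    """Grant access only when the requester's role matches the policy
    and every contextual condition (location, time window) holds."""
    if request["role"] not in policy["roles"]:
        return False
    ctx = request["context"]
    start, end = policy["time_window"]
    return (ctx["location"] in policy["locations"]
            and start <= ctx["hour"] < end)

# Hypothetical policy: nurses may access ward-3 records during day shift.
policy = {"roles": {"nurse"},
          "locations": {"ward-3"},
          "time_window": (8, 20)}
```

Evaluating such checks at a fog node near the sensors, rather than in the central cloud, is what saves the administrative and round-trip overheads discussed above.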

10.
Sci Justice ; 59(3): 337-348, 2019 05.
Article in English | MEDLINE | ID: mdl-31054823

ABSTRACT

Minecraft, a Massively Multiplayer Online Game (MMOG), reportedly has millions of players from different age groups worldwide. With Minecraft being so popular, particularly with younger audiences, it is no surprise that its interactive nature has facilitated criminal activities such as denial-of-service attacks against gamers, cyberbullying, swatting, sexual communication, and online child grooming. In this research, we simulate a typical Minecraft setting using a Linux Ubuntu 16.04.3 machine (acting as the MMOG server) and Windows client devices running Minecraft. The server and client devices are then examined to reveal the type and extent of evidential artefacts that can be extracted.

11.
Sensors (Basel) ; 19(8)2019 Apr 14.
Article in English | MEDLINE | ID: mdl-31013993

ABSTRACT

The proliferation of inter-connected devices in critical industries, such as healthcare and the power grid, is changing the perception of what constitutes critical infrastructure. The rising interconnectedness of new critical industries is driven by the growing demand for seamless access to information as the world becomes more mobile and connected and as the Internet of Things (IoT) grows. Critical industries are essential to the foundation of today's society, and an interruption of service in any of these sectors can reverberate through other sectors and even around the globe. In today's hyper-connected world, critical infrastructure is more vulnerable than ever to cyber threats, whether from state-sponsored actors, criminal groups or individuals. As the number of interconnected devices increases, so does the number of potential access points for hackers to disrupt critical infrastructure. This new attack surface emerges from fundamental changes in organizations' critical infrastructure technology systems. This paper aims to improve understanding of the challenges of securing the future digital infrastructure while it is still evolving. After introducing the infrastructure generating big data, the functionality-based fog architecture is defined, and a comprehensive review of security requirements in fog-enabled IoT systems is presented. Then, an in-depth analysis of fog computing security challenges and of big data privacy and trust concerns in fog-enabled IoT is given. We also discuss blockchain as a key enabler for addressing many security-related issues in IoT and consider closely the complementary interrelationship between blockchain and fog computing. In this context, this work formalizes the task of securing big data and its scope, provides a taxonomy to categorize threats to fog-based IoT systems, presents a comprehensive comparison of state-of-the-art contributions in the field according to their security services, and recommends promising research directions for future investigations.


Subject(s)
Big Data , Computer Security , Delivery of Health Care , Internet , Humans , Privacy
12.
Case Rep Rheumatol ; 2018: 7657982, 2018.
Article in English | MEDLINE | ID: mdl-29670797

ABSTRACT

Transient bone marrow edema (TBME) is a self-limiting disease characterized by joint pain with localized bone marrow edema on MRI and has been reported in many case series and case reports. The joints of the lower extremity, including the hips, knees, ankles, and feet, are the classical sites for TBME. Many theories have been proposed for the pathogenesis of TBME; systemic osteopenia and vitamin D deficiency are among those suggested in recent years. In this case report, we present a middle-aged male patient who presented with 4 attacks of TBME in both knees between September 2016 and August 2017. The patient was found to have persistently low vitamin D and an osteopenic T-score on DXA scan of the lumbar spine and hips. Patients with TBME usually present with joint pain provoked by weight-bearing physical activity. The aim of this case report is to raise awareness that TBME can be the initial presentation of systemic loss of bone mineral density.

13.
Sensors (Basel) ; 15(9): 22970-3003, 2015 Sep 11.
Article in English | MEDLINE | ID: mdl-26378539

ABSTRACT

This paper presents a distributed information extraction and visualisation service, called the mapping service, for maximising information return from large-scale wireless sensor networks. Such a service greatly simplifies the production of higher-level, information-rich representations suitable for informing other network services and delivering field information visualisations. The mapping service uses a blend of inductive and deductive models to map sense data accurately using externally available knowledge, and it exploits the special characteristics of the application domain to render visualisations in a map format that precisely reflect the concrete reality. The service is suitable for visualising an arbitrary number of sense modalities and can combine multiple independent types of sense data, overcoming the limitations of generating visualisations from a single sense modality. Furthermore, the mapping service responds dynamically to changes in environmental conditions that may affect visualisation performance, by continuously updating the application domain model in a distributed manner. Finally, a distributed self-adaptation function is proposed with the goal of saving more power and generating more accurate data visualisations. We conduct comprehensive experiments to evaluate the performance of our mapping service and show that it achieves low communication overhead, produces maps of high fidelity, and further minimises the mapping predictive error dynamically by integrating the application domain model into the mapping service.
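As an illustration of mapping scattered sensor readings onto a continuous field, inverse-distance weighting is one simple interpolation model; the paper's actual blend of inductive and deductive models is more elaborate than this sketch:

```python
def idw(point, samples, power=2):
    """Inverse-distance-weighted estimate of a field value at `point`
    from scattered sensor readings [((x, y), value), ...]: nearby
    sensors contribute more than distant ones."""
    num = den = 0.0
    for (x, y), v in samples:
        d2 = (point[0] - x) ** 2 + (point[1] - y) ** 2
        if d2 == 0:
            return v  # query point coincides with a sensor
        w = d2 ** (-power / 2)
        num += w * v
        den += w
    return num / den
```

Evaluating such an interpolant over a grid yields the kind of map-format visualisation the service produces, with each node only needing the readings of its neighbourhood.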


Subject(s)
Computer Communication Networks , Information Storage and Retrieval/methods , Wireless Technology , Algorithms , Geographic Mapping , Models, Theoretical