Results 1 - 20 of 20
1.
Sensors (Basel) ; 24(8)2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38676101

ABSTRACT

ECG classification or heartbeat classification is an extremely valuable tool in cardiology. Deep learning-based techniques for the analysis of ECG signals assist human experts in the timely diagnosis of cardiac diseases and help save precious lives. This research aims to digitize a dataset of images of ECG records into time-series signals and then apply deep learning (DL) techniques to the digitized dataset. State-of-the-art DL techniques are proposed for the classification of the ECG signals into different cardiac classes. Multiple DL models, including a convolutional neural network (CNN), a long short-term memory (LSTM) network, and a self-supervised learning (SSL)-based model using autoencoders, are explored and compared in this study. The models are trained on a dataset generated from ECG plots of patients from various healthcare institutes in Pakistan. First, the ECG images are digitized by segmenting the lead II heartbeats, and then the digitized signals are passed to the proposed deep learning models for classification. Among the different DL models used in this study, the proposed CNN model achieves the highest accuracy of ∼92%. The proposed model is highly accurate and provides fast inference for the real-time and direct monitoring of ECG signals captured from electrodes (sensors) placed on different parts of the body. Using the digitized form of ECG signals instead of images for the classification of cardiac arrhythmia allows cardiologists to utilize DL models directly on ECG signals from an ECG machine for the real-time and accurate monitoring of ECGs.
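
A minimal sketch of how a 1D CNN classifier for digitized lead-II beats could be set up in Keras; the segment length, layer sizes and number of cardiac classes are illustrative assumptions, not the architecture reported in the paper.

```python
# Illustrative 1D-CNN for beat classification; SEGMENT_LEN, layer sizes
# and NUM_CLASSES are assumptions, not the paper's reported architecture.
import numpy as np
from tensorflow.keras import layers, models

SEGMENT_LEN = 280   # samples per digitized lead-II beat (assumed)
NUM_CLASSES = 4     # number of cardiac classes (assumed)

def build_cnn():
    model = models.Sequential([
        layers.Input(shape=(SEGMENT_LEN, 1)),
        layers.Conv1D(32, kernel_size=7, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(2),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Placeholder data shaped like digitized beats, just to show the call pattern:
x = np.random.randn(128, SEGMENT_LEN, 1).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=128)
model = build_cnn()
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
```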


Subject(s)
Arrhythmias, Cardiac , Deep Learning , Electrocardiography , Neural Networks, Computer , Humans , Electrocardiography/methods , Arrhythmias, Cardiac/diagnosis , Arrhythmias, Cardiac/physiopathology , Arrhythmias, Cardiac/classification , Signal Processing, Computer-Assisted , Algorithms , Heart Rate/physiology
2.
Sensors (Basel) ; 21(23)2021 Dec 02.
Article in English | MEDLINE | ID: mdl-34884055

ABSTRACT

With a constant increase in the number of deployed satellites, it is expected that the current fixed spectrum allocation in satellite communications (SATCOM) will migrate towards more dynamic and flexible spectrum sharing rules. This migration is accelerated by the introduction of new terrestrial services in bands used by satellite services. Therefore, it is important to design dynamic spectrum sharing (DSS) solutions that can maximize spectrum utilization and support coexistence between a high number of satellite and terrestrial networks operating in the same spectrum bands. Several DSS solutions for SATCOM exist; however, they are mainly centralized and might lead to scalability issues with increasing satellite density. This paper describes two distributed DSS techniques for efficient spectrum sharing across multiple satellite systems (geostationary and non-geostationary satellites with earth stations in motion) and terrestrial networks, with a focus on increasing spectrum utilization and minimizing the impact of interference between satellite and terrestrial segments. Two relevant SATCOM use cases have been selected for dynamic spectrum sharing: the opportunistic sharing of satellite and terrestrial systems in (i) downlink Ka-band and (ii) uplink Ka-band. For the two selected use cases, the performance of the proposed DSS techniques has been analyzed and compared to static spectrum allocation, and notable performance gains have been obtained.

3.
Sensors (Basel) ; 21(21)2021 Oct 21.
Article in English | MEDLINE | ID: mdl-34770284

ABSTRACT

Nowadays, broadband applications that use the licensed spectrum of cellular networks are growing fast. For this reason, Long-Term Evolution-Unlicensed (LTE-U) technology is expected to offload its traffic to the unlicensed spectrum. However, LTE-U transmissions have to coexist with existing WiFi networks. Most existing coexistence schemes consider coordinated LTE-U and WiFi networks, where a central coordinator communicates the traffic demand of the co-located networks. However, such a method of WiFi traffic estimation increases the complexity, traffic overhead, and reaction time of the coexistence schemes. In this article, we propose Experience Replay (ER) and Reward-selective Experience Replay (RER) based Q-learning techniques as a solution for the coexistence of uncoordinated LTE-U and WiFi networks. In the proposed schemes, the LTE-U network deploys a WiFi saturation sensing model to estimate the traffic demand of co-located WiFi networks. We also compare the performance of the proposed schemes with other rule-based and Q-learning based coexistence schemes implemented in uncoordinated LTE-U and WiFi networks. The simulation results show that the RER Q-learning scheme converges faster than the ER Q-learning scheme. The RER Q-learning scheme also gives 19.1% and 5.2% improvements in aggregated throughput and 16.4% and 10.9% improvements in fairness compared to the rule-based and Q-learning coexistence schemes, respectively.
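
The idea of reward-selective replay can be sketched with a toy tabular Q-learning loop; the replay rule (only replaying transitions whose reward exceeds a threshold) and the stand-in environment are assumptions for illustration, not the paper's algorithm.

```python
# Toy tabular Q-learning with experience replay (ER) and a reward-selective
# variant (RER) that replays only transitions above a reward threshold.
# The environment and the selection rule are assumptions for illustration.
import random
from collections import deque

ALPHA, GAMMA, N_ACTIONS = 0.1, 0.9, 4

def q_update(Q, s, a, r, s_next):
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in range(N_ACTIONS))
    Q[(s, a)] = Q.get((s, a), 0.0) + ALPHA * (r + GAMMA * best_next - Q.get((s, a), 0.0))

def replay(Q, buffer, batch=16, reward_threshold=None):
    pool = [t for t in buffer if reward_threshold is None or t[2] >= reward_threshold]
    for s, a, r, s_next in random.sample(pool, min(batch, len(pool))):
        q_update(Q, s, a, r, s_next)

Q_er, Q_rer, buffer = {}, {}, deque(maxlen=1000)
for step in range(200):
    s, a = random.randint(0, 9), random.randint(0, N_ACTIONS - 1)
    r, s_next = random.random(), random.randint(0, 9)     # stand-in environment
    buffer.append((s, a, r, s_next))
    q_update(Q_er, s, a, r, s_next)
    q_update(Q_rer, s, a, r, s_next)
    replay(Q_er, buffer)                                   # plain ER
    replay(Q_rer, buffer, reward_threshold=0.5)            # reward-selective ER
```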


Subject(s)
Learning , Wireless Technology , Computer Simulation , Problem-Based Learning , Technology
4.
Sensors (Basel) ; 20(4)2020 Feb 17.
Article in English | MEDLINE | ID: mdl-32079365

ABSTRACT

The next generation of wireless and mobile networks will have to handle a significant increase in traffic load compared to the current ones. This situation calls for novel ways to increase spectral efficiency. Therefore, in this paper, we propose a wireless spectrum hypervisor architecture that abstracts a radio frequency (RF) front-end into a configurable number of virtual RF front-ends. The proposed architecture enables flexible spectrum access in existing wireless and mobile networks, which is a challenging task due to limited spectrum programmability, i.e., the capability a system has to change the spectral properties of a given signal to fit an arbitrary frequency allocation. The proposed architecture is a non-intrusive and highly optimized wireless hypervisor that multiplexes the signals of several different and concurrent multi-carrier-based radio access technologies with numerologies that are integer multiples of one another, which we also refer to as radio access technologies with correlated numerology. For example, the proposed architecture can multiplex the signals of several Wi-Fi access points, several LTE base stations, several WiMAX base stations, etc. As it is able to multiplex the signals of radio access technologies with correlated numerology, it can, for instance, multiplex the signals of LTE, 5G-NR and NB-IoT base stations. By abstracting an RF front-end into a configurable number of virtual RF front-ends, it makes it possible for such different technologies to share the same RF front-end, consequently reducing costs and increasing spectral efficiency, either through densification, since several networks share the same infrastructure, or by dynamically accessing free chunks of spectrum. Therefore, the main goal of the proposed approach is to improve spectral efficiency by efficiently using vacant gaps in congested spectrum bands or adopting network densification through infrastructure sharing. We demonstrate mathematically how the proposed approach works and present several simulation results proving its functionality and efficiency. Additionally, we designed and implemented a free and open-source proof-of-concept prototype of the proposed architecture, which can be used by researchers and developers to run experiments or extend the concept to other applications. We present several experimental results used to validate the prototype and demonstrate that it can easily handle up to 12 concurrent physical layers.
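
A toy frequency-domain view of such multiplexing: each virtual front-end writes its subcarriers into a disjoint chunk of one wide IFFT grid that the shared physical front-end transmits; the grid size and subcarrier allocations are assumed for illustration, not taken from the proposed design.

```python
# Toy frequency-domain multiplexer: virtual front-ends occupy disjoint
# chunks of one wide IFFT grid transmitted by a single physical front-end.
# Grid size and subcarrier allocations are assumed for illustration.
import numpy as np

FFT_SIZE = 2048   # wideband grid of the shared physical front-end (assumed)

def multiplex(virtual_signals):
    """virtual_signals: list of (start_bin, frequency-domain symbols)."""
    grid = np.zeros(FFT_SIZE, dtype=complex)
    for start, symbols in virtual_signals:
        grid[start:start + len(symbols)] = symbols   # disjoint spectrum chunks
    return np.fft.ifft(grid)                         # one time-domain waveform

rng = np.random.default_rng(0)
radio_a = (100, np.exp(2j * np.pi * rng.random(256)))   # e.g. an LTE-like carrier
radio_b = (600, np.exp(2j * np.pi * rng.random(64)))    # e.g. an NB-IoT-like carrier
waveform = multiplex([radio_a, radio_b])
print(waveform.shape)   # (2048,) samples sent through the shared RF front-end
```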

5.
Sensors (Basel) ; 18(11)2018 Nov 16.
Article in English | MEDLINE | ID: mdl-30453524

ABSTRACT

LoRaWAN is one of the low power wide area network (LPWAN) technologies that have received significant attention from the research community in recent years. It offers low-power, low-data-rate communication over a wide coverage area. In the past years, the number of publications regarding LoRa and LoRaWAN has grown tremendously. This paper provides an overview of research work published from 2015 to September 2018 that is accessible via the Google Scholar and IEEE Xplore databases. First, a detailed description of the technology is given, including existing security and reliability mechanisms. The literature overview is then structured by categorizing papers according to the following topics: (i) physical layer aspects; (ii) network layer aspects; (iii) possible improvements; and (iv) extensions to the standard. Finally, a strengths, weaknesses, opportunities and threats (SWOT) analysis is presented along with the challenges that LoRa and LoRaWAN still face.

6.
Sensors (Basel) ; 18(7)2018 Jul 03.
Article in English | MEDLINE | ID: mdl-29970856

ABSTRACT

As the ideas and technologies behind the Internet of Things (IoT) take root, a vast array of new possibilities and applications is emerging with the significantly increased number of devices connected to the Internet. Moreover, we are also witnessing the fast emergence of location-based services with an abundant number of localization technologies and solutions with varying capabilities and limitations. We believe that, at this moment in time, the successful integration of these two diverse technologies is mutually beneficial and even essential for both fields. IoT is one of the major fields that can benefit from localization services, and so, the integration of localization systems in the IoT ecosystem would enable numerous new IoT applications. Further, the use of standardized IoT architectures, interaction and information models will permit multiple localization systems to communicate and interoperate with each other in order to obtain better context information and resolve positioning errors or conflicts. Therefore, in this work, we investigate the semantic interoperation and integration of positioning systems in order to obtain the full potential of the localization ecosystem in the context of IoT. Additionally, we also validate the proposed design by means of an industrial case study, which targets fully-automated warehouses utilizing location-aware and interconnected IoT products and systems.

7.
Sensors (Basel) ; 18(2)2018 Jan 23.
Article in English | MEDLINE | ID: mdl-29360798

ABSTRACT

So far, existing sub-GHz wireless communication technologies have focused on low-bandwidth, long-range communication with large numbers of constrained devices. Although these characteristics are fine for many Internet of Things (IoT) applications, more demanding application requirements could not be met and legacy Internet technologies such as the Transmission Control Protocol/Internet Protocol (TCP/IP) could not be used. This has changed with the advent of the new IEEE 802.11ah Wi-Fi standard, which is much more suitable for reliable bidirectional communication and high-throughput applications over a wide area (up to 1 km). The standard offers great possibilities for network performance optimization through a number of configurable physical- and link-layer features. However, given that the optimal configuration parameters depend on traffic patterns, the standard does not dictate how to determine them, and such a large number of configuration options can lead to sub-optimal or even incorrect configurations. Therefore, we investigated how two key mechanisms, Restricted Access Window (RAW) grouping and Traffic Indication Map (TIM) segmentation, influence scalability, throughput, latency and energy efficiency in the presence of bidirectional TCP/IP traffic. We considered both high-throughput video streaming traffic and large-scale reliable sensing traffic and investigated TCP behavior in both scenarios when the link layer introduces long delays. This article presents the relations between attainable throughput per station and attainable number of stations, as well as the influence of RAW, TIM and TCP parameters on both. We found that up to 20 continuously streaming IP cameras can be reliably connected via IEEE 802.11ah with a maximum average data rate of 160 kbps, whereas 10 IP cameras can achieve average data rates of up to 255 kbps over 200 m. Up to 6960 stations transmitting every 60 s can be connected over 1 km with no lost packets. The presented results enable the fine-tuning of RAW and TIM parameters for throughput-demanding reliable applications (i.e., video streaming, firmware updates) on the one hand, and very dense low-throughput reliable networks with bidirectional traffic on the other hand.
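
As a rough, hedged scale check (not part of the study), the reported figures translate into the following aggregate numbers:

```python
# Aggregate numbers implied by the abstract's figures (no packet sizes or
# protocol overheads modeled; purely a sanity check on scale).
streaming_cams, rate_kbps = 20, 160
sensing_stations, interval_s = 6960, 60

print(streaming_cams * rate_kbps, "kbps aggregate camera traffic")        # 3200 kbps
print(round(sensing_stations / interval_s, 1), "uplink transmissions/s")  # ~116.0
```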

8.
Sensors (Basel) ; 17(9)2017 Aug 31.
Article in English | MEDLINE | ID: mdl-28858243

ABSTRACT

On the road towards 5G, a proliferation of Heterogeneous Networks (HetNets) is expected. Sensor networks are of great importance in this new wireless era, as they allow interaction with the environment. Additionally, the establishment of the Internet of Things (IoT) has dramatically increased the number of interconnected devices and consequently the already massive amount of wirelessly transmitted traffic. This exponential growth of wireless traffic is pushing the wireless community to investigate solutions that maximally exploit the available spectrum. Recently, the 3rd Generation Partnership Project (3GPP) announced standards that permit the operation of Long Term Evolution (LTE) in the unlicensed spectrum in addition to the exclusive use of the licensed spectrum owned by a mobile operator. Alternatively, leading wireless technology developers are examining standalone LTE operation in the unlicensed spectrum without any involvement of a mobile operator. In this article, we present a classification of different techniques that can be applied to co-located LTE and Wi-Fi networks; to date, Wi-Fi is the most widely used wireless technology in the unlicensed spectrum. A review of the current state of the art further reveals the lack of cooperation schemes among co-located networks that could lead to more efficient usage of the available spectrum. This article fills this gap in the literature by conceptually describing different classes of cooperation between LTE and Wi-Fi. For each class, we provide a detailed presentation of possible cooperation techniques that can provide spectral efficiency in a fair manner.

9.
Sensors (Basel) ; 17(9)2017 Sep 12.
Article in English | MEDLINE | ID: mdl-28895879

ABSTRACT

Driven by the fast growth of wireless communication, the trend of sharing spectrum among heterogeneous technologies is becoming increasingly dominant. Identifying concurrent technologies is an important step towards efficient spectrum sharing. However, due to the complexity of recognition algorithms and strict sampling-speed requirements, communication systems capable of recognizing signals other than their own type are extremely rare. This work proves that the multi-modal distribution of the received signal strength indicator (RSSI) is related to the signals' modulation schemes and medium access mechanisms, and that RSSI from different technologies may exhibit highly distinctive features. A distinction is made between technologies with a streaming or a non-streaming property, and appropriate feature spaces can be established either by deriving parameters such as packet duration from RSSI or by directly using RSSI's probability distribution. An experimental study shows that even RSSI acquired at a sub-Nyquist sampling rate is able to provide sufficient features to differentiate technologies such as Wi-Fi, Long Term Evolution (LTE), Digital Video Broadcasting-Terrestrial (DVB-T) and Bluetooth. The usage of the RSSI distribution-based feature space is illustrated via a sample algorithm. Experimental evaluation indicates that more than 92% accuracy is achieved with the appropriate configuration. As the analysis of RSSI distributions is straightforward and less demanding in terms of system requirements, we believe it is highly valuable for the recognition of wideband technologies on constrained devices in the context of dynamic spectrum access.
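
A hedged sketch of the general idea: turn a window of sub-Nyquist RSSI samples into a histogram feature and feed it to a generic classifier. The bin edges, window length and classifier choice are assumptions, and the synthetic traces merely stand in for measurements.

```python
# Sketch: RSSI histogram as a feature vector for technology recognition.
# Bin edges, window length, the classifier and the synthetic traces are
# assumptions standing in for real sub-Nyquist RSSI measurements.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

BINS = np.linspace(-100, -20, 33)   # dBm bin edges (assumed)

def rssi_features(rssi_window):
    hist, _ = np.histogram(rssi_window, bins=BINS, density=True)
    return hist

rng = np.random.default_rng(0)
bursty_like = [rng.normal(-55, 6.0, 500) for _ in range(50)]   # Wi-Fi-like, wide spread
stream_like = [rng.normal(-60, 1.5, 500) for _ in range(50)]   # DVB-T-like, narrow spread

X = np.array([rssi_features(w) for w in bursty_like + stream_like])
y = np.array([0] * 50 + [1] * 50)
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.score(X, y))   # sanity check on the training data
```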

10.
Sensors (Basel) ; 17(7)2017 Jul 11.
Article in English | MEDLINE | ID: mdl-28696393

ABSTRACT

As the IoT continues to grow over the coming years, resource-constrained devices and networks will see an increase in traffic as everything becomes connected in an open Web of Things (WoT). Performance- and function-enhancing features are difficult to provide in resource-constrained environments, but will gain importance if the WoT is to be scaled up successfully. For example, scalable, open, standards-based authentication and authorization will be important to manage access to the limited resources of constrained devices and networks. Additionally, features such as caching and virtualization may help further reduce the load on these constrained systems. This work presents the Secure Service Proxy (SSP): a constrained-network edge proxy with the goal of improving the performance and functionality of constrained RESTful environments. Our evaluations show that the proposed design reaches this goal by reducing the load on constrained devices while implementing a wide range of features as different adapters. Specifically, the results show that the SSP leads to significant savings in processing, network traffic, network delay and packet loss rates for constrained devices. As a result, the SSP helps to guarantee the proper operation of constrained networks as these networks form an ever-expanding Web of Things.

11.
Sensors (Basel) ; 17(6)2017 May 23.
Article in English | MEDLINE | ID: mdl-28545239

ABSTRACT

LoRa is a long-range, low-power, low-bit-rate, single-hop wireless communication technology. It is intended to be used in Internet of Things (IoT) applications involving battery-powered devices with low throughput requirements. A LoRaWAN network consists of multiple end nodes that communicate with one or more gateways. These gateways act like a transparent bridge towards a common network server. The number of end devices and their throughput requirements will have an impact on the performance of the LoRaWAN network. This study investigates the scalability of single-gateway LoRaWAN deployments in terms of the number of end devices per gateway. First, we determine the intra-technology interference behavior with two physical end nodes, by measuring the impact of an interfering node on a transmitting node. Measurements show that even under concurrent transmission, one of the packets can be received under certain conditions. Based on these measurements, we create a simulation model for assessing the scalability of a single-gateway LoRaWAN network. We show that when the number of nodes increases up to 1000 per gateway, losses will be up to 32%; in such a case, pure Aloha would incur around 90% losses. However, when the duty cycle of the application layer becomes lower than the allowed radio duty cycle of 1%, losses will be even lower. We also show network scalability simulation results for some IoT use cases based on real data.
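
A minimal pure-Aloha Monte-Carlo sketch of the kind of baseline mentioned above; the time-on-air, traffic period and simulation length are assumptions and are not calibrated to the paper's model, so the resulting numbers will differ.

```python
# Pure-Aloha collision baseline (Monte-Carlo, equal airtimes). Time-on-air,
# reporting period and simulation length are illustrative assumptions.
import random

def aloha_loss(n_nodes, airtime_s=1.0, period_s=600.0, sim_s=36000.0, seed=1):
    random.seed(seed)
    starts = sorted((random.uniform(0, sim_s), node)
                    for node in range(n_nodes)
                    for _ in range(int(sim_s / period_s)))
    lost = set()
    for first, second in zip(starts, starts[1:]):
        if second[0] - first[0] < airtime_s:   # overlapping transmissions collide
            lost.update({first, second})
    return len(lost) / len(starts)

print(round(aloha_loss(1000), 3))   # loss ratio for 1000 nodes on one channel
```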

12.
Radiat Prot Dosimetry ; 175(3): 394-405, 2017 Jul 01.
Article in English | MEDLINE | ID: mdl-28096315

ABSTRACT

This paper presents the first real-life optimization of the Exposure Index (EI). A genetic optimization algorithm is developed and applied to three real-life Wireless Local Area Network scenarios in an experimental testbed. The optimization accounts for downlink, uplink and the uplink of other users and for realistic duty cycles, and ensures a sufficient Quality of Service for all users. EI reductions of up to 97.5% compared to a reference configuration can be achieved in a downlink-only scenario, in combination with an improved Quality of Service. Due to the dominance of uplink exposure and the lack of WiFi power control, no optimizations are possible in scenarios that also consider uplink traffic. However, future deployments that do implement WiFi power control can be successfully optimized, with EI reductions of up to 86% compared to a reference configuration and an EI that is 278 times lower than that of optimized configurations without power control.
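
A generic genetic-algorithm skeleton of the kind that could drive such an optimization; the exposure-index objective, QoS constraint, gene encoding and GA parameters below are placeholders, not the paper's EI model.

```python
# Generic GA skeleton for minimizing an exposure-like objective over per-AP
# transmit powers. Objective, constraint and parameters are placeholders.
import random

N_APS, POP, GENS = 4, 30, 50
P_MIN, P_MAX = 0.0, 20.0   # assumed per-AP transmit power range (dBm)

def exposure_index(powers):       # placeholder objective: lower is better
    return sum(10 ** (p / 10) for p in powers)

def constraint_ok(powers):        # stand-in QoS constraint
    return min(powers) >= 1.0

def fitness(ind):
    return exposure_index(ind) if constraint_ok(ind) else float("inf")

def evolve():
    pop = [[random.uniform(P_MIN, P_MAX) for _ in range(N_APS)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness)
        parents, children = pop[:POP // 2], []
        while len(parents) + len(children) < POP:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_APS)
            child = a[:cut] + b[cut:]                 # one-point crossover
            if random.random() < 0.2:                 # mutation
                child[random.randrange(N_APS)] = random.uniform(P_MIN, P_MAX)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

print(evolve())
```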


Subject(s)
Algorithms , Radiation Exposure , Humans
13.
Sensors (Basel) ; 16(8)2016 Aug 02.
Article in English | MEDLINE | ID: mdl-27490554

ABSTRACT

Sensors and actuators are becoming important components of Internet of Things (IoT) applications. Today, several approaches exist to facilitate communication between sensors and actuators in IoT applications. Most communication goes through gateways, which are often proprietary, and requires the gateway to be available for each and every interaction between sensors and actuators. Sometimes the gateway does some processing of the sensor data before triggering actuators; other approaches put this processing logic further in the cloud. These approaches introduce significant latencies and an increased number of packets. In this paper, we introduce a CoAP-based mechanism for the direct binding of sensors and actuators. This flexible binding solution is further used to build IoT applications through RESTlets. RESTlets are defined to accept inputs and produce outputs after performing some processing tasks. Sensors and actuators can be associated with RESTlets (which can be hosted on any device) through the flexible binding mechanism we introduce. This approach facilitates decentralized IoT application development by placing all or part of the processing logic in Low-power and Lossy Networks (LLNs). We ran several tests to compare the performance of our solution with existing solutions and found that our solution reduces communication delay and the number of packets in the LLN.

14.
Sensors (Basel) ; 16(7)2016 Jul 21.
Article in English | MEDLINE | ID: mdl-27455262

ABSTRACT

The Internet of Things (IoT) is expanding rapidly to new domains in which embedded devices play a key role and gradually outnumber traditionally-connected devices. These devices are often constrained in their resources and are thus unable to run standard Internet protocols. The Constrained Application Protocol (CoAP) is a new alternative standard protocol that implements the same principles as the Hypertext Transfer Protocol (HTTP), but is tailored towards constrained devices. In many IoT application domains, devices need to be addressed in groups in addition to being addressable individually. Two main approaches are currently proposed in the IoT community for CoAP-based group communication, the main difference between them being the underlying communication type: multicast versus unicast. In this article, we experimentally evaluate these two approaches using two wireless sensor testbeds and under different test conditions. We highlight the pros and cons of each and propose combining the two approaches in a hybrid solution to better suit certain use-case requirements. Additionally, we provide a solution for multicast-based group membership management using CoAP.

15.
Sensors (Basel) ; 16(6)2016 Jun 01.
Article in English | MEDLINE | ID: mdl-27258286

ABSTRACT

Data science or "data-driven research" is a research approach that uses real-life data to gain insight into the behavior of systems. It enables the analysis of small and simple as well as large and more complex systems in order to assess whether they function according to the intended design and as seen in simulation. Data science approaches have been successfully applied to analyze networked interactions in several research areas, such as large-scale social networks and advanced business and healthcare processes. Wireless networks can exhibit unpredictable interactions between algorithms from multiple protocol layers, interactions between multiple devices, and hardware-specific influences. These interactions can lead to a difference between real-world behavior and design-time behavior. Data science methods can help to detect the actual behavior and possibly help to correct it. Data science is increasingly used in wireless research. To support data-driven research in wireless networks, this paper illustrates the step-by-step methodology that has to be applied to extract knowledge from raw data traces. To this end, the paper (i) clarifies when, why and how to use data science in wireless network research; (ii) provides a generic framework for applying data science in wireless networks; (iii) gives an overview of existing research papers that utilized data science approaches in wireless networks; (iv) illustrates the overall knowledge discovery process through an extensive example in which device types are identified based on their traffic patterns; and (v) provides the reader with the necessary datasets and scripts to go through the tutorial steps themselves.
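
A hedged sketch of the knowledge-discovery step in the tutorial's example: aggregate a raw packet trace into per-device features and fit a classifier. The column names and features assume a generic trace format rather than the paper's datasets.

```python
# Sketch: aggregate a raw per-packet trace into per-device features, then
# fit a classifier. The inline DataFrame stands in for a real capture file;
# column names and features are assumptions about a generic trace format.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

trace = pd.DataFrame({            # hypothetical capture: one row per packet
    "device_id": [1, 1, 1, 2, 2, 2],
    "timestamp": [0.0, 0.5, 1.1, 0.0, 5.0, 10.0],
    "pkt_len":   [120, 130, 118, 60, 62, 61],
    "label":     ["camera"] * 3 + ["sensor"] * 3,
})

features = trace.groupby("device_id").agg(
    mean_len=("pkt_len", "mean"),
    std_len=("pkt_len", "std"),
    mean_gap=("timestamp", lambda t: t.diff().mean()),
    label=("label", "first"),
)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(features[["mean_len", "std_len", "mean_gap"]], features["label"])
```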

16.
Int J Health Geogr ; 15: 7, 2016 Feb 03.
Article in English | MEDLINE | ID: mdl-26842830

ABSTRACT

BACKGROUND: The combination of an aging population and nursing staff shortages implies the need for more advanced systems in the healthcare industry. Many key enablers for the optimization of healthcare systems require the provisioning of location awareness for patients (e.g. with dementia), nurses, doctors, assets, etc. Therefore, Indoor Positioning Systems (IPSs) will be indispensable in healthcare systems. However, although many IPSs have been proposed in the literature, most of them have been evaluated in non-representative environments such as office buildings rather than in a hospital. METHODS: To remedy this, this paper evaluates the performance of existing IPSs in an operational modern healthcare environment: the "Sint-Jozefs kliniek Izegem" hospital in Belgium. The evaluation (data collection and data processing) is executed using a standardized methodology and assesses the point accuracy, room accuracy and latency of multiple IPSs. To evaluate the solutions, the position of a stationary device was requested at 73 evaluation locations. By using the same evaluation locations for all IPSs, the performance of all systems could be compared objectively. RESULTS: Several trends can be identified, such as the fact that Wi-Fi based fingerprinting solutions have the best accuracy results (point accuracy of 1.21 m and room accuracy of 98%); however, they require calibration before use and need 5.43 s to estimate a location. On the other hand, proximity-based solutions (based on sensor nodes) are significantly cheaper to install, do not require calibration and still obtain acceptable room accuracy results. CONCLUSION: In conclusion, Wi-Fi based solutions have the most potential for an indoor positioning service when accuracy is the most important metric. Applying the fingerprinting approach with an anchor installed every two rooms is the preferred solution for a hospital environment.
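
A minimal fingerprinting sketch illustrating the calibration-then-lookup principle behind the best-performing solutions; the reference points and RSS values are invented for illustration and are not the hospital measurement data.

```python
# Minimal Wi-Fi fingerprinting lookup: calibration stores an RSS vector per
# reference point; localization returns the nearest fingerprint in RSS space.
# All coordinates and RSS values are invented for illustration.
import math

fingerprints = {            # (x, y) in metres -> RSS per access point in dBm
    (1.0, 2.0): [-45, -60, -71],
    (4.0, 2.0): [-52, -55, -68],
    (4.0, 6.0): [-60, -49, -62],
}

def locate(rss_sample):
    return min(fingerprints,
               key=lambda pos: math.dist(fingerprints[pos], rss_sample))

print(locate([-50, -57, -69]))   # -> (4.0, 2.0), the closest reference point
```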


Subject(s)
Delivery of Health Care/standards , Environment , Geographic Information Systems/standards , Hospitals/standards , Signal Processing, Computer-Assisted , Wireless Technology/standards , Aged, 80 and over , Algorithms , Belgium , Computer Systems/standards , Delivery of Health Care/methods , Humans
17.
Sensors (Basel) ; 14(6): 9833-77, 2014 Jun 04.
Article in English | MEDLINE | ID: mdl-24901978

ABSTRACT

Smart embedded objects will become an important part of what is called the Internet of Things. Applications often require concurrent interactions with several of these objects and their resources. Existing solutions have several limitations in terms of reliability, flexibility and manageability of such groups of objects. To overcome these limitations, we propose an intermediate level of intelligence to easily manipulate a group of resources across multiple smart objects, building upon the Constrained Application Protocol (CoAP). We describe the design of our solution to create and manipulate a group of CoAP resources using a single client request. Furthermore, we introduce the concept of profiles for the created groups. The use of profiles allows the client to specify in more detail how the group should behave. We have implemented our solution and demonstrate that it covers the complete group life-cycle, i.e., creation, validation, flexible usage and deletion. Finally, we quantitatively analyze the performance of our solution and compare it against multicast-based CoAP group communication. The results show that our solution improves reliability and flexibility, with a trade-off of increased communication overhead.

18.
Prog Biophys Mol Biol ; 111(1): 30-6, 2013 Jan.
Article in English | MEDLINE | ID: mdl-23085070

ABSTRACT

Wireless Local Area Networks (WLANs) are commonly deployed in various environments. WLAN data packets are not transmitted continuously, yet worst-case WLAN exposure is often assessed assuming 100% activity, leading to huge overestimations. Actual WLAN duty cycles are thus important for time-averaging of exposure when checking compliance with international guidelines on limiting adverse health effects. In this paper, duty cycles of WLANs using Wi-Fi technology are determined for large-scale exposure assessment at 179 locations in different environments and for different activities (file transfer, video streaming, audio, surfing the internet, etc.). The median duty cycle equals 1.4% and the 95th percentile is 10.4% (standard deviation SD = 6.4%). The largest duty cycles are observed in urban and industrial environments. For actual applications, the theoretical upper limit for the WLAN duty cycle is 69.8% and 94.7% for the maximum and minimum physical data rate, respectively; lower data rates lead to higher duty cycles. Although counterintuitive at first sight, poor WLAN connections therefore result in higher possible exposures. File transfer at the maximum data rate results in a median duty cycle of 47.6% (SD = 16%), while at the minimum data rate it results in a median value of 91.5% (SD = 18%). Surfing and audio streaming use the wireless medium less intensively and therefore have median duty cycles lower than 3.2% (SD = 0.5-7.5%). In a specific example, overestimations of up to a factor of 8 for electric fields occur when 100% activity is assumed instead of realistic duty cycles.
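
A simplified airtime calculation showing why lower physical data rates yield higher duty cycles; the frame size and overhead constants are generic assumptions and will not reproduce the paper's exact 69.8% and 94.7% limits.

```python
# Why lower PHY rates mean higher duty cycles: simplified airtime share of a
# saturated transfer. Overhead constants are generic 802.11-style assumptions
# and will not reproduce the paper's exact 69.8% / 94.7% limits.
FRAME_BITS = 1500 * 8
OVERHEAD_US = 28 + 34 + 128 + 44   # assumed DIFS + SIFS + mean backoff + ACK

def duty_cycle(phy_rate_mbps):
    data_us = FRAME_BITS / phy_rate_mbps   # payload airtime in microseconds
    return data_us / (data_us + OVERHEAD_US)

for rate_mbps in (54, 6, 1):
    print(rate_mbps, "Mbps ->", round(100 * duty_cycle(rate_mbps), 1), "% duty cycle")
# poorer links (lower rates) occupy the medium longer => higher exposure
```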


Subject(s)
Electromagnetic Fields , Environmental Exposure/analysis , Environmental Exposure/statistics & numerical data , Models, Theoretical , Radiation Monitoring/methods , Radio Waves , Wireless Technology/statistics & numerical data , Body Burden , Computer Simulation , Humans
19.
J Med Syst ; 36(3): 1065-94, 2012 Jun.
Article in English | MEDLINE | ID: mdl-20721685

ABSTRACT

Recent advances in microelectronics and integrated circuits, system-on-chip design, wireless communication and intelligent low-power sensors have allowed the realization of Wireless Body Area Networks (WBANs). A WBAN is a collection of low-power, miniaturized, invasive/non-invasive, lightweight wireless sensor nodes that monitor the human body's functions and the surrounding environment. It supports a number of innovative and interesting applications such as ubiquitous healthcare, entertainment, interactive gaming, and military applications. In this paper, the fundamental mechanisms of WBANs, including architecture and topology, wireless implant communication, low-power Medium Access Control (MAC) and routing protocols, are reviewed. A comprehensive study of the proposed technologies for WBANs at the Physical (PHY), MAC, and Network layers is presented and many useful solutions are discussed for each layer. Finally, numerous WBAN applications are highlighted.


Subject(s)
Computer Communication Networks/organization & administration , Remote Sensing Technology , Wireless Technology , Equipment Design , Human Body , Humans , Monitoring, Physiologic , Prostheses and Implants
20.
IEEE Trans Inf Technol Biomed ; 13(6): 933-45, 2009 Nov.
Article in English | MEDLINE | ID: mdl-19789118

ABSTRACT

Wireless body area networks (WBANs) offer many promising new applications in the area of remote health monitoring. An important element in the development of a WBAN is the characterization of the physical layer of the network, including an estimation of the delay spread and the path loss between two nodes on the body. This paper discusses the propagation channel between two half-wavelength dipoles at 2.45 GHz, placed near a human body and presents an application for cross-layer design in order to optimize the energy consumption of different topologies. Propagation measurements are performed on real humans in a multipath environment, considering different parts of the body separately. In addition, path loss has been numerically investigated with an anatomically correct model of the human body in free space using a 3-D electromagnetic solver. Path loss parameters and time-domain channel characteristics are extracted from the measurement and simulation data. A semi-empirical path loss model is presented for an antenna height above the body of 5 mm and antenna separations from 5 cm up to 40 cm. A time-domain analysis is performed and models are presented for the mean excess delay and the delay spread. As a cross-layer application, the proposed path loss models are used to evaluate the energy efficiency of single-hop and multihop network topologies.
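
A hedged sketch of a log-distance path loss model of this semi-empirical form, together with a toy single-hop versus two-hop energy comparison; the reference loss, path loss exponent and reference distance are placeholders, not the fitted parameters reported in the paper.

```python
# Log-distance path loss of the semi-empirical form PL(d) = P0 + 10*n*log10(d/d0),
# used here for a toy single-hop vs two-hop energy comparison. P0, n and d0
# are placeholder values, not the parameters fitted in the paper.
import math

P0_DB, N_EXP, D0_M = 40.0, 3.3, 0.10   # assumed reference loss, exponent, ref. distance

def path_loss_db(d_m):
    return P0_DB + 10 * N_EXP * math.log10(d_m / D0_M)

def required_tx_power(d_m):
    # toy model: required transmit power scales with the linear path loss
    return 10 ** (path_loss_db(d_m) / 10)

single_hop = required_tx_power(0.40)                          # one 40 cm on-body link
two_hop = required_tx_power(0.20) + required_tx_power(0.20)   # relayed via a 20 cm hop
print(single_hop > two_hop)   # True: multihop can be more energy efficient here
```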


Subject(s)
Models, Biological , Monitoring, Physiologic/instrumentation , Telemetry/instrumentation , Algorithms , Computer Simulation , Electrodes , Electronics, Medical , Humans , Phantoms, Imaging , Reproducibility of Results