Results 1 - 20 of 23
1.
Sci Rep ; 14(1): 9584, 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38671012

ABSTRACT

The rapid advancement of modern communication technologies necessitates generalized multi-access frameworks and the continuous implementation of rate splitting, augmented with semantic awareness. This trend, coupled with the mounting pressure on wireless services, underscores the need for intelligent approaches to radio signal propagation. In response to these challenges, intelligent reflecting surfaces (IRS) have garnered significant attention for their ability to control data transmission systems in a goal-oriented and dynamic manner, largely owing to equitable resource allocation and the dynamic enhancement of network performance. However, integrating the rate-splitting multiple access (RSMA) architecture with semantic considerations imposes stringent requirements on IRS platforms to ensure seamless connectivity and broad coverage for a diverse user base without interference. Semantic communications hinge on a knowledge base, a centralized repository of integrated information related to the transmitted data, which becomes critically important in multi-antenna scenarios. This article proposes a novel set of design strategies for RSMA-IRS systems, enabled by reconfigurable intelligent surfaces synergizing with semantic communication principles. An experimental analysis demonstrates the effectiveness of these design guidelines in the context of Beyond 5G/6G communication systems. The RSMA-IRS model, infused with semantic communication, offers a promising solution for future wireless networks. Performance evaluations of the proposed approach reveal that, even as the number of users increases, the delay of the semantics-aware RSMA-IRS framework is 2.94% lower than that of an RSMA-IRS system without semantic integration.
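As a rough, self-contained illustration of the rate-splitting principle behind RSMA (a textbook two-user model, not the paper's RSMA-IRS system), the sketch below splits the transmit power between a common stream decoded by both users and per-user private streams, then computes the achievable rates; the power budget, noise level, channel gains, and split factor are all assumed values.

```python
import numpy as np

# Two-user RSMA downlink toy model: total power P is split between a common
# stream (decoded by both users before SIC) and two private streams.
P, N0 = 1.0, 0.1                         # total power and noise power (assumed)
h = np.array([1.0, 0.6])                 # per-user channel gains (assumed)
alpha = 0.5                              # fraction of power for the common stream

Pc, Pp = alpha * P, (1 - alpha) * P / 2  # common power, per-user private power

# The common rate is limited by the weaker user; both private streams act as
# interference while the common stream is decoded.
Rc = min(np.log2(1 + Pc * g / (2 * Pp * g + N0)) for g in h)

# After the common stream is removed, each private stream still sees the
# other user's private stream as interference.
Rp = [np.log2(1 + Pp * g / (Pp * g + N0)) for g in h]
print(f"common rate: {Rc:.3f}, private rates: {[round(r, 3) for r in Rp]}")
```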

2.
Ultrasonics ; 132: 107017, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37148701

ABSTRACT

Ultrasound imaging is a valuable tool for assessing the development of the fetus during pregnancy. However, interpreting ultrasound images manually can be time-consuming and subject to variability. Automated image categorization using machine learning algorithms can streamline the interpretation process by identifying the stages of fetal development present in ultrasound images. In particular, deep learning architectures have shown promise in medical image analysis, enabling accurate automated diagnosis. The objective of this research is to identify fetal planes from ultrasound images with higher precision. To achieve this, we trained several convolutional neural network (CNN) architectures on a dataset of 12,400 images. Our study focuses on the impact of enhanced image quality, obtained through histogram equalization and fuzzy logic-based contrast enhancement, on fetal plane detection using an evidential Dempster-Shafer-based CNN architecture, PReLU-Net, SqueezeNet, and the Swin Transformer. The results of each classifier were noteworthy, with PReLU-Net achieving an accuracy of 91.03%, SqueezeNet 91.03%, the Swin Transformer 88.90%, and the evidential classifier 83.54%. We evaluated the results in terms of both training and testing accuracies. Additionally, we used LIME and Grad-CAM to examine the decision-making process of the classifiers, providing explainability for their outputs. Our findings demonstrate the potential of automated image categorization for large-scale retrospective assessments of fetal development using ultrasound imaging.
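The contrast-enhancement step mentioned above can be sketched in a few lines of Python with OpenCV; classic histogram equalization is shown alongside a sigmoid-membership transformation that stands in for the paper's fuzzy-logic method, and the file names are hypothetical.

```python
import cv2
import numpy as np

# Load a grayscale ultrasound image (hypothetical file name).
img = cv2.imread("fetal_plane.png", cv2.IMREAD_GRAYSCALE)

# Histogram equalization: spreads intensities across the full dynamic range.
equalized = cv2.equalizeHist(img)

# Fuzzy-style enhancement: map normalized intensities through a smooth
# membership function (a generic stand-in, not the paper's exact rules).
x = img.astype(np.float32) / 255.0
membership = 1.0 / (1.0 + np.exp(-12.0 * (x - 0.5)))
fuzzy_enhanced = (membership * 255.0).astype(np.uint8)

cv2.imwrite("equalized.png", equalized)
cv2.imwrite("fuzzy_enhanced.png", fuzzy_enhanced)
```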


Subject(s)
Algorithms; Neural Networks, Computer; Pregnancy; Female; Humans; Retrospective Studies; Machine Learning; Ultrasonography
3.
Heliyon ; 9(2): e13636, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36852018

ABSTRACT

Convolutional neural networks (CNNs) have demonstrated exceptional results in the analysis of time-series data when used for Human Activity Recognition (HAR). The manual design of such neural architectures is an error-prone and time-consuming process. The search for optimal CNN architectures is considered a revolution in the design of neural networks: by means of Neural Architecture Search (NAS), network architectures can be designed and optimized automatically, overcoming the limitations of human experience and thinking modes. Evolutionary algorithms, which draw on mechanisms such as natural selection and genetics, have been widely employed to develop and optimize NAS because they can handle a black-box optimization process, designing appropriate solution representations and search paradigms without explicit mathematical formulations or gradient information. The genetic algorithm (GA) in particular is widely used to find optimal or near-optimal solutions to difficult problems. Considering these characteristics, an efficient human activity recognition architecture (AUTO-HAR) is presented in this study. Using an evolutionary GA to select the optimal CNN architecture, the current study proposes a novel encoding schema and a novel search space with a much broader range of operations to effectively search for the best architectures for HAR tasks. In addition, the proposed search space allows a reasonable degree of depth because it does not limit the maximum length of the devised architecture. To test the effectiveness of the proposed framework, three datasets were utilized: UCI-HAR, Opportunity, and DAPHNET. The results show that the proposed method can efficiently recognize human activity, with average accuracies of 98.5% (±1.1), 98.3%, and 99.14% (±0.8) for UCI-HAR, Opportunity, and DAPHNET, respectively.
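To make the encoding idea concrete, here is a minimal GA sketch over variable-length architecture genomes; the operation vocabulary, population sizes, and the placeholder fitness are assumptions, since the paper's real fitness function trains each decoded CNN on a HAR dataset.

```python
import random

OPS = ["conv3", "conv5", "maxpool", "dropout", "dense"]  # assumed vocabulary

def random_architecture(max_len=10):
    # Depth is part of the genome, so the search space is not fixed-length.
    return [random.choice(OPS) for _ in range(random.randint(2, max_len))]

def mutate(arch):
    arch = arch[:]
    if random.random() < 0.5 and len(arch) > 2:
        arch.pop(random.randrange(len(arch)))              # drop a layer
    else:
        arch.insert(random.randrange(len(arch) + 1), random.choice(OPS))
    return arch

def crossover(a, b):
    # Single-point crossover over variable-length genomes.
    return a[:random.randrange(1, len(a))] + b[random.randrange(1, len(b)):]

def fitness(arch):
    # Placeholder: the real framework returns validation accuracy of the
    # decoded CNN trained on UCI-HAR / Opportunity / DAPHNET.
    return random.random()

population = [random_architecture() for _ in range(20)]
for _ in range(10):
    parents = sorted(population, key=fitness, reverse=True)[:10]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children
print("best architecture found:", max(population, key=fitness))
```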

4.
Sensors (Basel) ; 23(3)2023 Jan 26.
Article in English | MEDLINE | ID: mdl-36772430

ABSTRACT

Early, valid prediction of heart problems can minimize life threats and save lives, whereas missed or false diagnoses can be fatal. Building a machine learning model for the identification of heart problems from a single dataset alone is not practical, because each country and hospital has its own data schema, structure, and quality. On this basis, a generic framework has been built for heart problem diagnosis. It is a hybrid framework that employs multiple machine learning and deep learning techniques and votes for the best outcome based on a novel voting technique intended to remove bias from the model. The framework contains two consecutive layers: the first runs multiple machine learning models simultaneously over a given dataset; the second consolidates their outputs and classifies them with a second classification layer based on the novel voting technique. Prior to classification, the framework selects the top features using a proposed feature selection framework, which filters the columns with multiple feature selection methods and keeps the top features they have in common. Results from the proposed framework, at 95.6% accuracy, show its superiority over a single machine learning model, the classical stacking technique, and traditional voting. The main contribution of this work is to demonstrate how the prediction probabilities of multiple models can be exploited to create another layer for the final output, a step that neutralizes individual model bias. A further experimental contribution is proving that the complete pipeline can be retrained and used for other datasets collected with different measurements and distributions.
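A minimal sketch of the two-layer idea, assuming scikit-learn and a public dataset as a stand-in for the heart datasets: first-layer models emit class probabilities, which become the feature matrix for a second-layer classifier. (In practice the meta-features would be built from a held-out split or cross-validation to avoid leakage.)

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# First layer: heterogeneous models run over the same dataset.
first_layer = [RandomForestClassifier(random_state=0),
               LogisticRegression(max_iter=5000),
               SVC(probability=True, random_state=0)]
for model in first_layer:
    model.fit(X_tr, y_tr)

# Second layer: consolidate the prediction probabilities of the first layer.
meta_train = np.hstack([m.predict_proba(X_tr) for m in first_layer])
meta_test = np.hstack([m.predict_proba(X_te) for m in first_layer])

second_layer = LogisticRegression(max_iter=5000).fit(meta_train, y_tr)
print("second-layer accuracy:", second_layer.score(meta_test, y_te))
```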


Subject(s)
Machine Learning; Probability
5.
Healthcare (Basel) ; 11(3)2023 Jan 31.
Article in English | MEDLINE | ID: mdl-36766986

ABSTRACT

The coronavirus epidemic has spread to virtually every country on the globe, inflicting enormous health, financial, and emotional devastation, as well as the collapse of healthcare systems in some countries. Any automated COVID detection system that allows for fast detection of COVID-19 infection would be highly beneficial to healthcare services and people around the world. Molecular or antigen testing, along with radiological X-ray imaging, is now used in clinics to diagnose COVID-19. Nonetheless, due to spikes in coronavirus cases and hospital doctors' overwhelming workload, developing a highly accurate AI-based automatic COVID detection system has become imperative. On X-ray images, distinguishing COVID-19 from non-COVID viral pneumonia and other lung opacities can be challenging. This research utilized artificial intelligence (AI) to deliver high-accuracy automated COVID-19 detection from chest X-ray images, and extended the task to differentiating COVID-19 from normal, lung opacity, and non-COVID viral pneumonia images. We employed three distinct pre-trained models, Xception, VGG19, and ResNet50, on a benchmark dataset of 21,165 X-ray images. Initially, we formulated COVID-19 detection as a binary classification problem, separating COVID-19 from normal X-ray images, and obtained 97.5%, 97.5%, and 93.3% accuracy for Xception, VGG19, and ResNet50, respectively. We then developed a multi-class model and obtained accuracies of 75% for ResNet50, 92% for VGG19, and 93% for Xception. Although Xception's and VGG19's performances were nearly identical, Xception proved more efficient, with higher precision, recall, and F1 scores. Finally, we applied explainable AI to each of the models used, adding interpretability to the study. A comprehensive comparison of the models' explanations revealed that Xception is more precise in indicating the actual features responsible for a prediction. This addition of explainable AI will greatly benefit medical professionals, who will be able to visualize how a model makes its predictions instead of having to trust the developed machine-learning models blindly.
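A minimal transfer-learning sketch for the binary COVID/normal task, assuming TensorFlow/Keras and chest X-rays arranged in class subfolders (the directory name, input size, and training schedule are assumptions):

```python
import tensorflow as tf

# Frozen Xception backbone pre-trained on ImageNet.
base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # COVID vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Hypothetical directory with one subfolder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "xray/train", image_size=(299, 299), batch_size=32)
model.fit(train_ds, epochs=3)
```

Swapping the backbone for VGG19 or ResNet50, or widening the output to four softmax units, would give the multi-class variant described above.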

7.
Neural Comput Appl ; : 1-14, 2022 Nov 17.
Article in English | MEDLINE | ID: mdl-36415284

ABSTRACT

The COVID-19 pandemic has devastated the entire globe since its first appearance at the end of 2019. Although vaccines are now in production, the number of infections remains high, increasing the demand for specialized personnel who can analyze clinical exams and point out the final diagnosis. Computed tomography and X-ray images are the primary sources for computer-aided COVID-19 diagnosis, but we still lack good interpretability of such automated decision-making mechanisms. This manuscript presents an insightful comparison of three approaches based on explainable artificial intelligence (XAI) to shed light on interpretability in the context of COVID-19 diagnosis with deep networks: Composite Layer-wise Relevance Propagation, Single Taylor Decomposition, and Deep Taylor Decomposition. Two deep networks, VGG11 and VGG16, were used as backbones to assess the explanation skills of these XAI approaches. We hope this work can serve as a basis for further research on XAI and COVID-19 diagnosis, as each approach has its own strengths and weaknesses.
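As a rough stand-in for these decomposition methods (input-times-gradient is the first-order Taylor view of relevance, not the exact composite rules the paper compares), a PyTorch sketch over VGG11 might look like this:

```python
import torch
from torchvision import models

model = models.vgg11(weights="IMAGENET1K_V1").eval()

# Placeholder tensor standing in for a normalized CT or X-ray image.
x = torch.rand(1, 3, 224, 224, requires_grad=True)

score = model(x).max()      # logit of the top-scoring class
score.backward()

# First-order Taylor relevance: each pixel's contribution ~ input * gradient.
relevance = (x * x.grad).sum(dim=1)
print(relevance.shape)      # torch.Size([1, 224, 224]) heatmap
```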

8.
Sensors (Basel) ; 22(17)2022 Aug 29.
Article in English | MEDLINE | ID: mdl-36080971

ABSTRACT

The correlations between smartphone sensors, algorithms, and relevant techniques are the major components facilitating indoor localization and tracking in the absence of communication and localization standards. A notable research gap concerns explaining the connections between these components, so as to clarify the impacts and issues of models meant for indoor localization and tracking. In this paper, we comprehensively study the smartphone sensors, algorithms, and techniques that can support indoor localization and tracking without any additional hardware or specific infrastructure. Reviews and comparisons detail the strengths and limitations of each component, after which we propose a handheld-device-based indoor localization with zero infrastructure (HDIZI) approach that connects the abovementioned components in a balanced manner. The sensors serve as the input source and the algorithms as engines, integrated optimally to produce a robust localization and tracking model without further infrastructure. The proposed framework makes indoor and outdoor navigation more user-friendly and is cost-effective for researchers working with embedded sensors in handheld devices, enabling technologies for Industry 4.0 and beyond. As initial work, we conducted experiments using data collected from two different sites with five smartphones. The data were sampled at 10 Hz for a duration of five seconds at fixed locations; data were also collected while moving, allowing for analysis based on user stepping behavior and speed across multiple paths. We leveraged the capabilities of smartphones, through efficient implementation and the optimal integration of algorithms, to overcome their inherent limitations. The proposed HDIZI is therefore expected to outperform approaches proposed in previous studies, helping researchers deal with sensors for indoor navigation, whether for positioning or for tracking, in fields such as healthcare, transportation, environmental monitoring, and disaster situations.
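As one concrete example of the stepping-behavior analysis mentioned above, a step detector over accelerometer magnitude can be sketched as below; the 10 Hz rate matches the data collection, but the synthetic signal and peak thresholds are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 10                                    # Hz, as in the data collection
t = np.arange(0, 5, 1 / fs)                # one five-second window
# Synthetic accelerometer magnitude: gravity + ~1.8 Hz stepping + noise.
accel = 9.8 + np.sin(2 * np.pi * 1.8 * t) + 0.2 * np.random.randn(t.size)

magnitude = np.abs(accel - accel.mean())   # remove the gravity offset
# Each peak is a candidate step; 'distance' enforces a minimum step interval.
peaks, _ = find_peaks(magnitude, height=0.5, distance=3)
print("estimated steps in window:", len(peaks))
```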


Subject(s)
Algorithms; Smartphone; Computers; Transportation
9.
J Supercomput ; 78(7): 10250-10274, 2022.
Article in English | MEDLINE | ID: mdl-35079199

ABSTRACT

This paper designs and develops a computational intelligence-based framework using a convolutional neural network (CNN) and a genetic algorithm (GA) to detect COVID-19 cases. The framework utilizes multi-access edge computing technology so that end-users can access available resources as well as the CNN in the cloud. Early detection of COVID-19 can improve treatment and mitigate transmission. During peaks of infection, hospitals worldwide have suffered from heavy patient loads, bed shortages, inadequate testing kits, and short-staffing problems. Due to the time-consuming nature of the standard RT-PCR test, the lack of expert radiologists, and evaluation issues related to poor-quality images, patients with severe conditions are sometimes unable to receive timely treatment. It is thus recommended to incorporate computational intelligence methodologies, which provide highly accurate detection in a matter of minutes, alongside traditional testing as an emergency measure. CNNs have achieved extraordinary performance in numerous computational intelligence tasks. However, finding a systematic, automatic, and optimal set of hyperparameters for building an efficient CNN for complex tasks remains challenging. Moreover, as technology advances, data are collected at sparse locations, and accumulating data from such diverse locations poses a further challenge. In this article, we propose a computational intelligence-based framework that utilizes the recent 5G technology of multi-access edge computing along with a new CNN model for automatic COVID-19 detection from raw chest X-ray images, so that anyone with a 5G device (e.g., a 5G mobile phone) can use the CNN-based automatic COVID-19 detection tool. The proposed model introduces a novel CNN structure with GA-based hyperparameter tuning; such a combination of GA and CNN is new in the application of COVID-19 detection/classification. The experimental results show that the developed framework can classify COVID-19 X-ray images with 98.48% accuracy, which is higher than the performance reported in other studies.

10.
J Real Time Image Process ; 18(4): 1099-1114, 2021.
Article in English | MEDLINE | ID: mdl-33747237

ABSTRACT

Pneumonia is responsible for high infant morbidity and mortality. The disease affects the small air sacs (alveoli) in the lung and requires prompt diagnosis and appropriate treatment. Chest X-rays are one of the most common tests used to detect pneumonia. In this work, we propose a real-time Internet of Things (IoT) system to detect pneumonia in chest X-ray images. The dataset used has 6,000 chest X-ray images of children, and three medical specialists performed the validations. Twelve different Convolutional Neural Network (CNN) architectures trained on ImageNet were adapted to operate as feature extractors and were then combined with established learning methods, namely k-Nearest Neighbors (kNN), Naive Bayes, Random Forest, Multilayer Perceptron (MLP), and Support Vector Machine (SVM). The results showed that the VGG19 architecture with an SVM classifier using the RBF kernel was the best model for detecting pneumonia in these chest radiographs, reaching 96.47% Accuracy, 96.46% F1 score, and 96.46% Precision, better results on these metrics than other works in the literature. These results show that this real-time IoT approach to detecting pneumonia in children is efficient and is therefore a potential tool to aid medical diagnosis, allowing specialists to obtain faster, more accurate results and thus provide appropriate treatment.
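A compact sketch of the winning combination, assuming Keras and scikit-learn, with random arrays standing in for preprocessed chest X-rays: the frozen VGG19 backbone produces one feature vector per image, which feeds an RBF-kernel SVM.

```python
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC

# VGG19 as a fixed feature extractor (global-average-pooled to 512 dims).
extractor = tf.keras.applications.VGG19(
    weights="imagenet", include_top=False, pooling="avg")

images = np.random.rand(32, 224, 224, 3).astype("float32")  # placeholder X-rays
labels = np.random.randint(0, 2, size=32)                   # pneumonia / normal

features = extractor.predict(images, verbose=0)
clf = SVC(kernel="rbf").fit(features, labels)
print("train accuracy:", clf.score(features, labels))
```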

11.
Sensors (Basel) ; 20(23)2020 Nov 24.
Article in English | MEDLINE | ID: mdl-33255308

ABSTRACT

Several pathologies have a direct impact on society, causing public health problems. Pulmonary diseases such as chronic obstructive pulmonary disease (COPD) are already the third leading cause of death in the world, with tuberculosis ninth at 1.7 million deaths and over 10.4 million new occurrences. The detection of lung regions in images is a classic medical challenge. Studies show that computational methods contribute significantly to the medical diagnosis of lung pathologies by Computed Tomography (CT), as do Internet of Things (IoT) methods based on the Health of Things context. The present work proposes a new IoT-based model for the classification and segmentation of pulmonary CT images, applying transfer learning in deep learning methods combined with Parzen's probability density. The proposed model uses an Application Programming Interface (API) based on the Internet of Medical Things to classify lung images, and the approach was very effective, with above 98% classification accuracy on pulmonary images. The model then proceeds to the lung segmentation stage, using the Mask R-CNN network to create a pulmonary map and fine-tuning to find the pulmonary borders on the CT image. The experiment was a success: the proposed method performed better than other works in the literature, reaching high segmentation metric values such as 98.34% accuracy and a segmentation time of 5.43 s, and overcoming other transfer learning models. Our methodology stands out because it is fully automatic: it simplifies the segmentation process through transfer learning and introduces a faster, more effective, and robust method for lung segmentation.
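For the segmentation stage, a torchvision Mask R-CNN sketch shows the shape of the outputs involved; the paper fine-tunes on lung CT, whereas this only runs a pre-trained model on a placeholder image.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = torch.rand(3, 512, 512)            # stand-in for a normalized CT slice
with torch.no_grad():
    output = model([image])[0]

# Each detection carries a box, a label, a confidence score, and a soft mask.
print(output["boxes"].shape, output["masks"].shape, output["scores"][:3])
```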


Subject(s)
Deep Learning; Internet of Things; Tomography, X-Ray Computed; Image Processing, Computer-Assisted; Lung/diagnostic imaging
12.
Article in English | MEDLINE | ID: mdl-33327468

ABSTRACT

In recent years, the widespread deployment of Internet of Things (IoT) applications has contributed to the development of smart cities. A smart city utilizes IoT-enabled technologies, communications, and applications to maximize operational efficiency and enhance both the service providers' quality of service and people's wellbeing and quality of life. With the growth of smart city networks, however, comes an increased risk of cybersecurity threats and attacks. IoT devices within a smart city network are connected to sensors linked to large cloud servers and are exposed to malicious attacks and threats. It is therefore important to devise approaches that prevent such attacks and protect IoT devices from failure. In this paper, we explore an attack and anomaly detection technique for defending against and mitigating IoT cybersecurity threats in a smart city, based on six machine learning algorithms: logistic regression (LR), support vector machine (SVM), decision tree (DT), random forest (RF), artificial neural network (ANN), and k-nearest neighbors (KNN). Contrary to existing works that have focused on single classifiers, we also explore ensemble methods such as bagging, boosting, and stacking to enhance the performance of the detection system. Additionally, we consider an integration of feature selection, cross-validation, and multi-class classification, which has not been well covered in the existing literature for this domain. Experimental results on a recent attack dataset demonstrate that the proposed technique can effectively identify cyberattacks, and the stacking ensemble model outperforms comparable models in terms of accuracy, precision, recall, and F1 score, implying the promise of stacking in this domain.
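A minimal scikit-learn sketch of that integration, with synthetic data standing in for the attack dataset: feature selection and a stacking ensemble composed into one pipeline, evaluated by cross-validation on a multi-class problem.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=30, n_informative=10,
                           n_classes=3, n_clusters_per_class=1, random_state=0)

stack = StackingClassifier(
    estimators=[("dt", DecisionTreeClassifier(random_state=0)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(random_state=0))],
    final_estimator=LogisticRegression(max_iter=5000))

# Feature selection feeds the stacking ensemble; 5-fold CV scores the whole.
pipeline = make_pipeline(SelectKBest(f_classif, k=15), stack)
scores = cross_val_score(pipeline, X, y, cv=5)
print("5-fold accuracy:", scores.mean())
```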


Subject(s)
Computer Security; Machine Learning; Terrorism; Algorithms; Cities; Computer Security/standards; Humans; Quality of Life; Terrorism/prevention & control
13.
Sensors (Basel) ; 20(20)2020 Oct 15.
Article in English | MEDLINE | ID: mdl-33076436

ABSTRACT

In this paper, we propose a pen device capable of detecting specific features from dynamic handwriting tests to aid in automatic Parkinson's disease identification. The method uses machine learning to compare the raw signals from the different sensors coupled to the pen and to extract clinically relevant information such as tremor and hand acceleration. Additionally, the datasets of raw signals acquired here from healthy subjects and Parkinson's disease patients are made available to further contribute to research on this topic.
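One feature such a device could extract is the share of signal power in the 4-6 Hz band where parkinsonian tremor typically concentrates; the sketch below computes it over a synthetic accelerometer trace (the sampling rate and signal are assumptions, not the device's actual data).

```python
import numpy as np

fs = 100                                   # Hz (assumed sampling rate)
t = np.arange(0, 10, 1 / fs)
# Synthetic trace: a 5 Hz tremor component plus noise.
signal = 0.3 * np.sin(2 * np.pi * 5.0 * t) + 0.1 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(signal.size, 1 / fs)

band = (freqs >= 4) & (freqs <= 6)         # typical parkinsonian tremor band
tremor_power = spectrum[band].sum() / spectrum.sum()
print(f"fraction of power in 4-6 Hz band: {tremor_power:.2f}")
```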


Subject(s)
Handwriting; Monitoring, Physiologic/instrumentation; Parkinson Disease; Acceleration; Humans; Machine Learning; Parkinson Disease/diagnosis; Tremor
14.
Sensors (Basel) ; 20(8)2020 Apr 22.
Article in English | MEDLINE | ID: mdl-32331260

ABSTRACT

The IEEE 802.15.6 standard has the potential to provide cost-effective and unobtrusive medical services to individuals with chronic health conditions. It is a low-power standard developed for wireless body area networks that enables wireless communication inside or near a human body. The standard utilizes a Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) protocol to improve network performance under different channel access priorities. However, the CSMA/CA proposed in the IEEE 802.15.6 standard suffers poor throughput and link reliability when some of the nodes deployed on a human body are hidden from each other. We employ the RTS/CTS scheme to solve hidden-node problems in IEEE 802.15.6 networks over a lossy channel, and to improve its performance we adjust the transmission power levels of the nodes according to transmission failures. We estimate the throughput and energy consumption of the proposed model while varying several parameters, such as the contention window size, the bit error ratio, and the number of nodes in different priority classes. The performance results, obtained through analytical approximations and simulations, show that the proposed model significantly improves the performance of IEEE 802.15.6 CSMA/CA by resolving hidden-node problems.
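The failure-driven power adjustment can be caricatured in a few lines; the power levels, step rule, and success model below are illustrative assumptions, not the paper's analytical model.

```python
import random

LEVELS = [-10, -5, 0, 5, 10]               # assumed transmit power steps (dBm)

def success_probability(level_index):
    # Toy lossy-channel model: higher power, better odds of a successful
    # RTS/CTS handshake.
    return 0.4 + 0.12 * level_index

level = 0
for frame in range(20):
    if random.random() < success_probability(level):
        level = max(level - 1, 0)                    # success: save energy
    else:
        level = min(level + 1, len(LEVELS) - 1)      # failure: raise power
print("final power level:", LEVELS[level], "dBm")
```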


Subject(s)
Computer Communication Networks; Wireless Technology; Delivery of Health Care
15.
Data Brief ; 20: 1039-1043, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30225319

ABSTRACT

This paper contains data on performance prediction for cloud service selection. To measure the performance metrics of any system, one must analyze the features that affect that performance; these features are called "workload parameters". The data described here were collected from the KSA Ministry of Finance and contain 28,147 instances from 13 cloud nodes, recorded in continuous time slots during the period from March 1, 2016, to February 20, 2017. We selected 9 workload parameters: Number of Jobs in a Minute, Number of Jobs in 5 min, Number of Jobs in 15 min, Memory Capacity, Disk Capacity, Number of CPU Cores, CPU Speed per Core, Average Receive Network Bandwidth in Kbps, and Average Transmit Network Bandwidth in Kbps. Moreover, we selected 3 performance metrics: memory utilization, CPU utilization, and response time in milliseconds. This data article is related to the research article titled "An Automated Performance Prediction Model for Cloud Service Selection from Smart Data" (Al-Faifi et al., 2018) [1].

16.
J Med Syst ; 42(6): 99, 2018 Apr 16.
Article in English | MEDLINE | ID: mdl-29663090

ABSTRACT

In recent years, human activity recognition from body sensor or wearable sensor data has attracted considerable research attention from academia and the health industry. This research can be useful for various e-health applications, such as monitoring elderly and physically impaired people in smart homes to improve their rehabilitation. However, it is not easy to accurately and automatically recognize physical human activity through wearable sensors, due to the complexity and variety of body activities. In this paper, we address human activity recognition as a classification problem using wearable body sensor data. In particular, we propose a Deep Belief Network (DBN) model for successful human activity recognition. First, we extract the important initial features from the raw body sensor data. Then, kernel principal component analysis (KPCA) and linear discriminant analysis (LDA) are performed to further process the features and make them more robust and useful for fast activity recognition. Finally, the DBN is trained on these features. Various experiments were performed on a real-world wearable sensor dataset to verify the effectiveness of the deep learning algorithm. The results show that the proposed DBN outperformed other algorithms and achieved satisfactory activity recognition performance.
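A sketch of that feature pipeline in scikit-learn, with a public dataset and an MLP standing in for the wearable-sensor features and the DBN (which scikit-learn does not provide):

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: non-linear dimensionality reduction with kernel PCA.
kpca = KernelPCA(n_components=30, kernel="rbf").fit(X_tr)
X_tr_k, X_te_k = kpca.transform(X_tr), kpca.transform(X_te)

# Stage 2: LDA projects onto directions that separate the activity classes.
lda = LinearDiscriminantAnalysis().fit(X_tr_k, y_tr)
X_tr_l, X_te_l = lda.transform(X_tr_k), lda.transform(X_te_k)

# Stage 3: train the classifier on the processed features.
clf = MLPClassifier(max_iter=1000, random_state=0).fit(X_tr_l, y_tr)
print("test accuracy:", clf.score(X_te_l, y_te))
```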


Subject(s)
Machine Learning; Monitoring, Ambulatory/methods; Movement/physiology; Remote Sensing Technology/methods; Algorithms; Exercise Test; Humans; Neural Networks, Computer; Reproducibility of Results
17.
Sensors (Basel) ; 17(12)2017 Dec 07.
Article in English | MEDLINE | ID: mdl-29215591

ABSTRACT

Ensuring self-coexistence among IEEE 802.22 networks is a challenging problem owing to the opportunistic access of incumbent-free radio resources by users in co-located networks. In this study, we propose a fully-distributed non-cooperative approach to ensure self-coexistence in the downlink channels of IEEE 802.22 networks. We formulate the self-coexistence problem as a mixed-integer non-linear optimization problem for maximizing the network data rate, which is NP-hard. This work explores a sub-optimal solution by dividing the optimization problem into downlink channel allocation and power assignment sub-problems. Considering fairness, quality of service, and minimum interference for customer-premises equipment, we also develop a greedy algorithm for channel allocation and a non-cooperative game-theoretic framework for near-optimal power allocation. The base stations of the networks are treated as players in a game, in which they try to increase spectrum utilization by controlling power and reaching a Nash equilibrium point. We further develop a utility function for the game that increases the data rate by minimizing the transmission power and, consequently, the interference to neighboring networks. A theoretical proof of the existence and uniqueness of the Nash equilibrium is presented. Simulation studies show performance improvements in terms of data rate, with a degree of fairness, compared to a cooperative branch-and-bound-based algorithm and a non-cooperative greedy approach.

18.
Sensors (Basel) ; 17(7)2017 Jul 10.
Article in English | MEDLINE | ID: mdl-28698501

ABSTRACT

Body area networks (BANs) are composed of a large number of ultra-low-power wearable devices that constantly monitor physiological signals of the human body and thus realize intelligent monitoring. However, the collection and transfer of human body signals consume energy, and given the comfort demands on wearable devices, both the size and the capacity of a wearable device's battery are limited. Thus, minimizing the energy consumption of wearable devices and optimizing BAN energy efficiency remains a challenging problem. In this paper, we therefore propose an energy harvesting-based BAN for smart health and discuss an optimal resource allocation scheme to improve BAN energy efficiency. Specifically, we first formulate the energy efficiency optimization problem of dividing time between wireless energy transfer and wireless information transfer, considering energy harvesting in the BAN and the time limits of human body signal transfer. Second, we convert this into a convex optimization problem under a linear constraint and propose a closed-form solution. Finally, simulation results show that when the amount of data acquired by the wearable devices is small, the proportion of energy consumed by the devices' circuitry and signal acquisition is large, whereas when the amount of data is large, the energy consumed by signal transfer dominates.

19.
Sensors (Basel) ; 17(5)2017 Apr 26.
Article in English | MEDLINE | ID: mdl-28445441

ABSTRACT

Understanding the various health-oriented vital sign data generated from body sensor networks (BSNs) and discovering associations between the generated parameters are important tasks that may assist and promote important decision making in healthcare. For example, in a smart home scenario where occupants' health status is continuously monitored remotely, it is essential to provide the required assistance when an unusual or critical situation is detected in their vital sign data. In this paper, we present an efficient approach for mining periodic patterns from BSN data. In addition, we employ a correlation test on the generated patterns and introduce productive-associated periodic-frequent patterns as the set of correlated periodic-frequent items. The combination of these measures has the advantage of empowering healthcare providers and patients to raise the quality of diagnosis as well as improve treatment and smart care, especially for elderly people in smart homes. We develop an efficient algorithm named PPFP-growth (Productive Periodic-Frequent Pattern-growth) to discover all productive-associated periodic-frequent patterns using these measures. PPFP-growth is efficient, and the productiveness measure removes uncorrelated periodic items. An experimental evaluation on synthetic and real datasets shows the efficiency of the proposed PPFP-growth algorithm, which can filter a huge number of periodic patterns to reveal only the correlated ones.
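The periodicity test underlying periodic-frequent patterns is easy to state in code: a pattern qualifies when no gap between consecutive occurrences (including the edges of the database) exceeds a maximum-period threshold. The sketch below follows the standard definition; the thresholds and toy occurrence lists are assumptions.

```python
def is_periodic_frequent(occurrences, db_size, max_period, min_support):
    """Check support and maximum period for a pattern's occurrence list."""
    if len(occurrences) < min_support:
        return False
    # Include the stretches before the first and after the last occurrence.
    points = [0] + sorted(occurrences) + [db_size]
    return max(b - a for a, b in zip(points, points[1:])) <= max_period

# Toy transaction positions where a vital-sign pattern occurs.
print(is_periodic_frequent([2, 5, 8, 11, 14], db_size=16,
                           max_period=3, min_support=4))  # True
print(is_periodic_frequent([2, 11, 14], db_size=16,
                           max_period=3, min_support=3))  # False: gap of 9
```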


Subject(s)
Home Care Services; Algorithms; Data Mining; Delivery of Health Care; Monitoring, Physiologic
20.
J Med Syst ; 39(12): 192, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26490150

ABSTRACT

With the advances in wearable computing and various wireless technologies, there is an increasing trend to outsource body signals from wireless body area networks (WBANs) to the outside world, including cyberspace and healthcare big-data clouds. Since the environmental and physiological data collected by multimodal sensors differ in importance, provisioning quality of service (QoS) for the sensory data in a WBAN is a critical issue. This paper proposes a multi-level QoS design at the WBAN medium access control layer in terms of user level, data level, and time level. In the proposed QoS provisioning scheme, different users have different priorities, the various sensory data collected by different sensor nodes have different importance, and the data priority of a given sensor node varies over time. The experimental results show that the proposed multi-level QoS provisioning solution yields better performance in meeting the QoS requirements of personalized healthcare applications while saving energy.


Subject(s)
Computer Communication Networks/instrumentation; Remote Sensing Technology/instrumentation; Telemedicine/instrumentation; Wireless Technology/instrumentation; Awareness; Computer Simulation; Humans