Results 1 - 15 of 15
1.
Sensors (Basel) ; 24(8)2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38676241

ABSTRACT

Recently, Machine Learning (ML)-based solutions have been widely adopted to tackle the wide range of security challenges affecting the progress of the Internet of Things (IoT) in various domains. Despite the reported promising results, ML-based Intrusion Detection Systems (IDSs) have proved vulnerable to adversarial examples, which pose an increasing threat. In fact, attackers employ Adversarial Machine Learning (AML) to cause severe performance degradation and thereby evade detection systems. This has prompted the need for reliable defense strategies that maintain performance and ensure secure networks. This work introduces RobEns, a robust ensemble framework that aims at: (i) exploiting state-of-the-art ML-based models alongside ensemble models for IDSs in IoT networks; (ii) investigating the impact of evasion AML attacks against the provided models within a black-box scenario; and (iii) evaluating the robustness of the considered models after deploying relevant defense methods. In particular, four typical AML attacks are considered to investigate six ML-based IDSs using three benchmark datasets. Moreover, multi-class classification scenarios are designed to assess performance under each attack type. The experiments indicated a drastic drop in detection accuracy for some attacks. To harden the IDS further, two defense mechanisms were derived from data-based and model-based methods, relying on feature squeezing and adversarial training, respectively. They yielded promising results, enhanced robustness, and maintained standard accuracy in the presence or absence of adversaries. The obtained results proved the efficiency of the proposed framework in strengthening IDS performance within the IoT context. In particular, the accuracy reached 100% for black-box attack scenarios while preserving accuracy in the absence of attacks.
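The abstract names feature squeezing as one of the two defenses but does not give its configuration; the sketch below shows the classic bit-depth-reduction variant, with all parameters illustrative:

```python
import numpy as np

def squeeze_features(x, bits=4):
    """Reduce feature bit depth: round each value in [0, 1] to one of
    2**bits levels, destroying fine-grained adversarial perturbations."""
    levels = 2 ** bits - 1
    return np.round(np.clip(x, 0.0, 1.0) * levels) / levels

# A clean input and a slightly perturbed adversarial copy collapse to
# (nearly) the same squeezed representation.
x = np.random.rand(10)
x_adv = np.clip(x + np.random.uniform(-0.01, 0.01, 10), 0, 1)
print(np.abs(squeeze_features(x) - squeeze_features(x_adv)).max())
```

A perturbation smaller than half the quantization step is removed entirely; at worst the squeezed values differ by one level.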

2.
Sci Rep ; 14(1): 4431, 2024 02 23.
Article in English | MEDLINE | ID: mdl-38396036

ABSTRACT

Over the past decade, the use of biometrics in security systems and other applications has grown in popularity. ECG signals in particular are attracting increased attention due to their characteristics, which are required for a trustworthy identification system. The majority of ECG-based person identification systems are evaluated without considering the health state of the individuals, and few consider person-by-person health-state annotation. This paper proposes a person identification system based on health-state-annotated ECG signals, where each person's beats overlap among different arrhythmia classes. This overlap between the normal class and the other arrhythmia classes makes it possible to isolate normal beats in the training set from arrhythmic beats in the test set. The paper therefore investigates the effect of arrhythmic heartbeats on biometric recognition. An effective lightweight CNN based on depth-wise separable convolution (DWSC) is proposed to enhance the performance of person identification for several common arrhythmia types using the MIT-BIH dataset. The proposed methodology has been tested on nine arrhythmia types and shows how different types of arrhythmia affect ECG-based biometric systems differently. The experimental results show excellent recognition performance on normal heartbeats (99.28%) and arrhythmic heartbeats (93.81%), outperforming other models in terms of mean accuracy.
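Depth-wise separable convolution, the building block named above, factorizes a standard convolution into a per-channel (depth-wise) filter followed by a 1x1 point-wise mix, which is where the lightweight model's parameter savings come from. A minimal NumPy sketch with illustrative shapes:

```python
import numpy as np

def depthwise_separable_conv1d(x, dw_kernels, pw_weights):
    """x: (channels, length). dw_kernels: (channels, k), one filter per
    channel. pw_weights: (out_channels, channels), the 1x1 pointwise mix."""
    depthwise = np.stack([
        np.convolve(x[c], dw_kernels[c], mode="valid")
        for c in range(x.shape[0])
    ])                                   # (channels, length - k + 1)
    return pw_weights @ depthwise        # (out_channels, length - k + 1)

c_in, c_out, k, length = 8, 16, 5, 128
x = np.random.randn(c_in, length)
y = depthwise_separable_conv1d(x, np.random.randn(c_in, k),
                               np.random.randn(c_out, c_in))
# Parameters: c_in*k + c_out*c_in, vs. c_out*c_in*k for a standard conv.
print(y.shape, c_in * k + c_out * c_in, c_out * c_in * k)
```

Here the factorized layer uses 168 parameters where a standard convolution with the same input/output channels would use 640.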


Subject(s)
Electrocardiography; Neural Networks, Computer; Humans; Arrhythmias, Cardiac/diagnosis; Biometry; Heart Rate; Algorithms; Signal Processing, Computer-Assisted
3.
Heliyon ; 10(1): e23151, 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38223736

ABSTRACT

Dengue is one of Pakistan's major health concerns. In this study, we aimed to advance our understanding of the levels of knowledge, attitudes, and practices (KAPs) in Pakistan's Dengue Fever (DF) hotspots. Initially, at-risk communities were systematically identified via a well-known spatial modeling technique, Kernel Density Estimation, and these were then targeted for a household-based cross-sectional survey of KAPs. Random sampling was used to collect data on sociodemographics and KAPs (n = 385, 5% margin of error). The association of demographic characteristics, knowledge, and attitude factors potentially related to poor preventive practices was then assessed using bivariate (individual) and multivariable (model) logistic regression analyses. Most respondents (>90%) identified fever as a sign of DF; headache (73.8%), joint pain (64.4%), muscular pain (50.9%), pain behind the eyes (41.8%), bleeding (34.3%), and skin rash (36.1%) were identified less often. Regression results showed significant associations of poor knowledge/attitude with poor preventive practices: dengue vector (odds ratio [OR] = 3.733, 95% confidence interval [CI] = 2.377-5.861; P < 0.001), DF symptoms (OR = 3.088, 95% CI = 1.949-4.894; P < 0.001), dengue transmission (OR = 1.933, 95% CI = 1.265-2.956; P = 0.002), and attitude (OR = 3.813, 95% CI = 1.548-9.395; P = 0.004). Moreover, education level showed a strong association in bivariate analysis and was the strongest independent factor of poor preventive practices in multivariable analysis, for both illiterate respondents (adjusted OR = 6.833, 95% CI = 2.979-15.672; P < 0.001) and those with primary education (adjusted OR = 4.046, 95% CI = 1.997-8.199; P < 0.001). This situation highlights knowledge gaps within urban communities, particularly in understanding dengue transmission and signs/symptoms.
The level of education in urban communities also plays a substantial role in dengue control, as observed in this study, where poor preventive practices were more prevalent among illiterate and less educated respondents.
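The odds ratios reported above come from logistic regression, where an OR is the exponential of a fitted coefficient. A small illustration with synthetic survey-like data (all numbers invented, not from the study):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy binary data: a "poor knowledge" flag associated with the outcome
# "poor preventive practices" at a known true odds ratio.
rng = np.random.default_rng(0)
n = 2000
poor_knowledge = rng.integers(0, 2, n)
p = np.where(poor_knowledge == 1, 0.6, 0.3)  # true OR = (.6/.4)/(.3/.7) = 3.5
poor_practice = rng.random(n) < p

model = LogisticRegression().fit(poor_knowledge.reshape(-1, 1), poor_practice)
odds_ratio = np.exp(model.coef_[0][0])       # OR = exp(beta)
print(round(odds_ratio, 2))                  # close to the true OR of 3.5
```

The same exponentiation applied to each coefficient of a multivariable model yields the adjusted ORs quoted in the abstract.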

4.
Sensors (Basel) ; 23(18)2023 Sep 15.
Article in English | MEDLINE | ID: mdl-37765980

ABSTRACT

Scoring polysomnography for obstructive sleep apnea diagnosis is a laborious, long, and costly process. Machine learning approaches, such as deep neural networks, can reduce scoring time and costs. However, most methods require prior filtering and preprocessing of the raw signal. Our work presents a novel method for diagnosing obstructive sleep apnea using a transformer neural network with learnable positional encoding, which outperforms existing state-of-the-art solutions. This approach has the potential to improve the diagnostic performance of oximetry for obstructive sleep apnea and reduce the time and costs associated with traditional polysomnography. Contrary to existing approaches, ours performs annotations at one-second granularity, allowing physicians to interpret the model's outcome. In addition, we tested different positional encoding designs as the first layer of the model; the best results were achieved using a learnable positional encoding based on an autoencoder with structural novelty. We also tried different temporal resolutions with granularity levels from 1 to 360 s. All experiments were carried out on an independent test set from the public OSASUD dataset and showed that our approach outperforms current state-of-the-art solutions with a satisfactory AUC of 0.89, accuracy of 0.80, and F1-score of 0.79.
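One-second annotation granularity implies segmenting the raw oximetry signal into one-second windows before the model sees it. A minimal sketch, with the sampling rate and signal length purely illustrative (not taken from the paper):

```python
import numpy as np

def one_second_windows(signal, fs):
    """Split a 1-D signal sampled at fs Hz into non-overlapping
    one-second windows, one row per second (trailing remainder dropped)."""
    n_sec = len(signal) // fs
    return signal[: n_sec * fs].reshape(n_sec, fs)

spo2 = np.random.rand(10 * 25 + 7)    # 10 s of a hypothetical 25 Hz signal
windows = one_second_windows(spo2, fs=25)
print(windows.shape)                  # (10, 25)
```

Each row then receives its own apnea/non-apnea label, which is what makes the per-second output interpretable by physicians.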


Subject(s)
Labor, Obstetric; Sleep Apnea, Obstructive; Pregnancy; Female; Humans; Oximetry; Electric Power Supplies; Neural Networks, Computer; Sleep Apnea, Obstructive/diagnosis
5.
Diagnostics (Basel) ; 13(16)2023 Aug 08.
Article in English | MEDLINE | ID: mdl-37627883

ABSTRACT

EEG-based emotion recognition has numerous real-world applications in fields such as affective computing, human-computer interaction, and mental health monitoring. This offers the potential for developing IoT-based, emotion-aware systems and personalized interventions using real-time EEG data. This study focused on unique EEG channel selection and feature selection methods to remove unnecessary data while retaining high-quality features. This helped improve the overall efficiency of a deep learning model in terms of memory, time, and accuracy. Moreover, this work utilized a lightweight deep learning method, specifically a one-dimensional convolutional neural network (1D-CNN), to analyze EEG signals and classify emotional states. By capturing intricate patterns and relationships within the data, the 1D-CNN model accurately distinguished between emotional states (HV/LV and HA/LA). Moreover, an efficient data augmentation method was used to increase the sample size and observe the performance of the deep learning model with additional data. The study conducted EEG-based emotion recognition tests on the SEED, DEAP, and MAHNOB-HCI datasets. This approach achieved mean accuracies of 97.6%, 95.3%, and 89.0% on the MAHNOB-HCI, SEED, and DEAP datasets, respectively. The results demonstrate significant potential for the implementation of a cost-effective IoT device to collect EEG signals, thereby enhancing the feasibility and applicability of the data.
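EEG feature pipelines like this one commonly summarize each channel by its power in standard frequency bands; the paper's exact features are not listed here, so the sketch below shows a generic FFT-based band-power computation on a synthetic signal:

```python
import numpy as np

def band_power(signal, fs, band):
    """Mean power of a 1-D signal within a frequency band, from the
    FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].mean()

fs = 128
t = np.arange(fs * 4) / fs
eeg = np.sin(2 * np.pi * 6 * t) + 0.1 * np.sin(2 * np.pi * 20 * t)
theta = band_power(eeg, fs, (4, 8))    # dominant 6 Hz component
beta = band_power(eeg, fs, (13, 30))   # weaker 20 Hz component
print(theta > beta)
```

Band powers computed per channel give the compact feature vectors that the channel- and feature-selection steps then prune.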

6.
Diagnostics (Basel) ; 13(7)2023 Mar 24.
Article in English | MEDLINE | ID: mdl-37046446

ABSTRACT

Brain tumors are nonlinear and vary in size, shape, and texture, which can make them difficult to diagnose and surgically excise using magnetic resonance imaging (MRI) scans. The procedures currently available are conducted by radiologists, brain surgeons, and clinical specialists. Studying brain MRIs is laborious, error-prone, and time-consuming, although they nonetheless offer high positional accuracy for brain cells. The proposed approach combines a convolutional neural network model with an existing blockchain-based method to secure the network for the precise prediction of brain tumors, such as pituitary tumors, meningiomas, and gliomas. Brain MRI scans are first normalized to a fixed dimension and then fed into pre-trained deep models. These structures are altered at each layer, increasing their security and safety. To guard against potential layer deletions, modification attacks, and tampering, each layer has an additional block that stores specific information. Multiple blocks are used to store information, including blocks related to each layer, cloud ledger blocks kept in cloud storage, and ledger blocks connected to the network. Finally, the features are retrieved, merged, and optimized using a Genetic Algorithm, attaining competitive performance compared with state-of-the-art (SOTA) methods using different ML classifiers.
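The per-layer blocks described above amount to a hash chain over the model's layers; the paper's exact block contents are not given, so this sketch uses SHA-256 digests of layer weights as a stand-in to show why tampering with one layer invalidates every later block:

```python
import hashlib
import numpy as np

def chain_layers(layer_weights):
    """Build a hash chain over model layers: each block stores a digest of
    the layer's weights plus the previous block's hash, so deleting or
    modifying any layer changes every subsequent block's hash."""
    blocks, prev = [], "genesis"
    for i, w in enumerate(layer_weights):
        digest = hashlib.sha256(prev.encode() + w.tobytes()).hexdigest()
        blocks.append({"layer": i, "hash": digest})
        prev = digest
    return blocks

layers = [np.ones((4, 4)), np.zeros(10), np.arange(6.0)]
ledger = chain_layers(layers)
layers[1][0] = 99.0                          # tamper with one layer
assert chain_layers(layers)[1]["hash"] != ledger[1]["hash"]
print("tampering detected")
```

Comparing a freshly computed chain against the stored ledger pinpoints the first modified layer.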

7.
Bioengineering (Basel) ; 9(9)2022 Sep 16.
Article in English | MEDLINE | ID: mdl-36135025

ABSTRACT

Atrial fibrillation (AF) is one of the most common cardiac arrhythmias and is associated with a high risk of stroke, myocardial ischemia, and other malignant cardiovascular diseases. Most existing AF detection methods convert one-dimensional time-series electrocardiogram (ECG) signals into two-dimensional representations to train a deep and complex AF detection system, which results in heavy training computation and high implementation costs. In this paper, a multiscale signal encoding scheme is proposed to improve feature representation and detection performance without the need for any transformation or handcrafted feature engineering techniques. The proposed scheme uses different kernel sizes to produce the encoded signal via multiple streams that are passed into a one-dimensional sequence of blocks of a residual convolutional neural network (ResNet) to extract representative features from the input ECG signal. This also allows networks to grow in breadth rather than in depth, reducing the computing time by exploiting the parallel processing capability of deep learning networks. We investigated the effect of using different numbers of streams with different kernel sizes on performance. Experiments were carried out using the publicly available PhysioNet CinC Challenge 2017 dataset. The proposed multiscale encoding scheme outperformed existing deep learning-based methods with an average F1 score of 98.54%, while maintaining lower network complexity.
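The multiscale idea — parallel streams, each convolving the same signal with a different kernel size — can be sketched in a few lines; fixed averaging kernels stand in here for the learned filters of the paper's ResNet streams:

```python
import numpy as np

def multiscale_encode(ecg, kernel_sizes=(3, 7, 15)):
    """Encode a 1-D ECG with parallel streams: each stream convolves the
    signal with a kernel of a different size, and the length-aligned
    outputs are stacked into a multiscale representation."""
    streams = []
    for k in kernel_sizes:
        kernel = np.ones(k) / k                 # illustrative fixed kernel
        streams.append(np.convolve(ecg, kernel, mode="same"))
    return np.stack(streams)                    # (n_streams, len(ecg))

ecg = np.random.randn(300)
encoded = multiscale_encode(ecg)
print(encoded.shape)
```

Because the streams are independent, they can run in parallel, which is the "grow in breadth rather than depth" argument made above.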

8.
Comput Biol Med ; 147: 105671, 2022 08.
Article in English | MEDLINE | ID: mdl-35660327

ABSTRACT

A stable predictive model is essential for forecasting the chances of cesarean or C-section (CS) delivery, as unnecessary CS delivery can adversely affect neonatal, maternal, and pediatric morbidity and mortality, and can incur significant financial burdens. Few state-of-the-art machine learning models have been applied in this area in recent years, and the current models are insufficient to correctly predict the probability of CS delivery. To alleviate this drawback, we propose a Henry gas solubility optimization (HGSO)-based random forest (RF) with an improved objective function, called HGSORF, for the classification of CS and non-CS classes. Real-world CS datasets can be noisy, such as the Pakistan Demographic and Health Survey (PDHS) dataset used in this study. HGSO can provide fine-tuned hyperparameters of the RF by avoiding local minima. For performance comparison, Gaussian Naive Bayes (GNB), linear discriminant analysis (LDA), K-nearest neighbors (KNN), gradient boosting classifier (GBC), and logistic regression (LR) were considered. The ADAptive SYNthetic (ADASYN) algorithm was used to balance the model, and the proposed HGSORF was compared with the other classifiers as well as with other studies. HGSORF achieved superior performance with an accuracy of 98.33% on the PDHS dataset. The hyperparameters of the RF were also optimized using commonly used hyperparameter-optimization algorithms, and the proposed HGSORF provided comparatively better performance. Additionally, to analyze the causes of CS and their significance, HGSORF is explained locally and globally using eXplainable artificial intelligence (XAI) tools such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-Agnostic Explanations (LIME). A decision support system has been developed as a potential application to support clinical staff.
All pre-trained models and relevant code are available at: https://github.com/MIrazul29/HGSORF_CSection.


Subject(s)
Artificial Intelligence; Machine Learning; Algorithms; Bayes Theorem; Child; Humans; Infant, Newborn; Solubility
9.
Diagnostics (Basel) ; 12(5)2022 Apr 19.
Article in English | MEDLINE | ID: mdl-35626179

ABSTRACT

A healthcare monitoring system needs the support of recent technologies such as artificial intelligence (AI), machine learning (ML), and big data, especially during the COVID-19 pandemic. This global pandemic has already taken millions of lives. Both infected and uninfected people have generated big data that AI and ML can use to combat and detect COVID-19 at an early stage. Motivated by this, an improved ML framework for the early detection of this disease is proposed in this paper. The state-of-the-art Harris hawks optimization (HHO) algorithm with an improved objective function is proposed and applied to optimize the hyperparameters of the ML algorithms, namely HHO-based eXtreme gradient boosting (HHOXGB), light gradient boosting (HHOLGB), categorical boosting (HHOCAT), random forest (HHORF), and support vector classifier (HHOSVC). An ensemble technique was applied to these optimized ML models to improve the prediction performance. Our proposed method was applied to publicly available big COVID-19 data and yielded a prediction accuracy of 92.38% using the ensemble model. In contrast, HHOXGB provided the highest accuracy of 92.23% as a single optimized model. The performance of the proposed method was compared with traditional algorithms and other ML-based methods; in both cases, our proposed method performed better. Beyond the classification improvement, the features are analyzed in terms of feature importance calculated by SHapley Additive exPlanations (SHAP) values. A graphical user interface is also discussed as a potential tool for nonspecialist users such as clinical staff and nurses. The processed data, trained model, and codes related to this study are available at GitHub.
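Combining several tuned classifiers into one predictor, as described above, is typically done by voting over their predicted probabilities. A minimal sketch with sklearn's soft-voting ensemble, where default hyperparameters stand in for the HHO-optimized ones and the data is synthetic:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=12, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

ensemble = VotingClassifier(
    estimators=[("gb", GradientBoostingClassifier(random_state=1)),
                ("rf", RandomForestClassifier(random_state=1)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft",                 # average the predicted probabilities
)
ensemble.fit(X_tr, y_tr)
print(round(ensemble.score(X_te, y_te), 3))
```

Soft voting lets a confident minority model outvote uncertain ones, which is often why the ensemble edges out its best single member.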

10.
Healthcare (Basel) ; 10(3)2022 Mar 16.
Article in English | MEDLINE | ID: mdl-35327025

ABSTRACT

Recent research indicates that photoplethysmography (PPG) signals carry more information than the oxygen saturation level (SpO2) and can be utilized for affordable, fast, and noninvasive healthcare applications. This has encouraged researchers to assess PPG's feasibility as an alternative to many expensive, time-consuming, and invasive methods. This systematic review discusses the current literature on diagnostic features of the PPG signal and their applications, which might present a potential venue to be adapted into many health and fitness aspects of human life. The research methodology is based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 guidelines. To this aim, papers from 1981 to date are reviewed and categorized in terms of the healthcare application domain. Along with consolidated research areas, recent topics that are growing in popularity are also discovered. We also highlight the potential impact of using PPG signals on an individual's quality of life and public health. The state-of-the-art studies suggest that in the years to come PPG wearables will become pervasive in many fields of medical practice, with the main domains including cardiology, respiratory care, neurology, and fitness. The main operational challenges, including performance and robustness obstacles, are identified.

11.
PLoS One ; 17(3): e0265679, 2022.
Article in English | MEDLINE | ID: mdl-35303027

ABSTRACT

Human anxiety is a grave mental health concern that needs to be addressed appropriately in order to develop a healthy society. In this study, an objective human anxiety assessment framework is developed using physiological electroencephalography (EEG) signals recorded in response to exposure therapy. The EEG signals of twenty-three subjects from an existing database, "A Database for Anxious States based on a Psychological Stimulation (DASPS)", are used for anxiety quantification into two and four levels. The EEG signals are pre-processed using appropriate noise filtering techniques to remove unwanted ocular and muscular artifacts. Channel selection is performed to select the significantly different electrodes using statistical analysis techniques for binary and four-level classification of human anxiety, respectively. Features are extracted from the data of the selected EEG channels in the frequency domain. Frequency band selection is applied to select the appropriate combination of EEG frequency bands, which in this study are the theta and beta bands. Feature selection is then applied to the features of the selected EEG frequency bands. Finally, the selected subset of features from the appropriate frequency bands of the statistically significant EEG channels is classified using multiple machine learning algorithms. Accuracies of 94.90% and 92.74% are attained for two- and four-level anxiety classification using a random forest classifier with 9 and 10 features, respectively. The proposed state anxiety classification framework outperforms the existing anxiety detection framework in terms of accuracy with fewer features, which reduces the computational complexity of the algorithm.
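The channel-selection step — keep only electrodes whose feature distributions differ significantly between classes, then classify — can be sketched with a t-test filter and a random forest; the paper's specific statistical tests are not named here, and the data below is synthetic:

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n, n_channels = 200, 14
X = rng.normal(size=(n, n_channels))
y = rng.integers(0, 2, n)
X[y == 1, :3] += 1.0            # only the first 3 "channels" carry signal

# Keep channels whose per-class distributions differ significantly.
pvals = np.array([ttest_ind(X[y == 0, c], X[y == 1, c]).pvalue
                  for c in range(n_channels)])
selected = np.where(pvals < 0.05)[0]

clf = RandomForestClassifier(random_state=0).fit(X[:, selected], y)
print(sorted(selected.tolist()), round(clf.score(X[:, selected], y), 2))
```

Dropping the uninformative channels before classification is what keeps the final feature count (and hence the computational cost) small.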


Subject(s)
Implosive Therapy; Signal Processing, Computer-Assisted; Algorithms; Anxiety; Electroencephalography/methods; Humans; Support Vector Machine
12.
Sensors (Basel) ; 21(16)2021 Aug 17.
Article in English | MEDLINE | ID: mdl-34450982

ABSTRACT

This paper proposes an encryption-based image watermarking scheme using a combination of second-level discrete wavelet transform (2DWT) and discrete cosine transform (DCT) with an auto-extraction feature. The 2DWT was selected based on an analysis of the trade-off between watermark imperceptibility and embedding capacity at various levels of decomposition. The DCT is applied to the selected area, and the image coefficients are gathered into a single vector using a zig-zag operation. We utilized the same random bit sequence as both the watermark and the seed for the embedding-zone coefficient. The quality of the reconstructed image was measured by bit correction rate, peak signal-to-noise ratio (PSNR), and similarity index. Experimental results demonstrated that the proposed scheme is highly robust under different types of image-processing attacks. Several image attacks, e.g., JPEG compression, filtering, noise addition, cropping, sharpening, and bit-plane removal, were examined on watermarked images, and the results of our proposed method surpassed existing methods, especially in terms of the bit correction ratio (100%), which is a measure of bit restoration. The results were also highly satisfactory in terms of the quality of the reconstructed image, which demonstrated high imperceptibility in terms of peak signal-to-noise ratio (PSNR ≥ 40 dB) and structural similarity (SSIM ≥ 0.9) under different image attacks.
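The zig-zag operation mentioned above is the standard scan that serializes a 2-D block of DCT coefficients into a single vector along anti-diagonals. A compact sketch:

```python
import numpy as np

def zigzag(block):
    """Serialize an n x n coefficient block along anti-diagonals (the
    zig-zag scan used to turn DCT coefficients into a 1-D vector)."""
    n = block.shape[0]
    order = sorted(((i, j) for i in range(n) for j in range(n)),
                   key=lambda ij: (ij[0] + ij[1],
                                   -ij[1] if (ij[0] + ij[1]) % 2 else ij[1]))
    return np.array([block[i, j] for i, j in order])

b = np.arange(16).reshape(4, 4)
print(zigzag(b))
```

Low-frequency coefficients (top-left of the block) land at the front of the vector, which is why embedding schemes index into this ordering to pick a robust embedding zone.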

13.
Sensors (Basel) ; 21(11)2021 May 26.
Article in English | MEDLINE | ID: mdl-34073546

ABSTRACT

Visible light communication (VLC) is gaining interest as one of the enablers of short-distance, high-data-rate applications in future beyond-5G networks. Moreover, non-orthogonal multiple access (NOMA) has recently emerged as a promising multiple-access scheme for these networks that would allow realization of the target spectral efficiency and user fairness requirements. The integration of NOMA into the widely adopted orthogonal frequency-division multiplexing (OFDM)-based VLC networks requires optimal resource allocation for the pair or cluster of users sharing the same subcarrier(s). In this paper, the max-min rate of a multi-cell indoor centralized VLC network is maximized by optimizing user pairing, subcarrier allocation, and power allocation. The joint, complex optimization problem is tackled using a low-complexity solution. First, user pairing is assumed to follow the divide-and-next-largest-difference user-pairing algorithm (D-NLUPA), which ensures fairness among the different clusters. Then, subcarrier allocation and power allocation are solved iteratively through the Simulated Annealing (SA) meta-heuristic algorithm and the bisection method. The obtained results quantify the achievable max-min user rates for the relevant variants of NOMA-enabled schemes and shed new light on both the performance and design of multi-user, multi-carrier NOMA-enabled centralized VLC networks.
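The role of bisection in max-min power allocation can be shown on a simplified two-user NOMA link (all channel parameters invented; this is not the paper's multi-cell SA + D-NLUPA pipeline): the strong user's rate rises with its power share while the weak user's falls, so the max-min optimum is where the two rates meet.

```python
import numpy as np

def rates(alpha, p=1.0, g=(4.0, 1.0), noise=0.1):
    """Two-user downlink NOMA rates for power share alpha given to the
    strong user. The strong user cancels interference via SIC; the weak
    user treats the strong user's signal as noise."""
    r_strong = np.log2(1 + alpha * p * g[0] / noise)
    r_weak = np.log2(1 + (1 - alpha) * p * g[1] / (alpha * p * g[1] + noise))
    return r_strong, r_weak

# Bisection on alpha: r_strong is increasing, r_weak decreasing, so the
# max-min point is their crossing.
lo, hi = 0.0, 1.0
for _ in range(50):
    mid = (lo + hi) / 2
    r_s, r_w = rates(mid)
    lo, hi = (lo, mid) if r_s > r_w else (mid, hi)
print(round(mid, 4), [round(r, 3) for r in rates(mid)])
```

In the paper this inner bisection is wrapped by simulated annealing over the subcarrier assignment, with the same monotonicity argument justifying the power step.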

14.
Comput Intell Neurosci ; 2021: 8811147, 2021.
Article in English | MEDLINE | ID: mdl-33763125

ABSTRACT

Noise in training data increases the tendency of many machine learning methods to overfit the training data, which undermines performance. Outliers occur in big data as a result of various factors, including human error. In this work, we present a novel discriminator model for identifying outliers in training data. We propose a systematic approach for creating training datasets to train the discriminator based on a small number of genuine instances (trusted data). The noise discriminator is a convolutional neural network (CNN). We evaluate the discriminator's performance using several benchmark datasets and different noise ratios. We inserted random noise into each dataset and trained discriminators to clean them. Different discriminators were trained using different numbers of genuine instances, with and without data augmentation. We compare the performance of the proposed noise-discriminator method with seven other methods from the literature using several benchmark datasets. Our empirical results indicate that the proposed method is highly competitive with the other methods, and it outperforms them on pair noise.
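The core idea — build clean/noisy training pairs from a small trusted set, then learn to flag mislabeled instances — can be sketched as follows, with a random forest standing in for the paper's CNN discriminator and fully synthetic data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_informative=5, random_state=0)

# From a trusted subset, build (features + candidate label) examples:
# each instance paired with its true label (clean) and a wrong one (noisy).
trusted_X, trusted_y = X[:100], y[:100]
pairs = np.vstack([np.column_stack([trusted_X, trusted_y]),
                   np.column_stack([trusted_X, 1 - trusted_y])])
is_clean = np.r_[np.ones(100), np.zeros(100)]
disc = RandomForestClassifier(random_state=0).fit(pairs, is_clean)

# Flip 20% of the remaining labels ("pair noise") and let the
# discriminator flag the suspicious (instance, label) pairs.
noisy_y = y[100:].copy()
flip = rng.random(len(noisy_y)) < 0.2
noisy_y[flip] = 1 - noisy_y[flip]
flagged = disc.predict(np.column_stack([X[100:], noisy_y])) == 0
print(round((flagged == flip).mean(), 2))   # agreement with the true flips
```

Instances flagged as noisy are then removed or relabeled before training the final model.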


Subject(s)
Machine Learning; Neural Networks, Computer; Humans
15.
Entropy (Basel) ; 20(11)2018 Nov 07.
Article in English | MEDLINE | ID: mdl-33266581

ABSTRACT

Text classification is one domain in which the naive Bayes (NB) learning algorithm performs remarkably well. However, further improving its performance with ensemble-building techniques has proved challenging because NB is a stable algorithm. This work shows that, while an ensemble of NB classifiers achieves little or no improvement in classification accuracy, an ensemble of fine-tuned NB classifiers can achieve a remarkable improvement in accuracy. We propose a fine-tuning algorithm for text classification that is both more accurate and less stable than the NB algorithm and the fine-tuning NB (FTNB) algorithm. This reduced stability makes it more suitable than the FTNB algorithm for building ensembles of classifiers using bagging. Our empirical experiments, using 16 benchmark text-classification datasets, show significant improvement for most datasets.
