Results 1 - 10 of 10
1.
Sci Rep ; 14(1): 7054, 2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38528084

ABSTRACT

Many intrusion detection techniques have been developed to ensure that the target system can function properly under the established rules. With the booming Internet of Things (IoT) applications, the resource-constrained nature of its devices makes it urgent to explore lightweight and high-performance intrusion detection models. Recent years have seen a particularly active application of deep learning (DL) techniques. The spiking neural network (SNN), a type of artificial intelligence that is associated with sparse computations and inherent temporal dynamics, has been viewed as a potential candidate for the next generation of DL. It should be noted, however, that current research into SNNs has largely focused on scenarios where limited computational resources and insufficient power sources are not considered. Consequently, even state-of-the-art SNN solutions tend to be inefficient. In this paper, a lightweight and effective detection model is proposed. With the help of rational algorithm design, the model integrates the advantages of SNNs as well as convolutional neural networks (CNNs). In addition to reducing resource usage, it maintains a high level of classification accuracy. The proposed model was evaluated against some current state-of-the-art models using a comprehensive set of metrics. Based on the experimental results, the model demonstrated improved adaptability to environments with limited computational resources and energy sources.
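The abstract does not describe the model's internals, but the event-driven computation that makes SNNs attractive for resource-constrained IoT devices can be sketched with a minimal leaky integrate-and-fire (LIF) neuron. Everything below (names, parameters) is an illustrative assumption, not the paper's architecture.

```python
def lif_neuron(inputs, beta=0.9, threshold=1.0):
    """Minimal leaky integrate-and-fire neuron: the membrane potential
    leaks by a factor beta each step and emits a spike (then resets)
    once it crosses the threshold."""
    mem, spikes = 0.0, []
    for x in inputs:
        mem = beta * mem + x          # leaky integration of the input
        if mem >= threshold:          # fire and reset
            spikes.append(1)
            mem = 0.0
        else:
            spikes.append(0)
    return spikes
```

Because the neuron emits only sparse binary events, downstream layers can skip work on silent time steps, which is the source of the efficiency the abstract refers to.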

2.
PeerJ Comput Sci ; 9: e1492, 2023.
Article in English | MEDLINE | ID: mdl-37810364

ABSTRACT

Background: Malware, or malicious software, is the major security concern of the digital realm. Conventional cyber-security solutions are challenged by sophisticated malicious behaviors. Currently, an overlap between malicious and legitimate behaviors makes it harder to characterize activities as malicious or legitimate. For instance, evasive malware often mimics legitimate behaviors, and evasion techniques are utilized by both legitimate and malicious software. Problem: Most of the existing solutions use the traditional term frequency-inverse document frequency (TF-IDF) technique or its concept to represent malware behaviors. However, traditional TF-IDF and the techniques derived from it represent the features, especially the shared ones, inaccurately because they calculate a weight for each feature without considering its distribution within each class; instead, the weight is generated based on the distribution of the feature among all the documents. Such a presumption can dilute the meaning of those features, and when they are used to classify malware, they lead to high false alarm rates. Method: This study proposes a Kullback-Leibler Divergence-based Term Frequency-Probability Class Distribution (KLD-based TF-PCD) algorithm to represent the extracted features based on the differences between the probability distributions of the terms in the malware and benign classes. Unlike the existing solutions, the proposed algorithm increases the weights of the important features by using the Kullback-Leibler divergence to measure the differences between their probability distributions in the malware and benign classes. Results: The experimental results show that the proposed KLD-based TF-PCD algorithm achieved an accuracy of 0.972, a false positive rate of 0.037, and an F-measure of 0.978. These results are significant compared to related studies.
Thus, the proposed KLD-based TF-PCD algorithm contributes to improving the security of cyberspace. Conclusion: New meaningful characteristics have been added by the proposed algorithm to promote the learned knowledge of the classifiers, and thus increase their ability to classify malicious behaviors accurately.
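The abstract does not give the exact TF-PCD formula, but its core idea, weighting a feature by how differently it is distributed across the malware and benign classes, can be sketched with a symmetrised Kullback-Leibler term. The function name and the symmetrisation are illustrative assumptions, not the paper's definition.

```python
import math

def kld_weight(p_malware, p_benign, eps=1e-9):
    """Symmetrised Kullback-Leibler term comparing a feature's probability
    in the malware class against the benign class; features with similar
    distributions in both classes receive weights near zero."""
    p, q = p_malware + eps, p_benign + eps
    return p * math.log(p / q) + q * math.log(q / p)
```

A feature common in malware but rare in benign software (say 0.8 vs. 0.1) gets a large weight, while a shared feature (0.5 vs. 0.5) gets a weight near zero, which is exactly the behavior the abstract argues plain TF-IDF lacks.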

3.
Sensors (Basel) ; 22(21)2022 Nov 07.
Article in English | MEDLINE | ID: mdl-36366261

ABSTRACT

Smart home technologies have attracted more users in recent years due to significant advancements in their underlying enabler components, such as sensors, actuators, and processors, which are spreading in various domains and have become more affordable. However, these IoT-based solutions are prone to data leakage; this privacy issue has motivated researchers to seek a secure solution to overcome this challenge. In this regard, wireless signal eavesdropping is one of the most severe threats that enables attackers to obtain residents' sensitive information. Even if the system encrypts all communications, some cyber attacks can still steal information by interpreting the contextual data related to the transmitted signals. For example, a "fingerprint and timing-based snooping (FATS)" attack is a side-channel attack (SCA) developed to infer in-home activities passively from a remote location near the targeted house. An SCA is a sort of cyber attack that extracts valuable information from smart systems without accessing the content of data packets. This paper reviews the SCAs associated with cyber-physical systems, focusing on the proposed solutions to protect the privacy of smart homes against FATS attacks in detail. Moreover, this work clarifies shortcomings and future opportunities by analyzing the existing gaps in the reviewed methods.


Subjects
Computer Security, Privacy, Confidentiality, Wireless Technology, Technology
4.
Sensors (Basel) ; 22(18)2022 Sep 15.
Article in English | MEDLINE | ID: mdl-36146319

ABSTRACT

Recently, fake news has been widely spread through the Internet due to the increased use of social media for communication. Fake news has become a significant concern due to its harmful impact on individual attitudes and the community's behavior. Researchers and social media service providers have commonly utilized artificial intelligence techniques in recent years to rein in fake news propagation. However, fake news detection is challenging due to the use of political language and the high linguistic similarity between real and fake news. In addition, most news sentences are short, so finding valuable representative features that machine learning classifiers can use to distinguish between fake and authentic news is difficult. Existing fake news solutions suffer from low detection performance due to improper representation and model design. This study aims to improve detection accuracy by proposing a deep ensemble fake news detection model using the sequential deep learning technique. The proposed model was constructed in three phases. In the first phase, features were extracted from news contents, preprocessed using natural language processing techniques, enriched using n-grams, and represented using the term frequency-inverse document frequency (TF-IDF) technique. In the second phase, an ensemble model based on deep learning was constructed as follows. Multiple binary classifiers were trained using sequential deep learning networks to extract the representative hidden features that could accurately classify news types. In the third phase, a multi-class classifier was constructed based on a multilayer perceptron (MLP) and trained on the features extracted from the aggregated outputs of the deep learning-based binary classifiers for final classification. Two popular and well-known datasets (LIAR and ISOT) were used with different classifiers to benchmark the proposed model.
Compared with the state-of-the-art models, which use deep contextualized representation with a convolutional neural network (CNN), the proposed model shows a significant improvement (2.41%) in overall performance in terms of the F1-score on the LIAR dataset, which is more challenging than other datasets. Meanwhile, the proposed model achieves 100% accuracy on ISOT. The study demonstrates that traditional features extracted from news content, combined with proper model design, outperform existing models constructed with text embedding techniques.
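The representation step of the first phase (TF-IDF over preprocessed tokens) can be sketched in a few lines. The tokenisation and function name below are illustrative, not the paper's implementation.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Represent each tokenised document as a dict of TF-IDF weights:
    term frequency scaled by the log inverse document frequency."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))  # document frequency
    return [{term: (count / len(doc)) * math.log(n / df[term])
             for term, count in Counter(doc).items()} for doc in docs]
```

A term appearing in every document gets weight zero, while rarer, more discriminative terms get positive weights, which is what makes TF-IDF useful as classifier input.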


Subjects
Artificial Intelligence, Deep Learning, Disinformation, Humans, Machine Learning, Neural Networks (Computer)
5.
Sensors (Basel) ; 22(9)2022 Apr 19.
Article in English | MEDLINE | ID: mdl-35590801

ABSTRACT

Data streaming applications such as the Internet of Things (IoT) require processing or predicting from sequential data arriving from various sensors. However, most of the data are unlabeled, making it impossible to apply fully supervised learning algorithms. The online manifold regularization approach allows sequential learning from partially labeled data, which is useful in environments with scarcely labeled data. Unfortunately, the manifold regularization technique does not work out of the box, as it requires determining the radial basis function (RBF) kernel width parameter. The RBF kernel width parameter directly impacts performance, as it is used to inform the model of the class to which each piece of data most likely belongs. The width parameter is often determined off-line via hyperparameter search, which requires a vast amount of labeled data. This limits its utility in applications where it is difficult to collect a great deal of labeled data, such as data stream mining. To address this issue, we propose eliminating the RBF kernel from the manifold regularization technique altogether by combining it with a prototype learning method, which uses a finite set of prototypes to approximate the entire data set. Unlike other manifold regularization approaches, this approach queries the prototype-based learner for the most similar samples instead of relying on the RBF kernel, so it no longer requires the kernel at all. Experiments on benchmark data sets show that the proposed approach learns faster and achieves higher classification performance than other manifold regularization techniques, and that it performs well even without the RBF kernel, improving the practicality of manifold regularization for semi-supervised learning.
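The key substitution the abstract describes, querying a finite prototype set for the most similar samples instead of evaluating an RBF kernel, can be sketched as a nearest-prototype lookup. The distance metric and function name are illustrative assumptions.

```python
def nearest_prototypes(x, prototypes, k=2):
    """Query a finite prototype set for the k prototypes closest to x
    (squared Euclidean distance), standing in for the RBF-kernel
    similarity that the proposed approach removes."""
    sq_dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return sorted(prototypes, key=lambda p: sq_dist(x, p))[:k]
```

Because the prototype set is finite and updated online, this query needs no width hyperparameter and no offline labeled-data search, which is the practicality gain the abstract claims.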


Subjects
Internet of Things, Supervised Machine Learning, Algorithms, Benchmarking, Data Mining
6.
Sensors (Basel) ; 22(9)2022 Apr 28.
Article in English | MEDLINE | ID: mdl-35591061

ABSTRACT

Web applications have become ubiquitous in many business sectors due to their platform independence and low operation cost. Billions of users visit these applications to accomplish their daily tasks. However, many of these applications are either vulnerable to web defacement attacks or created and managed by hackers, such as fraudulent and phishing websites. Detecting malicious websites is essential to prevent the spreading of malware and protect end-users from becoming victims. However, most existing solutions rely on extracting features from the website's content, which can be harmful to the detection machines themselves and is subject to obfuscation. Detecting malicious Uniform Resource Locators (URLs) is safer and more efficient than content analysis. However, the detection of malicious URLs is still not well addressed due to insufficient features and inaccurate classification. This study aims to improve the accuracy of malicious URL detection by designing and developing a cyber threat intelligence-based detection model using two-stage ensemble learning. Reports from cybersecurity analysts and users around the globe can provide important information regarding malicious websites. Therefore, cyber threat intelligence-based (CTI) features extracted from Google searches and Whois websites are used to improve detection performance. The study also proposes a two-stage ensemble learning model that combines the random forest (RF) algorithm for pre-classification with a multilayer perceptron (MLP) for final decision making. The trained MLP classifier replaces the majority voting scheme of the three trained random forest classifiers for decision making. The probabilistic outputs of the weak classifiers of the random forests are aggregated and used as input for the MLP classifier for the final classification.
Results show that the extracted CTI-based features with the two-stage classification outperform other studies' detection models. The proposed CTI-based detection model achieved a 7.8% accuracy improvement and 6.7% reduction in false-positive rates compared with the traditional URL-based model.
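The two-stage structure, random forests for pre-classification feeding their probabilistic outputs into an MLP for the final decision, can be sketched with stub callables standing in for the trained models; nothing below reproduces the paper's actual classifiers or features.

```python
def two_stage_predict(forests, mlp, url_features):
    """Two-stage ensemble sketch: stage 1 collects a malicious-class
    probability from each pre-trained random forest, and stage 2 lets
    the MLP map the stacked probabilities to the final label."""
    stacked = [forest(url_features) for forest in forests]  # stage 1: pre-classification
    return mlp(stacked)                                     # stage 2: final decision

# Stub models for illustration only (real ones would be trained classifiers):
forests = [lambda f: 0.9, lambda f: 0.8, lambda f: 0.2]
mlp = lambda probs: int(sum(probs) / len(probs) > 0.5)
```

The point of replacing majority voting with a learned stage-2 model is that the MLP can weight the forests' confidence values instead of counting votes, which the abstract credits for the reduced false-positive rate.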


Subjects
Machine Learning, Neural Networks (Computer), Algorithms, Computer Security, Intelligence
7.
Sensors (Basel) ; 22(7)2022 Apr 06.
Article in English | MEDLINE | ID: mdl-35408423

ABSTRACT

A vehicular ad hoc network (VANET) is an emerging technology that improves road safety, traffic efficiency, and passenger comfort. VANET applications rely on co-operativeness among vehicles, which periodically share context information such as position, speed, and acceleration at a high rate due to high vehicle mobility. However, rogue nodes, which exploit this co-operativeness and share false messages, can disrupt the fundamental operations of any potential application and cause the loss of people's lives and property. Unfortunately, most current solutions cannot effectively detect rogue nodes due to the continuous context change and their failure to consider dynamic data uncertainty during identification. Although a few context-aware solutions have been proposed for VANETs, most of them are data-centric: a vehicle is considered malicious if it shares false or inaccurate messages. Such a rule is fuzzy and not consistently accurate due to the dynamic uncertainty of the vehicular context, which leads to a poor detection rate. To this end, this study proposes a fuzzy-based context-aware detection model to improve overall detection performance. A fuzzy inference system is constructed to evaluate vehicles based on the information they generate, and its output is used to build a dynamic context reference. Vehicles are classified as either honest or rogue nodes based on the deviation of their evaluation scores, calculated using the proposed fuzzy inference system, from the context reference. Extensive experiments were carried out to evaluate the proposed model. Results show that it outperforms the state-of-the-art models, achieving a 7.88% improvement in overall performance and a 16.46% improvement in detection rate compared to the state-of-the-art model.
The proposed model can be used to evict the rogue nodes, and thus improve the safety and traffic efficiency of crewed or uncrewed vehicles designed for different environments, land, naval, or air.
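As a rough illustration of the detection rule the abstract describes, a fuzzy membership function scores a vehicle's reported context, and the deviation of that score from the dynamic context reference decides honest versus rogue. The membership shape and tolerance below are illustrative assumptions, not values from the paper.

```python
def tri_membership(x, a, b, c):
    """Triangular fuzzy membership function: rises from a to a peak
    at b, then falls back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def classify_vehicle(score, reference, tolerance=0.15):
    """Label a vehicle rogue when its fuzzy evaluation score deviates
    from the dynamic context reference by more than the tolerance."""
    return "rogue" if abs(score - reference) > tolerance else "honest"
```

Because the reference is rebuilt from the fleet's current scores, the same deviation rule keeps working as the vehicular context changes, which is the advantage over fixed data-centric thresholds.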

8.
Sensors (Basel) ; 21(23)2021 Nov 30.
Article in English | MEDLINE | ID: mdl-34884022

ABSTRACT

Wireless Sensor Networks (WSNs) have been the focus of significant research and development attention due to their applications in collecting data from various fields such as smart cities, power grids, transportation systems, medical sectors, the military, and rural areas. Accurate and reliable measurements for insightful data analysis and decision-making are the ultimate goals of sensor networks in critical domains. However, the raw data collected by WSNs are usually unreliable and inaccurate due to the imperfect nature of WSNs. Identifying misbehaviors or anomalies in the network is important for the reliable and secure functioning of the network. However, due to resource constraints, a lightweight detection scheme is a major design challenge in sensor networks. This paper aims to design and develop a lightweight anomaly detection scheme that reduces computational complexity and communication overhead and improves memory utilization while maintaining high accuracy. To achieve this aim, one-class learning and dimension reduction concepts were used in the design. The One-Class Support Vector Machine (OCSVM) with hyper-ellipsoid variance was used for anomaly detection due to its advantage in classifying unlabeled and multivariate data. Various OCSVM formulations were investigated, and the Centred-Ellipsoid kernel was adopted in this study as the most effective among the studied formulations. To decrease computational complexity and improve memory utilization, the dimensions of the data were reduced using the Candid Covariance-Free Incremental Principal Component Analysis (CCIPCA) algorithm. Extensive experiments were conducted to evaluate the proposed lightweight anomaly detection scheme.
Results in terms of detection accuracy, memory utilization, computational complexity, and communication overhead show that the proposed scheme is effective and efficient compared with the few existing schemes evaluated. The proposed anomaly detection scheme achieved an accuracy higher than 98%, with O(nd) memory utilization and no communication overhead.
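The hyper-ellipsoid decision boundary can be illustrated with a variance-scaled distance test on (already dimension-reduced) readings. This is a stand-in for the trained Centred-Ellipsoid OCSVM, whose actual formulation the abstract does not give; the radius and per-dimension variances are illustrative.

```python
def ellipsoid_anomaly(x, mean, var, radius=3.0):
    """Flag a sensor reading as anomalous when its variance-scaled
    squared distance from the centre exceeds the squared ellipsoid
    radius; dimensions with larger variance tolerate larger deviations."""
    d2 = sum((xi - mi) ** 2 / vi for xi, mi, vi in zip(x, mean, var))
    return d2 > radius ** 2
```

The test runs in O(d) per reading with O(d) state (mean and variance vectors), which matches the lightweight, no-communication profile the scheme targets.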


Subjects
Computer Communication Networks, Wireless Technology, Algorithms, Principal Component Analysis, Support Vector Machine
9.
Comput Intell Neurosci ; 2021: 2977954, 2021.
Article in English | MEDLINE | ID: mdl-34413885

ABSTRACT

Wireless mesh networks (WMNs) have emerged as scalable, reliable, and agile wireless networks that support many innovative technologies such as the Internet of Things (IoT), Wireless Sensor Networks (WSNs), and the Internet of Vehicles (IoV). Due to the limited number of orthogonal channels, interference between channels adversely affects the fair distribution of bandwidth among mesh clients, causing node starvation in the form of insufficient bandwidth allocation, which impedes the adoption of WMNs as an efficient access technology. Therefore, fair channel assignment is crucial for mesh clients to utilize the available resources. However, the node starvation problem caused by unfair channel distribution has been largely overlooked by existing research on channel assignment. Instead, existing channel assignment algorithms distribute the interference reduction equally across the links to achieve fairness, which neither guarantees a fair distribution of the network bandwidth nor eliminates node starvation. In addition, metaheuristic-based solutions such as the genetic algorithm, which is commonly used for WMNs, rely on randomness when creating the initial population and selecting the new generation, usually leading the search to local minima. To this end, this study proposes a Fairness-Oriented Semichaotic Genetic Algorithm-Based Channel Assignment Technique (FA-SCGA-CAA) to solve the node starvation problem in wireless mesh networks. FA-SCGA-CAA maximizes link fairness while minimizing link interference using a genetic algorithm (GA) with a novel nonlinear fairness-oriented fitness function. A primary chromosome with powerful genes is created using a multi-criterion link-ranking channel assignment algorithm. This chromosome is used with the proposed semichaotic technique to create a strong population that directs the search towards the global minimum effectively and efficiently.
The proposed semichaotic technique was also used during mutation and parent selection of the new genes. Extensive experiments were conducted to evaluate the proposed algorithm. A comparison with related work shows that the proposed FA-SCGA-CAA reduced potential node starvation by 22% and improved network capacity utilization by 23%. It can be concluded that the proposed FA-SCGA-CAA reliably maintains high node-level fairness while maximizing the utilization of network resources, which is the ultimate goal of many wireless networks.
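The "semichaotic" idea of seeding the GA population from a deterministic chaotic sequence rather than uniform randomness can be sketched with a logistic map; the map parameters, function name, and the mapping of map values to channel indices are illustrative assumptions, not the paper's exact procedure.

```python
def semichaotic_population(size, genes, channels=3, r=3.99, x=0.7):
    """Seed a GA population from a logistic-map chaotic sequence
    instead of uniform randomness, so successive chromosomes cover
    the search space deterministically and reproducibly."""
    population = []
    for _ in range(size):
        chromosome = []
        for _ in range(genes):
            x = r * x * (1 - x)                   # logistic map step, stays in (0, 1)
            chromosome.append(int(x * channels))  # map the value to a channel index
        population.append(chromosome)
    return population
```

Unlike uniform random seeding, the chaotic sequence is deterministic yet non-repeating, which is what lets an initialisation like this steer the search away from the local minima the abstract attributes to pure randomness.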


Subjects
Computer Communication Networks, Wireless Technology, Algorithms, Humans
10.
PLoS One ; 13(11): e0207176, 2018.
Article in English | MEDLINE | ID: mdl-30457996

ABSTRACT

The presence of motion artefacts in ECG signals can cause misleading interpretation of cardiovascular status. Recently, reducing motion artefacts in the ECG signal has gained the interest of many researchers. Due to the overlapping nature of motion artefacts with the ECG signal, it is difficult to reduce them without distorting the original signal. However, an adaptive noise canceler has been shown to be effective in reducing motion artefacts if an appropriate noise reference correlated with the noise in the ECG signal is available. Unfortunately, the noise reference is not always correlated with the motion artefact; consequently, filtering with such a noise reference may contaminate the ECG signal. In this paper, a two-stage motion artefact reduction algorithm is proposed, with one method working in each stage. The weighted adaptive noise filtering method (WAF) is proposed for the first stage: the acceleration derivative is used as the motion artefact reference, and the Pearson correlation coefficient between acceleration and the ECG signal is used as a weighting factor. In the second stage, a recursive Hampel filter-based estimation method (RHFBE) is proposed for estimating ECG signal segments, based on the spatial correlation of the ECG segment component obtained from successive ECG signals. A real-world dataset was used to evaluate the effectiveness of the proposed methods compared to the conventional adaptive filter. The results show a promising enhancement in terms of reducing motion artefacts from ECG signals recorded by a cost-effective single-lead ECG sensor during several activities performed by different subjects.
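The second-stage idea, replacing samples that deviate strongly from a local median estimate, can be sketched with a plain (non-recursive) Hampel filter. The paper's RHFBE method is recursive and segment-based, so this is only a simplified illustration with assumed window and threshold parameters.

```python
import statistics

def hampel_filter(signal, half_window=3, n_sigma=3.0):
    """Replace a sample with the local median when it deviates from it
    by more than n_sigma scaled median absolute deviations (MADs),
    a plain non-recursive Hampel step."""
    out = list(signal)
    scale = 1.4826  # MAD-to-standard-deviation factor for Gaussian data
    for i in range(half_window, len(signal) - half_window):
        window = signal[i - half_window:i + half_window + 1]
        med = statistics.median(window)
        mad = scale * statistics.median(abs(v - med) for v in window)
        if mad > 0 and abs(signal[i] - med) > n_sigma * mad:
            out[i] = med  # outlier: substitute the local median
    return out
```

Because the median and MAD are robust statistics, an isolated motion spike is replaced while the surrounding ECG samples pass through unchanged, unlike a mean-based filter that would smear the spike into its neighbours.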


Subjects
Algorithms, Electrocardiography/statistics & numerical data, Computer-Assisted Signal Processing, Acceleration, Artifacts, Factual Databases/statistics & numerical data, Exercise, Humans, Motion (Physics), Signal-To-Noise Ratio