Results 1 - 20 of 124
1.
Sensors (Basel) ; 24(18)2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39338780

ABSTRACT

To address the class imbalance issue in network intrusion detection, which degrades the performance of intrusion detection models, this paper proposes a novel generative model called VAE-WACGAN to generate minority class samples and balance the dataset. This model extends the Variational Autoencoder Generative Adversarial Network (VAEGAN) by integrating key features from the Auxiliary Classifier Generative Adversarial Network (ACGAN) and the Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP). These enhancements significantly improve both the quality of generated samples and the stability of the training process. By utilizing the VAE-WACGAN model to oversample anomalous data, more realistic synthetic anomalies that closely mirror the actual network traffic distribution can be generated. This approach effectively balances the network traffic dataset and enhances the overall performance of the intrusion detection model. Experimental validation was conducted using two widely utilized intrusion detection datasets, UNSW-NB15 and CIC-IDS2017. The results demonstrate that the VAE-WACGAN method effectively enhances the performance metrics of the intrusion detection model. Furthermore, the VAE-WACGAN-based intrusion detection approach surpasses several other advanced methods, underscoring its effectiveness in tackling network security challenges.
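Training VAE-WACGAN itself requires a deep-learning stack, but the oversampling idea the abstract describes can be illustrated compactly. The sketch below is not the paper's model; it is a minimal interpolation-based stand-in (SMOTE-style) showing how synthetic minority-class samples rebalance a dataset, with all sample values hypothetical:

```python
import random

def oversample_minority(minority, n_new, seed=0):
    """Create synthetic minority-class samples by interpolating between
    random pairs of real samples (a SMOTE-like stand-in for the paper's
    generative model)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        a, b = rng.sample(minority, 2)   # pick two real minority samples
        t = rng.random()                 # interpolation factor in [0, 1)
        synthetic.append(tuple(ai + t * (bi - ai) for ai, bi in zip(a, b)))
    return synthetic

# Toy 2-feature "attack" samples; generate 5 synthetic ones to rebalance.
minority = [(0.1, 0.2), (0.3, 0.1), (0.2, 0.4)]
new_samples = oversample_minority(minority, 5)
```

Each synthetic point lies on a segment between two real minority samples, so it stays inside the observed feature range rather than being drawn from a learned distribution as in the paper.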

2.
Entropy (Basel) ; 26(8)2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39202118

ABSTRACT

With the popularity of the Internet and the increase in the level of information technology, cyber attacks have become an increasingly serious problem. They pose a great threat to the security of individuals, enterprises, and the state. This has made network intrusion detection technology critically important. In this paper, a malicious traffic detection model is constructed based on a decision tree classifier of entropy and a proximal policy optimisation (PPO) algorithm from deep reinforcement learning. Firstly, the decision tree idea in machine learning is used to make a preliminary classification judgement on the dataset based on the information entropy. The importance score of each feature in the classification work is calculated and the features with lower contributions are removed. Then, the data are handed over to the PPO algorithm model for detection. An entropy regularisation term is introduced in the PPO algorithm update process. Finally, the deep reinforcement learning algorithm is used to continuously train and update the parameters during the detection process, ultimately yielding a detection model with higher accuracy. Experiments show that the binary classification accuracy of the malicious traffic detection model based on the deep reinforcement learning PPO algorithm can reach 99.17% on the CIC-IDS2017 dataset used in this paper.
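The entropy-based feature screening described above can be sketched in a few lines: score each feature by its information gain (the drop in label entropy after splitting on it) and discard low scorers. The toy feature values below are hypothetical, not from the paper's dataset:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Reduction in label entropy after splitting on a discrete feature."""
    n = len(labels)
    split = {}
    for v, y in zip(feature_values, labels):
        split.setdefault(v, []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in split.values())
    return entropy(labels) - remainder

# A feature that separates the classes scores higher than an uninformative one.
labels     = ["benign", "benign", "attack", "attack"]
good_feat  = ["low", "low", "high", "high"]   # perfectly predictive
noisy_feat = ["low", "high", "low", "high"]   # carries no class signal
```

Features whose gain falls below a chosen cutoff would be removed before the remaining columns are passed to the PPO detector.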

3.
Sci Rep ; 14(1): 18967, 2024 Aug 16.
Article in English | MEDLINE | ID: mdl-39152172

ABSTRACT

Recent sensor, communication, and computing technological advancements facilitate smart grid use. The heavy reliance on developed data and communication technology increases the exposure of smart grids to cyberattacks. Existing mitigation in the electricity grid focuses on protecting primary or redundant measurements. These approaches make certain assumptions regarding false data injection (FDI) attacks, which are inadequate and too restrictive to cope with cyberattacks. The reliance on communication technology has emphasized the exposure of power systems to FDI attacks that can bypass the current bad data detection (BDD) mechanism. Current research on unobservable FDI attacks (FDIAs) reveals a severe threat to secure system operation because these attacks can evade the BDD method. Thus, a data-driven, learning-based approach helps detect unobservable FDIAs in distribution systems to mitigate these risks. This study presents a new Hybrid Metaheuristics-based Dimensionality Reduction with Deep Learning for FDIA (HMDR-DLFDIA) Detection technique for Enhanced Network Security. The primary objective of the HMDR-DLFDIA technique is to recognize and classify FDIA attacks in distribution systems. In the HMDR-DLFDIA technique, the min-max scaler is first used for the data normalization process. Besides, a hybrid Harris Hawks optimizer with a sine cosine algorithm (hybrid HHO-SCA) is applied for feature selection. For FDIA detection, the HMDR-DLFDIA technique utilizes the stacked autoencoder (SAE) method. To improve the detection outcomes of the SAE model, the gazelle optimization algorithm (GOA) is exploited. A complete set of experiments was organized to highlight the supremacy of the HMDR-DLFDIA method. The comprehensive result analysis showed that the HMDR-DLFDIA technique performed better than existing DL models.
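As a concrete reference point for the first step of the pipeline above, min-max normalization rescales each feature column onto [0, 1]. A minimal version, with hypothetical measurement values:

```python
def min_max_scale(column):
    """Min-max scaler: map a numeric column onto [0, 1].
    Constant columns are mapped to all zeros to avoid division by zero."""
    lo, hi = min(column), max(column)
    if hi == lo:
        return [0.0] * len(column)
    return [(x - lo) / (hi - lo) for x in column]

scaled = min_max_scale([2.0, 4.0, 6.0])
```

Scaling every column to a common range keeps features with large raw magnitudes from dominating the downstream feature-selection and autoencoder stages.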

4.
Heliyon ; 10(9): e29582, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38699015

ABSTRACT

The advent of the Internet of Things (IoT) has accelerated the pace of economic development across all sectors. However, it has also brought significant challenges to traditional human resource management, revealing an increasing number of problems and making it unable to meet the needs of contemporary enterprise management. The IoT has brought numerous conveniences to human society, but it has also led to security issues in communication networks. To ensure the security of these networks, it is necessary to integrate data-driven technologies to address this issue. In response to the current state of human resource management, this paper proposes the application of IoT technology in enterprise human resource management and combines it with radial basis function neural networks to construct a model for predicting enterprise human resource needs. The model was also experimentally analyzed. The results show that under this algorithm, the average prediction accuracy for the number of employees over five years is 90.2%, and the average prediction accuracy for sales revenue is 93.9%. These data indicate that the prediction accuracy of the model under this study's algorithm has significantly improved. This paper also conducted evaluation experiments on a wireless communication network security risk prediction model. The average prediction accuracy of four tests is 91.21%, indicating that the model has high prediction accuracy. By introducing data-driven technology and IoT applications, this study provides new solutions for human resource management and communication network security, promoting technological innovation in the fields of traditional human resource management and information security management. The research not only improves the accuracy of the prediction models but also provides strong support for decision-making and risk management in related fields, demonstrating the great potential of big data and artificial intelligence technology in the future of enterprise management and security.
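The abstract does not give the network's internals; as a reference, a radial basis function (RBF) network computes a weighted sum of Gaussian bumps centered on prototype inputs. A minimal forward pass, with hypothetical centers, widths, and weights:

```python
import math

def rbf_forward(x, centers, widths, weights, bias=0.0):
    """RBF network forward pass:
    y = bias + sum_i w_i * exp(-||x - c_i||^2 / (2 * s_i^2))"""
    y = bias
    for c, s, w in zip(centers, widths, weights):
        dist2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        y += w * math.exp(-dist2 / (2 * s * s))
    return y

# One Gaussian unit centered at (1, 0): the response peaks at its center
# and decays smoothly as the input moves away.
y_center = rbf_forward((1.0, 0.0), [(1.0, 0.0)], [1.0], [2.0])
y_far    = rbf_forward((9.0, 0.0), [(1.0, 0.0)], [1.0], [2.0])
```

In a prediction model like the one described, the centers are typically fit to historical data (e.g., by clustering) and the output weights are trained against the target quantity, such as headcount or sales revenue.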

5.
Sensors (Basel) ; 24(9)2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38732852

ABSTRACT

Our increasingly connected world continues to face an ever-growing number of network-based attacks. An Intrusion Detection System (IDS) is an essential security technology used for detecting these attacks. Although numerous Machine Learning-based IDSs have been proposed for the detection of malicious network traffic, the majority have difficulty properly detecting and classifying the more uncommon attack types. In this paper, we implement a novel hybrid technique using synthetic data produced by a Generative Adversarial Network (GAN) to use as input for training a Deep Reinforcement Learning (DRL) model. Our GAN model is trained on the NSL-KDD dataset, a publicly available collection of labeled network traffic data specifically designed to support the evaluation and benchmarking of IDSs. Ultimately, our findings demonstrate that training the DRL model on synthetic datasets generated by specific GAN models can result in better performance in correctly classifying minority classes over training on the true imbalanced dataset.

6.
Entropy (Basel) ; 26(4)2024 Apr 04.
Article in English | MEDLINE | ID: mdl-38667869

ABSTRACT

Network security situational awareness (NSSA) aims to capture, understand, and display security elements in large-scale network environments in order to predict security trends in the relevant network environment. With the internet's increasingly large scale, increasingly complex structure, and gradual diversification of components, the traditional single-layer network topology model can no longer meet the needs of network security analysis. Therefore, we conduct research based on a multi-layer network model for network security situational awareness, which is characterized by the three-layer network structure of a physical device network, a business application network, and a user role network. Its network characteristics require new assessment methods, so we propose a multi-layer network link importance assessment metric: the multi-layer-dependent link entropy (MDLE). On the one hand, the MDLE comprehensively evaluates the connectivity importance of links by fitting the link-local betweenness centrality and mapping entropy. On the other hand, it relies on the link-dependent mechanism to better aggregate the link importance contributions in each network layer. The experimental results show that the MDLE has better ordering monotonicity during critical link discovery and a higher destruction efficacy in destruction simulations compared to classical link importance metrics, thus better adapting to the critical link discovery requirements of a multi-layer network topology.

7.
Sci Rep ; 14(1): 8629, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38622228

ABSTRACT

One of the biggest problems with Internet of Things (IoT) applications in the real world is ensuring data integrity. This problem becomes increasingly significant as the IoT expands quickly across a variety of industries. This study presents a brand-new data integrity methodology for Internet of Things applications. The suggested protocol is divided into two stages: "sequence sharing" and "data exchange". During the first phase, each pair of nodes uses a new chaotic model to securely exchange their identity information and generate a common sequence. The objectives of this phase include user authentication and the timing calculations needed for the second, packet validation phase of the recommended method. The recommended approach was tested in numerous settings, and various analyses were taken into account to guarantee its effectiveness. The results were also compared with the conventional data integrity control protocol of the IoT. According to the results, the proposed method is an efficient and cost-effective integrity-ensuring mechanism that eliminates the need for third-party auditors and reduces energy consumption and packet overhead. The results also show that the suggested approach is safe against a variety of threats and may be used as a successful integrity control mechanism in practical applications.
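The abstract does not specify its chaotic model; a common minimal choice for illustration is the logistic map, whose extreme sensitivity to the seed lets two nodes that share a secret seed derive an identical sequence while nearby seeds diverge quickly. A sketch under that assumption (not the paper's actual model):

```python
def chaotic_sequence(seed, r=3.99, n=16):
    """Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k).
    Identical seeds give identical sequences (usable as a shared
    common sequence); slightly different seeds diverge within a few steps."""
    x, seq = seed, []
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(x)
    return seq

shared_a = chaotic_sequence(0.3)         # node A's copy
shared_b = chaotic_sequence(0.3)         # node B, same seed -> same sequence
intruder = chaotic_sequence(0.3000001)   # nearby seed -> rapid divergence
```

The divergence property is what makes guessing "close" to the seed useless; a production protocol would, of course, use a cryptographically vetted construction rather than a bare logistic map.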

8.
Sci Rep ; 14(1): 5111, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-38429324

ABSTRACT

Low-rate distributed denial of service attacks, also known as LDDoS attacks, pose notorious security risks in cloud computing networks. They overload cloud servers and degrade network service quality through a stealthy strategy. Furthermore, this kind of small-ratio, pulse-like abnormal traffic leads to a serious data scale problem. As a result, the existing models for detecting minority and adversarial LDDoS attacks are insufficient in both detection accuracy and time consumption. This paper proposes a novel multi-scale Convolutional Neural Network (CNN) and bidirectional Long Short-Term Memory (bi-LSTM) arbitration dense network model (called MSCBL-ADN) for learning and detecting LDDoS attack behaviors under the conditions of a limited dataset and time consumption. The MSCBL-ADN incorporates a CNN for preliminary spatial feature extraction and an embedding-based bi-LSTM for temporal relationship extraction. It then employs an arbitration network to re-weigh feature importance for higher accuracy. Finally, it uses a 2-block dense connection network to perform the final classification. The experimental results conducted on the popular ISCX-2016-SlowDos dataset demonstrate that the proposed MSCBL-ADN model achieves a significant improvement, with high detection accuracy and superior time performance over state-of-the-art models.

9.
Sensors (Basel) ; 24(6)2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38543979

ABSTRACT

The dynamic and evolving nature of mobile networks necessitates a proactive approach to security, one that goes beyond traditional methods and embraces innovative strategies such as anomaly detection and prediction. This study delves into the realm of mobile network security and reliability enhancement through the lens of anomaly detection and prediction, leveraging K-means clustering on call detail records (CDRs). By analyzing CDRs, which encapsulate comprehensive information about call activities, messaging, and data usage, this research aimed to unveil hidden patterns indicative of anomalous behavior and security breaches within mobile networks. We utilized 14 million CDR records spanning one year. The mobile network studied had deployed the latest network generation, 5G, with network elements from various sources. Through a systematic analysis of historical CDR data, this study offers insights into the underlying trends and anomalies prevalent in mobile network traffic. Furthermore, by harnessing the predictive capabilities of the K-means algorithm, the proposed framework facilitates the anticipation of future anomalies based on learned patterns, thereby enhancing proactive security measures. The findings of this research can contribute to the advancement of mobile network security by providing a deeper understanding of anomalous behavior and effective prediction mechanisms. The utilization of K-means clustering on CDR data offers a scalable and efficient approach to anomaly detection, with 96% accuracy, making it well suited for network reliability and security applications in large-scale mobile networks for 5G networks and beyond.
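K-means on CDR-derived features reduces, in the simplest one-feature case, to Lloyd's algorithm. The toy sketch below clusters a single hypothetical feature (e.g., calls per hour) and shows how a small, far-off group of records surfaces as its own cluster, a candidate for anomaly review:

```python
def kmeans_1d(values, k=2, iters=20):
    """Lloyd's algorithm on scalar features: alternately assign each
    value to its nearest centroid, then move centroids to cluster means."""
    vs = sorted(values)
    # deterministic init: k evenly spaced points taken from the sorted data
    centroids = [vs[i * (len(vs) - 1) // max(k - 1, 1)] for i in range(k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Hypothetical calls-per-hour values: the 50-ish group separates cleanly.
centroids, clusters = kmeans_1d([1.0, 2.0, 1.5, 50.0, 52.0, 51.0])
```

Real CDR features are multi-dimensional and the study's pipeline is far richer, but the assign/update loop is the same mechanism at any dimensionality.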

10.
Sensors (Basel) ; 24(4)2024 Feb 19.
Article in English | MEDLINE | ID: mdl-38400486

ABSTRACT

The Zero Trust security architecture has emerged as an intriguing approach for overcoming the shortcomings of standard network security solutions. This extensive survey provides a meticulous explanation of the underlying principles of Zero Trust, as well as an assessment of the many strategies and possibilities for effective implementation. The survey begins by examining the role of authentication and access control within Zero Trust Architectures, and subsequently investigates innovative authentication and access control solutions across different scenarios. It then explores in more depth traditional techniques for encryption, micro-segmentation, and security automation, emphasizing their importance in achieving a secure Zero Trust environment. Zero Trust Architecture is explained in brief, along with a taxonomy of Zero Trust network features. This review article provides useful insights into the Zero Trust paradigm, its approaches, problems, and future research objectives for scholars, practitioners, and policymakers. This survey contributes to the growth and implementation of secure network architectures in critical infrastructures by developing a deeper knowledge of Zero Trust.

11.
Heliyon ; 10(4): e26317, 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38404775

ABSTRACT

Within both the cyber kill chain and MITRE ATT&CK frameworks, Lateral Movement (LM) is defined as any activity that allows adversaries to progressively move deeper into a system in search of high-value assets. Although this timely subject has been studied in the cybersecurity literature to a significant degree, so far, no work provides a comprehensive survey regarding the identification of LM from mainly an Intrusion Detection System (IDS) viewpoint. To cover this noticeable gap, this work provides a systematic, holistic overview of the topic, not neglecting new communication paradigms, such as the Internet of Things (IoT). The survey part, spanning a time window of eight years and 53 articles, is split into three focus areas, namely, Endpoint Detection and Response (EDR) schemes, machine learning oriented solutions, and graph-based strategies. On top of that, we bring to light interrelations, mapping the progress in this field over time, and offer key observations that may propel LM research forward.

12.
PeerJ Comput Sci ; 10: e1777, 2024.
Article in English | MEDLINE | ID: mdl-38259877

ABSTRACT

In order to understand consumer perception, reduce risks in online shopping, and maintain online security, this study employs data envelopment analysis (DEA) to confirm the relationship between evaluation and stimuli. It establishes a model of stimuli-organism response and uses regression analysis to explore the relationships among negative online shopping evaluations, consumer perception of risk, and consumer behavior. This study employs attribution theory to analyze the impact of evaluations on consumer behavior and assesses the role of perceived risk as a mediator. The independent variable is negative comments, the dependent variable is consumer behavior, and logistic regression is used to empirically analyze the factors influencing online shopping security. The results indicate a positive correlation between the number of negative comments and consumers' delayed purchase behavior, with a correlation coefficient of 41%. The intensity of negative comments significantly impacts consumers' refusal to make a purchase, with a correlation coefficient of 38%. The length of negative comments substantially influences consumers' opposition to purchasing, also with a correlation coefficient of 38%. There is a close relationship between perceived risk and consumers' delayed shopping behavior and the number of negative comments, with 41% and 4% correlation coefficients, respectively. Perceived risk has a relatively smaller impact on consumers' opposition to purchase behavior, with a correlation coefficient of 27%. The length, intensity, and number of negative comments are correlated with consumers' opposition, refusal, and delayed consumption, negatively affecting consumer intent. Additionally, negative comments are related to perceived risk and consumer behavior. Perceived risk causally influences consumer behavior, while the convenience of shopping has a relatively minor impact on online shopping security. Factors like delivery speed, buyer reviews, brand, price, and consumer perception are significantly related to online shopping security. Consumer perception has the most significant impact on online shopping security, balancing secure and fast consumption under the guarantee of user experience. Strengthening consumer perception enhances consumers' ability to process risk information, helping them better identify risks and avoid using hazardous network software, tools, or technologies, thereby reducing potential online security risks.

13.
Entropy (Basel) ; 25(12)2023 Nov 28.
Article in English | MEDLINE | ID: mdl-38136475

ABSTRACT

As cross-border access becomes more frequent, traditional perimeter-based network security models can no longer cope with evolving security requirements. Zero trust is a novel paradigm for cybersecurity based on the core concept of "never trust, always verify". It attempts to protect against security risks related to internal threats by eliminating the demarcations between the internal and external network of traditional network perimeters. Nevertheless, research on the theory and application of zero trust is still in its infancy, and more extensive research is necessary to facilitate a deeper understanding of the paradigm in academia and the industry. In this paper, trust in cybersecurity is discussed, following which the origin, concepts, and principles related to zero trust are elaborated on. The characteristics, strengths, and weaknesses of the existing research are analysed in the context of zero trust achievements and their technical applications in Cloud and IoT environments. Finally, to support the development and application of zero trust in the future, the concept and its current challenges are analysed.

14.
Sensors (Basel) ; 23(20)2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37896456

ABSTRACT

Intrusion detection systems, also known as IDSs, are widely regarded as one of the most essential components of an organization's network security. This is because IDSs serve as the organization's first line of defense against several cyberattacks and are accountable for accurately detecting any possible network intrusions. Several implementations of IDSs accomplish the detection of potential threats through flow-based network traffic analysis. Traditional IDSs frequently struggle to provide accurate real-time intrusion detection while keeping up with the changing threat landscape. Innovative methods for improving IDSs' performance in network traffic analysis are urgently needed to overcome these drawbacks. In this study, we introduce a model called a deep neural decision forest (DNDF), which enhances classification trees with the power of deep networks to learn data representations. We primarily utilized the CICIDS 2017 dataset for network traffic analysis and extended our experiments to evaluate the DNDF model's performance on two additional datasets: CICIDS 2018 and a custom network traffic dataset. Our findings showed that DNDF, a combination of deep neural networks and decision forests, outperformed reference approaches with a remarkable precision of 99.96% on the CICIDS 2017 dataset while creating latent representations in deep layers. This success can be attributed to improved feature representation, model optimization, and resilience to noisy and unbalanced input data, emphasizing DNDF's capabilities in intrusion detection and network security solutions.

15.
Sensors (Basel) ; 23(20)2023 Oct 23.
Article in English | MEDLINE | ID: mdl-37896735

ABSTRACT

Internet security is a major concern these days due to the increasing demand for information technology (IT)-based platforms and cloud computing. With its expansion, the Internet has been facing various types of attacks. Viruses, denial of service (DoS) attacks, distributed DoS (DDoS) attacks, code injection attacks, and spoofing are the most common types of attacks in the modern era. Due to the expansion of IT, the volume and severity of network attacks have been increasing lately. DoS and DDoS are the most frequently reported network traffic attacks. Traditional solutions such as intrusion detection systems and firewalls cannot detect complex DDoS and DoS attacks. With the integration of artificial intelligence-based machine learning and deep learning methods, several novel approaches have been presented for DoS and DDoS detection. In particular, deep learning models have played a crucial role in detecting DDoS attacks due to their exceptional performance. This study adopts deep learning models including the recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU) to detect DDoS attacks on the most recent dataset, CICDDoS2019, and a comparative analysis is conducted with the CICIDS2017 dataset. The comparative analysis contributes to the development of a competent and accurate method for detecting DDoS attacks with reduced execution time and complexity. The experimental results demonstrate that the models perform equally well on the CICDDoS2019 dataset with an accuracy score of 0.99, but there is a difference in execution time, with the GRU requiring less execution time than the RNN and LSTM.

16.
Sensors (Basel) ; 23(19)2023 Sep 23.
Article in English | MEDLINE | ID: mdl-37836874

ABSTRACT

The Internet of Things (IoT) has significantly benefited several businesses, but because of the volume and complexity of IoT systems, there are also new security issues. Intrusion detection systems (IDSs) guarantee both the security posture of IoT devices and their defense against intrusions. IoT systems have recently made wide use of machine learning (ML) techniques for IDSs. The primary deficiencies in existing IoT security frameworks are their inadequate intrusion detection capabilities, significant latency, and prolonged processing time, leading to undesirable delays. To address these issues, this work proposes a novel range-optimized attention convolutional scattered technique (ROAST-IoT) to protect IoT networks from modern threats and intrusions. This system uses the scattered range feature selection (SRFS) model to choose the most crucial and trustworthy properties from the supplied intrusion data. After that, the attention-based convolutional feed-forward network (ACFN) technique is used to recognize the intrusion class. In addition, the loss function is estimated using the modified dingo optimization (MDO) algorithm to ensure the maximum accuracy of the classifier. To evaluate and compare the performance of the proposed ROAST-IoT system, we utilized popular intrusion datasets such as ToN-IoT, IoT-23, UNSW-NB 15, and Edge-IIoT. The analysis of the results shows that the proposed ROAST technique outperformed all existing cutting-edge intrusion detection systems, with an accuracy of 99.15% on the IoT-23 dataset, 99.78% on the ToN-IoT dataset, 99.88% on the UNSW-NB 15 dataset, and 99.45% on the Edge-IIoT dataset. On average, the ROAST-IoT system achieved a high AUC-ROC of 0.998, demonstrating its capacity to distinguish between legitimate data and attack traffic. These results indicate that the ROAST-IoT algorithm effectively and reliably detects intrusion attacks on IoT systems.

17.
Sensors (Basel) ; 23(17)2023 Aug 30.
Article in English | MEDLINE | ID: mdl-37687997

ABSTRACT

Network security is paramount in today's digital landscape, where cyberthreats continue to evolve and pose significant risks. We propose a DPDK-based scanner, built on a study of advanced port scanning techniques, to improve network visibility and security. Traditional port scanning methods suffer from speed, accuracy, and efficiency limitations, hindering effective threat detection and mitigation. In this paper, we develop and implement advanced techniques such as protocol-specific probes and evasive scan techniques to enhance the visibility and security of networks. We also evaluate network scanning performance and scalability using programmable hardware, including smart NICs and DPDK-based frameworks, along with in-network processing, data parallelization, and hardware acceleration. Additionally, we leverage application-level protocol parsing to accelerate network discovery and mapping, analyzing protocol-specific information. In our experimental evaluation, our proposed DPDK-based scanner demonstrated a significant improvement in target scanning speed, achieving a 2× speedup compared to other scanners in a target scanning environment. Furthermore, our scanner achieved a high accuracy rate of 99.5% in identifying open ports. Notably, our solution also exhibited lower CPU and memory utilization, with an approximately 40% reduction compared to alternative scanners. These results highlight the effectiveness and efficiency of our proposed scanning techniques in enhancing network visibility and security. The outcomes of this research contribute to the field by providing insights and innovations to improve network security, identify vulnerabilities, and optimize network performance.
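DPDK and smart-NIC acceleration are beyond the reach of a short example, but the probe underlying any port scanner is a connection attempt per port. A minimal TCP connect() stand-in (not the paper's scanner, and far slower than its kernel-bypass approach):

```python
import socket

def scan_port(host, port, timeout=0.5):
    """TCP connect() probe: True if the port accepts a connection.
    connect_ex returns an error code (0 on success) instead of raising."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Example: probe a small hypothetical port range on localhost.
open_ports = [p for p in range(8000, 8005) if scan_port("127.0.0.1", p)]
```

Full connect() probes are the slowest and most visible scan type; the evasive and protocol-specific techniques the paper studies (e.g., half-open SYN probes) require raw-packet access, which is exactly what DPDK provides.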

18.
Sensors (Basel) ; 23(16)2023 Aug 10.
Article in English | MEDLINE | ID: mdl-37631627

ABSTRACT

Traffic management is a critical task in software-defined IoT networks (SDN-IoTs) to efficiently manage network resources and ensure Quality of Service (QoS) for end-users. However, traditional traffic management approaches based on queuing theory or static policies may not be effective due to the dynamic and unpredictable nature of network traffic. In this paper, we propose a novel approach that leverages Graph Neural Networks (GNNs) and multi-arm bandit algorithms to dynamically optimize traffic management policies based on real-time network traffic patterns. Specifically, our approach uses a GNN model to learn and predict network traffic patterns and a multi-arm bandit algorithm to optimize traffic management policies based on these predictions. We evaluate the proposed approach on three different datasets, including a simulated corporate network (KDD Cup 1999), a collection of network traffic traces (CAIDA), and a simulated network environment with both normal and malicious traffic (NSL-KDD). The results demonstrate that our approach outperforms other state-of-the-art traffic management methods, achieving higher throughput, lower packet loss, and lower delay, while effectively detecting anomalous traffic patterns. The proposed approach offers a promising solution to traffic management in SDNs, enabling efficient resource management and QoS assurance.
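The abstract does not name a specific bandit algorithm; epsilon-greedy is the simplest representative of the family. The sketch below treats two hypothetical traffic-management policies as arms whose sampled QoS rewards differ, and shows the bandit concentrating its pulls on the better one:

```python
import random

def epsilon_greedy(arms, steps=2000, eps=0.1, seed=42):
    """Epsilon-greedy multi-arm bandit: usually pull the arm with the best
    running mean reward, but explore a random arm with probability eps.
    Each arm is a function taking an RNG and returning a sampled reward."""
    rng = random.Random(seed)
    counts = [0] * len(arms)
    means = [0.0] * len(arms)
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(len(arms))            # explore
        else:
            arm = max(range(len(arms)), key=means.__getitem__)  # exploit
        reward = arms[arm](rng)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]  # incremental mean
    return counts, means

# Hypothetical policies: policy 1 yields a higher average QoS reward.
policies = [lambda rng: rng.gauss(0.4, 0.1), lambda rng: rng.gauss(0.7, 0.1)]
counts, means = epsilon_greedy(policies)
```

In the paper's setting, the "reward" would be a QoS signal (throughput, loss, delay) derived from the GNN's traffic predictions rather than a fixed Gaussian.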

19.
Sensors (Basel) ; 23(16)2023 Aug 17.
Article in English | MEDLINE | ID: mdl-37631752

ABSTRACT

As the demand for Internet access increases, malicious traffic on the Internet has also soared. Given that existing malicious-traffic-identification methods suffer from low accuracy, this paper proposes a malicious-traffic-identification method based on contrastive learning. The proposed method is able to overcome the shortcomings of traditional methods that rely on labeled samples and is able to learn data feature representations carrying semantic information from unlabeled data, thus improving model accuracy. In this paper, a new malicious traffic feature extraction model based on a Transformer is proposed. Employing a self-attention mechanism, the proposed feature extraction model can extract the byte features of malicious traffic by performing calculations directly on the traffic, thereby realizing the efficient identification of malicious traffic. In addition, a bidirectional GLSTM is introduced to extract the timing features of malicious traffic. The experimental results show that the proposed method is superior to the latest published methods in terms of accuracy and F1 score.
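The self-attention computation at the heart of a Transformer encoder can be written compactly. This sketch uses identity Q/K/V projections over hypothetical byte-embedding vectors, so it shows only the attention arithmetic (softmax of scaled dot products, then a weighted mix), not the paper's trained model:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(seq):
    """Scaled dot-product self-attention with identity Q/K/V projections:
    each position becomes a softmax(q . k / sqrt(d))-weighted mix of all
    positions in the sequence."""
    d = len(seq[0])
    out = []
    for q in seq:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in seq]
        w = softmax(scores)
        out.append([sum(wj * vj[i] for wj, vj in zip(w, seq))
                    for i in range(d)])
    return out

# Two orthogonal "byte embedding" vectors: each position attends mostly to itself.
attended = self_attention([[1.0, 0.0], [0.0, 1.0]])
```

A real model would add learned Q/K/V weight matrices, multiple heads, and positional information on top of this core operation.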

20.
Sensors (Basel) ; 23(13)2023 Jun 22.
Article in English | MEDLINE | ID: mdl-37447678

ABSTRACT

The advancements in and reliance on digital data necessitate dependence on information technology. The growing amount of digital data and its availability over the Internet have given rise to the problem of information security. With the increase in connectivity among devices and networks, maintaining the information security of an asset has now become essential for an organization. Intrusion detection systems (IDSs) are widely used in networks for protection against different network attacks. Several machine-learning-based techniques have been used among researchers for the implementation of anomaly-based IDSs (AIDSs). In the past, the focus primarily remained on improving the accuracy of the system. Efficiency with respect to time is an important aspect of an IDS, which most of the research has thus far somewhat overlooked. For this purpose, we propose a multi-layered filtration framework (MLFF) for feature reduction using a statistical approach. The proposed framework helps reduce the detection time without affecting the accuracy. We use the CIC-IDS2017 dataset for experiments. The proposed framework contains three filters connected in sequential order. The accuracy, precision, recall, and F1 score are calculated for the selected machine learning models. In addition, the training time and the detection time are also calculated, because these parameters are considered important in measuring the performance of a detection system. Generally, decision tree models, random forest methods, and artificial neural networks show better results in the detection of network attacks with minimum detection time.
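The abstract does not disclose the three filters' internals; a plausible first-stage statistical filter simply drops near-constant feature columns by variance, shrinking the feature set before any model sees it. A sketch under that assumption (filter rule, feature names, and values are all hypothetical):

```python
from statistics import pvariance

def variance_filter(features, threshold=0.01):
    """First-stage statistical filter: keep only feature columns whose
    population variance exceeds the threshold; near-constant columns
    carry little discriminative signal and inflate detection time."""
    return [name for name, column in features.items()
            if pvariance(column) > threshold]

features = {
    "pkt_len":   [60, 1500, 40, 900],   # varies -> kept
    "proto_ver": [4, 4, 4, 4],          # constant -> dropped
}
kept = variance_filter(features)
```

Chaining two or three such filters (e.g., variance, then correlation with the label, then redundancy between features) matches the sequential three-filter layout the framework describes, each stage passing a smaller column set to the next.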


Subjects
Filtration, Information Technology, Internet, Machine Learning, Neural Networks (Computer)