1.
Sci Rep ; 14(1): 15591, 2024 Jul 06.
Article in English | MEDLINE | ID: mdl-38971840

ABSTRACT

Microgrids are small-scale energy systems that supply power to homes, businesses, and industries. They can be considered a trending technology in the energy field because of their ability to deliver reliable and sustainable power. Microgrids have an island mode, in which they disconnect from the main grid and continue providing energy during an outage; they can also relieve the main grid during peak demand. Microgrids connected through a communication network are called networked microgrids, and their enhanced energy management systems make flexible energy resources possible. However, connecting microgrid systems to a communication network introduces challenges, including increased system complexity and noise interference. Integrating network communication into a microgrid makes the system susceptible to noise, which can disrupt the critical control signals that ensure smooth operation. There is therefore a need to predict the noise introduced by the communication network to ensure stable microgrid operation, as well as a need for a simulation model that includes the communication network and can generate noise to reproduce realistic scenarios. This paper proposes a classification model, the Noise Classification Simulation Model (NCSM), that exploits deep learning to predict noise levels by classifying signal-to-noise ratio (SNR) values in the real-time network traffic of a microgrid system. This is accomplished by first injecting Gaussian white noise into data generated by a microgrid model; the noisy and noise-free data are then transmitted over serial communication to simulate a real-world scenario. Finally, a Gated Recurrent Unit (GRU) model is implemented to predict SNR values for the network traffic data. Our findings show that the proposed model produces promising results in predicting noise.
In addition, the classification performance of the proposed model is compared with well-known machine learning models; according to the experimental results, our model achieves noticeable performance, reaching 99.96% classification accuracy.
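As a rough illustration of the noise-injection step described above (not the authors' code), the sketch below adds Gaussian white noise to a clean signal at a target SNR and then measures the resulting SNR; the sinusoidal test signal and the 20 dB target are assumptions for demonstration only.

```python
import numpy as np

def add_white_noise(signal, snr_db, rng):
    """Corrupt `signal` with Gaussian white noise at a target SNR (in dB)."""
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

def measure_snr_db(signal, noisy):
    """Estimate SNR from the clean signal and its noisy version."""
    noise = noisy - signal
    return 10.0 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 10_000)
clean = np.sin(2 * np.pi * 50 * t)   # stand-in for a microgrid control signal
noisy = add_white_noise(clean, snr_db=20.0, rng=rng)
print(round(measure_snr_db(clean, noisy), 1))  # close to the 20 dB target
```

A classifier such as the paper's GRU would then learn to map windows of the noisy traffic to the SNR class measured this way.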

2.
Front Big Data ; 7: 1369895, 2024.
Article in English | MEDLINE | ID: mdl-38784675

ABSTRACT

Introduction: The cryptocurrency market is captivating the attention of both retail and institutional investors. While this highly volatile market offers substantial profit opportunities, it also entails risk due to its sensitivity to speculative news and the erratic behavior of major investors, both of which can provoke unexpected price fluctuations. Methods: In this study, we contend that extreme and sudden price changes and atypical patterns can compromise the performance of the technical signals used as the basis for feature extraction in a machine learning-based trading system, by either augmenting or diminishing the model's generalization capability. To address this issue, this research uses a bagged tree (BT) model to forecast the buy signal for the cryptocurrency market; to apply it, traders must acquire knowledge of the cryptocurrency market and adjust their strategies accordingly. Results and discussion: To make an informed decision, we relied on the oscillators most widely used to generate buy signals in the cryptocurrency market: the Relative Strength Index (RSI), Bollinger Bands (BB), and the Moving Average Convergence/Divergence (MACD) indicator. The research also evaluates how accurately the model can predict the performance of different cryptocurrencies, such as Bitcoin (BTC), Ethereum (ETH), Cardano (ADA), and Binance Coin (BNB), and examines the efficacy of the most popular machine learning models in forecasting outcomes within the cryptocurrency market. Notably, predicting buy signal values using a BT model yields promising results.
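One of the oscillators named above, the RSI, can be sketched in a few lines (a simplified Wilder-style computation, not the paper's implementation; the price series is a toy assumption):

```python
import numpy as np

def rsi(prices, period=14):
    """Wilder-style Relative Strength Index over closing prices (0-100)."""
    deltas = np.diff(prices)
    gains = np.clip(deltas, 0, None)
    losses = np.clip(-deltas, 0, None)
    avg_gain = gains[:period].mean()
    avg_loss = losses[:period].mean()
    # Wilder smoothing over the remaining observations
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)

rising = np.arange(1.0, 31.0)      # 30 monotonically rising closes
falling = rising[::-1]
print(rsi(rising), rsi(falling))   # 100.0 0.0
```

Values like this, together with BB and MACD readings, would form the feature vector fed to the bagged tree model.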

4.
Front Artif Intell ; 7: 1345445, 2024.
Article in English | MEDLINE | ID: mdl-38444962

ABSTRACT

Hate speech detection in Arabic presents a multifaceted challenge due to the broad and diverse linguistic terrain. With its multiple dialects and rich cultural subtleties, Arabic requires particular measures to successfully address hate speech online. To address this issue, academics and developers have used natural language processing (NLP) methods and machine learning algorithms adapted to the complexities of Arabic text. However, many proposed methods have been hampered by the lack of a comprehensive dataset/corpus of Arabic hate speech. In this research, we propose a novel multi-class public Arabic dataset comprising 403,688 annotated tweets categorized as extremely positive, positive, neutral, or negative based on the presence of hate speech. Using this dataset, we additionally characterize the performance of multiple machine learning models for hate speech identification in Arabic Jordanian-dialect tweets. Specifically, the Word2Vec, TF-IDF, and AraBERT text representation models are applied to produce the word vectors that feed the classification models. Seven machine learning classifiers are then evaluated: Support Vector Machine (SVM), Logistic Regression (LR), Naive Bayes (NB), Random Forest (RF), AdaBoost (Ada), XGBoost (XGB), and CatBoost (CatB). The experimental evaluation revealed that, in this challenging and unstructured setting, our gathered and annotated dataset is efficient and yields encouraging results, which will enable academics to delve further into this crucial field of study.
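The TF-IDF representation mentioned above can be illustrated with a minimal pure-Python sketch (a simplified version of what libraries such as scikit-learn implement; the tokenized toy documents are assumptions, not data from the paper):

```python
import math
from collections import Counter

def tfidf(docs):
    """Smoothed TF-IDF weight vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))                       # document frequency
    idf = {t: math.log((1 + n) / (1 + c)) + 1.0   # smoothed inverse doc freq
           for t, c in df.items()}
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (count / len(doc)) * idf[t]
                        for t, count in tf.items()})
    return vectors

docs = [["good", "kind", "words"], ["bad", "hateful", "words"], ["good", "words"]]
vecs = tfidf(docs)
# "words" appears in every document, so it is down-weighted vs. "hateful"
print(vecs[1]["hateful"] > vecs[1]["words"])  # True
```

The resulting sparse vectors are what classifiers like SVM or Logistic Regression would consume.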

5.
J Imaging ; 9(9)2023 Aug 30.
Article in English | MEDLINE | ID: mdl-37754941

ABSTRACT

Recently, deep learning has gained significant attention as a noteworthy branch of artificial intelligence (AI) due to its high accuracy and versatile applications. However, one of the major challenges of AI is its lack of interpretability, commonly referred to as the black-box problem. In this study, we introduce an explainable AI model for medical image classification that enhances the interpretability of the decision-making process. Our approach is based on segmenting the images to provide a better understanding of how the AI model arrives at its results. We evaluated our model on five datasets, including the COVID-19 and Pneumonia Chest X-ray dataset, the Chest X-ray (COVID-19 and Pneumonia) dataset, the COVID-19 Image Dataset (COVID-19, Viral Pneumonia, Normal), and the COVID-19 Radiography Database. We achieved testing and validation accuracy of 90.6% on a relatively small dataset of 6432 images. The proposed model improves accuracy and reduces time complexity, making it more practical for medical diagnosis, and offers a more interpretable and transparent AI model that can enhance the accuracy and efficiency of medical diagnosis.
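The segmentation-based explanation idea can be shown with a deliberately simple sketch (not the authors' model): segment an image by an intensity threshold, then report how much of the total intensity falls inside the segmented region. The 8x8 synthetic image and the threshold are assumptions.

```python
import numpy as np

def segment(image, threshold):
    """Binary segmentation: True where pixel intensity exceeds `threshold`."""
    return image > threshold

def region_contribution(image, mask):
    """Share of total intensity inside the segmented region - a crude
    'explanation' of where a downstream classifier would be looking."""
    return float(image[mask].sum() / image.sum())

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0                 # bright 4x4 'lesion' on a dark background
mask = segment(img, threshold=0.5)
print(mask.sum(), region_contribution(img, mask))  # 16 1.0
```

A real pipeline would replace the threshold with a learned segmenter and attribute the classifier's score to the segmented regions.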

6.
Neural Comput Appl ; : 1-17, 2023 Apr 20.
Article in English | MEDLINE | ID: mdl-37362563

ABSTRACT

A Uniform Resource Locator (URL) is a unique identifier, composed of a protocol and a domain name, used to locate and retrieve a resource on the Internet. Like any Internet service, URLs (also called websites) are vulnerable to compromise by attackers, who develop malicious URLs that can exploit or devastate a user's information and resources. Malicious URLs are usually designed to promote cyber-attacks such as spam, phishing, malware, and defacement. These websites usually require action on the user's side and can reach users via email, text messages, pop-ups, or devious advertisements. Their potential impact can, in some cases, extend to compromising the user's machine or network, especially for URLs arriving by email. Developing systems to detect malicious URLs is therefore of great interest. This paper proposes a high-performance machine learning-based detection system to identify malicious URLs. The proposed system provides two layers of detection: first, we identify a URL as either benign or malware using a binary classifier; second, we classify the URL, based on its features, into five classes: benign, spam, phishing, malware, and defacement. Specifically, we report on four ensemble learning approaches: the ensemble of bagging trees (En_Bag), the ensemble of k-nearest neighbors (En_kNN), the ensemble of boosted decision trees (En_Bos), and the ensemble of subspace discriminators (En_Dsc). The developed approaches have been evaluated on an inclusive and contemporary URL dataset (ISCX-URL2016), which provides a lightweight dataset for detecting and categorizing malicious URLs according to their attack type and lexical analysis. Conventional machine learning evaluation measures are used to assess detection accuracy, precision, recall, F-score, and detection time.
Our experimental assessment indicates that the ensemble of bagging trees (En_Bag) approach provides better performance rates than the other ensemble methods, while the ensemble of k-nearest neighbors (En_kNN) approach provides the highest inference speed. We also contrast our En_Bag model with state-of-the-art solutions and show its superiority in binary classification and multi-classification, with accuracy rates of 99.3% and 97.92%, respectively.
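The lexical analysis mentioned for ISCX-URL2016 can be sketched as simple feature extraction over the URL string (the specific features below are common illustrative choices, not the paper's exact feature set):

```python
from urllib.parse import urlparse

def lexical_features(url):
    """A few lexical features commonly used for malicious-URL detection."""
    parsed = urlparse(url)
    host = parsed.netloc
    return {
        "url_length": len(url),
        "host_length": len(host),
        "num_digits": sum(ch.isdigit() for ch in url),
        "num_special": sum(ch in "-_@?&=%" for ch in url),
        "num_subdomains": max(host.count(".") - 1, 0),
        "uses_https": parsed.scheme == "https",
    }

feats = lexical_features("http://login.bank-secure.example.com/verify?id=123")
print(feats["num_subdomains"], feats["num_digits"], feats["uses_https"])
# 2 3 False
```

Feature dictionaries like this, stacked into a matrix, are what the En_Bag and other ensembles would be trained on.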

7.
Sensors (Basel) ; 23(7)2023 Mar 27.
Article in English | MEDLINE | ID: mdl-37050549

ABSTRACT

The Domain Name System (DNS) protocol essentially translates domain names to IP addresses, enabling browsers to load and utilize Internet resources. Despite its major role, DNS is vulnerable to various security loopholes that attackers have continually abused. Delivering secure DNS traffic has therefore become challenging, since attackers use advanced and fast malicious information-stealing techniques. To overcome DNS vulnerabilities, the DNS over HTTPS (DoH) protocol was introduced; it improves the security of DNS by encrypting DNS traffic and communicating it over a covert network channel. This paper proposes a lightweight, double-stage scheme to identify malicious DoH traffic using a hybrid learning approach. The system comprises two layers. At the first layer, the traffic is examined using random fine trees (RF) and identified as DoH or non-DoH traffic. At the second layer, the DoH traffic is further investigated using Adaboost trees (ADT) and identified as benign or malicious DoH. The proposed system is lightweight because it works with the fewest possible features (only six out of thirty-three, selected using principal component analysis (PCA)) and minimizes the number of samples using a random under-sampling (RUS) approach. The experimental evaluation reported a high-performance system with predictive accuracies of 99.4% and 100% and predictive overheads of 0.83 µs and 2.27 µs for layers one and two, respectively. These results are superior to and surpass existing models, given that our proposed model uses only 18% of the feature set and 17% of the sample set, distributed in balanced classes.
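The two-layer cascade described above can be sketched structurally as follows; the two predicate functions are trivial stand-ins for the paper's RF and ADT models, and the flow records and thresholds are assumptions:

```python
def two_stage_classifier(sample, is_doh, is_malicious):
    """Cascade: layer 1 filters DoH from non-DoH traffic; only DoH samples
    reach layer 2, which separates benign from malicious DoH."""
    if not is_doh(sample):
        return "non-DoH"
    return "malicious-DoH" if is_malicious(sample) else "benign-DoH"

# Stand-in predicates for the trained layer models (assumptions):
is_doh = lambda s: s["port"] == 443 and s["protocol"] == "https"
is_malicious = lambda s: s["bytes_out"] > 10_000

flows = [
    {"port": 53,  "protocol": "dns",   "bytes_out": 200},
    {"port": 443, "protocol": "https", "bytes_out": 500},
    {"port": 443, "protocol": "https", "bytes_out": 50_000},
]
results = [two_stage_classifier(f, is_doh, is_malicious) for f in flows]
print(results)  # ['non-DoH', 'benign-DoH', 'malicious-DoH']
```

The cascade shape is what keeps the scheme lightweight: the cheaper first-layer model discards most traffic before the second layer runs.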

8.
Sensors (Basel) ; 22(18)2022 Sep 08.
Article in English | MEDLINE | ID: mdl-36146163

ABSTRACT

The Internet of Things (IoT) has expanded widely due to its advantages in enhancing business, industrial, and social ecosystems. Nevertheless, IoT infrastructure is susceptible to several cyber-attacks because of the endpoint devices' restricted computation, storage, and communication capacity. In particular, distributed denial-of-service (DDoS) attacks pose a serious threat to IoT security: attackers can easily exploit the flaws of IoT devices and use them as part of botnets to launch DDoS attacks. This paper proposes an Ethereum blockchain model to detect and prevent DDoS attacks against IoT systems. The proposed system can also resolve single points of failure (dependencies on third parties) and address privacy and security in IoT systems. First, we propose implementing a decentralized platform in place of current centralized solutions to prevent DDoS attacks on IoT devices at the application layer by authenticating and verifying these devices. Second, we suggest tracing and recording the IP addresses of malicious devices inside the blockchain to prevent them from connecting and communicating with IoT networks. System performance has been evaluated by performing 100 experiments measuring the time taken by the authentication process. The process involves two messages, completed in 0.012 ms: the first is the request transmitted from the IoT follower device to join the blockchain, and the second is the blockchain's response. The experimental evaluation demonstrated the superiority of our system: it requires fewer I/O operations than other related works and thus runs substantially faster.
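The idea of recording malicious IP addresses in an append-only, hash-chained ledger can be sketched minimally (this is a toy illustration, not Ethereum or the authors' contract; the block structure and example IPs are assumptions):

```python
import hashlib
import json

def make_block(prev_hash, malicious_ips):
    """Append-only record of flagged IP addresses, chained by hash so that
    earlier entries cannot be silently altered."""
    payload = json.dumps({"prev": prev_hash, "ips": sorted(malicious_ips)})
    return {"prev": prev_hash, "ips": sorted(malicious_ips),
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def is_blocked(chain, ip):
    """Deny connection if the IP appears anywhere in the ledger."""
    return any(ip in block["ips"] for block in chain)

genesis = make_block("0" * 64, [])
block1 = make_block(genesis["hash"], ["203.0.113.7"])
chain = [genesis, block1]
print(is_blocked(chain, "203.0.113.7"), is_blocked(chain, "198.51.100.2"))
# True False
```

On a real chain, the smart contract would perform this lookup during device authentication, rejecting join requests from flagged addresses.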


Subject(s)
Blockchain , Internet of Things , Computer Security , Delivery of Health Care , Ecosystem , Technology
9.
Gene Expr Patterns ; 45: 119263, 2022 09.
Article in English | MEDLINE | ID: mdl-35850482

ABSTRACT

Handwritten character recognition has continually been a fascinating field of study in pattern recognition due to its numerous real-life applications, such as reading tools for blind people and readers for handwritten bank cheques. The proper and accurate conversion of handwriting into organized digital files, which computer algorithms can easily recognize and process, is therefore required for various applications and systems. This paper proposes an accurate and precise autonomous structure for handwriting recognition that uses a ShuffleNet convolutional neural network to perform multi-class recognition of offline handwritten characters and digits. The developed system uses transfer learning of the powerful ShuffleNet CNN to train, validate, recognize, and categorize the handwritten character/digit image dataset into 26 classes for English characters and ten classes for digits. The experimental outcomes show that the proposed recognition system achieves extraordinary overall recognition accuracy, peaking at 99.50% and outperforming other character recognition systems reported in the state of the art. A low computational cost has also been observed, with the proposed model recording an average of 2.7 ms for single-sample inference.
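The reported 2.7 ms single-sample latency is the kind of figure obtained by averaging many timed inferences. A generic measurement sketch (the toy model stands in for the trained network; it is not the paper's code):

```python
import time

def mean_inference_ms(model, sample, runs=1000):
    """Average single-sample inference latency in milliseconds."""
    start = time.perf_counter()
    for _ in range(runs):
        model(sample)
    return (time.perf_counter() - start) * 1000.0 / runs

# Stand-in for a trained recognizer: pick the class with the highest score.
def toy_model(scores):
    return max(range(len(scores)), key=scores.__getitem__)

latency = mean_inference_ms(toy_model, [0.1, 0.7, 0.2])
print(toy_model([0.1, 0.7, 0.2]), latency > 0.0)  # 1 True
```

Averaging over many runs smooths out scheduler jitter, which matters when the per-sample cost is in the millisecond range.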


Subject(s)
Neural Networks, Computer , Pattern Recognition, Automated , Algorithms , Handwriting , Humans , Machine Learning , Pattern Recognition, Automated/methods
10.
Front Big Data ; 4: 782902, 2021.
Article in English | MEDLINE | ID: mdl-35098112

ABSTRACT

With the rapid revolution and emergence of smart, self-reliant, and low-power devices, the Internet of Things (IoT) has expanded inconceivably and impacted almost every real-life application. Machines and devices such as cars, unmanned aerial vehicles (UAVs), and medical devices are now fully reliant on computer control and expose their own programmable interfaces. With this increased use of IoT, attack capabilities have increased in response, making it imperative to develop new methods for securing these systems that can detect attacks launched against IoT devices and gateways. These attacks usually aim to access, change, or destroy sensitive information; extort money from users; or interrupt normal business processes. In this research, we present a new, efficient, and generic top-down architecture for intrusion detection and classification in IoT networks using non-traditional machine learning. The proposed architecture can be customized and used for intrusion detection/classification with any IoT cyber-attack dataset, such as the CICIDS dataset, the MQTT dataset, and others. Specifically, the proposed system is composed of three subsystems: a feature engineering (FE) subsystem, a feature learning (FL) subsystem, and a detection and classification (DC) subsystem. All subsystems are thoroughly described and analyzed in this article. The architecture employs deep learning models to detect slightly mutated attacks on IoT networks with high detection/classification accuracy for IoT traffic obtained either from a real-time system or from a pre-collected dataset.
Because this work combines systems engineering (SE) techniques, machine learning technology, and the field of IoT cybersecurity, the collective cooperation of the three fields has yielded a systematically engineered system that can be implemented along high-performance trajectories.
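The three-subsystem (FE → FL → DC) flow can be sketched as a composed pipeline; the bodies below are trivial stand-ins for the real feature engineering, learned representation, and deep classifier, and the flow records and threshold are assumptions:

```python
def feature_engineering(raw):
    """FE subsystem: turn a raw flow record into a numeric feature vector."""
    return [raw["bytes"], raw["packets"], raw["duration"]]

def feature_learning(features):
    """FL subsystem stand-in: scale features to [0, 1] (a deep model would
    learn a representation here instead)."""
    peak = max(features) or 1
    return [f / peak for f in features]

def detect_and_classify(embedding, threshold=0.9):
    """DC subsystem stand-in: flag flows whose scaled byte count dominates."""
    return "attack" if embedding[0] >= threshold else "benign"

def pipeline(raw):
    return detect_and_classify(feature_learning(feature_engineering(raw)))

heavy = pipeline({"bytes": 9_000_000, "packets": 10, "duration": 1})
light = pipeline({"bytes": 1_200, "packets": 10, "duration": 3_600})
print(heavy, light)  # attack benign
```

Keeping the three stages as separate functions is what makes the architecture generic: any dataset's schema only touches the FE stage.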

11.
Sensors (Basel) ; 22(1)2021 Dec 29.
Article in English | MEDLINE | ID: mdl-35009784

ABSTRACT

Network Intrusion Detection Systems (NIDSs) are indispensable defensive tools against various cyberattacks. Lightweight, multipurpose, anomaly-based NIDSs employ several methods to build profiles of normal and malicious behavior. In this paper, we design, implement, and evaluate the performance of machine-learning-based NIDSs in IoT networks. Specifically, we study six supervised learning methods belonging to three different classes: (1) ensemble methods, (2) neural network methods, and (3) kernel methods. To evaluate the developed NIDSs, we use the distilled-Kitsune-2018 and NSL-KDD datasets, both consisting of contemporary real-world IoT network traffic subjected to different network attacks. Standard performance evaluation metrics from the machine-learning literature are used to evaluate identification accuracy, error rates, and inference speed. Our empirical analysis indicates that ensemble methods provide better accuracy and lower error rates than neural network and kernel methods, whereas neural network methods provide the highest inference speed, which proves their suitability for high-bandwidth networks. We also provide a comparison with state-of-the-art solutions and show that our best results exceed any prior art by 1-20%.
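The accuracy and error-rate metrics used to compare the six methods follow directly from confusion-matrix counts; a minimal sketch (the counts below are hypothetical, not the paper's results):

```python
def metrics(tp, tn, fp, fn):
    """Standard detection metrics from confusion-matrix counts."""
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "error_rate": 1 - accuracy,
            "precision": precision, "recall": recall, "f1": f1}

# Hypothetical counts for one detector on a test split:
m = metrics(tp=950, tn=980, fp=20, fn=50)
print(round(m["accuracy"], 3), round(m["f1"], 3))  # 0.965 0.964
```

Computing the same dictionary for each of the six methods makes the ensemble-vs-neural-vs-kernel comparison a simple table lookup.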


Subject(s)
Machine Learning , Neural Networks, Computer , Benchmarking