Results 1 - 8 of 8
1.
PLoS One; 19(8): e0308991, 2024.
Article in English | MEDLINE | ID: mdl-39150937

ABSTRACT

Various deep learning techniques, together with blockchain-based approaches, have been explored to unlock the potential of edge data processing and the resulting intelligence. However, existing studies often overlook the resource requirements of blockchain consensus processing in typical Internet of Things (IoT) edge network settings. This paper presents our FLCoin approach. Specifically, we propose a novel committee-based method for consensus processing in which committee members are elected via the federated learning (FL) process. Additionally, we employ a two-layer blockchain architecture for FL processing to facilitate the seamless integration of blockchain and FL techniques. Our analysis reveals that the communication overhead remains stable as the network size increases, ensuring the scalability of our blockchain-based FL system. To assess the performance of the proposed method, experiments were conducted using the MNIST dataset to train a standard five-layer CNN model. Our evaluation demonstrates the efficiency of FLCoin: even as the number of nodes participating in model training increases, the consensus latency remains below 3 s, resulting in a low total training time. Notably, compared with a blockchain-based FL system using PBFT as the consensus protocol, our approach achieves a 90% reduction in communication overhead and a 35% reduction in training time. Our approach provides an efficient and scalable solution for integrating blockchain and FL into IoT edge networks, and the proposed architecture offers a solid foundation for building intelligent IoT services.
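The paper's code is not reproduced here; the snippet below is only a minimal sketch of the committee-election idea as described in the abstract: nodes submit local model updates, each update is scored on a small validation set, the best-scoring nodes form the round's consensus committee, and the committee aggregates the updates FedAvg-style. The toy linear model, the scoring rule, and the committee size are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a linear model whose validation loss stands in for FL model quality.
D, N_NODES, COMMITTEE_SIZE = 16, 10, 3
X_val = rng.normal(size=(64, D))
w_true = rng.normal(size=D)
y_val = X_val @ w_true

def val_loss(w):
    return float(np.mean((X_val @ w - y_val) ** 2))

def local_update(w_global, noise_level):
    # Each node nudges the global model toward the target with node-specific noise,
    # standing in for one round of local training on private data.
    return w_global + 0.3 * (w_true - w_global) + noise_level * rng.normal(size=D)

w_global = np.zeros(D)
node_noise = rng.uniform(0.05, 0.8, size=N_NODES)  # heterogeneous node quality (assumed)

for rnd in range(5):
    updates = [local_update(w_global, q) for q in node_noise]
    scores = [val_loss(u) for u in updates]            # lower is better
    committee = np.argsort(scores)[:COMMITTEE_SIZE]    # "elect" the best-performing nodes
    # The committee aggregates the round; in FLCoin it would also run block consensus.
    w_global = np.mean([updates[i] for i in committee], axis=0)
    print(f"round {rnd}: committee={committee.tolist()}, global loss={val_loss(w_global):.4f}")
```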


Subjects
Blockchain, Computer Neural Networks, Deep Learning, Internet of Things, Algorithms, Humans
2.
PLoS One; 17(11): e0277092, 2022.
Article in English | MEDLINE | ID: mdl-36327278

ABSTRACT

Blockchain is a Byzantine fault tolerant (BFT) system wherein decentralized nodes execute consensus protocols to drive the agreement process on new blocks added to a distributed ledger. Generally, two-round communications among 3f + 1 nodes are required to tolerate up to f faults in BFT-based consensus networks. This communication pattern corresponds to the worst-case scenario for reaching consensus, even under asynchronous network conditions. Nevertheless, it is not uncommon for a network to operate under better conditions, where consensus can be reached at a lower communication cost. Hence, with the addition of a faster optimistic path toward agreement, the idea of dual-mode consensus has been proposed as a promising approach for enhancing the performance of asynchronous BFT protocols. However, existing dual-mode protocols do not fully exploit this opportunity, as the fast path can be followed only in a non-faulty and synchronous network. This article presents a novel dual-mode protocol consisting of fast and backup subprotocols. To create different consensus committees for fast-mode and backup-mode operation, the network contains both active and passive nodes. Consensus can be expedited through a fast-mode operation when a majority of the active nodes can communicate synchronously. Under non-ideal conditions, the backup protocol takes over the agreement process from its fast-mode counterpart without restarting the suspended round. The safety and liveness of the proposed protocol are guaranteed at a lower communication cost, balancing the trade-off between protocol efficiency and availability.
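The following is a rough, assumption-laden sketch of the dual-mode idea rather than the paper's protocol: the driver first tries the fast path, deciding if a majority of the active nodes reply within a short timeout, and otherwise continues the same round on a backup path that needs a classic 2f + 1 quorum over all nodes. The timeout values, committee sizes, and message contents are all illustrative.

```python
import random

random.seed(1)

N_ACTIVE, N_TOTAL, F = 4, 7, 2     # illustrative sizes; n_total >= 3f + 1
FAST_TIMEOUT = 0.05                # seconds a fast-mode vote may take (assumed)

def collect_votes(nodes, latency, timeout):
    """Return the votes that arrive before the timeout (simulated network)."""
    return [n for n in nodes if latency[n] <= timeout]

def run_round(block, latency):
    active = list(range(N_ACTIVE))
    passive = list(range(N_ACTIVE, N_TOTAL))

    # Fast path: a majority of the active nodes must answer synchronously.
    fast_votes = collect_votes(active, latency, FAST_TIMEOUT)
    if len(fast_votes) > N_ACTIVE // 2:
        return ("fast", block, fast_votes)

    # Backup path: BFT quorum of 2f + 1 over all nodes; the same round continues.
    all_votes = collect_votes(active + passive, latency, timeout=1.0)
    if len(all_votes) >= 2 * F + 1:
        return ("backup", block, all_votes)
    return ("no-decision", block, all_votes)

good_net = {n: random.uniform(0.01, 0.04) for n in range(N_TOTAL)}
slow_net = {n: random.uniform(0.10, 0.50) for n in range(N_TOTAL)}
print(run_round("block-42", good_net))   # decided on the fast path
print(run_round("block-43", slow_net))   # falls back to the backup path
```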


Subjects
Blockchain, Consensus
3.
Sensors (Basel); 22(21), 2022 Nov 02.
Article in English | MEDLINE | ID: mdl-36366129

ABSTRACT

Software-defined networking (SDN) has grown tremendously and can be exploited in different network scenarios, from data centers to wide-area 5G networks. It shifts control logic from the devices to a centralized entity (a programmable controller) for efficient traffic monitoring and flow management. A software-based controller enforces rules and policies on the requests sent by forwarding elements; however, it cannot detect anomalous patterns in the network traffic. As a result, the controller may install flow rules for anomalous traffic, reducing overall network performance. These anomalies may indicate threats to the network and degrade its performance and security. Machine learning (ML) approaches can identify such traffic flow patterns and predict impending threats to the system. In this work, we propose an ML-based service to predict traffic anomalies in software-defined networks. We first create a large network-traffic dataset by modeling a programmable data center with a signature-based intrusion-detection system. A feature vector is constructed and pre-processed for each flow request sent by a forwarding element. Each request's feature vector is then fed to a machine learning classifier, which is trained to predict anomalies. Finally, we use the holdout cross-validation technique to evaluate the proposed approach. The evaluation results show that the proposed approach is highly accurate: compared with the baseline approaches (random prediction and zero rule), it improves average accuracy, precision, recall, and F-measure by (54.14%, 65.30%, 81.63%, and 73.70%) and (4.61%, 11.13%, 9.45%, and 10.29%), respectively.
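The paper's dataset and classifier are not reproduced here; the snippet below only illustrates the evaluation setup named in the abstract: per-flow feature vectors, a single holdout train/test split, a trained classifier, and the two baselines (random prediction and zero rule, i.e. always predicting the majority class). The synthetic data and the choice of a random-forest classifier are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Stand-in for the per-flow feature vectors built from the flow requests.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.8, 0.2], random_state=0)

# Holdout cross-validation: one stratified train/test split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "proposed (random forest, assumed)": RandomForestClassifier(random_state=0),
    "baseline: random prediction": DummyClassifier(strategy="uniform", random_state=0),
    "baseline: zero rule": DummyClassifier(strategy="most_frequent"),
}

for name, model in models.items():
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name:36s} acc={accuracy_score(y_te, y_pred):.3f} "
          f"prec={precision_score(y_te, y_pred, zero_division=0):.3f} "
          f"rec={recall_score(y_te, y_pred, zero_division=0):.3f} "
          f"f1={f1_score(y_te, y_pred, zero_division=0):.3f}")
```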


Subjects
Algorithms, Software, Machine Learning, Logic
4.
Sensors (Basel); 21(14), 2021 Jul 20.
Article in English | MEDLINE | ID: mdl-34300681

ABSTRACT

In recent years, there has been an exponential explosion in data generation, collection, and processing in computer networks. With this expansion of data, network attacks have also become a persistent problem in complex networks. Resource utilization, complexity, and false alarm rates are major challenges in current Network Intrusion Detection Systems (NIDS). Data fusion is an emerging technique that merges data from multiple sources to form more certain, precise, informative, and accurate data. Moreover, many earlier intrusion detection models suffer from overfitting and lack optimal detection of intrusions. In this paper, we propose a multi-source data fusion scheme for intrusion detection in networks (MIND), in which data fusion is performed by horizontally merging two datasets. For this purpose, Hive, a Hadoop MapReduce-based tool, is used. In addition, a machine learning ensemble classifier with fewer parameters is applied to the fused dataset. Finally, the proposed model is evaluated with 10-fold cross-validation. The experiments show that the average accuracy, detection rate, false positive rate, true positive rate, and F-measure are 99.80%, 99.80%, 0.29%, 99.85%, and 99.82%, respectively. Moreover, the results indicate that the proposed model is significantly more effective at intrusion detection than other state-of-the-art methods.
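A minimal sketch of the MIND pipeline as described in the abstract: two feature tables are fused horizontally on a shared record key (the paper does this with Hive; plain pandas is used here purely for illustration), an ensemble classifier is trained on the fused features, and the model is evaluated with 10-fold cross-validation. The column names, join key, and choice of ensemble are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000

# Two sources describing the same network records (stand-ins for the real datasets).
flows = pd.DataFrame({"record_id": range(n),
                      "bytes": rng.lognormal(8, 1, n),
                      "duration": rng.exponential(2.0, n)})
alerts = pd.DataFrame({"record_id": range(n),
                       "alert_score": rng.uniform(0, 1, n),
                       "label": rng.integers(0, 2, n)})

# Horizontal (column-wise) fusion on the shared key -- done with a Hive join in the paper.
fused = flows.merge(alerts, on="record_id")

X = fused[["bytes", "duration", "alert_score"]]
y = fused["label"]

# Ensemble classifier on the fused dataset, evaluated with 10-fold cross-validation.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```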


Subjects
Algorithms, Machine Learning
5.
PLoS One; 15(11): e0240424, 2020.
Article in English | MEDLINE | ID: mdl-33151974

ABSTRACT

Cloud computing has evolved big data technologies into a consolidated paradigm with Streaming-Processing-as-a-Service (SPaaS). With a number of enterprises offering cloud-based solutions to end users and other small enterprises, there has been a boom in the volume of data, attracting the interest of both industry and academia in big data analytics, streaming applications, and social networking applications. As companies shift to the cloud-based as-a-service paradigm, competition in the market grows. Good quality of service (QoS) is a must for enterprises striving to survive in a competitive environment. However, achieving reasonable QoS goals that meet service-level agreement (SLA) requirements cost-effectively is challenging due to variation in workload over time. This problem can be solved if the system can predict the workload for the near future. In this paper, we present a novel topology-refining scheme based on a workload prediction mechanism. Predictions are made by a model that combines support vector regression (SVR) with autoregressive and moving-average models and a feedback mechanism. Our streaming system is designed to increase overall performance by making topology refinement robust to the incoming workload on the fly, while still meeting the QoS goals of SLA constraints. The Apache Flink distributed processing engine is used as a testbed. The results show that the prediction scheme works well for both synthetic workloads and real data traces.
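The prediction model itself is not published in the abstract; the sketch below is a hedged, simplified rendering of the hybrid idea: an SVR trained on lagged workload values is combined with a simple moving-average estimate, and each newly observed value is fed back into the training window before the next forecast. The synthetic workload, the weighting of the two predictors, and the window sizes are assumptions.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic workload: periodic load plus noise (stand-in for real traces).
t = np.arange(500)
workload = 100 + 30 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 5, t.size)

LAGS, MA_WINDOW, ALPHA = 10, 5, 0.7   # assumed hyperparameters

def lag_matrix(series, lags):
    """Rows of `lags` past values, each paired with the next observation."""
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    return X, series[lags:]

history = list(workload[:300])
errors = []
for step in range(300, 480):
    X_tr, y_tr = lag_matrix(np.array(history), LAGS)
    svr = SVR(C=10.0, epsilon=0.5).fit(X_tr, y_tr)
    svr_pred = svr.predict(np.array(history[-LAGS:]).reshape(1, -1))[0]
    ma_pred = float(np.mean(history[-MA_WINDOW:]))      # moving-average term
    pred = ALPHA * svr_pred + (1 - ALPHA) * ma_pred     # combined forecast
    actual = workload[step]
    errors.append(abs(pred - actual))
    history.append(actual)                              # feedback: observe, then retrain

print(f"mean absolute error over the test window: {np.mean(errors):.2f}")
```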


Subjects
Big Data, Cloud Computing/standards, Computer Communication Networks/standards, Quality Control, Algorithms, Workload
6.
PLoS One; 15(2): e0228086, 2020.
Article in English | MEDLINE | ID: mdl-32069298

ABSTRACT

The orchestration of applications and their components over heterogeneous clouds is recognized as critical to solving the problem of vendor lock-in in distributed and cloud computing. Recent strides have been made in cloud application orchestration, with the emergence of the TOSCA standard being a definitive one. Although orchestration by itself provides considerable benefit to consumers of cloud computing services, it remains impractical without a compelling reason for cloud computing consumers to use it. If there is no measurable benefit in using orchestration, clients are likely to opt out of using it altogether. In this paper, we present an approach to cloud orchestration that combines an orchestration model with a cost and policy model to allow cost-aware application orchestration across heterogeneous clouds. Our approach takes into consideration the operating cost of the application on each provider, while performing a forward projection of the operating cost over a period of time to ensure that cost constraints are not violated. This allows us to leverage the existing state of the art in orchestration and model-driven approaches and to tie it to the operations of cloud clients in order to improve utility. Through this study, we show that our approach is capable of providing not only scaling but also orchestration of application components distributed across heterogeneous cloud platforms.
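The abstract describes coupling an orchestration model with a cost model and projecting operating cost forward against a constraint. The snippet below is only an illustrative sketch of such a forward projection: the per-hour component prices and the two providers are made-up numbers, and the "orchestration" decision is reduced to picking the cheapest provider whose projected cost stays within the budget.

```python
# Assumed per-hour prices for each application component on two hypothetical providers.
PRICES = {
    "provider_a": {"web": 0.12, "api": 0.20, "db": 0.35},
    "provider_b": {"web": 0.10, "api": 0.25, "db": 0.30},
}

def projected_cost(provider, replicas, hours):
    """Forward-project operating cost of a placement over a time horizon."""
    return sum(PRICES[provider][comp] * count * hours for comp, count in replicas.items())

def choose_placement(replicas, hours, budget):
    """Pick the cheapest provider whose projected cost does not violate the budget."""
    candidates = []
    for provider in PRICES:
        cost = projected_cost(provider, replicas, hours)
        if cost <= budget:
            candidates.append((cost, provider))
    if not candidates:
        raise ValueError("no placement satisfies the cost constraint")
    return min(candidates)

replicas = {"web": 4, "api": 2, "db": 1}      # desired topology for the next period
cost, provider = choose_placement(replicas, hours=24 * 30, budget=1500.0)
print(f"deploy on {provider}: projected 30-day cost ${cost:.2f}")
```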


Subjects
Cloud Computing, Theoretical Models, Time Factors
7.
PLoS One; 11(8): e0160456, 2016.
Article in English | MEDLINE | ID: mdl-27501046

ABSTRACT

Recently, cloud computing has drawn significant attention from both industry and academia, bringing unprecedented changes to computing and information technology. The Infrastructure-as-a-Service (IaaS) model offers new capabilities, such as the elastic provisioning and relinquishing of computing resources in response to workload fluctuations. However, because the demand for resources changes dynamically over time, provisioning resources in a way that efficiently utilizes a given budget while maintaining sufficient performance remains a key challenge. This paper addresses the problem of task scheduling and resource provisioning for a set of tasks running on IaaS clouds; it presents novel provisioning and scheduling algorithms capable of executing tasks within a given budget while minimizing the slowdown caused by the budget constraint. Our simulation study demonstrates that the proposed algorithms reduce the overall task slowdown rate by up to 70%.
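The paper's provisioning and scheduling algorithms are not reproduced here; the sketch below only illustrates the problem setting: given a budget and a per-hour VM price, provision the largest VM pool whose rented hours still fit the budget, schedule tasks longest-first onto the earliest-free VM, and report the average slowdown (completion time divided by the task's own runtime). Prices, runtimes, and the greedy rules are assumptions.

```python
import heapq
import random

random.seed(0)

VM_PRICE = 0.10                                            # assumed $ per VM-hour
BUDGET = 5.0                                               # assumed budget for the run
runtimes = [random.uniform(0.2, 2.0) for _ in range(40)]   # task runtimes in hours

def schedule(n_vms, tasks):
    """Longest-task-first onto the earliest-free VM; return (makespan, avg slowdown)."""
    heap = [(0.0, vm) for vm in range(n_vms)]
    heapq.heapify(heap)
    slowdowns, makespan = [], 0.0
    for rt in sorted(tasks, reverse=True):
        free_at, vm = heapq.heappop(heap)
        completion = free_at + rt
        slowdowns.append(completion / rt)   # completion time vs. running the task alone
        makespan = max(makespan, completion)
        heapq.heappush(heap, (completion, vm))
    return makespan, sum(slowdowns) / len(slowdowns)

# Provisioning: the most VMs whose rented hours (n_vms * makespan) still fit the budget;
# a larger pool lowers slowdown, so search from the largest candidate downwards.
for n_vms in range(len(runtimes), 0, -1):
    makespan, avg_slowdown = schedule(n_vms, runtimes)
    cost = n_vms * makespan * VM_PRICE
    if cost <= BUDGET:
        print(f"provision {n_vms} VMs: cost ${cost:.2f}, avg slowdown {avg_slowdown:.2f}")
        break
else:
    print("no VM pool fits the budget")
```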


Subjects
Cloud Computing/economics, Algorithms, Theoretical Models, Workload
8.
ScientificWorldJournal; 2014: 134391, 2014.
Article in English | MEDLINE | ID: mdl-24977171

ABSTRACT

With the advent of mobile devices with powerful networking and computing capabilities, user demand for live video streaming services such as IPTV on mobile devices has been increasing rapidly. However, it is challenging to overcome the degradation of service quality caused by data loss during handover. Although many handover schemes have been proposed at protocol layers below the application layer, they inherently suffer from data loss while the network is disconnected during the handover. We therefore propose an efficient application-layer handover scheme to support seamless mobility for P2P live streaming. Simulation experiments show that a P2P live streaming system with the proposed handover scheme improves playback continuity significantly compared with one without it.
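No implementation accompanies the abstract; the toy simulation below only illustrates why an application-layer scheme can preserve playback continuity: playback keeps running as long as the media buffered from peers before the handover can bridge the disconnection gap, whereas a stream with no buffered data stalls for the full gap. The buffer size, gap length, and playback rate are made-up numbers, not the paper's parameters.

```python
def stalled_seconds(buffer_s, handover_gap_s, playback_rate=1.0):
    """Seconds of playback stall while the network is disconnected during handover."""
    drained = handover_gap_s * playback_rate      # media consumed from the buffer in the gap
    return max(0.0, drained - buffer_s)

HANDOVER_GAP = 3.0   # assumed seconds of disconnection during the handover

# Lower-layer handover: the stream has no application-level buffer to draw on.
print("no prebuffering:  ", stalled_seconds(buffer_s=0.0, handover_gap_s=HANDOVER_GAP), "s stalled")

# Application-layer scheme: peers prefetch ~5 s of chunks before the handover starts.
print("with 5 s prebuffer:", stalled_seconds(buffer_s=5.0, handover_gap_s=HANDOVER_GAP), "s stalled")
```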


Subjects
Algorithms, Information Storage and Retrieval/methods, Computer-Assisted Signal Processing, Television, Video Recording/methods, Webcasts as Topic, Wireless Technology, Computer Systems