Results 1 - 4 of 4
1.
Neural Netw; 170: 111-126, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37977088

ABSTRACT

Federated learning (FL) provides autonomy and privacy by design to participating peers, who cooperatively build a machine learning (ML) model while keeping their private data on their devices. However, that same autonomy opens the door for malicious peers to poison the model by conducting either untargeted or targeted poisoning attacks. The label-flipping (LF) attack is a targeted poisoning attack in which the attackers poison their training data by flipping the labels of some examples from one class (the source class) to another (the target class). Unfortunately, this attack is easy to perform and hard to detect, and it degrades the performance of the global model. Existing defenses against LF are limited by assumptions on the distribution of the peers' data and/or do not perform well with high-dimensional models. In this paper, we investigate the behavior of the LF attack in depth. We find that the contradicting objectives of attackers and honest peers on the source class examples are reflected in the parameter gradients corresponding to the neurons of the source and target classes in the output layer, which makes those gradients good discriminative features for attack detection. Accordingly, we propose LFighter, a novel defense against the LF attack that dynamically extracts those gradients from the peers' local updates, clusters the extracted gradients, analyzes the resulting clusters, and filters out potential bad updates before model aggregation. Extensive empirical analysis on three data sets shows the effectiveness of the proposed defense regardless of the data distribution or model dimensionality. Moreover, LFighter outperforms several state-of-the-art defenses, offering lower test error, higher overall accuracy, higher source class accuracy, lower attack success rate, and greater stability of the source class accuracy. Our code and data are available for reproducibility purposes at https://github.com/NajeebJebreel/LFighter.
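
A minimal sketch of the gradient-clustering idea described above, assuming flattened per-peer gradient vectors for the output-layer neurons of the suspected source and target classes and plain federated averaging. The function name and the variance-based cluster-selection heuristic are hypothetical simplifications, not the actual LFighter implementation (which is available at the repository linked above and identifies the relevant classes dynamically).

```python
import numpy as np
from sklearn.cluster import KMeans

def filter_label_flipping_updates(output_grads, updates):
    # output_grads: one flattened vector per peer, holding the output-layer
    # gradients of the suspected source and target classes.
    # updates: the peers' full (flattened) model updates.
    X = np.stack(output_grads)
    labels = KMeans(n_clusters=2, n_init=10).fit(X).labels_
    # Heuristic: treat the lower-variance (denser) cluster as the honest one.
    good = int(np.argmin([X[labels == c].var() for c in (0, 1)]))
    kept = [u for u, lab in zip(updates, labels) if lab == good]
    # Plain federated averaging over the retained updates.
    return np.mean(kept, axis=0)
```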


Subjects
Machine Learning, Poisons, Reproducibility of Results, Neurons, Privacy
2.
Data Min Knowl Discov; 1-26, 2023 Jan 04.
Article in English | MEDLINE | ID: mdl-36619003

ABSTRACT

Reconciling machine learning with individual privacy is one of the main motivations behind federated learning (FL), a decentralized machine learning technique that aggregates partial models trained by clients on their own private data to obtain a global deep learning model. Even though FL provides stronger privacy guarantees to the participating clients than centralized learning, which collects the clients' data in a central server, FL is vulnerable to attacks whereby malicious clients submit bad updates in order to prevent the model from converging or, more subtly, to introduce artificial bias in the classification (poisoning). Poisoning detection techniques compute statistics on the updates to identify malicious clients. A downside of anti-poisoning techniques is that they may discriminate against minority groups whose data are significantly and legitimately different from those of the majority of clients. This would not only be unfair, but would also yield poorer models that fail to capture the knowledge in the training data, especially when data are not independent and identically distributed (non-i.i.d.). In this work, we strive to strike a balance between fighting poisoning and accommodating diversity, in order to learn fairer and less discriminatory federated learning models. In this way, we forestall the exclusion of diverse clients while still ensuring detection of poisoning attacks. Empirical work on three data sets shows that using our approach to tell legitimate from malicious updates yields models that are more accurate than those obtained with state-of-the-art poisoning detection techniques. Additionally, we explore the impact of our proposal on the performance of models trained on non-i.i.d. local data.
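
To illustrate the fairness issue the abstract raises, the toy example below applies a naive distance-to-mean filter to synthetic update vectors: it catches the attacker, but it also drops the two legitimate minority clients whose data merely differ from the majority. This is purely an illustration of the problem, not the detection approach proposed in the paper; all values are made up.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic updates: 8 majority clients, 2 legitimate minority clients whose
# data (and hence updates) differ, and 1 attacker pushing the opposite way.
majority = rng.normal(0.0, 0.1, size=(8, 16))
minority = rng.normal(1.0, 0.1, size=(2, 16))
attacker = rng.normal(-2.0, 0.1, size=(1, 16))
updates = np.vstack([majority, minority, attacker])

# Naive defense: discard the three updates farthest from the mean update.
dist = np.linalg.norm(updates - updates.mean(axis=0), axis=1)
dropped = np.sort(np.argsort(dist)[-3:])
print(dropped)  # [ 8  9 10]: the attacker (10) is dropped, but so are both minority clients (8, 9)
```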

3.
Article in English | MEDLINE | ID: mdl-36260588

ABSTRACT

In federated learning (FL), a set of participants share updates computed on their local data with an aggregator server that combines the updates into a global model. However, reconciling accuracy with privacy and security is a challenge for FL. On the one hand, good updates sent by honest participants may reveal their private local information, whereas poisoned updates sent by malicious participants may compromise the model's availability and/or integrity. On the other hand, enhancing privacy by distorting updates damages accuracy, whereas doing so by aggregating updates damages security because it does not allow the server to filter out individual poisoned updates. To tackle the accuracy-privacy-security conflict, we propose fragmented FL (FFL), in which participants randomly exchange and mix fragments of their updates before sending them to the server. To achieve privacy, we design a lightweight protocol that allows participants to privately exchange and mix encrypted fragments of their updates, so that the server can neither obtain individual updates nor link them to their originators. To achieve security, we design a reputation-based defense tailored to FFL that builds trust in participants and their mixed updates based on the quality of the fragments they exchange and of the mixed updates they send. Since the exchanged fragments' parameters keep their original coordinates and attackers can be neutralized, the server can correctly reconstruct a global model from the received mixed updates without accuracy loss. Experiments on four real data sets show that FFL can prevent semi-honest servers from mounting privacy attacks, can effectively counter poisoning attacks, and can preserve the accuracy of the global model.
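
The claim that mixing fragments does not hurt accuracy can be checked with a small simulation: because each peer's fragments keep their parameter coordinates and sum back to that peer's update, averaging the mixed updates gives the same result as averaging the original updates. The sketch below uses a hypothetical random additive splitting for illustration and omits the encryption and reputation mechanisms of the actual protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
n_peers, dim, n_frags = 5, 8, 3
updates = [rng.normal(size=dim) for _ in range(n_peers)]

# Each peer splits its update into additive fragments; every fragment keeps
# the original parameter coordinates, and a peer's fragments sum to its update.
pool = []
for u in updates:
    weights = rng.dirichlet(np.ones(n_frags))
    pool.extend(w * u for w in weights)

# Peers exchange fragments at random; each peer then sends the sum of the
# fragments it ends up holding (its "mixed update") to the server.
order = rng.permutation(len(pool))
mixed = [sum(pool[j] for j in order[i::n_peers]) for i in range(n_peers)]

# Averaging the mixed updates yields the same global step as averaging the
# original updates, so accuracy is preserved.
assert np.allclose(np.mean(updates, axis=0), np.mean(mixed, axis=0))
```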

4.
Sci Eng Ethics; 26(3): 1267-1285, 2020 Jun.
Article in English | MEDLINE | ID: mdl-31571047

ABSTRACT

Our society is being shaped in a non-negligible way by the technological advances of recent years, especially in information and communications technologies (ICTs). The pervasiveness and democratization of ICTs have allowed people from all backgrounds to access and use them, which has resulted in new information-based assets. At the same time, this phenomenon has brought a new class of problems, in the form of activists, criminals and state actors that target the new assets to achieve their goals, legitimate or not. Cybersecurity includes the research, tools and techniques to protect information assets. However, some cybersecurity measures may clash with the ethical values of citizens. We analyze the synergies and tensions between some of these values, namely security, privacy, fairness and autonomy. From this analysis, we derive a value graph, and then we set out to identify those paths in the graph that lead to satisfying all four aforementioned values in the cybersecurity setting, by taking advantage of their synergies and avoiding their tensions. We illustrate our conceptual discussion with examples of enabling technologies. We also sketch how our methodology can be generalized to any setting where several potentially conflicting values have to be satisfied.


Subjects
Computer Security, Privacy, Humans, Morals