Results 1 - 6 of 6
1.
Article in English | MEDLINE | ID: mdl-38324430

ABSTRACT

Federated learning has recently been applied to recommendation systems to protect user privacy. In federated learning settings, recommendation systems can train recommendation models by collecting intermediate parameters instead of real user data, which greatly enhances user privacy. In addition, federated recommendation systems (FedRSs) can cooperate with other data platforms to improve recommendation performance while meeting regulatory and privacy constraints. However, FedRSs face many new challenges, such as privacy, security, heterogeneity, and communication costs. Although significant research has been conducted in these areas, gaps remain in the survey literature. In this article, we: 1) summarize common privacy mechanisms used in FedRSs and discuss the advantages and limitations of each; 2) review several novel attacks on security and the corresponding defenses; 3) summarize approaches that address the heterogeneity and communication-cost problems; 4) introduce realistic applications and public benchmark datasets for FedRSs; and 5) present prospective directions for future research. This article can help researchers and practitioners understand the research progress in these areas.
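
A rough illustration of one common privacy mechanism mentioned in the survey above: the sketch below trains a toy federated matrix-factorization recommender in which clients clip their item-embedding gradients and add Laplace noise (local differential privacy style) before upload. The model, hyperparameters, and noise calibration are illustrative assumptions, not taken from the article.

```python
# Toy federated matrix-factorization recommender with LDP-protected uploads.
# Everything here (shapes, learning rates, epsilon) is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 4, 10, 8
item_emb = rng.normal(0, 0.1, (n_items, dim))   # shared via the server
user_emb = rng.normal(0, 0.1, (n_users, dim))   # stays on each client

def client_update(u, local_ratings, item_emb, lr=0.05, clip=0.1, eps=1.0):
    """Train locally, then clip and noise the item-embedding gradient."""
    grad = np.zeros_like(item_emb)
    for i, r in local_ratings:                   # (item index, rating) pairs
        err = r - user_emb[u] @ item_emb[i]
        grad[i] -= err * user_emb[u]             # d(loss)/d(item_emb[i])
        user_emb[u] += lr * err * item_emb[i]    # private user embedding never leaves
    norm = np.linalg.norm(grad)
    if norm > clip:
        grad *= clip / norm                      # bound the update's sensitivity
    return grad + rng.laplace(0.0, clip / eps, grad.shape)  # LDP-style noise

# One communication round: the server aggregates noisy updates, never raw data.
updates = [client_update(u, [(u % n_items, 1.0), ((u + 3) % n_items, 0.0)], item_emb)
           for u in range(n_users)]
item_emb -= 0.05 * np.mean(updates, axis=0)
```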

2.
Nat Commun; 15(1): 349, 2024 Jan 08.
Article in English | MEDLINE | ID: mdl-38191466

ABSTRACT

While federated learning (FL) is promising for efficient collaborative learning without revealing local data, it remains vulnerable to white-box privacy attacks, suffers from high communication overhead, and struggles to adapt to heterogeneous models. Federated distillation (FD) emerges as an alternative paradigm to tackle these challenges by transferring knowledge among clients instead of model parameters. Nevertheless, challenges arise from variations in local data distributions and the absence of a well-trained teacher model, which lead to misleading and ambiguous knowledge sharing that significantly degrades model performance. To address these issues, this paper proposes a selective knowledge sharing mechanism for FD, termed Selective-FD, to identify accurate and precise knowledge from local and ensemble predictions, respectively. Empirical studies, backed by theoretical insights, demonstrate that our approach enhances the generalization capabilities of the FD framework and consistently outperforms baseline methods. We anticipate that our study will enable a privacy-preserving, communication-efficient, and heterogeneity-adaptive federated training framework.
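
As a rough sketch of the selective sharing idea described above (an assumed interface, not the paper's Selective-FD code): clients exchange soft predictions on shared proxy samples instead of parameters, keep only the predictions they are confident about, and the server ensembles only the samples to which enough clients contributed.

```python
# Confidence-based filtering of shared soft predictions in federated distillation.
# Thresholds and tensor shapes are illustrative assumptions.
import numpy as np

def selective_client_knowledge(logits, conf_thresh=0.8):
    """Client side: keep only soft predictions the local model is confident about."""
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    keep = probs.max(axis=1) >= conf_thresh          # local selector
    return probs, keep

def selective_ensemble(client_probs, client_keep, agree_thresh=0.5):
    """Server side: average per-sample soft labels only where enough clients contributed."""
    client_probs = np.stack(client_probs)            # (clients, samples, classes)
    client_keep = np.stack(client_keep).astype(float)
    counts = client_keep.sum(axis=0)                 # contributors per proxy sample
    summed = (client_probs * client_keep[..., None]).sum(axis=0)
    teacher = np.divide(summed, counts[:, None],
                        out=np.zeros_like(summed), where=counts[:, None] > 0)
    usable = counts >= agree_thresh * len(client_keep)   # ensemble-side selector
    return teacher, usable                           # distill only on `usable` samples
```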

3.
Nat Commun; 14(1): 3785, 2023 Jun 24.
Article in English | MEDLINE | ID: mdl-37355643

ABSTRACT

Extracting useful knowledge from big data is important for machine learning. When data is privacy-sensitive and cannot be directly collected, federated learning is a promising option that extracts knowledge from decentralized data by learning and exchanging model parameters rather than raw data. However, model parameters may encode not only non-private knowledge but also private information about local data, so transferring knowledge via model parameters is not privacy-secure. Here, we present a knowledge transfer method named PrivateKT, which uses actively selected small public data to transfer high-quality knowledge in federated learning with privacy guarantees. We verify PrivateKT on three different datasets, and the results show that PrivateKT can reduce up to 84% of the performance gap between centralized learning and existing federated learning methods under strict differential privacy restrictions. PrivateKT provides a promising direction for effective and privacy-preserving knowledge transfer in machine intelligence systems.


Subjects
Artificial Intelligence, Big Data, Knowledge, Machine Learning, Privacy
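
To make the PrivateKT idea above a bit more concrete, the sketch below shows one hypothetical round: a small batch of public samples is actively selected by prediction entropy, and the clients' votes on those samples are aggregated with Laplace noise before being released for server-side distillation. The selection score and noise calibration are illustrative assumptions rather than the paper's exact design.

```python
# Hypothetical round of knowledge transfer via actively selected public data,
# with Laplace noise added to the aggregated votes before release.
import numpy as np

rng = np.random.default_rng(1)

def select_public_batch(pub_logits, k):
    """Actively pick the k public samples with the highest prediction entropy."""
    p = np.exp(pub_logits - pub_logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1)
    return np.argsort(entropy)[-k:]                  # indices of the selected samples

def dp_pseudo_labels(client_votes, eps=1.0):
    """Aggregate one-hot client votes of shape (clients, samples, classes) and add
    Laplace noise (sensitivity 1 per client) before releasing pseudo-labels."""
    counts = client_votes.sum(axis=0).astype(float)
    counts += rng.laplace(0.0, 1.0 / eps, counts.shape)
    return counts.argmax(axis=1)                     # noisy labels for distillation
```
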
4.
Nat Commun; 13(1): 3091, 2022 Jun 02.
Article in English | MEDLINE | ID: mdl-35654792

ABSTRACT

Graph neural networks (GNNs) are effective in modeling high-order interactions and have been widely used in various personalized applications such as recommendation. However, mainstream personalization methods rely on centralized GNN learning on global graphs, which carries considerable privacy risks due to the privacy-sensitive nature of user data. Here, we present a federated GNN framework named FedPerGNN for both effective and privacy-preserving personalization. Through a privacy-preserving model update method, we can collaboratively train GNN models based on decentralized graphs inferred from local data. To further exploit graph information beyond local interactions, we introduce a privacy-preserving graph expansion protocol that incorporates high-order information under privacy protection. Experimental results on six datasets for personalization in different scenarios show that FedPerGNN achieves 4.0%-9.6% lower errors than state-of-the-art federated personalization methods under good privacy protection. FedPerGNN provides a promising direction for mining decentralized graph data in a privacy-preserving manner for responsible and intelligent personalization.


Subjects
Algorithms, Privacy, Neural Networks (Computer)
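
A minimal sketch of the client-side step described above (assumed, not the released FedPerGNN code): one mean-aggregation GNN layer over the locally inferred graph, with the gradient clipped and noised before upload; the graph expansion protocol is omitted and the loss is simplified.

```python
# One protected client update for a toy one-layer GNN on a local graph.
# The loss, shapes, and noise calibration are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

def local_gnn_round(adj, feats, W, target, clip=0.05, eps=1.0):
    """adj: (n, n) local adjacency; feats: (n, d) node features; W: (d, d) weights."""
    deg = adj.sum(axis=1, keepdims=True) + 1e-8
    a_hat = adj / deg                                # mean aggregation
    h = a_hat @ feats @ W                            # one message-passing layer
    err = h - target                                 # toy regression loss (illustrative)
    grad_W = feats.T @ (a_hat.T @ err) / len(feats)
    norm = np.linalg.norm(grad_W)
    if norm > clip:
        grad_W *= clip / norm                        # bound gradient sensitivity
    grad_W += rng.laplace(0.0, clip / eps, grad_W.shape)  # local DP noise
    return grad_W                                    # only this protected update is uploaded
```
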
5.
Nat Commun; 13(1): 2032, 2022 Apr 19.
Article in English | MEDLINE | ID: mdl-35440643

ABSTRACT

Federated learning is a privacy-preserving machine learning technique for training intelligent models from decentralized data, which makes it possible to exploit private data by communicating local model updates in each iteration of model learning rather than the raw data. However, model updates can be extremely large if they contain numerous parameters, and many rounds of communication are needed for model training. The huge communication cost in federated learning places heavy overheads on clients and high environmental burdens. Here, we present a federated learning method named FedKD that is both communication-efficient and effective, based on adaptive mutual knowledge distillation and dynamic gradient compression techniques. FedKD is validated in three different scenarios that need privacy protection, showing that it can reduce up to 94.89% of the communication cost and achieve competitive results with centralized model learning. FedKD offers the potential to efficiently deploy privacy-preserving intelligent systems in many scenarios, such as intelligent healthcare and personalization.


Subjects
Data Compression, Machine Learning, Communication, Humans, Privacy
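
The gradient-compression half of the recipe above can be illustrated with a truncated SVD: instead of the full gradient matrix, the client uploads low-rank factors covering most of its spectral energy. The energy threshold and shapes below are illustrative assumptions, and the adaptive mutual-distillation loss is omitted.

```python
# Low-rank gradient compression via truncated SVD (illustrative energy threshold).
import numpy as np

def compress_gradient(grad, energy=0.95):
    """Return low-rank factors covering `energy` of the gradient's spectral energy."""
    U, S, Vt = np.linalg.svd(grad, full_matrices=False)
    cum = np.cumsum(S ** 2) / np.sum(S ** 2)
    k = int(np.searchsorted(cum, energy)) + 1        # smallest rank reaching the threshold
    return U[:, :k], S[:k], Vt[:k]                   # uploaded instead of `grad`

def decompress_gradient(U, S, Vt):
    return (U * S) @ Vt                              # server-side reconstruction

rng = np.random.default_rng(3)
# Gradients in practice are approximately low-rank; emulate that here.
grad = rng.normal(size=(256, 8)) @ rng.normal(size=(8, 64)) + 0.01 * rng.normal(size=(256, 64))
U, S, Vt = compress_gradient(grad, energy=0.95)
approx = decompress_gradient(U, S, Vt)
saved = 1 - (U.size + S.size + Vt.size) / grad.size  # fraction of upload traffic avoided
```
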
6.
IEEE Trans Vis Comput Graph; 20(12): 1763-72, 2014 Dec.
Article in English | MEDLINE | ID: mdl-26356890

ABSTRACT

Analyzing and exploring the diffusion of public opinions on social media is important for many applications, such as government and business intelligence. However, the rapid propagation and great diversity of public opinions on social media pose great challenges to effective analysis of opinion diffusion. In this paper, we introduce a visual analysis system called OpinionFlow to empower analysts to detect opinion propagation patterns and glean insights. Inspired by the information diffusion model and the theory of selective exposure, we develop an opinion diffusion model to approximate opinion propagation among Twitter users. Accordingly, we design an opinion flow visualization that combines a Sankey graph with a tailored density map in one view to visually convey the diffusion of opinions among many users. A stacked tree allows analysts to select topics of interest at different levels. The stacked tree is synchronized with the opinion flow visualization to help users examine and compare diffusion patterns across topics. Experiments and case studies on Twitter data demonstrate the effectiveness and usability of OpinionFlow.


Subjects
Computer Graphics, Informatics/methods, Public Opinion, Social Media, Algorithms, Communication, Humans
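
As a small illustration of the Sankey layer that OpinionFlow builds on (the tailored density map and the stacked topic tree are omitted), the snippet below lays out hypothetical opinion flows between topics across two time steps with plotly; all topic names and volumes are made up.

```python
# Hypothetical opinion flows between topics across two time steps, drawn as a Sankey diagram.
import plotly.graph_objects as go

labels = ["Topic A (t0)", "Topic B (t0)", "Topic A (t1)", "Topic B (t1)", "Topic C (t1)"]
links = dict(
    source=[0, 0, 1, 1],        # opinions flowing out of the t0 topics...
    target=[2, 4, 3, 4],        # ...into the t1 topics (indices into `labels`)
    value=[120, 30, 80, 45],    # number of users carrying each opinion
)
fig = go.Figure(go.Sankey(node=dict(label=labels, pad=15, thickness=12), link=links))
fig.write_html("opinion_flow.html")   # open in a browser to inspect the flow
```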