1.
Entropy (Basel); 25(5), 2023 May 22.
Article in English | MEDLINE | ID: mdl-37238580

ABSTRACT

Large corporations, government entities, and institutions such as hospitals and census bureaus routinely collect our personal and sensitive information to provide services. A key technological challenge is designing algorithms for these services that provide useful results while maintaining the privacy of the individuals whose data are being shared. Differential privacy (DP) is a cryptographically motivated and mathematically rigorous approach to addressing this challenge. Under DP, a randomized algorithm provides privacy guarantees by approximating the desired functionality, leading to a privacy-utility trade-off. Strong (pure DP) privacy guarantees are often costly in terms of utility. Motivated by the need for a more efficient mechanism with a better privacy-utility trade-off, we propose Gaussian FM, an improvement to the functional mechanism (FM) that offers higher utility at the expense of a weakened (approximate) DP guarantee. We analytically show that the proposed Gaussian FM algorithm can offer orders of magnitude smaller noise compared to existing FM algorithms. We further extend our Gaussian FM algorithm to decentralized-data settings by incorporating the CAPE protocol and propose capeFM. Our method can offer the same level of utility as its centralized counterparts for a range of parameter choices. We empirically show that our proposed algorithms outperform existing state-of-the-art approaches on synthetic and real datasets.
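To make the abstract's core idea concrete, the following is a minimal Python sketch of the general functional-mechanism approach with Gaussian (approximate-DP) noise: perturb the coefficients of the empirical objective, then minimize the noisy objective. It is not the paper's Gaussian FM or capeFM algorithm; the sensitivity value, the classic Gaussian-mechanism calibration, the equal budget use for both coefficient arrays, and the ridge term are illustrative assumptions.

```python
import numpy as np

def gaussian_noise_scale(l2_sensitivity, epsilon, delta):
    """Classic Gaussian-mechanism calibration for (epsilon, delta)-DP.
    (The paper derives its own, tighter noise analysis.)"""
    return l2_sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon

def perturb_objective_coefficients(coeffs, l2_sensitivity, epsilon, delta, rng):
    """Functional-mechanism idea: add noise to the coefficients of the
    (polynomial) empirical objective instead of to the final model."""
    sigma = gaussian_noise_scale(l2_sensitivity, epsilon, delta)
    return coeffs + rng.normal(0.0, sigma, size=coeffs.shape)

# Toy usage: linear regression, whose objective is determined by the
# coefficient arrays A = X^T X and b = X^T y.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)
A, b = X.T @ X, X.T @ y

# l2_sensitivity=1.0 is a placeholder; a real algorithm bounds the data
# and splits the privacy budget across A and b.
A_priv = perturb_objective_coefficients(A, 1.0, epsilon=1.0, delta=1e-5, rng=rng)
b_priv = perturb_objective_coefficients(b, 1.0, epsilon=1.0, delta=1e-5, rng=rng)
A_priv = 0.5 * (A_priv + A_priv.T)                   # keep the quadratic term symmetric
theta_priv = np.linalg.solve(A_priv + 1e-3 * np.eye(3), b_priv)  # minimize noisy objective
```

Because only the objective coefficients are perturbed once, the downstream optimization consumes no additional privacy budget, which is the appeal of the functional-mechanism family.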

2.
IEEE Trans Signal Process; 69: 6355-6370, 2021.
Article in English | MEDLINE | ID: mdl-35755147

ABSTRACT

Blind source separation algorithms such as independent component analysis (ICA) are widely used in the analysis of neuroimaging data. To leverage larger sample sizes, different data holders/sites may wish to collaboratively learn feature representations. However, such datasets are often privacy-sensitive, precluding centralized analyses that pool the data at one site. In this work, we propose a differentially private algorithm for performing ICA in a decentralized data setting. Due to the high dimensionality and small sample sizes of such data, conventional approaches to decentralized differentially private algorithms suffer poor utility. When centralizing the data is not possible, we investigate the benefit of enabling limited collaboration in the form of generating jointly distributed random noise. We show that such (anti-)correlated noise improves the privacy-utility trade-off and can reach the same level of utility as the corresponding non-private algorithm for certain parameter choices. We validate this benefit using synthetic and real neuroimaging datasets. We conclude that it is possible to achieve meaningful utility while preserving privacy, even in complex signal processing systems.
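The following is an illustrative Python sketch of the jointly generated, zero-sum (anti-)correlated noise idea mentioned in the abstract: each site's release carries enough noise locally, but the correlated shares cancel when the releases are aggregated. It is not the paper's CAPE protocol or its ICA algorithm; the noise scales, the simple averaging aggregator, and the function names are assumptions for illustration rather than calibrated (ε, δ) parameters.

```python
import numpy as np

def zero_sum_noise(num_sites, dim, sigma, rng):
    """Correlated noise shares that sum exactly to zero across sites,
    so they cancel in the aggregate (illustrative only)."""
    e = rng.normal(0.0, sigma, size=(num_sites, dim))
    return e - e.mean(axis=0, keepdims=True)

def site_release(local_stat, corr_share, sigma_local, rng):
    """Each site adds its correlated share plus a small independent noise term."""
    return local_stat + corr_share + rng.normal(0.0, sigma_local, size=local_stat.shape)

# Toy usage: S sites privately average a d-dimensional statistic.
rng = np.random.default_rng(1)
S, d = 5, 4
stats = rng.normal(size=(S, d))                    # each site's local statistic
corr = zero_sum_noise(S, d, sigma=1.0, rng=rng)    # jointly generated shares
releases = np.stack([site_release(stats[s], corr[s], sigma_local=0.1, rng=rng)
                     for s in range(S)])
aggregate = releases.mean(axis=0)                  # correlated noise cancels; only the
                                                   # small independent noise remains
```

The design point this sketch illustrates is that each individual release stays heavily noised, while the aggregator's estimate approaches the accuracy of a centralized computation.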
