Results 1 - 2 of 2
1.
Article in English | MEDLINE | ID: mdl-35867362

ABSTRACT

Decades of research have shown that machine learning outperforms conventional statistical techniques at discovering highly nonlinear patterns embedded in electroencephalography (EEG) records. However, even the most advanced machine learning techniques require relatively large, labeled EEG repositories, and EEG data collection and labeling are costly. Moreover, combining available datasets to achieve a large data volume is usually infeasible due to inconsistent experimental paradigms across trials. Self-supervised learning (SSL) addresses these challenges because it enables learning from EEG records across trials with variable experimental paradigms, even when the trials explore different phenomena. It aggregates multiple EEG repositories to increase accuracy, reduce bias, and mitigate overfitting during machine learning training. In addition, SSL can be employed when labeled training data are limited and manual labeling is costly. This article: 1) provides a brief introduction to SSL; 2) describes some SSL techniques employed in recent studies, including EEG studies; 3) proposes current and potential SSL techniques for future investigations in EEG studies; 4) discusses the pros and cons of different SSL techniques; and 5) proposes holistic implementation tips and potential future directions for EEG SSL practices.
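
As a concrete illustration of the kind of SSL pretext task surveyed in articles like this one, the sketch below pretrains a small EEG encoder with contrastive learning on augmented, unlabeled windows. The encoder architecture, augmentations, and hyperparameters are illustrative assumptions, not methods taken from the article.

```python
# Minimal, hypothetical sketch of a contrastive SSL pretext task on unlabeled EEG windows.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EEGEncoder(nn.Module):
    """Tiny 1D-CNN mapping (batch, channels, time) EEG windows to embeddings."""
    def __init__(self, n_channels=32, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, emb_dim)

    def forward(self, x):
        return self.head(self.net(x).squeeze(-1))

def augment(x):
    """Label-free views: random amplitude scaling plus random channel dropout."""
    scale = 0.8 + 0.4 * torch.rand(x.size(0), 1, 1)
    keep = (torch.rand(x.size(0), x.size(1), 1) > 0.1).float()
    return x * scale * keep

def nt_xent(z1, z2, temperature=0.5):
    """Contrastive loss: the two views of a window attract, all other pairs repel."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / temperature
    n = z1.size(0)
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

encoder = EEGEncoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
x = torch.randn(16, 32, 256)  # a batch of unlabeled EEG windows (batch, channel, time)
loss = nt_xent(encoder(augment(x)), encoder(augment(x)))
loss.backward()
optimizer.step()
print(f"pretext loss: {loss.item():.3f}")
```

After pretraining on pooled, unlabeled records, such an encoder would typically be fine-tuned on a small labeled set for the downstream EEG task.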

2.
Neuroinformatics ; 20(1): 91-108, 2022 01.
Article in English | MEDLINE | ID: mdl-33948898

ABSTRACT

The field of neuroimaging can greatly benefit from machine learning models that detect and predict diseases and discover novel biomarkers, but much of the data collected at various organizations and research centers cannot be shared due to privacy or regulatory concerns (especially for clinical data or rare disorders). In addition, aggregating data across multiple large studies duplicates a great deal of technical debt, and the resources required can be challenging or impossible for an individual site to assemble. Training on data distributed across organizations can yield models that generalize much better than models trained on data from any single organization. While approaches for decentralized sharing exist, they often do not provide the strongest guarantees of sample privacy, which only cryptography can provide, and they are often limited to probabilistic solutions. In this paper, we propose an approach that leverages the potential of datasets spread among a number of data-collecting organizations by performing joint analyses in a secure and deterministic manner in which only encrypted data are shared and manipulated. The approach is based on secure multiparty computation, which refers to cryptographic protocols that enable distributed computation of a function over distributed inputs without revealing additional information about those inputs. It enables multiple organizations to train machine learning models on their joint data and apply the trained models to encrypted data without revealing their sensitive data to the other parties. In our proposed approach, organizations (or sites) securely collaborate to build a machine learning model as if it had been trained on the aggregated data of all organizations combined. Importantly, the approach does not require a trusted party (i.e., an aggregator), each contributing site plays an equal role in the process, and no site can learn the individual data of any other site. We demonstrate the effectiveness of the proposed approach in a range of empirical evaluations using different machine learning algorithms, including logistic regression and convolutional neural network models, on human structural and functional magnetic resonance imaging datasets.


Subject(s)
Computer Security , Machine Learning , Algorithms , Humans , Neuroimaging
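
Secure multiparty computation of the kind described in the abstract above is typically built from secret sharing. The sketch below shows additive secret sharing used for a secure sum across sites; it is a simplified, hypothetical building block (the site names, modulus, and values are assumptions, not the paper's protocol).

```python
# Minimal, hypothetical sketch of additive secret sharing for a secure multi-site sum:
# each site splits its private value into random shares, only shares are exchanged,
# and combining the per-party partial sums reveals only the aggregate.
import random

MOD = 2**61 - 1  # arithmetic modulo a large prime keeps individual shares uninformative

def make_shares(secret, n_parties):
    """Split a secret integer into n additive shares that sum to it modulo MOD."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    """Recover a value by summing its shares modulo MOD."""
    return sum(shares) % MOD

# Example: three sites each hold a private value (e.g., a quantized gradient component).
site_values = {"site_A": 42, "site_B": 17, "site_C": 8}
n = len(site_values)

# Each site splits its value and sends one share to every party (including itself).
distributed = [make_shares(v, n) for v in site_values.values()]

# Each party locally adds the shares it received; no party ever sees another site's value.
partial_sums = [sum(col) % MOD for col in zip(*distributed)]

# Combining the partial sums reveals only the aggregate, as if the data had been pooled.
print(reconstruct(partial_sums))  # 67 == 42 + 17 + 8
```

In a full protocol, such secure sums (together with secure multiplications) are applied to model updates so that training proceeds as if on the pooled data, with no trusted aggregator.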