1.
Article in English | MEDLINE | ID: mdl-38083059

ABSTRACT

Brain-computer interfaces (BCIs) employ various paradigms that afford intuitive, augmented control for users navigating digital technologies. In this study, we explore the application of these BCI concepts to predictive text systems: commonplace interactive and assistive tools with variable usage contexts and user behaviors. We conducted an experiment to analyze user neurophysiological responses under these different usage scenarios and to evaluate the feasibility of a closed-loop, adaptive BCI for use with such technologies. We recorded electroencephalogram (EEG) and eye-tracking (ET) data from participants while they completed a self-paced typing task in a simulated predictive text environment. Participants completed the task with different degrees of reliance on the predictive text system (completely dependent, completely independent, or their choice) and encountered both correct and incorrect text generations. The data suggest that erroneous text generations may evoke neurophysiological responses that can be measured with both EEG and pupillometry. Moreover, these responses appear to change according to users' reliance on the predictive text system. The results show promise for a passive, hybrid BCI with a closed-loop, adaptive framework, and support a neurophysiological approach to the challenge of real-time human feedback on system performance.
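Error-evoked responses of the kind described above are typically examined by cutting the recorded trace into fixed-length epochs around stimulus events and comparing baseline-corrected averages. A minimal sketch on simulated data follows; the function name, window lengths, and synthetic signal are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def epoch_average(signal, event_indices, pre=50, post=250):
    """Cut fixed-length epochs around event markers and average them.

    signal: 1-D preprocessed trace (e.g. pupil diameter or one EEG channel).
    event_indices: sample indices where a text generation was shown.
    """
    epochs = np.array([signal[i - pre:i + post] for i in event_indices
                       if i - pre >= 0 and i + post <= len(signal)])
    # Baseline-correct each epoch against its own pre-event window.
    baseline = epochs[:, :pre].mean(axis=1, keepdims=True)
    return (epochs - baseline).mean(axis=0)

# Toy demonstration: "error" events add a transient bump after the event.
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 0.1, 10_000)
error_events = np.arange(500, 9_000, 600)
for i in error_events:
    signal[i + 50:i + 150] += 0.5          # simulated error-evoked dilation
correct_events = error_events + 300        # events with no added response

err_avg = epoch_average(signal, error_events)
cor_avg = epoch_average(signal, correct_events)
# err_avg shows a clear post-event deflection that cor_avg lacks.
```

Averaging across epochs suppresses the zero-mean noise, so even a modest single-trial response becomes visible in the grand average.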


Subject(s)
Brain-Computer Interfaces, Humans, Eye-Tracking Technology, Electroencephalography/methods
2.
Clin Sci (Lond) ; 135(5): 671-681, 2021 03 12.
Article in English | MEDLINE | ID: mdl-33599711

ABSTRACT

Citations are an important, but often overlooked, part of every scientific paper. They allow the reader to trace the flow of evidence, serving as a gateway to the relevant literature. Most scientists are aware of citation errors, but few appreciate how prevalent these problems are. The purpose of the present study was to examine how often frequently cited papers in the biomedical literature are cited inaccurately. The study included the active participation of the first authors of the included papers, who verified citation accuracy first-hand. Findings from a feasibility study, in which we reviewed 1540 articles containing 2526 citations of the 14 most cited articles whose authors were affiliated with the Faculty of Medicine University of Belgrade, were further evaluated for external confirmation in an independent verification set of articles. The verification set included 4912 citations identified in 2995 articles that cited the 13 most cited articles published by authors affiliated with the Mayo Clinic Division of Nephrology and Hypertension. A citation was defined as accurate if the cited article supported, or was in accordance with, the statement made by the citing authors. At least one inaccurate citation was found in 11% and 15% of articles in the feasibility study and the verification set, respectively, suggesting that inaccurate citations are common in the biomedical literature. The most common problem was citation of nonexistent findings (38.4%), followed by incorrect interpretation of findings (15.4%). One-fifth of inaccurate citations arose from chains of inaccurate citations. Based on these findings, several actions to reduce citation inaccuracies have been proposed.


Subject(s)
Bibliometrics, Periodicals as Topic, Data Accuracy
3.
Article in English | MEDLINE | ID: mdl-27429443

ABSTRACT

The increased availability of Electronic Health Record (EHR) data provides unique opportunities for improving the quality of health services. In this study, we couple EHRs with advanced machine learning tools to predict three important parameters of healthcare quality. More specifically, we describe how to learn low-dimensional vector representations of patient conditions and clinical procedures in an unsupervised manner, and how to generate feature vectors of hospitalized patients useful for predicting their length of stay, total incurred charges, and mortality rates. To learn the vector representations, we employ state-of-the-art language models specifically designed for modeling the co-occurrence of diseases and applied clinical procedures. The proposed model is trained on a large-scale EHR database comprising more than 35 million hospitalizations in California over a period of nine years. We compared the proposed approach to several alternatives, evaluating their effectiveness by measuring the accuracy of the regression and classification models used for the three predictive tasks considered in this study. Our model outperformed the baseline models on all tasks, indicating the strong potential of the proposed approach for advancing the quality of the healthcare system.
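One common route to unsupervised code representations of this kind is to factorize a PPMI-weighted co-occurrence matrix of codes and then average code vectors into per-patient features. The sketch below illustrates that idea on toy data; the code names, dimensionality, and the PPMI/SVD choice are illustrative assumptions, not necessarily the paper's exact model (which the abstract describes as a language-model-style embedding).

```python
import numpy as np

# Toy hospitalizations: each is a set of diagnosis/procedure codes.
stays = [["dm2", "ckd", "dialysis"], ["dm2", "ckd"], ["copd", "o2"],
         ["copd", "o2", "steroid"], ["dm2", "dialysis"]]
vocab = sorted({c for s in stays for c in s})
idx = {c: i for i, c in enumerate(vocab)}

# Code-by-code co-occurrence counts within a stay.
C = np.zeros((len(vocab), len(vocab)))
for s in stays:
    for a in s:
        for b in s:
            if a != b:
                C[idx[a], idx[b]] += 1

# Positive PMI weighting, then truncated SVD gives low-dimensional vectors.
row = C.sum(axis=1, keepdims=True)
col = C.sum(axis=0, keepdims=True)
pmi = np.log(np.maximum(C * C.sum() / (row * col + 1e-9), 1e-12))
ppmi = np.maximum(pmi, 0.0)
U, S, _ = np.linalg.svd(ppmi)
dim = 2
code_vecs = U[:, :dim] * S[:dim]

# A patient's feature vector is the mean of the visit's code vectors;
# such vectors can feed downstream regression/classification models.
def patient_vector(codes):
    return np.mean([code_vecs[idx[c]] for c in codes], axis=0)
```

In this toy example, codes that share hospitalizations (e.g. diabetes and chronic kidney disease) end up with similar vectors, while codes from unrelated clusters do not.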


Subject(s)
Data Mining/methods, Electronic Health Records/classification, Medical Informatics/methods, Models, Theoretical, Quality Indicators, Health Care, Hospital Costs, Humans, Machine Learning, Natural Language Processing, Regression Analysis
4.
Sci Rep ; 6: 32404, 2016 08 31.
Article in English | MEDLINE | ID: mdl-27578529

ABSTRACT

Data-driven phenotype analyses of Electronic Health Record (EHR) data have recently yielded benefits across many areas of clinical practice, uncovering new links in the medical sciences that can potentially affect the well-being of millions of patients. In this paper, EHR data are used to discover novel relationships between diseases by studying their comorbidities (co-occurrences in patients). A novel embedding model is designed to extract knowledge from disease comorbidities by learning from a large-scale EHR database comprising more than 35 million inpatient cases spanning nearly a decade, yielding significant improvements in disease phenotyping over current computational approaches. In addition, the proposed methodology is extended to discover novel disease-gene associations by incorporating valuable domain knowledge from genome-wide association studies. To evaluate our approach, its effectiveness is compared against a held-out set, where it again produced compelling results. For selected diseases, we further identify candidate gene lists for which disease-gene associations have not been studied previously. Our approach thus provides biomedical researchers with new tools to filter genes of interest, reducing the need for costly lab studies.
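Once disease and gene representations live in a shared embedding space, candidate associations can be scored by vector similarity. A minimal sketch, with entirely hypothetical embeddings and gene names (real vectors would come from the comorbidity model and GWAS-derived gene representations):

```python
import numpy as np

def rank_candidates(disease_vec, gene_vecs):
    """Rank candidate genes by cosine similarity to a disease embedding."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = {g: cos(disease_vec, v) for g, v in gene_vecs.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical 3-dimensional embeddings for illustration only.
disease = np.array([0.9, 0.1, 0.0])
genes = {"GENE_A": np.array([0.8, 0.2, 0.1]),
         "GENE_B": np.array([0.0, 1.0, 0.0]),
         "GENE_C": np.array([0.1, 0.0, 1.0])}
ranking = rank_candidates(disease, genes)
```

The top of such a ranking is a filtered shortlist of candidate genes for follow-up, which is how an embedding model can reduce the space of costly lab experiments.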


Subject(s)
Electronic Health Records, Genetic Diseases, Inborn/genetics, Genetic Predisposition to Disease, Genome-Wide Association Study/statistics & numerical data, Algorithms, Databases, Factual, Humans, Phenotype
5.
BMC Bioinformatics ; 14 Suppl 3: S8, 2013.
Article in English | MEDLINE | ID: mdl-23514608

ABSTRACT

BACKGROUND: Protein function determination is a key challenge in the post-genomic era. Experimental determination of protein functions is accurate, but time-consuming and resource-intensive. A cost-effective alternative is to use the known information about sequence, structure, and functional properties of genes and proteins to predict functions using statistical methods. In this paper, we describe the Multi-Source k-Nearest Neighbor (MS-kNN) algorithm for function prediction, which finds the k nearest neighbors of a query protein based on different types of similarity measures and predicts its function by weighted averaging of its neighbors' functions. Specifically, we used three data sources to calculate the similarity scores: sequence similarity, protein-protein interactions, and gene expression. RESULTS: We report the results in the context of the 2011 Critical Assessment of Function Annotation (CAFA). Prior to the CAFA submission deadline, we evaluated our algorithm on 1,302 human test proteins that were represented in all three data sources. Using only the sequence similarity information, MS-kNN had a term-based Area Under the Curve (AUC) accuracy of Gene Ontology (GO) molecular function predictions of 0.728 when 7,412 human training proteins were used, and 0.819 when 35,622 training proteins from multiple eukaryotic and prokaryotic organisms were used. By aggregating predictions from all three sources, the AUC was further improved to 0.848. A similar result was observed for the prediction of GO biological processes. Testing on 595 proteins that were annotated after the CAFA submission deadline showed that overall MS-kNN accuracy was higher than that of the baseline algorithms Gotcha and BLAST, which were based solely on sequence similarity information. Since only 10 of the 595 proteins were represented in all three data sources, and 66 in two data sources, the difference between three-source and one-source MS-kNN was rather small.
CONCLUSIONS: Based on our results, we offer several useful insights: (1) the k-nearest neighbor algorithm is an efficient and effective model for protein function prediction; (2) it is beneficial to transfer functions across a wide range of organisms; (3) it is helpful to integrate multiple sources of protein information.
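The similarity-weighted averaging step that the abstract describes can be sketched as follows; the toy similarity values, label matrix, and per-source averaging are illustrative assumptions, not CAFA data or the authors' exact weighting scheme.

```python
import numpy as np

def ms_knn_predict(query_sims, neighbor_labels, k=3):
    """MS-kNN-style prediction: for each similarity source, take the k most
    similar annotated proteins and average their function labels weighted by
    similarity; then average the per-source scores.

    query_sims: list of 1-D arrays, one per source (sequence, PPI,
                expression), giving the query's similarity to each
                annotated protein.
    neighbor_labels: (n_proteins, n_functions) binary annotation matrix.
    """
    per_source = []
    for sims in query_sims:
        top = np.argsort(sims)[-k:]         # indices of the k nearest
        w = sims[top]
        if w.sum() == 0:                    # source has no usable neighbors
            continue
        per_source.append(w @ neighbor_labels[top] / w.sum())
    return np.mean(per_source, axis=0)      # aggregate across sources

# Toy data: 4 annotated proteins, 2 GO terms, 2 similarity sources.
labels = np.array([[1, 0], [1, 0], [0, 1], [0, 1]])
seq_sim = np.array([0.9, 0.8, 0.1, 0.0])   # query resembles proteins 0 and 1
ppi_sim = np.array([0.7, 0.6, 0.2, 0.1])
scores = ms_knn_predict([seq_sim, ppi_sim], labels, k=2)
```

Skipping a source when the query has no neighbors there mirrors the situation reported above, where most test proteins were covered by only one data source.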


Subject(s)
Algorithms, Proteins/physiology, Genomics, Humans, Protein Interaction Mapping, Proteins/chemistry, Proteins/genetics, Sequence Analysis, Protein, Transcriptome, Vocabulary, Controlled