Results 1 - 20 of 68
1.
Sci Rep ; 12(1): 22589, 2022 12 30.
Article in English | MEDLINE | ID: mdl-36585416

ABSTRACT

Using data from a longitudinal viral challenge study, we find that post-exposure viral shedding and symptom severity are associated with a novel measure of pre-exposure cognitive performance variability (CPV), defined before viral exposure occurs. Each individual's CPV score is computed from data collected from a repeated NeuroCognitive Performance Test (NCPT) over a 3-day pre-exposure period. Of the 18 NCPT measures reported by the tests, 6 contribute materially to the CPV score, prospectively differentiating the high from the low shedders. Among these 6 are the 4 clinical measures digSym-time, digSym-correct, trail-time, and reaction-time, commonly used for assessing cognitive executive functioning. CPV is found to be correlated with stress and also with several genes previously reported to be associated with cognitive development and dysfunction. A perturbation study over the number and timing of NCPT sessions indicates that as few as 5 sessions are sufficient to maintain a high association between the CPV score and viral shedding, as long as the timing of these sessions is balanced over the three pre-exposure days. Our results suggest that variations in cognitive function are closely related to immunity and susceptibility to severe infection. Further studying these relationships may help us better understand the links between the neurocognitive and neuroimmune systems, which is timely in this COVID-19 pandemic era.
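
As a rough illustration of how a variability score over repeated test sessions might be assembled, here is a minimal Python sketch that averages per-measure coefficients of variation across NCPT sessions. The column names, the choice of statistic, and the aggregation rule are illustrative assumptions, not the authors' exact CPV construction.

```python
import numpy as np
import pandas as pd

def cpv_score(sessions: pd.DataFrame, measures: list[str]) -> float:
    """Toy cognitive-performance-variability score.

    sessions: one row per NCPT session over the pre-exposure window,
              one column per reported measure (e.g. 'trail-time').
    Returns the mean, across measures, of the per-measure coefficient
    of variation of the session scores.
    """
    cvs = []
    for m in measures:
        x = sessions[m].to_numpy(dtype=float)
        mu, sd = x.mean(), x.std(ddof=1)
        if mu != 0:
            cvs.append(sd / abs(mu))
    return float(np.mean(cvs))

# Hypothetical 3-day schedule with 3 sessions per day (9 sessions total).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "trail-time": rng.normal(30, 4, 9),
    "reaction-time": rng.normal(250, 20, 9),
})
print(cpv_score(df, ["trail-time", "reaction-time"]))
```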


Subjects
COVID-19, Respiratory Tract Infections, Humans, Pandemics, Cognition, Reaction Time
2.
BMC Bioinformatics ; 23(1): 370, 2022 Sep 10.
Article in English | MEDLINE | ID: mdl-36088285

ABSTRACT

BACKGROUND: Development of new methods for analysis of protein-protein interactions (PPIs) at molecular and nanometer scales gives insights into intracellular signaling pathways and will improve understanding of protein functions, as well as other nanoscale structures of biological and abiological origins. Recent advances in computational tools, particularly those involving modern deep learning algorithms, have been shown to complement experimental approaches for describing and rationalizing PPIs. However, most of the existing works on PPI prediction use protein-sequence information and thus have difficulty accounting for the three-dimensional organization of the protein chains. RESULTS: In this study, we address this problem and describe a PPI analysis based on a graph attention network, named Struct2Graph, for identifying PPIs directly from the structural data of folded protein globules. Our method is capable of predicting PPIs with an accuracy of 98.89% on a balanced set consisting of an equal number of positive and negative pairs. On an unbalanced set with a 1:10 ratio between positive and negative pairs, Struct2Graph achieves a fivefold cross-validation average accuracy of 99.42%. Moreover, Struct2Graph can potentially identify residues that likely contribute to the formation of the protein-protein complex. The identification of important residues is tested for two different interaction types: (a) proteins with multiple ligands competing for the same binding area, and (b) dynamic protein-protein adhesion interactions. Struct2Graph identifies interacting residues with 30% sensitivity, 89% specificity, and 87% accuracy. CONCLUSIONS: In this manuscript, we address the problem of PPI prediction using a first-of-its-kind, 3D-structure-based graph attention network (code available at https://github.com/baranwa2/Struct2Graph ). Furthermore, the novel mutual attention mechanism provides insights into likely interaction sites through its unsupervised knowledge selection process. This study demonstrates that a relatively low-dimensional feature embedding learned from graph structures of individual proteins outperforms other modern machine learning classifiers based on global protein features. In addition, through the analysis of single amino acid variations, the attention mechanism shows preference for disease-causing residue variations over benign polymorphisms, demonstrating that it is not limited to interface residues.
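
The starting point for any structure-based predictor of this kind is turning a folded chain into a graph. A minimal sketch, assuming a simple distance-cutoff contact graph over C-alpha coordinates (Struct2Graph's actual featurization may differ):

```python
import numpy as np

def contact_graph(coords: np.ndarray, cutoff: float = 8.0) -> np.ndarray:
    """Adjacency matrix of a residue contact graph.

    coords: (n_residues, 3) C-alpha coordinates of a folded chain.
    Two residues are connected if their C-alpha atoms lie within
    `cutoff` angstroms (8 A is a common, but here assumed, choice).
    """
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    adj = (dist < cutoff).astype(float)
    np.fill_diagonal(adj, 0.0)        # no self-loops
    return adj

# Toy 5-residue chain; a real pipeline would parse a PDB file.
coords = np.array([[0, 0, 0], [3.8, 0, 0], [7.6, 0, 0],
                   [11.4, 0, 0], [15.2, 0, 0]], dtype=float)
print(contact_graph(coords))
```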


Subjects
Algorithms, Proteins, Amino Acid Sequence, Amino Acids, Machine Learning, Proteins/chemistry
3.
Entropy (Basel) ; 24(8)2022 Aug 09.
Article in English | MEDLINE | ID: mdl-36010758

ABSTRACT

In this paper, we propose a compression-based anomaly detection method for time series and sequence data using a pattern dictionary. The proposed method learns complex patterns in a training data sequence and uses these learned patterns to detect potentially anomalous patterns in a test data sequence. The proposed pattern dictionary method uses a measure of complexity of the test sequence as an anomaly score that can be used to perform stand-alone anomaly detection. We also show that when combined with a universal source coder, the proposed pattern dictionary yields a powerful atypicality detector that is equally applicable to anomaly detection. The pattern dictionary-based atypicality detector uses an anomaly score defined as the difference between the complexity of the test sequence data encoded by the trained pattern dictionary (typical) encoder and by the universal (atypical) encoder. We consider two complexity measures: the number of parsed phrases in the sequence, and the length of the encoded sequence (codelength). Specializing to a particular type of universal encoder, the Tree-Structured Lempel-Ziv (LZ78), we obtain a novel non-asymptotic upper bound, in terms of the Lambert W function, on the number of distinct phrases resulting from the LZ78 parser. This non-asymptotic bound determines the range of the anomaly score. As a concrete application, we illustrate the pattern dictionary framework for constructing a baseline of health against which anomalous deviations can be detected.
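
The LZ78 incremental parse at the heart of the codelength analysis is easy to state in code. Below is a minimal Python sketch of the parser and a toy anomaly score; the score definition here is an assumption for illustration, not the paper's exact typical-versus-atypical codelength difference.

```python
def lz78_phrases(seq: str) -> list[str]:
    """Incremental (LZ78) parse: each phrase is the shortest prefix
    of the remaining sequence not yet in the dictionary."""
    dictionary, phrases, cur = set(), [], ""
    for sym in seq:
        cur += sym
        if cur not in dictionary:
            dictionary.add(cur)
            phrases.append(cur)
            cur = ""
    if cur:
        phrases.append(cur)   # possibly incomplete final phrase
    return phrases

def anomaly_score(train: str, test: str) -> int:
    """Toy compression-based score (an assumption, not the paper's
    definition): the number of test phrases, under an LZ78 parse,
    that never occur as substrings of the training sequence."""
    return sum(p not in train for p in lz78_phrases(test))

print(len(lz78_phrases("aaabbabaabaaabab")))   # phrase count as complexity
print(anomaly_score("abababab", "abcabcabc"))  # 'c' phrases look atypical
```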

4.
Elife ; 11, 2022 06 23.
Article in English | MEDLINE | ID: mdl-35736613

ABSTRACT

Predicting the dynamics and functions of microbiomes constructed from the bottom up is a key challenge in exploiting them to our benefit. Current models based on ecological theory fail to capture complex community behaviors due to higher-order interactions, do not scale well with increasing complexity, and struggle to consider multiple functions. We develop and apply a long short-term memory (LSTM) framework to advance our understanding of community assembly and health-relevant metabolite production using a synthetic human gut community. A mainstay of recurrent neural networks, the LSTM learns a high-dimensional, data-driven, non-linear dynamical system model. We show that the LSTM model can outperform the widely used generalized Lotka-Volterra model based on ecological theory. We build methods to decipher microbe-microbe and microbe-metabolite interactions from an otherwise black-box model. These methods highlight that Actinobacteria, Firmicutes, and Proteobacteria are significant drivers of metabolite production, whereas Bacteroides shape community dynamics. We use the LSTM model to navigate a large multidimensional functional landscape to design communities with unique health-relevant metabolite profiles and temporal behaviors. In sum, the accuracy of the LSTM model can be exploited for experimental planning and to guide the design of synthetic microbiomes with target dynamic functions.
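
A minimal sketch of the modeling pattern described here, assuming a plain PyTorch LSTM that maps a window of past species abundances to the next time point (the authors' architecture, loss, and data handling are not reproduced):

```python
import torch
import torch.nn as nn

class CommunityLSTM(nn.Module):
    """Minimal recurrent model of community dynamics: maps a window of
    past species abundances to the next time point's abundances."""
    def __init__(self, n_species: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_species, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_species)

    def forward(self, x):                  # x: (batch, time, n_species)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # predict next abundances

# Toy usage on random trajectories (real data: measured abundances).
model = CommunityLSTM(n_species=12)
x = torch.randn(8, 10, 12)                 # 8 trajectories, 10 time points
y_hat = model(x)
loss = nn.functional.mse_loss(y_hat, torch.randn(8, 12))
loss.backward()
```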


Subjects
Gastrointestinal Microbiome, Microbiota, Bacteria, Humans, Microbial Interactions, Neural Networks (Computer)
5.
J Am Stat Assoc ; 117(540): 2056-2073, 2022.
Article in English | MEDLINE | ID: mdl-36908312

ABSTRACT

Network data often arise via a series of structured interactions among a population of constituent elements. E-mail exchanges, for example, have a single sender followed by potentially multiple receivers. Scientific articles, on the other hand, may have multiple subject areas and multiple authors. We introduce a statistical model, termed the Pitman-Yor hierarchical vertex components model (PY-HVCM), that is well suited for structured interaction data. The proposed PY-HVCM effectively models complex relational data by partial pooling of local information via a latent, shared population-level distribution. The PY-HVCM is a canonical example of hierarchical vertex components models - a subfamily of models for exchangeable structured interaction-labeled networks, i.e., networks invariant to interaction relabeling. Theoretical analysis and supporting simulations provide clear model interpretation and establish global sparsity and power-law degree distributions. A computationally tractable Gibbs sampling algorithm is derived for inferring sparsity and power-law properties of complex networks. We demonstrate the model on both the Enron e-mail dataset and an ArXiv dataset, showing goodness of fit of the model via posterior predictive validation.
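
The Pitman-Yor predictive rule underlying such models is compact enough to sketch. The following Python sample from a single (non-hierarchical) Pitman-Yor urn illustrates why these priors yield power-law behavior; the paper's hierarchical, network-valued construction is considerably richer.

```python
import numpy as np

def pitman_yor_sample(n: int, d: float, theta: float, rng=None):
    """Draw n labels from a Pitman-Yor 'Chinese restaurant' process
    with discount d in [0, 1) and concentration theta > -d.
    At step i, a new label appears with probability
    (theta + d*K) / (theta + i); otherwise existing label k is
    repeated with probability (n_k - d) / (theta + i)."""
    rng = rng or np.random.default_rng()
    counts, labels = [], []            # counts[k] = n_k for label k
    for i in range(n):
        p_new = (theta + d * len(counts)) / (theta + i)
        probs = [(c - d) / (theta + i) for c in counts] + [p_new]
        k = rng.choice(len(probs), p=np.array(probs))
        if k == len(counts):
            counts.append(1)           # a brand-new vertex/label
        else:
            counts[k] += 1             # reuse an existing one
        labels.append(k)
    return labels

# Heavy-tailed label frequencies emerge for d > 0 (power-law regime).
print(pitman_yor_sample(20, d=0.5, theta=1.0, rng=np.random.default_rng(1)))
```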

6.
J Big Data ; 8(1): 82, 2021.
Article in English | MEDLINE | ID: mdl-34777945

ABSTRACT

Data-driven innovation is propelled by recent scientific advances, rapid technological progress, substantial reductions of manufacturing costs, and significant demands for effective decision support systems. This has led to efforts to collect massive amounts of heterogeneous and multisource data; however, not all data are of equal quality or equally informative. Previous methods to capture and quantify the utility of data include value of information (VoI), quality of information (QoI), and mutual information (MI). This manuscript introduces a new measure to quantify whether larger volumes of increasingly more complex data enhance, degrade, or alter their information content and utility with respect to specific tasks. We present a new information-theoretic measure, called the Data Value Metric (DVM), that quantifies the useful information content (energy) of large and heterogeneous datasets. The DVM formulation is based on a regularized model balancing data analytical value (utility) and model complexity. DVM can be used to determine whether appending, expanding, or augmenting a dataset may be beneficial in specific application domains. Subject to the choice of data analytic, inferential, or forecasting techniques employed to interrogate the data, DVM quantifies the information boost, or degradation, associated with increasing the data size or expanding the richness of its features. DVM is defined as a mixture of a fidelity term and a regularization term. The fidelity term captures the usefulness of the sample data specifically in the context of the inferential task. The regularization term represents the computational complexity of the corresponding inferential method. Inspired by the concept of the information bottleneck in deep learning, the fidelity term depends on the performance of the corresponding supervised or unsupervised model. We tested the DVM method on several alternative supervised and unsupervised regression, classification, clustering, and dimensionality reduction tasks. Both real and simulated datasets with weak and strong signal information are used in the experimental validation. Our findings suggest that DVM effectively captures the balance between analytical value and algorithmic complexity. Changes in the DVM expose the tradeoffs between algorithmic complexity and data analytical value in terms of the sample size and the feature richness of a dataset. DVM values may be used to determine the size and characteristics of the data needed to optimize the relative utility of various supervised or unsupervised algorithms.
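
Since the abstract describes DVM only as a fidelity/regularization mixture, a toy score in that spirit can be sketched as follows; the weighting, the complexity proxy, and the normalization are assumptions for illustration, not the published formula.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def dvm_like(X, y, model, weight: float = 0.8) -> float:
    """Toy data-value score in the spirit of DVM: a weighted mixture of
    a fidelity term (cross-validated task performance) and a
    regularization term penalizing complexity (here, a crude
    log-feature-count proxy)."""
    fidelity = cross_val_score(model, X, y, cv=5).mean()
    complexity = np.log1p(X.shape[1]) / np.log1p(1000)  # rough normalization
    return weight * fidelity - (1 - weight) * complexity

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=300) > 0).astype(int)
print(dvm_like(X, y, RandomForestClassifier(n_estimators=100, random_state=0)))
```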

7.
JAMA Netw Open ; 4(9): e2128534, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34586364

ABSTRACT

Importance: Currently, there are no presymptomatic screening methods to identify individuals infected with a respiratory virus to prevent disease spread and to predict their trajectory for resource allocation. Objective: To evaluate the feasibility of using noninvasive, wrist-worn wearable biometric monitoring sensors to detect presymptomatic viral infection after exposure and predict infection severity in patients exposed to H1N1 influenza or human rhinovirus. Design, Setting, and Participants: The cohort H1N1 viral challenge study was conducted during 2018; data were collected from September 11, 2017, to May 4, 2018. The cohort rhinovirus challenge study was conducted during 2015; data were collected from September 14 to 21, 2015. A total of 39 adult participants were recruited for the H1N1 challenge study, and 24 adult participants were recruited for the rhinovirus challenge study. Exclusion criteria for both challenges included chronic respiratory illness and high levels of serum antibodies. Participants in the H1N1 challenge study were isolated in a clinic for a minimum of 8 days after inoculation. The rhinovirus challenge took place on a college campus, and participants were not isolated. Exposures: Participants in the H1N1 challenge study were inoculated via intranasal drops of diluted influenza A/California/03/09 (H1N1) virus with a mean count of 10⁶ using the median tissue culture infectious dose (TCID50) assay. Participants in the rhinovirus challenge study were inoculated via intranasal drops of diluted human rhinovirus strain type 16 with a count of 100 using the TCID50 assay. Main Outcomes and Measures: The primary outcome measures included cross-validated performance metrics of random forest models to screen for presymptomatic infection and predict infection severity, including accuracy, precision, sensitivity, specificity, F1 score, and area under the receiver operating characteristic curve (AUC). Results: A total of 31 participants with H1N1 (24 men [77.4%]; mean [SD] age, 34.7 [12.3] years) and 18 participants with rhinovirus (11 men [61.1%]; mean [SD] age, 21.7 [3.1] years) were included in the analysis after data preprocessing. Separate H1N1 and rhinovirus detection models, using only data from wearable devices as input, were able to distinguish between infection and noninfection with accuracies of up to 92% for H1N1 (90% precision, 90% sensitivity, 93% specificity, 90% F1 score, and 0.85 [95% CI, 0.70-1.00] AUC) and 88% for rhinovirus (100% precision, 78% sensitivity, 100% specificity, 88% F1 score, and 0.96 [95% CI, 0.85-1.00] AUC). The infection severity prediction model was able to distinguish between mild and moderate infection 24 hours prior to symptom onset with an accuracy of 90% for H1N1 (88% precision, 88% sensitivity, 92% specificity, 88% F1 score, and 0.88 [95% CI, 0.72-1.00] AUC) and 89% for rhinovirus (100% precision, 75% sensitivity, 100% specificity, 86% F1 score, and 0.95 [95% CI, 0.79-1.00] AUC). Conclusions and Relevance: This cohort study suggests that the use of a noninvasive, wrist-worn wearable device to predict an individual's response to viral exposure prior to symptoms is feasible. Harnessing this technology would support early interventions to limit presymptomatic spread of viral respiratory infections, which is timely in the era of COVID-19.
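
The evaluation protocol described (cross-validated random forest metrics) is straightforward to reproduce in outline. A minimal sketch with scikit-learn on stand-in features; the study's actual wearable-derived features and labels are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

# Stand-in features: per-participant summaries of wearable streams
# (e.g., heart rate, skin temperature, movement). Hypothetical data.
rng = np.random.default_rng(0)
X = rng.normal(size=(31, 12))        # 31 participants x 12 features
y = np.arange(31) % 2                # infected vs. not infected (toy labels)

scores = cross_validate(
    RandomForestClassifier(n_estimators=200, random_state=0),
    X, y, cv=5,
    scoring=["accuracy", "precision", "recall", "f1", "roc_auc"],
)
for k, v in scores.items():
    if k.startswith("test_"):
        print(k, round(v.mean(), 3))
```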


Subjects
Biometry/methods, Common Cold/diagnosis, Influenza A Virus H1N1 Subtype, Human Influenza/diagnosis, Rhinovirus, Severity of Illness Index, Wearable Electronic Devices, Adult, Area Under Curve, Biological Assay, Biometry/instrumentation, Cohort Studies, Common Cold/virology, Early Diagnosis, Feasibility Studies, Female, Humans, Influenza A Virus H1N1 Subtype/growth & development, Human Influenza/virology, Male, Mass Screening, Biological Models, Rhinovirus/growth & development, Sensitivity and Specificity, Virus Shedding, Young Adult
8.
PLoS One ; 16(3): e0248046, 2021.
Article in English | MEDLINE | ID: mdl-33735201

ABSTRACT

The ensemble Kalman filter (EnKF) is a data assimilation technique that uses an ensemble of models, updated with data, to track the time evolution of a usually non-linear system. It does so by using an empirical approximation to the well-known Kalman filter. However, its performance can suffer when the ensemble size is smaller than the state space, as is often necessary for computationally burdensome models. This scenario means that the empirical estimate of the state covariance is not full rank and possibly quite noisy. To solve this problem in the high-dimensional regime, we propose a computationally fast and easy-to-implement algorithm called the penalized ensemble Kalman filter (PEnKF). Under certain conditions, it can be theoretically proven that the PEnKF will be accurate (the estimation error will converge to zero) despite having fewer ensemble members than state dimensions. Further, in contrast to localization methods, the proposed approach learns the covariance structure associated with the dynamical system. These theoretical results are supported with simulations of several non-linear and high-dimensional systems.
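
A minimal sketch of one perturbed-observation EnKF analysis step with a penalized covariance estimate. Soft-thresholding the off-diagonal sample covariance is used here as a simple stand-in penalty; the PEnKF's exact penalization differs.

```python
import numpy as np

def penalized_enkf_update(X, y, H, R, lam):
    """One EnKF analysis step with a penalized covariance estimate.

    X : (p, m) forecast ensemble (p state dims, m members, m < p)
    y : (d,)   observation;  H : (d, p) observation operator
    R : (d, d) observation noise covariance
    lam : soft-threshold applied to off-diagonal covariance entries,
          a simple sparsity penalty standing in for the paper's exact one.
    """
    p, m = X.shape
    Xc = X - X.mean(axis=1, keepdims=True)
    C = Xc @ Xc.T / (m - 1)                        # rank-deficient sample cov
    S = np.sign(C) * np.maximum(np.abs(C) - lam, 0.0)
    np.fill_diagonal(S, np.diag(C))                # keep variances intact
    K = S @ H.T @ np.linalg.inv(H @ S @ H.T + R)   # Kalman gain
    rng = np.random.default_rng(0)
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, m).T
    return X + K @ (Y - H @ X)                     # perturbed-obs update

# Toy system: 50-dim state tracked with only 20 ensemble members.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 20))
H = np.eye(10, 50); R = 0.1 * np.eye(10); y = rng.normal(size=10)
print(penalized_enkf_update(X, y, H, R, lam=0.05).shape)
```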


Subjects
Theoretical Models, Nonlinear Dynamics, Algorithms
9.
IEEE Trans Biomed Eng ; 68(8): 2377-2388, 2021 08.
Article in English | MEDLINE | ID: mdl-33201806

ABSTRACT

OBJECTIVE: To develop a multi-channel device event segmentation and feature extraction algorithm that is robust to changes in data distribution. METHODS: We introduce an adaptive transfer learning algorithm to classify and segment events from non-stationary multi-channel temporal data. Using a multivariate hidden Markov model (HMM) and Fisher's linear discriminant analysis (FLDA), the algorithm adaptively adjusts to shifts in distribution over time. The proposed algorithm is unsupervised and learns to label events without requiring a priori information about true event states. The procedure is illustrated on experimental data collected from a cohort in a human viral challenge (HVC) study, where certain subjects have disrupted wake and sleep patterns after exposure to an H1N1 influenza pathogen. RESULTS: Simulations establish that the proposed adaptive algorithm significantly outperforms other event classification methods. When applied to early time points in the HVC data, the algorithm extracts sleep/wake features that are predictive of both infection and infection onset time. CONCLUSION: The proposed transfer learning event segmentation method is robust to temporal shifts in data distribution and can be used to produce highly discriminative event-labeled features for health monitoring. SIGNIFICANCE: Our integrated multisensor signal processing and transfer learning method is applicable to many ambulatory monitoring applications.
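
For orientation, here is what unsupervised HMM-based event segmentation of multi-channel data looks like with an off-the-shelf library (hmmlearn). This omits the paper's key contribution, the FLDA-based adaptive transfer learning that handles distribution shift.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Unsupervised segmentation of multi-channel actigraphy-like data into
# two latent states (e.g., wake vs. sleep). Simulated data below.
rng = np.random.default_rng(0)
wake = rng.normal(loc=[1.0, 0.8], scale=0.3, size=(300, 2))
sleep = rng.normal(loc=[0.1, 0.1], scale=0.1, size=(200, 2))
X = np.vstack([wake, sleep, wake])              # simulated day/night cycle

hmm = GaussianHMM(n_components=2, covariance_type="full", random_state=0)
hmm.fit(X)
states = hmm.predict(X)                         # most likely state sequence

# Event-level features: per-state dwell times, usable downstream for
# infection-onset prediction.
changes = np.flatnonzero(np.diff(states)) + 1
segments = np.split(states, changes)
print([(int(seg[0]), len(seg)) for seg in segments][:5])
```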


Subjects
Influenza A Virus H1N1 Subtype, Algorithms, Humans, Outcome Assessment (Health Care), Computer-Assisted Signal Processing, Sleep
10.
Sci Rep ; 10(1): 6811, 2020 04 22.
Article in English | MEDLINE | ID: mdl-32321941

ABSTRACT

We propose a sparsity-promoting Bayesian algorithm capable of identifying radionuclide signatures from weak sources in the presence of a high radiation background. The proposed method is relevant to radiation identification for security applications. In such scenarios, the background typically consists of terrestrial, cosmic, and cosmogenic radiation that may cause false positive responses. We evaluate the new Bayesian approach using gamma-ray data and are able to identify weapons-grade plutonium, masked by naturally occurring radioactive material (NORM), in a measurement time of a few seconds. We demonstrate this identification capability using organic scintillators (stilbene crystals and EJ-309 liquid scintillators), which do not provide direct, high-resolution, source spectroscopic information. Compared to the EJ-309 detector, the stilbene-based detector exhibits a lower identification error, on average, owing to its better energy resolution. Organic scintillators are used within radiation portal monitors to detect gamma rays emitted from conveyances crossing ports of entry. The described method is therefore applicable to radiation portal monitors deployed in the field and could improve their threat discrimination capability by minimizing "nuisance" alarms produced either by NORM-bearing materials found in shipped cargoes, such as ceramics and fertilizers, or radionuclides in recently treated nuclear medicine patients.
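
A sparse-unmixing caricature of the identification task: model the measured spectrum as a nonnegative, mostly zero combination of template spectra. An l1 penalty stands in for the paper's Bayesian sparsity prior, and the templates and data below are synthetic.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Measured counts y modeled as A @ x: A holds per-radionuclide template
# spectra, x is a sparse nonnegative mixing vector. Synthetic example.
rng = np.random.default_rng(0)
n_bins, n_sources = 128, 10
A = np.abs(rng.normal(size=(n_bins, n_sources)))   # template library
x_true = np.zeros(n_sources)
x_true[[2, 7]] = [0.5, 3.0]                        # weak source + NORM
y = A @ x_true + 0.05 * np.abs(rng.normal(size=n_bins))

fit = Lasso(alpha=0.01, positive=True, max_iter=10000).fit(A, y)
print(np.round(fit.coef_, 2))                      # nonzeros flag sources
```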

11.
Bioinformatics ; 36(8): 2547-2553, 2020 04 15.
Article in English | MEDLINE | ID: mdl-31879763

ABSTRACT

MOTIVATION: Understanding the mechanisms and structural mappings between molecules and pathway classes is critical for the design of reaction predictors for synthesizing new molecules. This article studies the problem of predicting the classes of metabolic pathways (series of chemical reactions occurring within a cell) in which a given biochemical compound participates. We apply a hybrid machine learning approach consisting of graph convolutional networks used to extract molecular shape features as input to a random forest classifier. In contrast to previously applied machine learning methods for this problem, our framework automatically extracts relevant shape features directly from input SMILES representations, which are atom-bond specifications of the chemical structures composing the molecules. RESULTS: Our method correctly predicts the respective metabolic pathway class of 95.16% of tested compounds, whereas competing methods only achieve an accuracy of 84.92% or less. Furthermore, our framework extends to the task of classifying compounds having mixed membership in multiple pathway classes. Our prediction accuracy for this multi-label task is 97.61%. We analyze the relative importance of various global physicochemical features to the pathway class prediction problem and show that simple linear/logistic regression models can predict the values of these global features from the shape features extracted using our framework. AVAILABILITY AND IMPLEMENTATION: https://github.com/baranwa2/MetabolicPathwayPrediction. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
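
For contrast with the learned shape features, here is a sketch of the simpler global-descriptor baseline the paper compares against, assuming RDKit descriptors feeding a random forest; the compounds and labels are hypothetical.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestClassifier

def global_features(smiles: str):
    """A few global physicochemical descriptors from a SMILES string.
    The paper's framework instead learns shape features with a graph
    convolutional network; this resembles the simpler baselines."""
    mol = Chem.MolFromSmiles(smiles)
    return [Descriptors.MolWt(mol),
            Descriptors.MolLogP(mol),
            Descriptors.TPSA(mol),
            Descriptors.NumHDonors(mol),
            Descriptors.NumHAcceptors(mol)]

# Hypothetical toy labels (real labels would be KEGG pathway classes).
smiles = ["CCO", "CC(=O)O", "c1ccccc1", "CCN", "CC(N)C(=O)O", "O=C=O"]
labels = [0, 0, 1, 0, 1, 1]
X = [global_features(s) for s in smiles]
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict([global_features("CCOC")]))
```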


Subjects
Deep Learning, Neural Networks (Computer), Machine Learning, Metabolic Networks and Pathways, Software
12.
Entropy (Basel) ; 21(4)2019 Apr 17.
Article in English | MEDLINE | ID: mdl-33267124

ABSTRACT

We consider the k-user successive refinement problem with causal decoder side information and derive an exponential strong converse theorem. The rate-distortion region for the problem can be derived as a straightforward extension of the two-user case by Maor and Merhav (2008). We show that for any rate-distortion tuple outside the rate-distortion region of the k-user successive refinement problem with causal decoder side information, the joint excess-distortion probability approaches one exponentially fast. Our proof follows by judiciously adapting the recently proposed strong converse technique by Oohama using the information spectrum method, the variational form of the rate-distortion region, and Hölder's inequality. The lossy source coding problem with causal decoder side information considered by El Gamal and Weissman is a special case (k = 1) of the current problem. Therefore, the exponential strong converse theorem for the El Gamal and Weissman problem follows as a corollary of our result.

13.
Entropy (Basel) ; 21(8)2019 Aug 12.
Article in English | MEDLINE | ID: mdl-33267500

ABSTRACT

This paper proposes a geometric estimator of dependency between a pair of multivariate random variables. The proposed estimator of dependency is based on a randomly permuted geometric graph (the minimal spanning tree) over the two multivariate samples. This estimator converges to a quantity that we call the geometric mutual information (GMI), which is equivalent to the Henze-Penrose divergence between the joint distribution of the multivariate samples and the product of the marginals. The GMI has many of the same properties as standard MI but can be estimated from empirical data without density estimation, making it scalable to large datasets. The proposed empirical estimator of GMI is simple to implement, involving the construction of a minimal spanning tree (MST) spanning both the original data and a randomly permuted version of this data. We establish asymptotic convergence of the estimator and convergence rates of the bias and variance for smooth multivariate density functions belonging to a Hölder class. We demonstrate the advantages of our proposed geometric dependency estimator in a series of experiments.
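
The MST construction is simple to sketch. Below is a minimal Python estimate of the Henze-Penrose divergence between two samples via cross-edge counting (Friedman-Rafsky style); the paper's GMI additionally applies this machinery to a randomly permuted copy of paired data to measure dependence.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def henze_penrose_divergence(X, Y):
    """MST-based estimate of the Henze-Penrose divergence between the
    distributions of samples X and Y (each (n, d); equal sizes assumed
    here for simplicity). Build one MST over the pooled points and count
    edges joining an X-point to a Y-point: many cross edges means the
    samples are well mixed (low divergence)."""
    n = len(X)
    Z = np.vstack([X, Y])
    T = minimum_spanning_tree(squareform(pdist(Z))).tocoo()
    cross = sum((i < n) != (j < n) for i, j in zip(T.row, T.col))
    return max(0.0, 1.0 - cross / n)   # 0: identical, near 1: disjoint

rng = np.random.default_rng(0)
X = rng.normal(0, 1, size=(200, 3))
print(henze_penrose_divergence(X, rng.normal(0, 1, size=(200, 3))))  # ~0
print(henze_penrose_divergence(X, rng.normal(4, 1, size=(200, 3))))  # ~1
```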

14.
iScience ; 6: 232-246, 2018 Aug 31.
Article in English | MEDLINE | ID: mdl-30240614

ABSTRACT

Genome architecture has emerged as a critical element of transcriptional regulation, although its role in the control of cell identity is not well understood. Here we use transcription factor (TF)-mediated reprogramming to examine the interplay between genome architecture and transcriptional programs that transition cells into the myogenic identity. We recently developed new methods for evaluating the topological features of genome architecture based on network centrality. Through integrated analysis of these features of genome architecture and transcriptome dynamics during myogenic reprogramming of human fibroblasts, we find that significant architectural reorganization precedes activation of a myogenic transcriptional program. This interplay sets the stage for a critical transition observed at several genomic scales reflecting definitive adoption of the myogenic phenotype. Subsequently, TFs within the myogenic transcriptional program participate in entrainment of biological rhythms. These findings reveal a role for topological features of genome architecture in the initiation of transcriptional programs during TF-mediated human cellular reprogramming.

15.
Science ; 361(6403)2018 08 17.
Article in English | MEDLINE | ID: mdl-30115781

ABSTRACT

Computational imaging combines measurement and computational methods with the aim of forming images even when the measurement conditions are weak, few in number, or highly indirect. The recent surge in quantum-inspired imaging sensors, together with a new wave of algorithms allowing on-chip, scalable and robust data processing, has induced an increase of activity with notable results in the domain of low-light flux imaging and sensing. We provide an overview of the major challenges encountered in low-illumination (e.g., ultrafast) imaging and how these problems have recently been addressed for imaging applications in extreme conditions. These methods provide examples of the future imaging solutions to be developed, for which the best results are expected to arise from an efficient codesign of the sensors and data analysis tools.

16.
J Opt Soc Am A Opt Image Sci Vis ; 35(4): 639-651, 2018 Apr 01.
Article in English | MEDLINE | ID: mdl-29603952

ABSTRACT

A joint-estimation algorithm is presented that enables simultaneous camera blur and pose estimation from a known calibration target in the presence of aliasing. Specifically, a parametric maximum-likelihood (ML) point-spread function estimate is derived for characterizing a camera's optical imperfections through the use of a calibration target in an otherwise loosely controlled environment. The imaging perspective, ambient-light levels, target reflectance, detector gain and offset, quantum efficiency, and read-noise levels are all treated as nuisance parameters. The Cramér-Rao bound is derived, and simulations demonstrate that the proposed estimator achieves near optimal mean squared error performance. The proposed method is applied to experimental data to validate the fidelity of the forward models as well as to establish the utility of the resulting ML estimates for both system identification and subsequent image restoration.
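
In a stripped-down setting, the ML principle here reduces to least squares: with a known target and Gaussian noise, the blur parameter that minimizes the squared residual is the ML estimate. A minimal sketch assuming an isotropic Gaussian PSF and none of the paper's pose, aliasing, or gain/offset nuisance parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize_scalar

# Known calibration target, observed through an (assumed Gaussian) blur
# plus i.i.d. Gaussian noise. Synthetic data for illustration.
rng = np.random.default_rng(0)
target = (rng.random((64, 64)) > 0.5).astype(float)
observed = gaussian_filter(target, sigma=1.7) + 0.01 * rng.normal(size=(64, 64))

def neg_log_likelihood(sigma):
    resid = observed - gaussian_filter(target, sigma=sigma)
    return float((resid ** 2).sum())   # Gaussian noise: NLL is squared error

res = minimize_scalar(neg_log_likelihood, bounds=(0.1, 5.0), method="bounded")
print(round(res.x, 2))                 # close to the true sigma = 1.7
```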

17.
Entropy (Basel) ; 20(8)2018 Jul 27.
Article in English | MEDLINE | ID: mdl-33265649

ABSTRACT

Recent work has focused on the problem of nonparametric estimation of information divergence functionals between two continuous random variables. Many existing approaches require either restrictive assumptions about the density support set or difficult calculations at the support set boundary, which must be known a priori. The mean squared error (MSE) convergence rate of a leave-one-out kernel density plug-in divergence functional estimator is derived for general bounded density support sets, where knowledge of the support boundary, and therefore boundary correction, is not required. The theory of optimally weighted ensemble estimation is generalized to derive a divergence estimator that achieves the parametric rate when the densities are sufficiently smooth. Guidelines for tuning parameter selection and the asymptotic distribution of this estimator are provided. Based on the theory, an empirical estimator of Rényi-α divergence is proposed that greatly outperforms the standard kernel density plug-in estimator in terms of mean squared error, especially in high dimensions. The estimator is shown to be robust to the choice of tuning parameters. We show extensive simulation results that verify the theoretical results of our paper. Finally, we apply the proposed estimator to estimate bounds on the Bayes error rate for a cell classification problem.
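
The plug-in construction being improved upon can be sketched directly. Below is a leave-one-out, fixed-bandwidth kernel estimate of the Rényi-α divergence; the paper's optimally weighted ensemble of such estimators is what achieves the parametric rate and is not reproduced here.

```python
import numpy as np

def renyi_divergence_kde(X, Y, alpha=0.8, h=0.3):
    """Leave-one-out kernel plug-in estimate of the Renyi-alpha
    divergence D_alpha(f||g) = log E_f[(f/g)^(alpha-1)] / (alpha - 1),
    using fixed-bandwidth Gaussian kernels."""
    dim = X.shape[1]
    norm = (2 * np.pi * h * h) ** (dim / 2)

    def kde(points, queries, loo=False):
        d2 = ((queries[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        k = np.exp(-d2 / (2 * h * h))
        if loo:                        # leave-one-out: drop self-kernel
            np.fill_diagonal(k, 0.0)
            return k.sum(1) / ((len(points) - 1) * norm)
        return k.sum(1) / (len(points) * norm)

    f_hat = kde(X, X, loo=True)        # density of f at each X_i
    g_hat = kde(Y, X)                  # density of g at each X_i
    return np.log(((f_hat / g_hat) ** (alpha - 1)).mean()) / (alpha - 1)

rng = np.random.default_rng(0)
X = rng.normal(0, 1, size=(500, 2))
print(renyi_divergence_kde(X, rng.normal(0, 1, size=(500, 2))))  # near 0
print(renyi_divergence_kde(X, rng.normal(1, 1, size=(500, 2))))  # clearly > 0
```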

18.
Front Robot AI ; 5: 55, 2018.
Article in English | MEDLINE | ID: mdl-33500937

ABSTRACT

Scaling up robot swarms to collectives of hundreds or even thousands without sacrificing sensing, processing, and locomotion capabilities is a challenging problem. Low-cost robots are potentially scalable, but the majority of existing systems have limited capabilities, and these limitations substantially constrain the type of experiments that could be performed by robotics researchers. Instead of adding functionality by adding more components and therefore increasing the cost, we demonstrate how low-cost hardware can be used beyond its standard functionality. We systematically review 15 swarm robotic systems and analyse their sensing capabilities by applying a general sensor model from the sensing and measurement community. This work is based on the HoverBot system. A HoverBot is a levitating circuit board that manoeuvres by pulling itself towards magnetic anchors that are embedded into the robot arena. We show that HoverBot's magnetic field readouts from its Hall-effect sensor can be associated with successful movement, robot rotation, and collision measurands. We build a time series classifier based on these magnetic field readouts. We modify and apply signal processing techniques to enable the online classification of the time-variant magnetic field measurements on HoverBot's low-cost microcontroller. We enabled HoverBot with successful movement, rotation, and collision sensing capabilities by utilising its single Hall-effect sensor. We discuss how our classification method could be applied to other sensors to increase a robot's functionality while retaining its cost.

19.
EBioMedicine ; 17: 172-181, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28238698

ABSTRACT

Infection of respiratory mucosa with viral pathogens triggers complex immunologic events in the affected host. We sought to characterize this response through proteomic analysis of nasopharyngeal lavage in human subjects experimentally challenged with influenza A/H3N2 or human rhinovirus, and to develop targeted assays measuring peptides involved in this host response, allowing classification of acute respiratory virus infection. Unbiased proteomic discovery analysis identified 3285 peptides corresponding to 438 unique proteins, and revealed that infection with H3N2 induces significant alterations in protein expression. These include proteins involved in the acute inflammatory response, the innate immune response, and the complement cascade. These data provide insights into the nature of the biological response to viral infection of the upper respiratory tract, and the proteins that are dysregulated by viral infection form the basis of a signature that accurately classifies the infected state. Verification of this signature using targeted mass spectrometry in independent cohorts of subjects challenged with influenza or rhinovirus demonstrates that it performs with high accuracy (0.8623 AUROC, 75% TPR, 97.46% TNR). With further development as a clinical diagnostic, this signature may have utility in rapid screening for emerging infections, avoidance of inappropriate antibacterial therapy, and more rapid implementation of appropriate therapeutic and public health strategies.


Subjects
Human Influenza/diagnosis, Proteome/metabolism, Respiratory Mucosa/metabolism, Biomarkers/metabolism, Humans, Influenza A Virus H3N2 Subtype/pathogenicity, Human Influenza/virology, Mass Spectrometry, Proteome/chemistry, Rhinovirus/pathogenicity
20.
J Biomed Semantics ; 7(1): 53, 2016 09 14.
Article in English | MEDLINE | ID: mdl-27627881

ABSTRACT

BACKGROUND: Statistics play a critical role in biological and clinical research. However, most reports of scientific results in the published literature make it difficult for the reader to reproduce the statistical analyses performed in achieving those results because they provide inadequate documentation of the statistical tests and algorithms applied. The Ontology of Biological and Clinical Statistics (OBCS) is put forward here as a step towards solving this problem. RESULTS: The terms in OBCS, including 'data collection', 'data transformation in statistics', 'data visualization', 'statistical data analysis', and 'drawing a conclusion based on data', cover the major types of statistical processes used in basic biological research and clinical outcome studies. OBCS is aligned with the Basic Formal Ontology (BFO) and extends the Ontology of Biomedical Investigations (OBI), an OBO (Open Biological and Biomedical Ontologies) Foundry ontology supported by over 20 research communities. Currently, OBCS comprises 878 terms, representing 20 BFO classes, 403 OBI classes, 229 OBCS-specific classes, and 122 classes imported from ten other OBO ontologies. We discuss two examples illustrating how the ontology is being applied. In the first (biological) use case, we describe how OBCS was applied to represent the high-throughput microarray data analysis of immunological transcriptional profiles in human subjects vaccinated with an influenza vaccine. In the second (clinical outcomes) use case, we applied OBCS to represent the processing of electronic health care data to determine the associations between hospital staffing levels and patient mortality. Our case studies were designed to show how OBCS can be used for the consistent representation of statistical analysis pipelines under two different research paradigms. Other ongoing projects using OBCS for statistical data processing are also discussed. The OBCS source code and documentation are available at: https://github.com/obcs/obcs . CONCLUSIONS: The Ontology of Biological and Clinical Statistics (OBCS) is a community-based open-source ontology in the domain of biological and clinical statistics. OBCS is a timely ontology that represents statistics-related terms and their relations in a rigorous fashion, facilitates standard data analysis and integration, and supports reproducible biological and clinical research.


Subjects
Biological Ontologies, Statistics as Topic, Data Mining, Reference Standards, Reproducibility of Results, Vaccines/immunology