1.
Proc Natl Acad Sci U S A ; 116(28): 13825-13832, 2019 07 09.
Article in English | MEDLINE | ID: mdl-31235606

ABSTRACT

Matter evolved under the influence of gravity from minuscule density fluctuations. Nonperturbative structure formed hierarchically over all scales and developed non-Gaussian features in the Universe, known as the cosmic web. Fully understanding the structure formation of the Universe is one of the holy grails of modern astrophysics. Astrophysicists survey large volumes of the Universe and use a large ensemble of computer simulations to compare with the observed data to extract the full information of our own Universe. However, evolving billions of particles over billions of years, even with the simplest physics, is a daunting task. We build a deep neural network, the Deep Density Displacement Model (D³M), which learns from a set of prerun numerical simulations to predict the nonlinear large-scale structure of the Universe with the Zel'dovich Approximation (ZA), an analytical approximation based on perturbation theory, as the input. Our extensive analysis demonstrates that D³M outperforms second-order Lagrangian perturbation theory (2LPT), the commonly used fast approximate simulation method, in predicting cosmic structure in the nonlinear regime. We also show that D³M is able to accurately extrapolate far beyond its training data and predict structure formation for significantly different cosmological parameters. Our study proves that deep learning is a practical and accurate alternative to approximate 3D simulations of the gravitational structure formation of the Universe.
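
As a rough illustration of the kind of model this abstract describes, the sketch below maps a Zel'dovich Approximation displacement field to a predicted nonlinear displacement field with a small 3D convolutional network. It is a minimal PyTorch sketch; the layer sizes, grid resolution, and residual formulation are illustrative assumptions, not the architecture from the paper.

# Minimal sketch of a displacement-to-displacement 3D CNN, loosely in the
# spirit of the D³M idea described above. Layer sizes, grid resolution,
# and the residual formulation are illustrative assumptions only.
import torch
import torch.nn as nn

class DisplacementNet(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # Input and output both have 3 channels: the (x, y, z) components
        # of the particle displacement field on a regular grid.
        self.net = nn.Sequential(
            nn.Conv3d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv3d(channels, 3, kernel_size=3, padding=1),
        )

    def forward(self, za_displacement):
        # Predict a correction on top of the analytic ZA displacement,
        # so the network only has to learn the nonlinear residual.
        return za_displacement + self.net(za_displacement)

model = DisplacementNet()
za = torch.randn(1, 3, 32, 32, 32)    # toy 32^3 grid of ZA displacements
predicted = model(za)                 # predicted nonlinear displacements
print(predicted.shape)                # torch.Size([1, 3, 32, 32, 32])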

2.
Med Image Comput Comput Assist Interv ; 11070: 502-510, 2018 Sep.
Article in English | MEDLINE | ID: mdl-30895278

ABSTRACT

We propose an attention-based method that aggregates local image features into a subject-level representation for predicting disease severity. In contrast to classical deep learning methods that require a fixed-dimensional input, our method operates on a set of image patches; hence it can accommodate variable-length input images without resizing. The model learns a clinically interpretable subject-level representation that is reflective of disease severity. Our model consists of three mutually dependent modules which regulate each other: (1) a discriminative network that learns a fixed-length representation from local features and maps it to disease severity; (2) an attention mechanism that provides interpretability by focusing on the areas of the anatomy that contribute the most to the prediction task; and (3) a generative network that encourages diversity of the local latent features. The generative term ensures that the attention weights are non-degenerate while maintaining the relevance of the local regions to the disease severity. We train our model end-to-end in the context of a large-scale lung CT study of Chronic Obstructive Pulmonary Disease (COPD). Our model gives state-of-the-art performance in predicting clinical measures of severity for COPD. The distribution of the attention weights provides the regional relevance of lung tissue to the clinical measurements.


Subject(s)
Image Interpretation, Computer-Assisted , Lung , Tomography, X-Ray Computed , Algorithms , Humans , Lung/diagnostic imaging , Pattern Recognition, Automated , Reproducibility of Results , Sensitivity and Specificity
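
The aggregation step described in entry 2, an attention-weighted pooling of a variable number of patch features into a single subject-level representation, can be sketched as follows. This is a minimal PyTorch sketch under assumed feature dimensions; the discriminative backbone that produces the patch features and the generative regularizer of module (3) are omitted, and the single-layer severity head is an illustrative simplification.

# Attention-based pooling of per-patch features into one subject-level
# representation, followed by a severity regression head. All sizes are
# assumed; the generative regularization term is not shown.
import torch
import torch.nn as nn

class AttentionAggregator(nn.Module):
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 1),
        )
        self.severity_head = nn.Linear(feat_dim, 1)

    def forward(self, patch_features):
        # patch_features: (num_patches, feat_dim); num_patches can vary
        # from subject to subject, so no image resizing is needed.
        scores = self.attention(patch_features)           # (num_patches, 1)
        weights = torch.softmax(scores, dim=0)            # attention weights
        subject_repr = (weights * patch_features).sum(0)  # (feat_dim,)
        severity = self.severity_head(subject_repr)       # predicted score
        return severity, weights

model = AttentionAggregator()
features = torch.randn(57, 128)     # e.g. 57 lung patches for one subject
severity, weights = model(features)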
4.
PLoS One ; 10(5): e0124219, 2015.
Article in English | MEDLINE | ID: mdl-26017271

ABSTRACT

Many diseases cause significant changes to the concentrations of small molecules (a.k.a. metabolites) that appear in a person's biofluids, which means such diseases can often be readily detected from a person's "metabolic profile", i.e., the list of concentrations of those metabolites. This information can be extracted from a biofluid's Nuclear Magnetic Resonance (NMR) spectrum. However, due to its complexity, NMR spectral profiling has remained manual, resulting in slow, expensive, and error-prone procedures that have hindered clinical and industrial adoption of metabolomics via NMR. This paper presents a system, BAYESIL, which can quickly, accurately, and autonomously produce a person's metabolic profile. Given a 1D 1H NMR spectrum of a complex biofluid (specifically serum or cerebrospinal fluid), BAYESIL can automatically determine the metabolic profile. This requires first performing several spectral processing steps, then matching the resulting spectrum against a reference compound library, which contains the "signatures" of each relevant metabolite. BAYESIL views spectral matching as an inference problem within a probabilistic graphical model that rapidly approximates the most probable metabolic profile. Our extensive studies on a diverse set of complex mixtures, including real biological samples (serum and CSF), defined mixtures, and realistic computer-generated spectra involving >50 compounds, show that BAYESIL can autonomously find the concentrations of NMR-detectable metabolites accurately (~90% correct identification and ~10% quantification error) in less than 5 minutes on a single CPU. These results demonstrate that BAYESIL is the first fully automatic, publicly accessible system that provides quantitative NMR spectral profiling effectively, with an accuracy on these biofluids that meets or exceeds the performance of trained experts. We anticipate this tool will usher in high-throughput metabolomics and enable a wealth of new applications of NMR in clinical settings. BAYESIL is accessible at http://www.bayesil.ca.


Subject(s)
Magnetic Resonance Imaging , Metabolomics/methods , Algorithms
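
BAYESIL's probabilistic graphical model is not specified in the abstract, but the underlying idea of explaining an observed spectrum as a mixture of reference compound signatures can be sketched with a plain non-negative least-squares fit. This is a deliberate simplification, not BAYESIL's inference method; the signature library, spectrum, and compounds below are synthetic.

# Toy illustration of spectral profiling: express an observed 1D NMR
# spectrum as a non-negative mixture of reference compound signatures.
# This is a crude stand-in for BAYESIL's probabilistic inference; the
# signatures and the observed spectrum are invented.
import numpy as np
from scipy.optimize import nnls

n_points = 2000
ppm = np.linspace(0, 10, n_points)

def peak(center, width=0.05):
    # Simple Gaussian "signature" peak; real metabolite signatures are
    # multiplets whose shapes depend on pH, field strength, etc.
    return np.exp(-0.5 * ((ppm - center) / width) ** 2)

# Hypothetical reference library: each column is one compound's signature.
library = np.column_stack([
    peak(1.3) + peak(4.1),   # "compound A"
    peak(3.0),               # "compound B"
    peak(2.1) + peak(7.3),   # "compound C"
])

true_conc = np.array([2.0, 0.5, 1.2])
observed = library @ true_conc + 0.01 * np.random.randn(n_points)

estimated, residual = nnls(library, observed)
print("estimated concentrations:", np.round(estimated, 2))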
5.
Theor Biol Med Model ; 10: 29, 2013 May 01.
Article in English | MEDLINE | ID: mdl-23634782

ABSTRACT

BACKGROUND: As microtubules are essential for cell growth and division, their constituent protein β-tubulin has been a popular target for various treatments, including cancer chemotherapy. There are several isotypes of human β-tubulin, and each type of cell expresses its characteristic distribution of these isotypes. Moreover, each tubulin-binding drug has its own distribution of binding affinities over the various isotypes, which further complicates optimal drug selection. An ideal drug would bind only the tubulin isotypes expressed abundantly by the cancer cells, but not those in the healthy cells. Unfortunately, as the distributions of the tubulin isotypes in cancer cells overlap with those of healthy cells, this ideal scenario is clearly not possible. We can, however, seek a drug that interferes significantly with the isotype distribution of the cancer cell but has only minor interactions with those of the healthy cells. METHODS: We describe a quantitative methodology for identifying the optimal tubulin isotype profile for an ideal cancer drug, given the isotype distribution of a specific cancer type, the isotype distributions in various healthy tissues, and the physiological importance of each such tissue. RESULTS: We report the optimal isotype profiles for different types of cancer with various routes of delivery. CONCLUSIONS: Our algorithm, which defines the best profile for each type of cancer (given the drug delivery route and some specified patient characteristics), will help to personalize the design of pharmaceuticals for individual patients. This paper is an attempt to explicitly consider the effects of the tubulin isotype distributions in both cancer and normal cell types, for rational chemotherapy design aimed at optimizing the drug's efficacy while minimizing side effects.


Subject(s)
Antineoplastic Agents/therapeutic use , Neoplasms/drug therapy , Tubulin/metabolism , Algorithms , Humans , Models, Molecular , Tubulin/chemistry
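
The abstract above does not give the paper's objective function, so the sketch below only illustrates the general shape of such a calculation: score a candidate binding-affinity profile by its overlap with the cancer cell's isotype distribution minus an importance-weighted overlap with healthy tissue distributions. All isotype fractions, tissue weights, and the penalty factor are invented for illustration.

# Toy version of the optimization described above: choose a binding-affinity
# profile over beta-tubulin isotypes that overlaps strongly with a cancer
# cell's isotype distribution but weakly with healthy tissues, weighted by
# physiological importance. All distributions and weights are made up.
import numpy as np

isotypes = ["beta-I", "beta-II", "beta-III", "beta-IV"]

cancer = np.array([0.20, 0.10, 0.55, 0.15])    # cancer isotype fractions
healthy = {
    "neuron":     (np.array([0.30, 0.25, 0.35, 0.10]), 1.0),  # (fractions, importance)
    "cardiac":    (np.array([0.40, 0.30, 0.05, 0.25]), 0.8),
    "epithelial": (np.array([0.50, 0.20, 0.10, 0.20]), 0.3),
}
penalty = 1.5  # trade-off between efficacy and side effects

# Net benefit of binding each isotype.
net = cancer - penalty * sum(w * dist for dist, w in healthy.values())

# With a linear objective and affinities bounded in [0, 1], the optimum is
# simply to bind every isotype whose net benefit is positive.
optimal_profile = (net > 0).astype(float)
for name, affinity, score in zip(isotypes, optimal_profile, net):
    print(f"{name}: affinity {affinity:.0f} (net benefit {score:+.2f})")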