1.
Sci Rep ; 13(1): 9567, 2023 06 13.
Article in English | MEDLINE | ID: mdl-37311768

ABSTRACT

With the advent of multiplex fluorescence in situ hybridization (FISH) and in situ RNA sequencing technologies, spatial transcriptomics analysis is advancing rapidly, providing spatial location and gene expression information about cells in tissue sections at single cell resolution. Cell type classification of these spatially-resolved cells can be inferred by matching the spatial transcriptomics data to reference atlases derived from single cell RNA-sequencing (scRNA-seq) in which cell types are defined by differences in their gene expression profiles. However, robust cell type matching of the spatially-resolved cells to reference scRNA-seq atlases is challenging due to the intrinsic differences in resolution between the spatial and scRNA-seq data. In this study, we systematically evaluated six computational algorithms for cell type matching across four image-based spatial transcriptomics experimental protocols (MERFISH, smFISH, BaristaSeq, and ExSeq) conducted on the same mouse primary visual cortex (VISp) brain region. We find that many cells are assigned the same type by multiple cell type matching algorithms and are present in spatial patterns previously reported from scRNA-seq studies in VISp. Furthermore, by combining the results of individual matching strategies into consensus cell type assignments, we see even greater alignment with biological expectations. We present two ensemble meta-analysis strategies used in this study and share the consensus cell type matching results in the Cytosplore Viewer (https://viewer.cytosplore.org) for interactive visualization and data exploration. The consensus matching can also guide spatial data analysis using SSAM, allowing segmentation-free cell type assignment.
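The consensus idea described above can be sketched as a simple majority vote across matching algorithms. This is a minimal illustration, not the paper's actual ensemble strategies: the method names, cell type labels, and agreement threshold below are all hypothetical.

```python
# Hypothetical sketch of consensus cell type assignment: each matching
# algorithm votes a label per cell, and a cell receives the majority
# label only when enough methods agree; otherwise it stays unassigned.
from collections import Counter

def consensus_labels(assignments, min_agreement=0.5):
    """assignments: dict mapping method name -> list of labels, one per cell.
    Returns the majority label per cell, or None when the winning label
    is supported by fewer than min_agreement of the methods."""
    methods = list(assignments.values())
    n_methods = len(methods)
    consensus = []
    for labels in zip(*methods):  # all methods' votes for one cell
        label, count = Counter(labels).most_common(1)[0]
        consensus.append(label if count / n_methods >= min_agreement else None)
    return consensus

# Illustrative votes from three hypothetical matching methods over three cells.
votes = {
    "method_a": ["L4_IT", "Sst", "Vip"],
    "method_b": ["L4_IT", "Sst", "Pvalb"],
    "method_c": ["L4_IT", "Lamp5", "Sst"],
}
labels = consensus_labels(votes, min_agreement=2 / 3)
# cell 0: unanimous; cell 1: two of three agree; cell 2: no majority
```

Cells without sufficient agreement are left unassigned rather than forced into a type, which mirrors the conservative spirit of consensus matching.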


Subject(s)
Primary Visual Cortex , Transcriptome , Animals , Mice , In Situ Hybridization, Fluorescence , Gene Expression Profiling , Algorithms
2.
Front Big Data ; 4: 663410, 2021.
Article in English | MEDLINE | ID: mdl-34604739

ABSTRACT

The development of scientific predictive models has been of great interest over the decades. A scientific model is capable of forecasting domain outcomes without the necessity of performing expensive experiments. In combustion kinetics in particular, such a model can help improve combustion facilities and fuel efficiency while reducing pollutants. At the same time, the amount of available scientific data has increased and helped speed up the continuous cycle of model improvement and validation. This has also opened new opportunities for leveraging large amounts of data to support knowledge extraction. However, experiments are affected by several data quality problems, since they are a collection of information gathered over several decades of research, each characterized by different representation formats and sources of uncertainty. In this context, it is necessary to develop an automatic data ecosystem capable of integrating heterogeneous information sources while maintaining a quality repository. We present an innovative approach to data quality management from the chemical engineering domain, based on an existing prototype of a scientific framework, SciExpeM, which has been significantly extended. We identified a new methodology from the model development research process that systematically extracts knowledge from the experimental data and the predictive model. In the paper, we show how our general framework can support the model development process and save precious research time in other experimental domains with similar characteristics, i.e., those managing numerical data from experiments.
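An automated quality check of the kind such a repository needs can be sketched very simply: validate each incoming experimental record against plausibility rules before it is accepted. The field names, units, and thresholds below are illustrative assumptions, not taken from SciExpeM.

```python
# Minimal sketch of an automated quality gate for heterogeneous
# experimental records. Field names and plausibility ranges are
# hypothetical examples for a combustion-kinetics-like domain.

def validate_record(record):
    """Return a list of data quality issues found in one experiment record."""
    issues = []
    temp = record.get("temperature_K")
    if temp is None:
        issues.append("missing temperature")
    elif not (200.0 <= temp <= 3500.0):
        issues.append(f"temperature out of plausible range: {temp} K")
    comp = record.get("composition", {})
    total = sum(comp.values())
    if comp and abs(total - 1.0) > 1e-3:
        issues.append(f"mole fractions sum to {total:.4f}, expected 1.0")
    return issues

good = {"temperature_K": 1200.0,
        "composition": {"CH4": 0.1, "O2": 0.2, "N2": 0.7}}
bad = {"composition": {"CH4": 0.5, "O2": 0.6}}  # no temperature, fractions > 1
```

Records that fail validation would be flagged for curation rather than silently merged, keeping the repository's quality level explicit.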

3.
Nat Methods ; 18(11): 1352-1362, 2021 11.
Article in English | MEDLINE | ID: mdl-34711971

ABSTRACT

Charting an organ's biological atlas requires us to spatially resolve the entire single-cell transcriptome and to relate such cellular features to the anatomical scale. Single-cell and single-nucleus RNA-seq (sc/snRNA-seq) can profile cells comprehensively but lose spatial information. Spatial transcriptomics allows spatial measurements, but at lower resolution and with limited sensitivity. Targeted in situ technologies solve both issues but are limited in gene throughput. To overcome these limitations, we present Tangram, a method that aligns sc/snRNA-seq data to various forms of spatial data collected from the same region, including MERFISH, STARmap, smFISH, Spatial Transcriptomics (Visium) and histological images. Tangram can map any type of sc/snRNA-seq data, including multimodal data such as those from SHARE-seq, which we used to reveal spatial patterns of chromatin accessibility. We demonstrate Tangram on healthy mouse brain tissue by reconstructing a genome-wide, anatomically integrated spatial map of the visual and somatomotor areas at single-cell resolution.
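The core intuition of expression-based alignment can be illustrated with a toy version: assign each sc/snRNA-seq cell to the spatial location whose expression profile over shared genes it resembles most. Note that Tangram itself learns a soft probabilistic mapping by gradient-based optimization; the greedy cosine-similarity argmax below is a deliberate simplification, not Tangram's algorithm.

```python
# Toy sketch: match each dissociated cell to its most similar spatial
# spot by cosine similarity over a shared gene panel. A simplification
# of the soft mapping that methods like Tangram actually optimize.
import numpy as np

def best_spot_per_cell(cells, spots):
    """cells: (n_cells, n_genes) expression; spots: (n_spots, n_genes).
    Returns, for each cell, the index of the most similar spot."""
    c = cells / np.linalg.norm(cells, axis=1, keepdims=True)
    s = spots / np.linalg.norm(spots, axis=1, keepdims=True)
    similarity = c @ s.T  # (n_cells, n_spots) matrix of cosine similarities
    return similarity.argmax(axis=1)

# Illustrative three-gene profiles for two cells and two spatial spots.
cells = np.array([[5.0, 0.0, 1.0],
                  [0.0, 4.0, 0.0]])
spots = np.array([[10.0, 0.1, 2.0],   # profile resembling cell 0
                  [0.2, 8.0, 0.1]])   # profile resembling cell 1
idx = best_spot_per_cell(cells, spots)
```

A soft mapping (a cell-by-spot probability matrix, optimized jointly over all genes) is what makes the real method robust to sensitivity differences between modalities; the hard argmax here only conveys the matching objective.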


Subject(s)
Brain/metabolism , Chromatin/genetics , Deep Learning , Gene Expression Regulation , Single-Cell Analysis/methods , Software , Transcriptome , Animals , Chromatin/chemistry , Chromatin/metabolism , Female , Gene Expression Profiling , Male , Mice , Mice, Inbred C57BL , RNA-Seq , Regulatory Sequences, Nucleic Acid
5.
Nature ; 595(7868): 554-559, 2021 07.
Article in English | MEDLINE | ID: mdl-34163074

ABSTRACT

The mammalian cerebral cortex has an unparalleled diversity of cell types, which are generated during development through a series of temporally orchestrated events that are under tight evolutionary constraint and are critical for proper cortical assembly and function [1,2]. However, the molecular logic that governs the establishment and organization of cortical cell types remains unknown, largely due to the large number of cell classes that undergo dynamic cell-state transitions over extended developmental timelines. Here we generate a comprehensive atlas of the developing mouse neocortex, using single-cell RNA sequencing and single-cell assay for transposase-accessible chromatin using sequencing. We sampled the neocortex every day throughout embryonic corticogenesis and at early postnatal ages, and complemented the sequencing data with a spatial transcriptomics time course. We computationally reconstruct developmental trajectories across the diversity of cortical cell classes, and infer their spatial organization and the gene regulatory programs that accompany their lineage bifurcation decisions and differentiation trajectories. Finally, we demonstrate how this developmental map pinpoints the origin of lineage-specific developmental abnormalities that are linked to aberrant corticogenesis in mutant mice. The data provide a global picture of the regulatory mechanisms that govern cellular diversification in the neocortex.


Subject(s)
Neocortex/cytology , Neurogenesis , Animals , Cell Differentiation , DNA-Binding Proteins/genetics , Embryo, Mammalian , Gene Expression Regulation, Developmental , Mice , Mice, Inbred C57BL , Mice, Knockout , Neocortex/embryology , Nerve Tissue Proteins/genetics , Sequence Analysis, RNA , Single-Cell Analysis , Transcriptome
6.
J Chem Inf Model ; 60(6): 2697-2717, 2020 06 22.
Article in English | MEDLINE | ID: mdl-32243154

ABSTRACT

Advances in deep neural network (DNN)-based molecular property prediction have recently led to the development of models of remarkable accuracy and generalization ability, with graph convolutional neural networks (GCNNs) reporting state-of-the-art performance for this task. However, some challenges remain, and one of the most important, which has yet to be fully addressed, concerns uncertainty quantification. DNN performance is affected by the volume and the quality of the training samples. Therefore, establishing when and to what extent a prediction can be considered reliable is just as important as outputting accurate predictions, especially when out-of-domain molecules are targeted. Recently, several methods to account for uncertainty in DNNs have been proposed, most of which are based on approximate Bayesian inference. Among these, only a few scale to the large data sets required in applications. Evaluating and comparing these methods has recently attracted great interest, but results are generally fragmented and absent for molecular property prediction. In this paper, we quantitatively compare scalable techniques for uncertainty estimation in GCNNs. We introduce a set of quantitative criteria to capture different uncertainty aspects and then use these criteria to compare MC-dropout, Deep Ensembles, and bootstrapping, both theoretically in a unified framework that separates aleatoric/epistemic uncertainty and experimentally on public data sets. Our experiments quantify the performance of the different uncertainty estimation methods and their impact on uncertainty-related error reduction. Our findings indicate that Deep Ensembles and bootstrapping consistently outperform MC-dropout, with different context-specific pros and cons. Our analysis leads to a better understanding of the role of aleatoric/epistemic uncertainty, also in relation to the target data set features, and highlights the challenge posed by out-of-domain uncertainty.
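The ensemble-based uncertainty estimates compared above share a common recipe: train several models independently, take the mean prediction as the estimate, and the spread across members as the epistemic uncertainty. The sketch below illustrates this with a bootstrapping-style ensemble of tiny linear regressors standing in for GCNNs; the data, model class, and ensemble size are illustrative assumptions.

```python
# Bootstrapping-style ensemble uncertainty: resample the training set,
# fit one model per resample, and report mean +/- spread of member
# predictions. Toy linear models stand in for property-prediction GCNNs.
import numpy as np

rng = np.random.default_rng(0)

def fit_linear(x, y):
    """Least-squares fit y ~ a*x + b; returns the coefficients (a, b)."""
    a, b = np.polyfit(x, y, 1)
    return a, b

# Synthetic training data: y = 2x + 1 plus noise, over x in [0, 1].
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.05, size=x.shape)

# Each ensemble member is trained on a bootstrap resample of the data.
ensemble = []
for _ in range(5):
    idx = rng.integers(0, len(x), size=len(x))
    ensemble.append(fit_linear(x[idx], y[idx]))

def predict_with_uncertainty(x_new):
    """Mean prediction and epistemic spread across ensemble members."""
    preds = np.array([a * x_new + b for a, b in ensemble])
    return preds.mean(), preds.std()

mean_in, std_in = predict_with_uncertainty(0.5)     # inside training range
mean_out, std_out = predict_with_uncertainty(10.0)  # far out of domain
```

The characteristic behaviour the paper probes is visible even in this toy: member predictions agree inside the training domain and diverge far outside it, so the reported spread grows exactly where the model should be distrusted.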


Subject(s)
Deep Learning , Bayes Theorem , Neural Networks, Computer , Uncertainty
7.
Sensors (Basel) ; 17(12)2017 Nov 29.
Article in English | MEDLINE | ID: mdl-29186080

ABSTRACT

In the first hours of a disaster, up-to-date information about the area of interest is crucial for effective disaster management. However, due to the delay induced by collecting and analysing satellite imagery, disaster management systems like the Copernicus Emergency Management Service (EMS) are currently unable to provide information products until 48-72 h after a disaster event has occurred. While satellite imagery is still a valuable source for disaster management, information products can be improved by complementing them with user-generated data such as social media posts or crowdsourced data. The advantage of these new kinds of data is that they are continuously produced in a timely fashion, because users actively participate throughout an event and share related information. The research project Evolution of Emergency Copernicus services (E2mC) aims to integrate these novel data into a new EMS service component called Witness, which is presented in this paper. In this way, the timeliness and accuracy of geospatial information products provided to civil protection authorities can be improved by leveraging user-generated data. This paper sketches the developed system architecture, describes applicable scenarios and presents several preliminary case studies, providing evidence that the scientific and operational goals have been achieved.


Subject(s)
Crowdsourcing , Computer Systems , Disasters , Emergency Medical Services , Social Media , Time Factors