Results 1 - 11 of 11
1.
Front Bioinform ; 3: 1287168, 2023.
Article in English | MEDLINE | ID: mdl-38318534

ABSTRACT

A multiscale method proposed elsewhere for reconstructing plausible 3D configurations of chromatin in cell nuclei is recalled, based on the integration of contact data from Hi-C experiments with additional information coming from ChIP-seq, RNA-seq and ChIA-PET experiments. Provided that the additional data come from independent experiments, this kind of approach should leverage them to complement possibly noisy, biased or missing Hi-C records. When the different data sources are mutually consistent, the resulting solutions are corroborated; otherwise, their validity is weakened. A problem of reliability thus arises, entailing an appropriate choice of the relative weights to be assigned to the different informational contributions. A series of experiments is presented that helps to quantify the advantages and the limitations of this strategy. Whereas the gains in accuracy are not always significant, the case of missing Hi-C data demonstrates the effectiveness of additional information in reconstructing the highly packed segments of the structure.
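
As a rough illustration of the weighting problem discussed above, the sketch below combines a Hi-C data-fit term with penalty terms derived from independent assays through user-chosen relative weights; the function names and the simple additive form are assumptions for illustration, not the method's actual score.

```python
def combined_score(coords, hic_fit, extra_terms, weights):
    """Weighted score combining a Hi-C data-fit term with penalty terms built
    from independent assays (e.g. ChIP-seq, RNA-seq, ChIA-PET tracks).
    Every term maps an (N, 3) coordinate array to a scalar; the weights encode
    how much trust is placed in each additional information source."""
    score = hic_fit(coords)
    for term, weight in zip(extra_terms, weights):
        score += weight * term(coords)  # larger weight = stronger influence on the reconstruction
    return score
```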

2.
Comput Struct Biotechnol J ; 19: 5762-5790, 2021.
Article in English | MEDLINE | ID: mdl-34765093

ABSTRACT

We review the current applications of artificial intelligence (AI) in functional genomics. The recent explosion of AI follows the remarkable achievements made possible by "deep learning", along with a burst of "big data" that can meet its hunger. Biology is about to overtake astronomy as the paradigmatic producer of big data. This has been made possible by huge advancements in high-throughput technologies, applied to determine how the individual components of a biological system work together to accomplish different processes. The disciplines contributing to this bulk of data are collectively known as functional genomics. They comprise studies of: i) the information contained in the DNA (genomics); ii) the modifications that DNA can reversibly undergo (epigenomics); iii) the RNA transcripts originating from a genome (transcriptomics); iv) the ensemble of chemical modifications decorating different types of RNA transcripts (epitranscriptomics); v) the products of protein-coding transcripts (proteomics); and vi) the small molecules produced by cell metabolism (metabolomics) present in an organism or system at a given time, in physiological or pathological conditions. After reviewing the main applications of AI in functional genomics, we discuss important accompanying issues, including ethical, legal and economic questions, and the importance of explainability.

3.
J Imaging ; 7(3)2021 Mar 12.
Article in English | MEDLINE | ID: mdl-34460709

ABSTRACT

Digital images represent the primary tool for diagnostics and documentation of the state of preservation of artifacts. Today the interpretive filters that allow one to characterize information and communicate it are extremely subjective. Our research goal is to study a quantitative analysis methodology to facilitate and semi-automate the recognition and polygonization of areas corresponding to the characteristics of interest. To this end, several algorithms have been tested that separate the characteristics and create binary masks to be statistically analyzed and polygonized. Since our methodology aims to offer conservator-restorers a model for quickly obtaining graphic documentation usable for design and statistical purposes, this process has been implemented in a single Geographic Information System (GIS) application.
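
As an illustration of the mask-and-polygonize step, the sketch below thresholds a grayscale image into a binary mask and traces the mask regions as polygon outlines with scikit-image; Otsu thresholding and the function name are illustrative choices, not the algorithms actually tested in the paper.

```python
from skimage import filters, io, measure

def mask_and_polygonize(image_path):
    """Threshold a grayscale image of the artifact into a binary mask and trace
    the mask regions as polygon outlines (candidate areas of alteration).
    Otsu thresholding is only a stand-in for the class of algorithms tested."""
    img = io.imread(image_path, as_gray=True)
    mask = img > filters.threshold_otsu(img)                    # binary mask of the feature of interest
    contours = measure.find_contours(mask.astype(float), 0.5)   # list of (row, col) vertex arrays
    return mask, contours
```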

4.
Biology (Basel) ; 10(4)2021 Apr 16.
Article in English | MEDLINE | ID: mdl-33923796

ABSTRACT

The three-dimensional structure of chromatin in the cellular nucleus carries important information that is connected to physiological and pathological correlates and dysfunctional cell behaviour. As direct observation is not feasible at present, on one side several experimental techniques have been developed to provide information on the spatial organization of the DNA in the cell; on the other, several computational methods have been developed to process experimental data and infer 3D chromatin conformations. The most relevant experimental methods are Chromosome Conformation Capture and its derivatives, chromatin immunoprecipitation and sequencing (ChIP-seq), RNA-seq, fluorescence in situ hybridization (FISH) and other genetic and biochemical techniques. All of them provide important and complementary information related to the three-dimensional organization of chromatin. However, these techniques employ very different experimental protocols and provide information that is not easily integrated, due to different contexts and different resolutions. Here, we present an open-source tool, an expansion of the previously reported code ChromStruct, for inferring the 3D structure of chromatin that, by exploiting a multilevel approach, allows an easy integration of information derived from different experimental protocols and referred to different resolution levels of the structure, from a few kilobases up to megabases. Our results show that the introduction of chromatin modelling features related to CTCF ChIA-PET data, histone-modification ChIP-seq data and RNA-seq data produces appreciable improvements in ChromStruct's 3D reconstructions, compared to the use of Hi-C data alone, at a local level and at very high resolution.
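
To give a concrete feel for how loop data can be turned into modelling constraints, the sketch below converts ChIA-PET anchor pairs into a soft proximity penalty on the reconstructed coordinates; the hinge form, the target distance and the function names are assumptions for illustration, not ChromStruct's actual terms.

```python
import numpy as np

def chia_pet_penalty(loops, target_dist=0.1):
    """Build a soft penalty from ChIA-PET loop calls: anchor pairs (bin_i, bin_j)
    are pushed closer than `target_dist` (in model units). The quadratic hinge
    and the distance value are illustrative, not the tool's exact constraint."""
    def penalty(coords):
        d = np.array([np.linalg.norm(coords[i] - coords[j]) for i, j in loops])
        return float(np.sum(np.maximum(d - target_dist, 0.0) ** 2))  # zero cost once anchors are close enough
    return penalty
```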

5.
J Adv Res ; 17: 31-42, 2019 May.
Article in English | MEDLINE | ID: mdl-31193359

ABSTRACT

In this work, a critical review of current nondestructive probing and image analysis approaches is presented, aimed at revealing otherwise invisible or barely discernible details in manuscripts and paintings relevant to cultural heritage and archaeology. Multispectral imaging, X-ray fluorescence, laser-induced breakdown spectroscopy, Raman spectroscopy and thermography are considered as techniques for acquiring images and spectral image sets; statistical methods for the analysis of these images are then discussed, including blind separation and false-colour techniques. Several case studies are presented, with particular attention dedicated to the approaches that appear most promising for future applications. Some of the techniques described herein are likely to replace, in the near future, classical digital photography in the study of ancient manuscripts and paintings.
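
As a small example of the image-set processing mentioned above, the sketch below builds a false-colour composite by mapping three arbitrarily chosen spectral bands onto the RGB channels; it is a generic illustration, not a specific pipeline from the review.

```python
import numpy as np

def false_colour(bands, idx=(0, 1, 2)):
    """Map three bands of a multispectral stack (shape: n_bands x H x W) onto the
    R, G, B channels of a false-colour composite; the band indices are arbitrary
    here and would be chosen per document and per feature of interest."""
    rgb = np.stack([bands[i] for i in idx], axis=-1).astype(float)
    rgb -= rgb.min(axis=(0, 1), keepdims=True)
    rgb /= rgb.max(axis=(0, 1), keepdims=True) + 1e-12   # per-channel rescale to [0, 1]
    return rgb
```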

6.
Article in English | MEDLINE | ID: mdl-29994172

ABSTRACT

We present a method to infer 3D chromatin configurations from Chromosome Conformation Capture data. Quite a few methods have been proposed to estimate the structure of the nuclear DNA in homogeneous populations of cells from this kind of data. Many of them transform contact frequencies into Euclidean distances between pairs of chromatin fragments, and then reconstruct the structure by solving a distance-to-geometry problem. To avoid inconsistencies, our method is based on a score function that does not require any frequency-to-distance translation. We propose a multiscale chromatin model where the chromatin fiber is suitably partitioned at each scale. The partial structures are estimated independently and connected to rebuild the whole fiber. Our score function consists of a data-fit part and a penalty part, balanced automatically at each scale and for each subchain. The penalty part enforces soft geometric constraints. As many different structures can fit the data, our sampling strategy produces a set of solutions with similar scores. The procedure contains a few parameters, independent of both the scale and the genomic segment treated. The partition of the fiber, along with its intrinsically parallel parts, makes this method computationally efficient. Results from human genome data support the biological plausibility of our solutions.
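
A minimal sketch of the data-fit plus penalty structure is given below, with a crude automatic balance between the two parts; the specific terms, the excluded-volume penalty and the balancing rule are simplified stand-ins chosen for illustration, not the score actually used by the method.

```python
import numpy as np

def subchain_score(coords, contacts, bead_radius=0.15, lam=None):
    """Score of one subchain: a contact data-fit term plus a soft excluded-volume
    penalty. `coords` is (N, 3), `contacts` an (N, N) symmetric frequency matrix.
    The balance weight is derived from the data scale when not given."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    iu = np.triu_indices_from(d, k=1)
    fit = float(np.sum(contacts[iu] * d[iu] ** 2))                       # frequent contacts should end up close
    clash = float(np.sum(np.maximum(2 * bead_radius - d[iu], 0.0) ** 2))  # soft geometric constraint
    if lam is None:
        lam = fit / (clash + 1e-12)                                       # crude automatic balance
    return fit + lam * clash
```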


Subject(s)
Chromatin/ultrastructure , Models, Molecular , Algorithms , Bayes Theorem , Cell Line , Chromatin/chemistry , Chromatin/metabolism , Computational Biology , Humans , Reproducibility of Results
7.
Article in English | MEDLINE | ID: mdl-29993555

ABSTRACT

A method and a stand-alone Python(TM) code to estimate the 3D chromatin structure from chromosome conformation capture data are presented. The method is based on a multiresolution, modified-bead-chain chromatin model, evolved through quaternion operators in a Monte Carlo sampling. The solution space to be sampled is generated by a score function with a data-fit part and a constraint part where the available prior knowledge is implicitly coded. The final solution is a set of 3D configurations that are compatible with both the data and the prior knowledge. The iterative code, provided here as additional material, is equipped with a graphical user interface and stores its results in standard-format files for 3D visualization. We describe the mathematical-computational aspects of the method and explain the details of the code. Some experimental results are reported, with a demonstration of their fit to the data.
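
The sketch below shows the kind of quaternion-based rigid move a Monte Carlo step could apply to a segment of the bead chain; it is a generic rotation operator, not the exact move set implemented in the code.

```python
import numpy as np

def quat_rotate(points, axis, angle):
    """Rotate an (N, 3) block of bead coordinates about `axis` by `angle` using the
    quaternion sandwich product q p q* for a unit quaternion q = (w, u)."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    w, u = np.cos(angle / 2.0), np.sin(angle / 2.0) * axis
    t = 2.0 * np.cross(u, points)            # expansion of q p q*: p' = p + w t + u x t
    return points + w * t + np.cross(u, t)
```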

8.
BMC Bioinformatics ; 16: 234, 2015 Jul 29.
Article in English | MEDLINE | ID: mdl-26220581

ABSTRACT

BACKGROUND: The knowledge of the spatial organisation of the chromatin fibre in cell nuclei helps researchers to understand the nuclear machinery that regulates DNA activity. Recent experimental techniques of the Chromosome Conformation Capture type (3C or similar) provide high-resolution, high-throughput data consisting of the number of times any possible pair of DNA fragments is found in contact, in a certain population of cells. As these data carry information on the structure of the chromatin fibre, several attempts have been made to use them to obtain high-resolution 3D reconstructions of entire chromosomes, or even an entire genome. The techniques proposed treat the data in different ways, possibly exploiting physical-geometric chromatin models. One popular strategy is to transform contact data into Euclidean distances between pairs of fragments, and then solve a classical distance-to-geometry problem. RESULTS: We developed and tested a reconstruction technique that does not require translating contacts into distances, thus avoiding a number of related drawbacks. We also introduce a geometrical chromatin chain model that allows us to include sound biochemical and biological constraints in the problem. This model can be scaled to different genomic resolutions, where the structures of the coarser models are influenced by the reconstructions at finer resolutions. The search in the solution space is then performed by classical simulated annealing, where the model is evolved efficiently through quaternion operators. The presence of appropriate constraints permits the less reliable data to be overlooked, so the result is a set of plausible chromatin configurations compatible with both the data and the prior knowledge. CONCLUSIONS: To test our method, we obtained a number of 3D chromatin configurations from Hi-C data available in the literature for the long arm of human chromosome 1, and validated their features against known properties of gene density and transcriptional activity. Our results are compatible with biological features not introduced a priori in the problem: structurally different regions in our reconstructions correlate strongly with functionally different regions known from the literature and genomic repositories.
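
For orientation, a generic simulated-annealing skeleton of the kind described above is sketched below; the Metropolis acceptance rule is standard, while the schedule values and the `propose` hook (e.g. a quaternion rotation of a subchain) are placeholders rather than the paper's settings.

```python
import numpy as np

def anneal(config, score, propose, t0=1.0, cooling=0.95, sweeps=500, rng=None):
    """Generic simulated-annealing loop: propose a move, accept it with the
    Metropolis rule, then cool the temperature. Returns the best configuration seen."""
    rng = rng or np.random.default_rng()
    cur, cur_s, t = config, score(config), t0
    best, best_s = cur, cur_s
    for _ in range(sweeps):
        cand = propose(cur, rng)
        cand_s = score(cand)
        if cand_s < cur_s or rng.random() < np.exp((cur_s - cand_s) / t):
            cur, cur_s = cand, cand_s          # accept downhill moves, and uphill ones with Boltzmann probability
            if cur_s < best_s:
                best, best_s = cur, cur_s
        t *= cooling                           # geometric cooling schedule (placeholder)
    return best, best_s
```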


Subject(s)
Chromatin/chemistry , Genomics/methods , Algorithms , Chromatin/metabolism , DNA/chemistry , DNA/metabolism , Humans , Internet , Molecular Conformation , Monte Carlo Method , User-Computer Interface
9.
IEEE Trans Image Process ; 19(9): 2357-68, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20409994

ABSTRACT

We propose to model the image differentials of astrophysical source maps by Student's t-distribution and to use them in the Bayesian source separation method as priors. We introduce an efficient Markov Chain Monte Carlo (MCMC) sampling scheme to unmix the astrophysical sources and describe the derivation details. In this scheme, we use the Langevin stochastic equation for transitions, which enables parallel drawing of random samples from the posterior, and reduces the computation time significantly (by two orders of magnitude). In addition, Student's t-distribution parameters are updated throughout the iterations. The results on astrophysical source separation are assessed with two performance criteria defined in the pixel and the frequency domains.
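
A minimal sketch of the two ingredients named above is given below: the gradient of a Student's t log-prior applied to image differentials, and one unadjusted Langevin update in which all pixels move in parallel; the parameter values and function names are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def t_logprior_grad(dx, nu=4.0, scale=1.0):
    """Gradient of a Student's t log-density evaluated on image differentials dx;
    nu and scale are illustrative values (updated during sampling in the method above)."""
    return -(nu + 1.0) * dx / (nu * scale ** 2 + dx ** 2)

def langevin_step(x, grad_log_post, eps, rng):
    """One unadjusted Langevin update: drift along the log-posterior gradient plus
    Gaussian noise; the step size eps is a tuning parameter."""
    return x + 0.5 * eps * grad_log_post(x) + np.sqrt(eps) * rng.standard_normal(x.shape)
```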

10.
IEEE Trans Image Process ; 15(2): 473-82, 2006 Feb.
Article in English | MEDLINE | ID: mdl-16479817

ABSTRACT

This paper deals with blind separation of images from noisy linear mixtures with unknown coefficients, formulated as a Bayesian estimation problem. This is a flexible framework, where any kind of prior knowledge about the source images and the mixing matrix can be accounted for. In particular, we describe local correlation within the individual images through the use of Markov random field (MRF) image models. These are naturally suited to express the joint pdf of the sources in a factorized form, so that the statistical independence requirements of most independent component analysis approaches to blind source separation are retained. Our model also includes edge variables to preserve intensity discontinuities. MRF models have proved very effective in many visual reconstruction problems, such as blind image restoration, and allow separation and edge detection to be performed simultaneously. We propose an expectation-maximization algorithm with the mean-field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps. We tested this procedure on both synthetic and real images, in the fully blind case (i.e., no prior information on mixing is exploited), and found that a source model accounting for local autocorrelation is able to increase robustness against noise, even when the noise is space-variant. Furthermore, when the model closely fits the source characteristics, independence is no longer a strict requirement, and cross-correlated sources can be separated as well.
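
For a concrete, much simplified view of the linear mixing model X = A S + noise, the toy sketch below alternates least-squares estimates of the sources and of the mixing matrix; it deliberately ignores the MRF priors and edge variables that the method above relies on, and is meant only to illustrate the model being inverted.

```python
import numpy as np

def alternating_ls(X, n_sources, iters=50, rng=None):
    """Toy alternating least-squares separation of X (channels x pixels): estimate
    the sources given the mixing matrix, then re-estimate the matrix given the sources.
    A naive baseline, not the Bayesian MRF/EM procedure described above."""
    rng = rng or np.random.default_rng(0)
    A = rng.standard_normal((X.shape[0], n_sources))     # random initial mixing matrix
    for _ in range(iters):
        S = np.linalg.lstsq(A, X, rcond=None)[0]          # sources given mixing matrix
        A = np.linalg.lstsq(S.T, X.T, rcond=None)[0].T    # mixing matrix given sources
    return A, S
```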


Subject(s)
Algorithms , Artificial Intelligence , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Information Storage and Retrieval/methods , Pattern Recognition, Automated/methods , Computer Simulation , Markov Chains , Models, Statistical , Signal Processing, Computer-Assisted , Subtraction Technique
11.
Neural Netw ; 16(3-4): 479-91, 2003.
Article in English | MEDLINE | ID: mdl-12672442

ABSTRACT

A microwave sky map results from a combination of signals from various astrophysical sources, such as cosmic microwave background radiation, synchrotron radiation and galactic dust radiation. To derive information about these sources, one needs to separate them from the maps measured on different frequency channels. Our insufficient knowledge of the weights to be given to the individual signals at different frequencies makes this a difficult task. Recent work on the problem led to only limited success, due to ignoring the noise and to the lack of a suitable statistical model for the sources. In this paper, we derive the statistical distribution of some source realizations, and check the appropriateness of a Gaussian mixture model for them. A source separation technique, namely independent factor analysis, has been suggested recently in the literature for Gaussian mixture sources in the presence of noise. This technique employs a three-layered neural network architecture which allows a simple, hierarchical treatment of the problem. We modify the algorithm proposed in the literature to accommodate space-varying noise and test its performance on simulated astrophysical maps. We also compare the performances of an expectation-maximization and a simulated annealing learning algorithm in estimating the mixture matrix and the source model parameters. The problem with expectation-maximization is that it does not ensure global optimization, and thus the choice of the starting point is a critical task. Indeed, we did not succeed in reaching good solutions from random initializations of the algorithm. Conversely, our experiments with simulated annealing yielded initialization-independent results. The mixing matrix and the means and coefficients in the source model were estimated with good accuracy, while some of the variances of the components in the mixture model were not estimated satisfactorily.
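
As a small illustration of the assumed source model, the sketch below evaluates the log-likelihood of one-dimensional source samples under a Gaussian mixture; the parameter arrays are exactly the quantities the EM or simulated-annealing learning stage has to estimate, and the function name is illustrative.

```python
import numpy as np

def gmm_loglik(s, weights, means, variances):
    """Log-likelihood of 1-D source samples s under a Gaussian mixture model,
    the source prior assumed by independent factor analysis. `weights`, `means`
    and `variances` are length-K arrays describing the mixture components."""
    s = np.asarray(s, float)[:, None]
    w, m, v = (np.asarray(a, float) for a in (weights, means, variances))
    comp = w * np.exp(-0.5 * (s - m) ** 2 / v) / np.sqrt(2 * np.pi * v)   # (N, K) component densities
    return float(np.sum(np.log(comp.sum(axis=1) + 1e-300)))
```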


Subject(s)
Astronomy/methods , Physics/methods , Astronomy/statistics & numerical data , Factor Analysis, Statistical , Physics/statistics & numerical data