1.
PLoS One ; 18(3): e0282142, 2023.
Article in English | MEDLINE | ID: mdl-36947504

ABSTRACT

Ancient manuscripts are a rich source of history and civilization. Unfortunately, these documents are often affected by age- and storage-related degradations that impinge on their readability and information content. In this paper, we propose a document restoration method that removes unwanted interfering degradation patterns from color ancient manuscripts. We exploit different color spaces to highlight the spectral differences in the various layers of information usually present in these documents. At each image pixel, the spectral representations of all color spaces are stacked to form a feature vector. PCA is applied to the whole data cube to eliminate correlation among the color planes and enhance separation among the patterns. The reduced data cube, along with the pixel spatial information, is used to perform a pixel-based segmentation, where each cluster represents a class of pixels that share similar color properties in the decorrelated color spaces. The interfering, unwanted classes can thus be removed by inpainting their pixels with the background texture. Assuming Gaussian distributions for the various classes, a Gaussian Mixture Model (GMM) is estimated from the data through the Expectation Maximization (EM) algorithm, and then used to assign an appropriate label to each pixel. To preserve the original appearance of the document and reproduce the background texture, the detected degraded pixels are replaced via Gaussian conditional simulation, according to the surrounding context. Experiments are shown on manuscripts affected by different kinds of degradation, including manuscripts from the publicly available DIBCO 2018 and 2019 datasets. We observe that using a few dominant PCA components accelerates the clustering process and provides a more accurate segmentation.
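The PCA-decorrelation step this abstract describes can be sketched as follows; this is a minimal toy illustration with synthetic pixel features, not the authors' implementation, and the class means, noise level, and two-class setup are all assumptions made here for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical document: two pixel classes (e.g., ink vs. background),
# observed in two 3-channel color spaces -> 6 stacked features per pixel.
n = 64
labels = rng.integers(0, 2, n)
means = np.array([[0.2] * 6, [0.8] * 6])            # assumed class means
X = means[labels] + 0.05 * rng.standard_normal((n, 6))

# PCA via eigendecomposition of the feature covariance:
# decorrelates the stacked color planes.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (n - 1)
evals, evecs = np.linalg.eigh(cov)
order = np.argsort(evals)[::-1]
scores = Xc @ evecs[:, order[:2]]                    # keep 2 dominant components

# The first component already separates the two classes.
pred = (scores[:, 0] > 0).astype(int)
acc = max((pred == labels).mean(), (pred != labels).mean())
print(round(acc, 2))
```

In the paper the reduced components feed a GMM/EM clustering; here a single threshold on the first component stands in for that step, which is enough to show why a few dominant components suffice.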


Subject(s)
Algorithms , Computer Simulation , Normal Distribution , Cluster Analysis , Color
2.
J Imaging ; 7(3)2021 Mar 12.
Article in English | MEDLINE | ID: mdl-34460709

ABSTRACT

Digital images are the primary tool for diagnosing and documenting the state of preservation of artifacts. Today, the interpretive filters that allow one to characterize and communicate this information are extremely subjective. Our research goal is to study a quantitative analysis methodology that facilitates and semi-automates the recognition and polygonization of areas corresponding to the characteristics sought. To this end, several algorithms have been tested that separate these characteristics and create binary masks to be statistically analyzed and polygonized. Since our methodology aims to offer conservator-restorers a model for quickly obtaining graphic documentation usable for design and statistical purposes, the whole process has been implemented in a single Geographic Information Systems (GIS) application.
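The mask-and-statistics idea can be illustrated with a tiny synthetic example; the image, threshold, and "degraded patch" are all hypothetical stand-ins for the characteristics a conservator-restorer would actually map, and no GIS polygonization is attempted here.

```python
import numpy as np

# Hypothetical grayscale image of an artifact surface: bright background
# (~0.8) with a darker degraded patch (~0.3) in the upper-left quadrant.
img = np.full((40, 40), 0.8)
img[:20, :20] = 0.3

# A global threshold yields a binary mask of the searched characteristic.
mask = img < 0.5

# A statistic one might report before polygonizing the mask in GIS:
frac = mask.mean()
print(frac)  # 0.25 -> the patch covers a quarter of the surface
```

In practice the thresholding would be adaptive and the binary mask would be vectorized into polygons inside the GIS application, as the abstract describes.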

3.
Biology (Basel) ; 10(4)2021 Apr 16.
Article in English | MEDLINE | ID: mdl-33923796

ABSTRACT

The three-dimensional structure of chromatin in the cell nucleus carries important information that is connected to physiological and pathological correlates and to dysfunctional cell behaviour. As direct observation is not feasible at present, on one side, several experimental techniques have been developed to provide information on the spatial organization of the DNA in the cell; on the other side, several computational methods have been developed to elaborate experimental data and infer 3D chromatin conformations. The most relevant experimental methods are Chromosome Conformation Capture and its derivatives, chromatin immunoprecipitation followed by sequencing (ChIP-seq), RNA-seq, fluorescence in situ hybridization (FISH), and other genetic and biochemical techniques. All of them provide important and complementary information on the three-dimensional organization of chromatin. However, these techniques employ very different experimental protocols and provide information that is not easily integrated, due to different contexts and different resolutions. Here, we present an open-source tool, an expansion of the previously reported code ChromStruct, for inferring the 3D structure of chromatin; by exploiting a multilevel approach, it allows an easy integration of information derived from different experimental protocols and referred to different resolution levels of the structure, from a few kilobases up to megabases. Our results show that introducing chromatin modelling features related to CTCF ChIA-PET data, histone-modification ChIP-seq data, and RNA-seq data produces appreciable improvements in ChromStruct's 3D reconstructions, compared to the use of Hi-C data alone, at a local level and at very high resolution.

4.
J Adv Res ; 17: 31-42, 2019 May.
Article in English | MEDLINE | ID: mdl-31193359

ABSTRACT

In this work, a critical review of current nondestructive probing and image analysis approaches is presented, aimed at revealing otherwise invisible or hardly discernible details in manuscripts and paintings relevant to cultural heritage and archaeology. Multispectral imaging, X-ray fluorescence, Laser-Induced Breakdown Spectroscopy, Raman spectroscopy and thermography are considered as techniques for acquiring images and spectral image sets; statistical methods for the analysis of these images are then discussed, including blind separation and false colour techniques. Several case studies are presented, with particular attention dedicated to the approaches that appear most promising for future applications. Some of the techniques described herein are likely to replace, in the near future, classical digital photography in the study of ancient manuscripts and paintings.

5.
Article in English | MEDLINE | ID: mdl-29994172

ABSTRACT

We present a method to infer 3D chromatin configurations from Chromosome Conformation Capture data. Quite a few methods have been proposed to estimate the structure of the nuclear DNA in homogeneous populations of cells from this kind of data. Many of them transform contact frequencies into Euclidean distances between pairs of chromatin fragments, and then reconstruct the structure by solving a distance-to-geometry problem. To avoid the inconsistencies this entails, our method is based on a score function that does not require any frequency-to-distance translation. We propose a multiscale chromatin model in which the chromatin fiber is suitably partitioned at each scale. The partial structures are estimated independently, and connected to rebuild the whole fiber. Our score function consists of a data-fit part and a penalty part, balanced automatically at each scale and for each subchain; the penalty part enforces soft geometric constraints. As many different structures can fit the data, our sampling strategy produces a set of solutions with similar scores. The procedure involves only a few parameters, independent of both the scale and the genomic segment treated. The partition of the fiber, together with its intrinsically parallel parts, makes the method computationally efficient. Results from human genome data support the biological plausibility of our solutions.
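The "data-fit plus penalty" score structure can be sketched on a toy bead chain; the functional below is an illustration invented here (quadratic terms, a single contact, a target bond length d0), not the authors' actual score.

```python
import numpy as np

def score(coords, contacts, lam=1.0, d0=1.0):
    """Toy score for a bead-chain model: a data-fit term pulling
    contacting bead pairs together, plus a penalty keeping consecutive
    beads near a target distance d0 (a soft geometric constraint).
    Illustrative only; not the paper's actual functional."""
    fit = sum(np.linalg.norm(coords[i] - coords[j]) ** 2
              for i, j in contacts)
    bond = np.linalg.norm(np.diff(coords, axis=0), axis=1)
    penalty = np.sum((bond - d0) ** 2)
    return fit + lam * penalty

# Four beads on a unit-step chain, with one observed contact (0, 3).
coords = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0], [2, 1, 0]], float)
s = score(coords, [(0, 3)])
print(round(s, 2))  # 5.0: contact distance^2 = 5, bond penalty = 0
```

A sampler would then propose deformations of `coords` and keep configurations with low scores, yielding the set of similar-score solutions the abstract mentions.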


Subject(s)
Chromatin/ultrastructure , Models, Molecular , Algorithms , Bayes Theorem , Cell Line , Chromatin/chemistry , Chromatin/metabolism , Computational Biology , Humans , Reproducibility of Results
6.
Article in English | MEDLINE | ID: mdl-29993555

ABSTRACT

A method and a stand-alone Python code to estimate the 3D chromatin structure from chromosome conformation capture data are presented. The method is based on a multiresolution, modified bead-chain chromatin model, evolved through quaternion operators in a Monte Carlo sampling. The solution space to be sampled is generated by a score function with a data-fit part and a constraint part, in which the available prior knowledge is implicitly coded. The final solution is a set of 3D configurations that are compatible with both the data and the prior knowledge. The iterative code, provided here as additional material, is equipped with a graphical user interface and stores its results in standard-format files for 3D visualization. We describe the mathematical and computational aspects of the method and explain the details of the code. Some experimental results are reported, with a demonstration of their fit to the data.
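The Monte Carlo sampling step can be sketched with a generic Metropolis acceptance rule; the one-dimensional "configuration", score, proposal, and temperature below are all hypothetical simplifications, standing in for the paper's bead-chain moves.

```python
import numpy as np

def metropolis_step(state, score_fn, propose, T, rng):
    """One Monte Carlo step: propose a new configuration and accept it
    with the Metropolis probability exp(-(score_new - score_old) / T)."""
    cand = propose(state, rng)
    delta = score_fn(cand) - score_fn(state)
    if delta < 0 or rng.random() < np.exp(-delta / T):
        return cand
    return state

# Toy 1-D configuration whose (assumed) score is minimized at x = 3.
rng = np.random.default_rng(4)
score_fn = lambda x: (x - 3.0) ** 2
propose = lambda x, r: x + r.normal(0.0, 0.5)

x = 0.0
for _ in range(2000):
    x = metropolis_step(x, score_fn, propose, T=0.1, rng=rng)
print(f"{x:.1f}")  # settles near the score minimum at 3
```

In the actual method the proposal would rotate a chain segment with a quaternion operator rather than perturb a scalar, but the accept/reject logic is the same.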

7.
BMC Bioinformatics ; 16: 234, 2015 Jul 29.
Article in English | MEDLINE | ID: mdl-26220581

ABSTRACT

BACKGROUND: Knowledge of the spatial organisation of the chromatin fibre in cell nuclei helps researchers to understand the nuclear machinery that regulates DNA activity. Recent experimental techniques of the Chromosome Conformation Capture (3C) family provide high-resolution, high-throughput data consisting of the number of times any possible pair of DNA fragments is found in contact in a certain population of cells. As these data carry information on the structure of the chromatin fibre, several attempts have been made to use them to obtain high-resolution 3D reconstructions of entire chromosomes, or even an entire genome. The techniques proposed treat the data in different ways, possibly exploiting physical-geometric chromatin models. One popular strategy is to transform contact data into Euclidean distances between pairs of fragments, and then solve a classical distance-to-geometry problem. RESULTS: We developed and tested a reconstruction technique that does not require translating contacts into distances, thus avoiding a number of related drawbacks. We also introduce a geometrical chromatin chain model that allows us to include sound biochemical and biological constraints in the problem. This model can be scaled to different genomic resolutions, where the structures of the coarser models are influenced by the reconstructions at finer resolutions. The search in the solution space is then performed by classical simulated annealing, where the model is evolved efficiently through quaternion operators. The presence of appropriate constraints permits the less reliable data to be overlooked, so the result is a set of plausible chromatin configurations compatible with both the data and the prior knowledge.
CONCLUSIONS: To test our method, we obtained a number of 3D chromatin configurations from Hi-C data available in the literature for the long arm of human chromosome 1, and validated their features against known properties of gene density and transcriptional activity. Our results are compatible with biological features not introduced a priori in the problem: structurally different regions in our reconstructions correlate highly with functionally different regions as known from the literature and genomic repositories.
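The quaternion operator used to evolve the chain can be shown concretely; the rotation formula below is the standard unit-quaternion rotation of a 3-vector, applied here to a made-up 90-degree example rather than to any actual chromatin move.

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate 3-vector v by unit quaternion q = (w, x, y, z),
    using the identity v' = v + 2 u x (u x v + w v) with u = (x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

# A 90-degree rotation about the z axis maps the x axis onto the y axis.
theta = np.pi / 2
q = np.array([np.cos(theta / 2), 0.0, 0.0, np.sin(theta / 2)])
v = quat_rotate(q, np.array([1.0, 0.0, 0.0]))
print(np.round(v, 6))  # ~ [0. 1. 0.]
```

In a chain model, such a rotation applied to the beads downstream of a pivot produces a rigid segment move, which is the kind of operator a simulated-annealing search can apply cheaply.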


Subject(s)
Chromatin/chemistry , Genomics/methods , Algorithms , Chromatin/metabolism , DNA/chemistry , DNA/metabolism , Humans , Internet , Molecular Conformation , Monte Carlo Method , User-Computer Interface
8.
IEEE Trans Image Process ; 19(4): 912-25, 2010 Apr.
Article in English | MEDLINE | ID: mdl-20028627

ABSTRACT

In this paper, we apply Bayesian blind source separation (BSS) from noisy convolutive mixtures to jointly separate and restore source images that are degraded through unknown blur operators and then linearly mixed. This problem arises in several image processing applications, including some interesting instances of degraded document analysis. In particular, the convolutive mixture model is proposed for describing multiple views of documents affected by the overlapping of two or more text patterns. We consider two different models: the interchannel model, where the data are multispectral views of a single-sided document, and the intrachannel model, where the data are two sets of multispectral views of the recto and verso sides of a document page. In both cases, the aim of the analysis is not only to recover clean maps of the main foreground text, but also to enhance and extract other document features, such as faint or masked patterns. We adopt Bayesian estimation for all the unknowns and describe the typical local correlation within the individual source images through suitable Gibbs priors, accounting also for well-behaved edges in the images. This a priori information is particularly suitable for the kind of objects depicted in the images treated, i.e., homogeneous text on a homogeneous background, and, as such, is capable of stabilizing the ill-posed inverse problem considered. The method is validated through numerical and real experiments that are representative of various real scenarios.
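The forward (degradation) model behind this formulation can be written out on a toy example; the sparse 1-D "text" sources, the blur kernel, and the mixing weights are all assumptions made here, since in the blind setting they are unknown and must be estimated.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical 1-D "text line" sources: sparse foreground strokes
# and a fainter interfering pattern (e.g., bleed-through).
s1 = (rng.random(100) > 0.8).astype(float)
s2 = (rng.random(100) > 0.9).astype(float)

# Convolutive mixture: each observation is a sum of blurred sources
# plus noise. The blur kernel is unknown in the paper; assumed here.
blur = np.array([0.25, 0.5, 0.25])
x = (np.convolve(s1, blur, mode="same")
     + 0.5 * np.convolve(s2, blur, mode="same")
     + 0.01 * rng.standard_normal(100))
print(x.shape)
```

The Bayesian BSS task is the inverse of this construction: given one or more observations like `x`, recover `s1`, `s2`, the blurs, and the mixing weights jointly, regularized by the Gibbs priors the abstract describes.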

9.
IEEE Trans Image Process ; 15(2): 473-82, 2006 Feb.
Article in English | MEDLINE | ID: mdl-16479817

ABSTRACT

This paper deals with blind separation of images from noisy linear mixtures with unknown coefficients, formulated as a Bayesian estimation problem. This is a flexible framework in which any kind of prior knowledge about the source images and the mixing matrix can be accounted for. In particular, we describe local correlation within the individual images through Markov random field (MRF) image models. These are naturally suited to express the joint pdf of the sources in a factorized form, so that the statistical independence requirements of most independent component analysis approaches to blind source separation are retained. Our model also includes edge variables to preserve intensity discontinuities. MRF models have proved very effective in many visual reconstruction problems, such as blind image restoration, and allow separation and edge detection to be performed simultaneously. We propose an expectation-maximization algorithm with the mean field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps. We tested this procedure on both synthetic and real images, in the fully blind case (i.e., no prior information on mixing is exploited), and found that a source model accounting for local autocorrelation increases robustness against noise, even when the noise is space-variant. Furthermore, when the model closely fits the source characteristics, independence is no longer a strict requirement, and cross-correlated sources can be separated as well.
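The instantaneous mixture model x = As + n at the heart of this formulation is easy to demonstrate; the 2x2 mixing matrix and noise level below are hypothetical, and the unmixing shown assumes A is known, whereas the paper estimates it blindly.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two source "images" flattened to 1-D, mixed by a known (assumed)
# matrix A with additive noise: x = A s + n.
s = rng.standard_normal((2, 500))
A = np.array([[1.0, 0.4],
              [0.3, 1.0]])
x = A @ s + 0.01 * rng.standard_normal((2, 500))

# With A known, solving the linear system recovers the sources closely;
# the blind problem must estimate A and s jointly, e.g., via EM with
# MRF source priors as in the paper.
s_hat = np.linalg.solve(A, x)
err = np.abs(s_hat - s).mean()
print(err < 0.05)
```

The gap between this two-line inversion and the paper's method is exactly the "blind" part: A, the sources, and their edge maps are all unknown there.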


Subject(s)
Algorithms , Artificial Intelligence , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Information Storage and Retrieval/methods , Pattern Recognition, Automated/methods , Computer Simulation , Markov Chains , Models, Statistical , Signal Processing, Computer-Assisted , Subtraction Technique
10.
Neural Netw ; 16(3-4): 479-91, 2003.
Article in English | MEDLINE | ID: mdl-12672442

ABSTRACT

A microwave sky map results from a combination of signals from various astrophysical sources, such as cosmic microwave background radiation, synchrotron radiation and galactic dust radiation. To derive information about these sources, one needs to separate them from the maps measured on different frequency channels. Our insufficient knowledge of the weights to be given to the individual signals at different frequencies makes this a difficult task. Recent work on the problem achieved only limited success, owing to neglect of the noise and to the lack of a suitable statistical model for the sources. In this paper, we derive the statistical distribution of some source realizations, and check the appropriateness of a Gaussian mixture model for them. A source separation technique, namely independent factor analysis, has recently been suggested in the literature for Gaussian mixture sources in the presence of noise. This technique employs a three-layered neural network architecture that allows a simple, hierarchical treatment of the problem. We modify the algorithm proposed in the literature to accommodate space-varying noise and test its performance on simulated astrophysical maps. We also compare the performances of an expectation-maximization and a simulated annealing learning algorithm in estimating the mixing matrix and the source model parameters. The problem with expectation-maximization is that it does not guarantee global optimization, so the choice of the starting point is critical. Indeed, we did not succeed in reaching good solutions from random initializations of the algorithm. Conversely, our experiments with simulated annealing yielded initialization-independent results. The mixing matrix and the means and coefficients in the source model were estimated with good accuracy, while some of the variances of the components in the mixture model were not estimated satisfactorily.
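The Gaussian-mixture source model and the EM sensitivity to initialization can both be seen in a minimal 1-D fit; the two-component data, starting values, and iteration count are all toy choices made here, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic draw from a two-component Gaussian mixture source model.
data = np.concatenate([rng.normal(-2.0, 0.5, 300),
                       rng.normal(2.0, 0.5, 300)])

# EM for a 1-D, two-component GMM, from a reasonable starting point.
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibilities of each component for each sample.
    p = pi * np.exp(-(data[:, None] - mu) ** 2 / (2 * var)) \
        / np.sqrt(2 * np.pi * var)
    r = p / p.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, variances.
    nk = r.sum(axis=0)
    mu = (r * data[:, None]).sum(axis=0) / nk
    var = (r * (data[:, None] - mu) ** 2).sum(axis=0) / nk
    pi = nk / len(data)

print(np.round(np.sort(mu), 1))  # ~ [-2.  2.]
```

Started instead from a poor random point, the same loop can converge to a worse local optimum, which is the initialization sensitivity that motivated the paper's simulated-annealing comparison.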


Subject(s)
Astronomy/methods , Physics/methods , Astronomy/statistics & numerical data , Factor Analysis, Statistical , Physics/statistics & numerical data