1.
Faraday Discuss ; 218(0): 459-480, 2019 Aug 15.
Article in English | MEDLINE | ID: mdl-31173013

ABSTRACT

Analytical methods for mixtures of small molecules require specificity (is a certain molecule present in the mix?) and speciation capabilities. NMR spectroscopy has been a tool of choice for both tasks since its early days, owing to its quantitative (linear) response, sufficiently high resolving power and ability to infer molecular structures from spectral features (even in the absence of a reference database). However, the analytical performance of NMR spectroscopy is being stretched by the increasing complexity of the samples, the dynamic range of the components, and the need for a reasonable turnaround time. One approach that has been actively pursued for disentangling compositional complexity is 2D NMR spectroscopy. While any of the many experiments in this family will increase the spectral resolution, some are better suited to mixtures, as they can unveil signals belonging to whole molecules or fragments thereof. Among the most popular are HSQC-TOCSY, DOSY and Maximum-Quantum (MaxQ) NMR spectroscopy. For multicomponent samples, the development of robust mathematical methods of signal decomposition would provide a clear edge towards identification. Along these lines, we have been pursuing Blind Source Separation (BSS). Here, the un-mixing of the spectra is achieved by relying on correlations detected across a series of datasets. The series can come from samples of different relative composition or, in a classically acquired 2D experiment, from the mathematical laws underlying the construction of the indirect dimension (the one not recorded directly by the spectrometer). Many algorithms have been proposed for BSS in NMR spectroscopy since the seminal work of Nuzillard. In this paper, we use rather standard BSS algorithms to disentangle NMR spectra. We show on simulated data (both 1D and 2D HSQC) that these approaches can accurately disentangle multiple components and provide good estimates of the concentrations of compounds. Furthermore, we show that, after proper realignment of the signals, the same algorithms are able to disentangle real 1D NMR spectra. We obtain similar results on 2D HSQC spectra, where the BSS algorithms successfully disentangle components and provide even better concentration estimates.
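As a rough illustration of the un-mixing idea described above, the sketch below applies FastICA from scikit-learn to a few simulated 1D spectra built from Lorentzian peaks. All peak positions, mixing coefficients and noise levels are invented for the example, and FastICA merely stands in for the standard BSS algorithms the abstract refers to, not for the authors' exact processing.

```python
# Minimal sketch: blind source separation of simulated 1D spectra.
# Synthetic data only; FastICA is used as a generic stand-in for BSS.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
ppm = np.linspace(0, 10, 2000)

def lorentzian(center, width=0.05):
    return width**2 / ((ppm - center)**2 + width**2)

# Two pure-compound spectra, each a sum of a few Lorentzian peaks
s1 = lorentzian(1.2) + lorentzian(3.4) + lorentzian(7.1)
s2 = lorentzian(2.0) + lorentzian(5.5)
sources = np.vstack([s1, s2])                      # (2, n_points)

# Five "samples" with different relative concentrations plus noise
A = rng.uniform(0.2, 1.0, size=(5, 2))             # mixing (concentration) matrix
X = A @ sources + 1e-3 * rng.standard_normal((5, ppm.size))

ica = FastICA(n_components=2, random_state=0)
estimated = ica.fit_transform(X.T).T               # recovered source spectra, (2, 2000)
print(estimated.shape, ica.mixing_.shape)          # mixing_ estimates the concentrations, (5, 2)
```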

2.
Article in English | MEDLINE | ID: mdl-28459115

ABSTRACT

This article deals with the use of an optimal lattice and an optimal window in the computation of the Discrete Gabor Transform. In the case of a generalized Gaussian window, and extending earlier contributions, we introduce an additional local window-adaptation technique for non-stationary signals. We illustrate our approach and the earlier one on three time-frequency analysis problems, showing the improvements achieved by the use of the optimal lattice and window: distinguishing close frequencies, frequency estimation, and SNR estimation. The results are presented, when possible, on real-world audio signals.
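To make the time-frequency setting concrete, here is a small sketch that analyses two close frequencies with a Gaussian analysis window. scipy's STFT is used as a stand-in for a full Discrete Gabor Transform, and the signal, window length and window width are illustrative choices rather than the paper's optimal settings.

```python
# Minimal sketch: Gabor-type analysis of two close frequencies with a
# Gaussian window (illustrative parameters, not the paper's).
import numpy as np
from scipy.signal import stft
from scipy.signal.windows import gaussian

fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
# Two close tones, 50 Hz and 55 Hz, plus a little noise
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 55 * t)
x += 0.1 * np.random.default_rng(0).standard_normal(t.size)

nperseg = 512
win = gaussian(nperseg, std=nperseg / 6)           # Gaussian analysis window
f, tt, Z = stft(x, fs=fs, window=win, nperseg=nperseg, noverlap=nperseg - 64)

# A wider window sharpens the frequency response (better separation of the
# 50/55 Hz pair); a shorter one favours time localisation.
mag = np.abs(Z).mean(axis=1)                       # time-averaged magnitude spectrum
print("strongest bin at %.1f Hz" % f[np.argmax(mag)])
```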

3.
Prog Nucl Magn Reson Spectrosc ; 81: 37-64, 2014 Aug.
Article in English | MEDLINE | ID: mdl-25142734

ABSTRACT

The Fourier transform is the data processing naturally associated with most NMR experiments. Notable exceptions are Pulsed Field Gradient and relaxation analysis, whose structure is only partially suitable for FT. With the renewed interest in NMR of complex mixtures, fueled by analytical challenges such as metabolomics, alternative and better-suited mathematical methods for data processing have been sought, with the aim of decomposing the NMR signal into simpler components. Blind source separation is a very broad term grouping several classes of mathematical methods for complex-signal decomposition that make no hypothesis about the form of the data. Developed outside NMR, these algorithms have been increasingly tested on spectra of mixtures. In this review, we provide a historical overview of the application of blind source separation methodologies to NMR, including methods specifically designed for the particularities of this spectroscopy.


Subject(s)
Algorithms , Complex Mixtures/analysis , Nuclear Magnetic Resonance, Biomolecular/methods , Software , Animals , Humans , Signal Processing, Computer-Assisted
4.
Anal Chem ; 85(23): 11344-51, 2013 Dec 03.
Article in English | MEDLINE | ID: mdl-24098956

ABSTRACT

NMR diffusometry and its flagship layout, diffusion-ordered spectroscopy (DOSY), are versatile tools for studying mixtures of bioorganic and synthetic molecules, but a limiting factor of their applicability is the need for a mathematical treatment capable of distinguishing molecules with similar spectra or diffusion constants. We present here a processing strategy for DOSY that combines two high-performance blind source separation (BSS) techniques: non-negative matrix factorization (NMF) with additional sparse conditioning (SC), and the JADE (joint approximate diagonalization of eigenmatrices) variant of independent component analysis (ICA). While the first approach has an intrinsic affinity for NMR data, the latter can be orders of magnitude faster computationally and can be used to simplify the parametrization of the former.
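The decomposition described here can be pictured as factorising a gradient-by-frequency matrix into non-negative decay profiles and component spectra. The sketch below does this on synthetic DOSY-like data with scikit-learn's NMF, using an L1 penalty as a crude proxy for sparse conditioning; the data, the penalty values and the parameter names (alpha_H, l1_ratio, available in recent scikit-learn releases) are assumptions of the example, not the authors' implementation, and the JADE/ICA stage is omitted.

```python
# Minimal sketch: NMF-based un-mixing of a simulated DOSY-like matrix
# (decay profiles x spectra). Synthetic data; L1 penalty as a stand-in
# for sparse conditioning.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
ppm = np.linspace(0, 10, 1000)

def peak(center, width=0.05):
    return width**2 / ((ppm - center)**2 + width**2)

spectra = np.vstack([peak(1.1) + peak(4.2),        # compound A
                     peak(2.5) + peak(7.8)])       # compound B

g2 = np.linspace(0.0, 1.0, 20)                     # squared gradient axis (arbitrary units)
D = np.array([2.0, 6.0])                           # "diffusion coefficients"
decays = np.exp(-np.outer(g2, D))                  # (20, 2) Stejskal-Tanner-like decays

X = decays @ spectra + 1e-4 * rng.random((20, ppm.size))

model = NMF(n_components=2, init="nndsvd", max_iter=2000,
            alpha_H=0.01, l1_ratio=1.0)            # L1 (sparsity) penalty on the spectra
W = model.fit_transform(X)                         # estimated decay profiles, (20, 2)
H = model.components_                              # estimated component spectra, (2, 1000)
print(W.shape, H.shape)
```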

5.
J Neurosci Methods ; 180(1): 161-70, 2009 May 30.
Article in English | MEDLINE | ID: mdl-19427543

ABSTRACT

Time-frequency representations are commonly used to analyze the oscillatory nature of brain signals in EEG, MEG or intracranial EEG. In the signal processing literature, there is growing interest in sparse time-frequency representations, where the data are described using few components. A popular algorithm is Matching Pursuit (MP) [Mallat SG, Zhang Z. Matching pursuits with time-frequency dictionaries. IEEE Trans Sig Proc 1993;41:3397-415], which iteratively subtracts from the signal its projection on atoms selected from a dictionary. The MP algorithm was recently adapted for multivariate datasets [Durka PJ, Matysiak A, Martinez-Montes E, Sosa PV, Blinowska KJ. Multichannel matching pursuit and EEG inverse solutions. J Neurosci Methods 2005;148:49-59; Gribonval R. Piecewise linear source separation. Proc SPIE'03 2003. p. 297-310], which is relevant for brain signals that are typically recorded using many channels and trials. So far, most approaches have assumed a stable pattern across channels or trials, even though cross-trial variability is often observed in brain signals. In this study, we adapt Matching Pursuit for brain signals with cross-trial variability in all their characteristics (time, frequency, number of oscillations). The originality of our method is to select each atom using a voting technique that is robust to variability, and to subtract it by adapting the parameters to each trial. Because the inter-trial variability is handled using a voting technique, the method is called Consensus Matching Pursuit (CMP). The CMP method is validated on simulated and real data and shown to be robust to variability. Compared to existing multivariate Matching Pursuit algorithms, it (i) estimates atoms that are more representative of single-trial waveforms, (ii) leads to a sparser representation of the data, and (iii) makes it possible to quantify the amount of variability across trials.


Subject(s)
Algorithms , Electroencephalography/methods , Evoked Potentials/physiology , Signal Processing, Computer-Assisted , Software , Biological Clocks/physiology , Cerebral Cortex/physiology , Humans , Observer Variation , Reproducibility of Results , Software Validation , Time Factors
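For readers unfamiliar with matching pursuit, the sketch below implements the plain single-channel greedy select-and-subtract loop on a small Gabor dictionary. It illustrates the building block that Consensus Matching Pursuit extends, not the CMP voting scheme itself; every signal, atom grid and parameter is invented for the example.

```python
# Minimal sketch of single-channel matching pursuit with a small Gabor
# dictionary (illustrative only; not the Consensus MP algorithm).
import numpy as np

rng = np.random.default_rng(2)
n = 512
t = np.arange(n)

def gabor_atom(center, freq, width):
    g = np.exp(-0.5 * ((t - center) / width)**2) * np.cos(2 * np.pi * freq * t)
    return g / np.linalg.norm(g)

# Dictionary: a coarse grid of centers, frequencies and widths
D = np.array([gabor_atom(c, f, w)
              for c in range(32, n, 64)
              for f in (0.02, 0.05, 0.1)
              for w in (16, 32)])                  # (n_atoms, n)

# Signal: two atoms plus noise
x = 1.5 * gabor_atom(160, 0.05, 32) + 0.8 * gabor_atom(352, 0.1, 16)
x += 0.05 * rng.standard_normal(n)

residual, decomposition = x.copy(), []
for _ in range(5):                                 # 5 greedy iterations
    scores = D @ residual                          # correlations with all atoms
    k = int(np.argmax(np.abs(scores)))
    decomposition.append((k, float(scores[k])))
    residual -= scores[k] * D[k]                   # subtract the best atom's projection

print("selected atoms:", [k for k, _ in decomposition])
print("residual energy:", float(residual @ residual))
```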
6.
Comput Biol Chem ; 29(5): 319-36, 2005 Oct.
Article in English | MEDLINE | ID: mdl-16219488

ABSTRACT

Microarrays are becoming a ubiquitous research tool in the life sciences. However, the working principles of microarray-based methodologies are often misunderstood or apparently ignored by the researchers who actually perform and interpret the experiments. This, in turn, seems to lead to a common over-expectation regarding the explanatory and/or knowledge-generating power of microarray analyses. In this note we explain the basic principles of five major groups of analytical techniques used in the analysis and interpretation of microarray data: principal component analysis (PCA), independent component analysis (ICA), the t-test, analysis of variance (ANOVA), and self-organizing maps (SOM). We discuss answers to selected practical questions related to the analysis of microarray data. We also take a closer look at the experimental setup and the rules that have to be observed in order to exploit microarrays efficiently. Finally, we discuss in detail the scope and limitations of microarray-based methods. We emphasize that no amount of statistical analysis can compensate for (or replace) a well-thought-out experimental setup. We conclude that microarrays are indeed useful tools in the life sciences, but by no means should they be expected to generate complete answers to complex biological questions. We argue that even well-posed questions, formulated within microarray-specific terminology, cannot be completely answered by microarray analyses alone.


Subject(s)
Computational Biology/methods , Microarray Analysis , Data Interpretation, Statistical , Microarray Analysis/methods , Microarray Analysis/statistics & numerical data , Multivariate Analysis , Principal Component Analysis/methods , Research Design , Software
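As a toy illustration of two of the five technique groups discussed (PCA and the t-test), the sketch below runs both on a synthetic "expression matrix" of 40 samples by 500 genes. The group labels, effect size and significance threshold are arbitrary, and no multiple-testing correction is applied.

```python
# Minimal sketch: PCA and gene-wise t-tests on synthetic expression data.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
n_samples, n_genes = 40, 500
X = rng.standard_normal((n_samples, n_genes))
group = np.array([0] * 20 + [1] * 20)
X[group == 1, :25] += 1.0                          # 25 genes up-regulated in group 1

# PCA: project samples onto the first two principal components
scores = PCA(n_components=2).fit_transform(X)
print("PC scores:", scores.shape)                  # (40, 2)

# Gene-wise t-test between the two groups (no multiple-testing correction here)
t_stat, p_val = ttest_ind(X[group == 0], X[group == 1], axis=0)
print("genes with p < 0.01:", int((p_val < 0.01).sum()))
```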
7.
BMC Genomics ; 6: 84, 2005 Jun 06.
Article in English | MEDLINE | ID: mdl-15938745

ABSTRACT

BACKGROUND: Although the organisation of the bacterial chromosome is an area of active research, little is yet known about it. The difficulty lies in the fact that the system is dynamic and hard to observe directly. The advent of massive hybridisation techniques opens the way to further studies of chromosomal structure, because genes that are co-expressed, as identified by microarray experiments, probably share some spatial relationship. The use of several independent sets of gene-expression data should make it possible to obtain an exhaustive view of gene co-expression and thus a more accurate image of the structure of the chromosome. RESULTS: For both Bacillus subtilis and Escherichia coli, the co-expression of genes varies as a function of the distance between the genes along the chromosome. The long-range correlations are surprising: the changes in the expression level of any gene are correlated (positively or negatively) with the changes in the expression level of other genes located at well-defined long-range distances. This property holds for all genes, regardless of their location on the chromosome. We also found short-range correlations, which suggest that the location of these co-expressed genes corresponds to DNA turns on the nucleoid surface (14-16 genes). CONCLUSION: The long-range correlations do not correspond to the domains identified so far in the nucleoid. We explain our results by a model of the nucleoid solenoid structure based on two types of spirals (short and long). The long spirals are uncoiled, expressed DNA, while the short ones correspond to coiled, unexpressed DNA.


Subject(s)
Bacillus subtilis/genetics , Escherichia coli/genetics , Gene Expression Regulation, Bacterial , Genes, Bacterial , Genome, Bacterial , Bacterial Proteins/genetics , Chromosome Mapping , Chromosomes, Bacterial/ultrastructure , DNA, Bacterial/ultrastructure , Escherichia coli Proteins/genetics , Genome , Genomics/methods , Models, Statistical , Oligonucleotide Array Sequence Analysis
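The distance-dependent correlation analysis described in this abstract can be sketched as follows: build a gene-by-gene correlation matrix from expression profiles, then average it over pairs separated by a fixed number of genes along a circular chromosome. The data below are synthetic, with an artificial period standing in for the short-range (DNA-turn) signal; nothing here reproduces the B. subtilis or E. coli results.

```python
# Minimal sketch: mean co-expression as a function of gene separation
# along a circular chromosome (synthetic data with an imposed period).
import numpy as np

rng = np.random.default_rng(4)
n_genes, n_conditions, period = 400, 60, 15
base = rng.standard_normal((n_genes, n_conditions))
shared = rng.standard_normal((period, n_conditions))
expr = base + 0.7 * shared[np.arange(n_genes) % period]   # weak periodic co-expression

corr = np.corrcoef(expr)                           # gene-by-gene correlation matrix

max_sep = 60
mean_corr = np.empty(max_sep)
for d in range(1, max_sep + 1):
    idx = np.arange(n_genes)
    pairs = corr[idx, (idx + d) % n_genes]         # all pairs separated by d genes (circular)
    mean_corr[d - 1] = pairs.mean()

# Peaks at multiples of `period` mimic the short-range structure discussed above.
print(np.round(mean_corr[:20], 2))
```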