Results 1 - 7 of 7
1.
Analyst; 146(10): 3150-3156, 2021 May 21.
Article in English | MEDLINE | ID: mdl-33999052

ABSTRACT

Quantitative vibrational absorption spectroscopies rely on Beer's law, which relates spectroscopic intensities linearly to chemical concentrations. To address and clarify contrasting results in the literature about the difference between volume- and mass-based concentration units used for quantitative spectroscopy on liquid solutions, we performed near-infrared, mid-infrared, and Raman spectroscopy measurements on four different binary solvent mixtures. Using classical least squares (CLS) and partial least squares (PLS) as multivariate analysis methods, we demonstrate that spectroscopic intensities are linearly related to volume-based concentration units rather than to the more widely used mass-based concentration units such as weight percent. The CLS results show that the difference in root mean square error of prediction (RMSEP) values between CLS models based on mass and volume fractions correlates strongly with the density difference between the two solvents in each binary mixture. This is explained by the fact that density differences are the source of the non-linearity between mass and volume fractions in such mixtures. We also show that PLS calibration handles the non-linearity in mass-based models by including additional latent variables that describe residual spectroscopic variation beyond the first latent variable (e.g., due to small peak shifts), as observed in the experimental data of all binary solvent mixtures. Using simulation studies, we have quantified the relative errors (up to 10-15%) made in PLS modeling when mass fractions are used instead of volume fractions. Overall, our results provide conclusive evidence that volume-based concentration units should be preferred for optimal spectroscopic calibration results in academic and industrial practice.
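To make the mass/volume non-linearity concrete, the following minimal Python sketch (not code from the paper; the densities, spectra and fitting choices are illustrative assumptions) converts volume fractions of a binary mixture into mass fractions and fits a straight line to a simulated band intensity that is, by construction, linear in the volume fraction. The volume-fraction fit is exact, whereas the mass-fraction fit leaves a residual that grows with the density difference between the two solvents.

import numpy as np

# Hypothetical binary mixture: densities in g/mL (illustrative values only)
rho1, rho2 = 0.79, 1.49

phi1 = np.linspace(0.0, 1.0, 11)        # volume fraction of solvent 1
mass1 = rho1 * phi1                     # mass of solvent 1 per unit mixture volume
mass2 = rho2 * (1.0 - phi1)             # mass of solvent 2 per unit mixture volume
w1 = mass1 / (mass1 + mass2)            # mass (weight) fraction of solvent 1

# Simulated band intensity obeying Beer's law, i.e. linear in the volume fraction
intensity = 0.2 + 0.8 * phi1

# Straight-line fits against both concentration scales
for name, x in [("volume fraction", phi1), ("mass fraction", w1)]:
    coeffs = np.polyfit(x, intensity, 1)
    residuals = intensity - np.polyval(coeffs, x)
    print(f"{name}: RMS residual = {np.sqrt(np.mean(residuals**2)):.4f}")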

2.
Magn Reson Chem; 59(2): 172-186, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32929750

ABSTRACT

Detection and quantification of low-molecular-weight components in polymeric samples via nuclear magnetic resonance (NMR) spectroscopy can be difficult due to overlapping signals caused by the line-broadening characteristics of polymers. One way of overcoming this problem is to exploit the difference in relaxation between small molecules and macromolecular species, for example by applying a T2 filter with the Carr-Purcell-Meiboom-Gill (CPMG) spin-echo pulse sequence. This technique, widely exploited in metabolomics studies, is applied here to materials science. A Design of Experiments approach was used to evaluate the effect of different acquisition parameters (relaxation delay, echo time and number of cycles) and sample-related parameters (concentration and polymer molecular weight) on selected responses, with particular interest in performing a reliable quantitative analysis. Polymeric samples containing small molecules were analysed by NMR with and without the filter, and analysis of variance was used to identify the most influential parameters. Results indicated that increasing the polymer concentration, and hence the sample viscosity, further attenuates polymer signals in CPMG experiments because the T2 of those signals tends to decrease with increasing viscosity. The signal-to-noise ratio measured for small molecules suffers only a minimal loss when specific parameters are chosen in relation to the polymer molecular weight. Furthermore, the difference in dynamics between aliphatic and aromatic nuclei, as well as between mobile and stiff polymers, translates into different degrees of polymer signal reduction, suggesting that the relaxation filter can also be used to obtain information on the polymer structure.
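To illustrate the filtering principle numerically, here is a small Python sketch (not the acquisition setup of the study; the T2 values, echo time and cycle numbers are assumptions) that computes the CPMG attenuation factor exp(-t_filter/T2) for a fast-relaxing polymer resonance and a slowly relaxing small molecule. Increasing the number of cycles lengthens the filter and suppresses the polymer signal long before the small-molecule signal is noticeably affected.

import numpy as np

# Illustrative T2 values (s): broad polymer resonance vs. sharp small molecule
T2_polymer = 0.02
T2_small = 1.0

tau = 0.001                                  # assumed half echo time (s)
n_cycles = np.array([16, 64, 256, 1024])     # number of CPMG echoes

t_filter = 2 * tau * n_cycles                # total duration of the relaxation filter
att_polymer = np.exp(-t_filter / T2_polymer)
att_small = np.exp(-t_filter / T2_small)

for n, tf, ap, asml in zip(n_cycles, t_filter, att_polymer, att_small):
    print(f"n = {n:5d}  filter = {tf * 1e3:7.1f} ms  "
          f"polymer kept: {ap:6.1%}  small molecule kept: {asml:6.1%}")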

3.
Talanta; 223(Pt 2): 121865, 2021 Feb 01.
Article in English | MEDLINE | ID: mdl-33298291

ABSTRACT

Quality control of liquid raw materials arriving at an industrial manufacturing site is typically performed in a dedicated laboratory using analytical methods that consume both time and chemicals. Herein, we report the successful development of a handheld near-infrared (NIR) spectroscopy method for the rapid, low-cost testing of organic solvents. Our methodology enables the classification of organic solvents with 100% accuracy and the quantification of water in methyl ethyl ketone with a precision of ~0.01 wt% in the 0-0.25 wt% range. The accessory we have developed for the NIR sensor opens up a broad range of sensing applications for organic liquid systems.
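The abstract does not disclose the underlying chemometric model; as a generic sketch of how such an NIR water calibration is commonly built (synthetic spectra, a hypothetical water band position, and scikit-learn PLS are all assumptions, not details from the paper), the Python example below fits a regression model and reports its cross-validated error.

import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Synthetic NIR-like data: 60 samples, 100 wavelength channels
n_samples, n_channels = 60, 100
water = rng.uniform(0.0, 0.25, n_samples)                         # wt% water in the solvent
band = np.exp(-0.5 * ((np.arange(n_channels) - 70) / 5.0) ** 2)   # hypothetical water band
X = 0.5 + np.outer(water, band) + rng.normal(0, 0.005, (n_samples, n_channels))
y = water

pls = PLSRegression(n_components=3)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"RMSECV = {rmsecv:.4f} wt% water")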

4.
Anal Chim Acta; 954: 22-31, 2017 Feb 15.
Article in English | MEDLINE | ID: mdl-28081811

ABSTRACT

In this work we show that convolutional neural networks (CNNs) can be used efficiently to classify vibrational spectroscopic data and to identify important spectral regions. CNNs are the current state of the art in image classification and speech recognition and can learn interpretable representations of the data. These characteristics make CNNs a good candidate for reducing the need for preprocessing and for highlighting important spectral regions, both of which are crucial steps in the analysis of vibrational spectroscopic data. Chemometric analysis of vibrational spectroscopic data often relies on preprocessing methods involving baseline correction, scatter correction and noise removal, which are applied to the spectra prior to model building. Preprocessing is a critical step because, even in simple problems, using 'reasonable' preprocessing methods may decrease the performance of the final model. We develop a new CNN-based method and provide accompanying publicly available software. It is based on a simple CNN architecture with a single convolutional layer (a so-called shallow CNN). Our method outperforms standard classification algorithms used in chemometrics (e.g. PLS) in terms of accuracy when applied to non-preprocessed test data (86% average accuracy compared with 62% for PLS), and it achieves better performance even on preprocessed test data (96% average accuracy compared with 89% for PLS). For interpretability purposes, our method includes a procedure for finding important spectral regions, thereby facilitating qualitative interpretation of the results.
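As a rough sketch of what a shallow spectral CNN can look like (this is not the authors' published architecture or software; the layer sizes, kernel width, training settings and synthetic data are assumptions), the PyTorch snippet below classifies simulated spectra with a single 1D convolutional layer followed by one fully connected output layer. The learned convolution weights can then be inspected to see which spectral shapes the filters respond to.

import torch
import torch.nn as nn

class ShallowSpectralCNN(nn.Module):
    """One 1D convolutional layer plus one fully connected output layer."""
    def __init__(self, n_channels=500, n_classes=3, n_filters=8, kernel=17):
        super().__init__()
        self.conv = nn.Conv1d(1, n_filters, kernel_size=kernel, padding=kernel // 2)
        self.act = nn.ReLU()
        self.fc = nn.Linear(n_filters * n_channels, n_classes)

    def forward(self, x):                    # x: (batch, 1, n_channels)
        h = self.act(self.conv(x))
        return self.fc(h.flatten(1))

# Synthetic example: three classes that differ only in peak position
torch.manual_seed(0)
n, L = 90, 500
grid = torch.arange(L).float()
labels = torch.randint(0, 3, (n,))
centres = torch.tensor([150.0, 250.0, 350.0])[labels]
X = torch.exp(-0.5 * ((grid - centres[:, None]) / 10.0) ** 2) + 0.05 * torch.randn(n, L)
X = X.unsqueeze(1)                           # shape (n, 1, L)

model = ShallowSpectralCNN(n_channels=L)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(100):
    optimiser.zero_grad()
    loss = loss_fn(model(X), labels)
    loss.backward()
    optimiser.step()
print("training accuracy:", (model(X).argmax(1) == labels).float().mean().item())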

5.
Anal Chim Acta; 938: 44-52, 2016 Sep 28.
Article in English | MEDLINE | ID: mdl-27619085

ABSTRACT

The aim of data preprocessing is to remove data artifacts, such as a baseline, scatter effects or noise, and to enhance the contextually relevant information. Many preprocessing methods exist to deliver one or more of these benefits, but it is difficult to select which method or combination of methods should be used for the specific data being analyzed. Recently, we have shown that a preprocessing selection approach based on Design of Experiments (DoE) enables correct selection of highly appropriate preprocessing strategies within reasonable time frames. In that approach, the focus was solely on improving the predictive performance of the chemometric model. This is, however, only one of the two relevant criteria in modeling: interpretation of the model results can be just as important. Variable selection is often used to achieve such interpretation. Data artifacts, however, may hamper proper variable selection by masking the truly relevant variables. The choice of preprocessing therefore has a large impact on the outcome of variable selection methods and may thus hamper an objective interpretation of the final model. To enhance such objective interpretation, we here integrate variable selection into the DoE-based preprocessing selection approach. We show that coupling preprocessing selection and variable selection improves not only the interpretation but also the predictive performance of the model. This is demonstrated by analyzing several experimental data sets for which the truly relevant variables are available as prior knowledge. We show that the resulting selection of variables agrees better with the truly informative variables than when both model aspects are optimized individually. Importantly, the approach presented in this work is generic. Different types of models (e.g. PCR, PLS, …) can be incorporated into it, as can different variable selection methods and different preprocessing methods, according to the preference and experience of the user. In this work, the approach is illustrated using PLS as the model and PPRV-FCAM (Predictive Property Ranked Variable using Final Complexity Adapted Models) for variable selection.
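A stripped-down illustration of coupling the two selection steps is given below (a Python sketch, not the published implementation: the preprocessing candidates, the simple coefficient-magnitude ranking used here in place of PPRV-FCAM, and the simulated data are all assumptions). Each point of a small factorial design is preprocessed, variables are selected on the preprocessed data, and the cross-validated error of the reduced PLS model is used to pick the winning combination.

import numpy as np
from itertools import product
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)

# Simulated spectra: one analyte band (the truly relevant channels), drifting baseline, noise
n_samples, n_channels = 80, 200
conc = rng.uniform(size=n_samples)
band = np.exp(-0.5 * ((np.arange(n_channels) - 55) / 4.0) ** 2)
baseline = np.outer(rng.uniform(0, 1, n_samples), np.linspace(0, 1, n_channels))
X = np.outer(conc, band) + baseline + rng.normal(scale=0.02, size=(n_samples, n_channels))
y = conc

def preprocess(X, derivative, snv):
    Z = savgol_filter(X, 15, 2, deriv=1) if derivative else X.copy()
    if snv:                                   # standard normal variate scatter correction
        Z = (Z - Z.mean(1, keepdims=True)) / Z.std(1, keepdims=True)
    return Z

def rmsecv(X, y):
    y_cv = cross_val_predict(PLSRegression(n_components=5), X, y, cv=5).ravel()
    return np.sqrt(np.mean((y - y_cv) ** 2))

results = []
for derivative, snv in product([False, True], repeat=2):   # 2x2 full factorial
    Z = preprocess(X, derivative, snv)
    # crude variable selection: keep the 20 channels with the largest |PLS coefficient|
    coef = PLSRegression(n_components=5).fit(Z, y).coef_.ravel()
    keep = np.sort(np.argsort(np.abs(coef))[-20:])
    results.append((rmsecv(Z[:, keep], y), derivative, snv, keep))

best = min(results, key=lambda r: r[0])
print(f"best RMSECV = {best[0]:.3f}  derivative = {best[1]}  SNV = {best[2]}")
print("selected channels:", best[3])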

6.
Anal Chem; 87(24): 12096-103, 2015 Dec 15.
Article in English | MEDLINE | ID: mdl-26632985

ABSTRACT

The selection of optimal preprocessing is among the main bottlenecks in chemometric data analysis. Preprocessing selection is currently a burden, since a multitude of preprocessing methods is available for, e.g., baseline correction, smoothing, and alignment, but it is not clear beforehand which method(s) should be used for which data set. The process of preprocessing selection is often limited to trial and error and is therefore considered somewhat subjective. In this paper, we present a novel, simple, and effective approach for preprocessing selection. The defining feature of this approach is a design of experiments. On the basis of the design, the model performance of a few well-chosen preprocessing methods, and combinations thereof (called strategies), is evaluated. Interpretation of the main effects and interactions subsequently enables the selection of an optimal preprocessing strategy. The presented approach is applied to eight different spectroscopic data sets, covering both calibration and classification challenges. We show that the approach is able to select a preprocessing strategy that improves model performance by at least 50% compared with the raw data; in most cases, it leads to a strategy very close to the true optimum. Our approach makes preprocessing selection fast, insightful, and objective.
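A bare-bones version of such a design could look like the Python sketch below (not the published procedure or data: the three on/off preprocessing factors, the synthetic spectra and the PLS settings are assumptions). A 2^3 full factorial design is evaluated with cross-validated PLS error as the response, and the main effect of each preprocessing factor is estimated from the design.

import numpy as np
from itertools import product
from scipy.signal import detrend, savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)

# Synthetic calibration set: one analyte band, a sample-specific baseline, noise
n_samples, n_channels = 60, 150
conc = rng.uniform(size=n_samples)
band = np.exp(-0.5 * ((np.arange(n_channels) - 80) / 5.0) ** 2)
X = (np.outer(conc, band)
     + np.outer(rng.uniform(0, 1, n_samples), np.linspace(0, 1, n_channels))
     + rng.normal(scale=0.02, size=(n_samples, n_channels)))

def rmsecv(X, y):
    y_cv = cross_val_predict(PLSRegression(n_components=4), X, y, cv=5).ravel()
    return np.sqrt(np.mean((y - y_cv) ** 2))

# 2^3 full factorial design over three on/off preprocessing factors
design, response = [], []
for base, smooth, snv in product([-1, 1], repeat=3):
    Z = X.copy()
    if base == 1:
        Z = detrend(Z, axis=1)                 # simple linear baseline removal
    if smooth == 1:
        Z = savgol_filter(Z, 11, 2)            # Savitzky-Golay smoothing
    if snv == 1:
        Z = (Z - Z.mean(1, keepdims=True)) / Z.std(1, keepdims=True)
    design.append((base, smooth, snv))
    response.append(rmsecv(Z, conc))

design, response = np.array(design), np.array(response)
for name, column in zip(["baseline removal", "smoothing", "SNV"], design.T):
    effect = response[column == 1].mean() - response[column == -1].mean()
    print(f"main effect of {name} on RMSECV: {effect:+.4f}")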

7.
Anal Chim Acta; 781: 14-32, 2013 Jun 05.
Article in English | MEDLINE | ID: mdl-23684461

ABSTRACT

Warping methods are an important class of methods that can correct for misalignments in chemical measurements, among other data. Their use in the preprocessing of chromatographic, spectroscopic and spectrometric data has grown rapidly over the last decade. This tutorial review aims to give a critical introduction to the most important warping methods, the place of warping in preprocessing, and current views on the related matters of reference selection, optimization, and evaluation. Some pitfalls in warping, notably for liquid chromatography-mass spectrometry (LC-MS) and similar data, are discussed. Examples are given of the application of a number of freely available warping methods to a nuclear magnetic resonance (NMR) spectroscopic dataset and a chromatographic dataset. As part of the Supporting Information, we provide a number of programming scripts in Matlab and R, allowing the reader to work through the extended examples in detail and to reproduce the figures in this paper.
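The Matlab and R scripts mentioned above belong to the paper's Supporting Information and are not reproduced here. As a language-neutral illustration of the basic idea behind parametric warping, the Python sketch below aligns a shifted and stretched signal to a reference by optimizing a linear warp of the time axis (the signals and warp parameters are invented for the example).

import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 100, 1000)

def peak(centre, width):
    return np.exp(-0.5 * ((t - centre) / width) ** 2)

reference = peak(30, 2) + 0.6 * peak(60, 3)
# The "sample" is the reference measured with a small shift and stretch of the time axis
sample = np.interp(1.03 * t - 2.0, t, reference)

def warped(params):
    a, b = params                              # linear warp of the time axis: t' = a + b * t
    return np.interp(a + b * t, t, sample)

def misfit(params):
    return np.sum((warped(params) - reference) ** 2)

result = minimize(misfit, x0=[0.0, 1.0], method="Nelder-Mead")
a, b = result.x
print(f"estimated warp: t' = {a:.2f} + {b:.4f} * t   residual = {result.fun:.4f}")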
