Results 1 - 10 of 10
1.
Anal Chem; 77(24): 7998-8007, 2005 Dec 15.
Article in English | MEDLINE | ID: mdl-16351148

ABSTRACT

Spectroscopy is a fast and rich analytical tool. Spectra are often acquired from two or more sets of samples that differ only slightly. These data sets then need to be compared and analyzed, but sometimes it is difficult to find the differences. We present a simple and effective method that detects and extracts new spectral features in a spectrum from one set with respect to the spectra of another set, exploiting the fact that these new spectral features are essentially positive quantities. The proposed procedure (i) characterizes the spectra of the reference set by a component model and (ii) uses asymmetric least squares (ASLS) to find differences with respect to this component model. It should be stressed that the method focuses only on new features and does not trace relative changes of spectral features that occur in both sets of spectra. A comparison is made with the conventional ordinary least squares (OLS) approach. Both methods (OLS and ASLS) are illustrated with simulations and are tested on size-exclusion chromatography with infrared detection (SEC-IR) of mixtures of polymer standards. Both methods are able to provide information about new spectral features. It is shown that the ASLS-based procedure yields the best recovery of new features in the simulations and in the SEC-IR experiments. Band positions and band shapes of new spectral features are better retrieved with ASLS than with OLS, even for features that could hardly be detected visually. Depending on the spectroscopic technique used, the ASLS-based method facilitates identification of the new chemical compounds.
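
As a rough illustration of the asymmetric weighting described above, the sketch below fits a target spectrum with a given matrix of reference component spectra and leaves positive residuals (candidate new features) in the residual. The function name, the value of p, and the NumPy-based implementation are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

def asls_new_features(target, ref_components, p=0.001, n_iter=20):
    """Fit a target spectrum with a reference component model using asymmetric
    least squares: positive residuals (candidate new features) get weight p,
    negative residuals weight 1 - p, so new bands are left in the residual.
    target: (n_channels,); ref_components: (n_channels, n_components)."""
    w = np.ones_like(target)
    for _ in range(n_iter):
        sw = np.sqrt(w)
        coefs, *_ = np.linalg.lstsq(sw[:, None] * ref_components, sw * target, rcond=None)
        fit = ref_components @ coefs
        resid = target - fit
        w = np.where(resid > 0, p, 1.0 - p)   # asymmetric weighting
    return resid, fit
```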


Subjects
Liquid Chromatography/methods, Infrared Spectrophotometry/methods, Spectrum Analysis/methods, Computer Simulation, Least-Squares Analysis, Theoretical Models, Polycarboxylate Cement/analysis, Polymethacrylic Acids/analysis, Polymethyl Methacrylate/analysis, Fourier Transform Infrared Spectroscopy
2.
Langmuir; 21(15): 6830-5, 2005 Jul 19.
Article in English | MEDLINE | ID: mdl-16008393

ABSTRACT

Near-infrared (NIR) spectroscopy is used to monitor a large variety of processes online. Hydrocarbons, with their strong NIR spectral signature, are good candidate analytes. In this work, the sorption data are measured in a manometric setup coupled with online NIR spectroscopy to monitor the bulk composition. The assessment of time-based results is hampered by a baseline-stability problem. The goal of this article is to study the robustness of different spectral preprocessing methods when dealing with time-based data. It was found that for time-based experiments it is necessary to perform a drift correction on the spectra combined with a water-band correction. For the calibration experiments, which last only a few seconds, offset correction and drift correction performed equally well.
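
A minimal sketch of the offset and linear drift corrections mentioned above, assuming a wavelength region free of analyte signal is available; the index choice, function names, and the omission of the water-band correction are simplifications, not the procedure used in the article.

```python
import numpy as np

def offset_correct(spectrum, baseline_idx):
    """Offset correction: subtract the mean absorbance over a region
    assumed to carry no analyte signal (indices are illustrative)."""
    return spectrum - spectrum[baseline_idx].mean()

def drift_correct(spectrum, wavelengths, baseline_idx):
    """Drift correction: fit a straight line through the signal-free
    region and subtract it, removing a slowly varying baseline."""
    slope, intercept = np.polyfit(wavelengths[baseline_idx], spectrum[baseline_idx], 1)
    return spectrum - (slope * wavelengths + intercept)
```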


Subjects
Butanes/chemistry, Near-Infrared Spectroscopy/methods, Adsorption, Calibration, Isomerism, Theoretical Models
3.
Appl Spectrosc; 59(3): 267-74, 2005 Mar.
Article in English | MEDLINE | ID: mdl-15901305

ABSTRACT

Raman spectroscopy is applied to the characterization of paintable displays. Because of the liquid nature of the functional materials, few options other than Raman spectroscopy exist for doing so. The challenge is to develop a method that can estimate the composition of a single display cell on the basis of the collected three-dimensional Raman spectra. A classical least squares (CLS) model is used to model the measured spectra. It is shown that spectral preprocessing is a necessary and critical step for obtaining a good CLS model and reliable compositional profiles. Different kinds of preprocessing are explained; the type and amount of preprocessing may differ per data set. This is shown using two data sets measured on essentially the same type of display cell but under different experimental conditions. For model validation, three criteria are introduced: the mean sum of squares of the residuals, the percentage of unexplained information (PUN), and the average residual curve. It is shown that the decision about the best combination of preprocessing techniques cannot be based on overall error indicators (such as the PUN) alone. In addition, local residual analysis must be performed and the feasibility of the extracted profiles should be taken into account.
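
A hedged sketch of a plain CLS fit with the residual-based diagnostics named above; the exact definition of PUN used here (percentage of total summed squares left unexplained) is an assumption, as is the NumPy-based implementation.

```python
import numpy as np

def cls_fit(D, S):
    """Classical least squares: D (n_spectra x n_channels) is modelled as
    C @ S with known pure-component spectra S (n_components x n_channels)."""
    C, *_ = np.linalg.lstsq(S.T, D.T, rcond=None)    # solve S.T @ C.T ~ D.T
    C = C.T
    R = D - C @ S                                    # residual spectra
    mss = (R ** 2).mean(axis=1)                      # mean sum of squares per spectrum
    pun = 100.0 * (R ** 2).sum() / (D ** 2).sum()    # % unexplained information (assumed definition)
    return C, R, mss, pun
```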


Subjects
Algorithms, Data Display, Equipment Failure Analysis/methods, Materials Testing/methods, Methacrylates/analysis, Chemical Models, Raman Spectrum Analysis/methods, Computer Simulation, Computer Terminals, Crystallography/methods, Linear Models, Statistical Models, Stilbenes/analysis
4.
J Chromatogr A; 1057(1-2): 21-30, 2004 Nov 19.
Article in English | MEDLINE | ID: mdl-15584219

ABSTRACT

A new method to eliminate the background spectrum (EBS) during analyte elution in column liquid chromatography (LC) coupled to spectroscopic techniques is proposed. This method takes into account both shape and intensity differences of the background eluent spectrum, which allows the EBS method to make a better estimate of the background eluent spectrum during analyte elution. This is an advantage for quantification as well as for identification of analytes. The EBS method uses a two-step procedure. First, the baseline spectra are modeled using a limited number of principal components (PCs). Subsequently, an asymmetric least squares (asLS) regression method is applied using these principal components to correct the measured spectra during elution for the background contribution. The asymmetric least squares regression needs one parameter, the asymmetry factor p, which determines the relative weight of positive and negative residuals. Simulations are performed to test the EBS method in well-defined situations. The effect of spectral noise on performance and the sensitivity of the EBS method to the value of the asymmetry factor p are tested. Two applications of the EBS method are discussed. In the first application, the goal is to extract the analyte spectrum from an LC-Raman analysis; in this case, the EBS method facilitates easy identification of unknown analytes using spectral libraries. In the second application, the EBS method is used for baseline correction in LC-diode array detection (DAD) analysis of polymeric standards during a gradient elution separation. It is shown that the EBS method yields a good baseline correction without the need to perform a blank chromatographic run.
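
The two-step structure of the EBS method can be sketched roughly as follows, reusing the asymmetric weighting shown under entry 1; the basis construction (mean plus a few PCs), the default p, and the iteration count are illustrative assumptions rather than the published algorithm.

```python
import numpy as np

def ebs_correct(baseline_spectra, measured_spectra, n_pcs=3, p=0.001, n_iter=20):
    """Two-step sketch of the EBS idea: (1) summarise blank/baseline spectra
    with a few principal components, (2) fit each measured spectrum on the mean
    plus those PCs with asymmetric least squares so that analyte bands
    (positive residuals, weight p) are left in the corrected spectrum."""
    mean = baseline_spectra.mean(axis=0)
    _, _, Vt = np.linalg.svd(baseline_spectra - mean, full_matrices=False)
    B = np.vstack([mean, Vt[:n_pcs]]).T              # (n_channels, n_pcs + 1) basis
    corrected = np.empty_like(measured_spectra)
    for i, y in enumerate(measured_spectra):
        w = np.ones_like(y)
        for _ in range(n_iter):
            sw = np.sqrt(w)
            coefs, *_ = np.linalg.lstsq(sw[:, None] * B, sw * y, rcond=None)
            resid = y - B @ coefs
            w = np.where(resid > 0, p, 1.0 - p)      # asymmetry factor p
        corrected[i] = resid
    return corrected
```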


Subjects
Infrared Spectrophotometry/methods, Ultraviolet Spectrophotometry/methods, Raman Spectrum Analysis/methods
5.
Anal Chem; 76(9): 2656-63, 2004 May 01.
Article in English | MEDLINE | ID: mdl-15117212

ABSTRACT

To increase the power and robustness of spectroscopic process analyzers, methods are needed that suppress the spectral variation that is not related to the property of interest in the process stream. An approach for the selection of a suitable method is presented. The approach uses the net analyte signal (NAS) to analyze the situation and to select methods to suppress the non-relevant spectral variation. The empirically determined signal-to-noise ratio of the NAS is used as a figure of merit. The advantages of the approach are (i) that the error of the reference method does not affect method selection and (ii) that only a few spectral measurements are needed. A diagnostic plot is proposed that guides the user in the evaluation of a particular suppression method. As an example, NIR spectroscopic monitoring of a mol-sieve separation process is used.
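
A simplified reading of the NAS-based figure of merit: project out the sub-space of non-relevant variation and compare the norm of what remains of the analyte spectrum with an empirical noise estimate. The noise estimate and the function signature are assumptions made for this sketch.

```python
import numpy as np

def nas_snr(analyte_spectrum, interference_spectra, replicate_noise_spectra):
    """Net analyte signal: the part of the analyte spectrum orthogonal to the
    sub-space spanned by the non-relevant spectral variation. Its norm divided
    by an empirical noise estimate gives a signal-to-noise figure of merit."""
    M = np.atleast_2d(interference_spectra)              # rows span the unwanted variation
    P_perp = np.eye(M.shape[1]) - np.linalg.pinv(M) @ M  # projector onto the complement
    nas = P_perp @ analyte_spectrum
    noise = np.mean([np.linalg.norm(P_perp @ n)
                     for n in np.atleast_2d(replicate_noise_spectra)])
    return np.linalg.norm(nas) / noise
```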

6.
Anal Chem; 76(11): 3171-8, 2004 Jun 01.
Article in English | MEDLINE | ID: mdl-15167798

ABSTRACT

The practical difficulties encountered in analyzing the kinetics of new reactions are considered from the viewpoint of the capabilities of state-of-the-art high-throughput systems. There are three problems. The first problem is that of model selection, i.e., choosing the correct reaction rate law. The second problem is how to obtain good estimates of the reaction parameters using only a small number of samples once a kinetic model is selected. The third problem is how to perform both functions using just one small set of measurements. To solve the first problem, we present an optimal sampling protocol to choose the correct kinetic model for a given reaction, based on T-optimal design. This protocol is then tested for the case of second-order and pseudo-first-order reactions using both experiments and computer simulations. To solve the second problem, we derive the information function for second-order reactions and use this function to find the optimal sampling points for estimating the kinetic constants. The third problem is further complicated by the fact that the optimal measurement times for determining the correct kinetic model differ from those needed to obtain good estimates of the kinetic constants. To solve this problem, we propose a Pareto optimal approach that can be tuned to give the set of best possible solutions for the two criteria. One important advantage of this approach is that it enables the integration of a priori knowledge into the workflow.
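
For a flavour of the model-discrimination step, the sketch below picks the sampling time at which the two rival rate laws, evaluated at nominal parameters, disagree most; full T-optimal design instead maximises the lack of fit of the re-fitted rival model, so this is only a didactic stand-in with made-up numbers.

```python
import numpy as np

def discrimination_time(c0, k2, k1, t_grid):
    """Pick the time where a second-order and a pseudo-first-order rate law,
    evaluated at nominal parameter values, differ most (simplified criterion)."""
    c_second = c0 / (1.0 + k2 * c0 * t_grid)   # second order, equal initial concentrations
    c_pseudo = c0 * np.exp(-k1 * t_grid)       # pseudo-first-order decay
    return t_grid[np.argmax((c_second - c_pseudo) ** 2)]

t = np.linspace(0.0, 600.0, 2001)              # seconds, illustrative grid
print(discrimination_time(c0=0.1, k2=0.05, k1=0.005, t_grid=t))
```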

7.
Bioinformatics; 20(15): 2438-46, 2004 Oct 12.
Article in English | MEDLINE | ID: mdl-15087313

ABSTRACT

MOTIVATION: Metabolomics datasets are generally large and complex. Principal component analysis (PCA) gives a simplified view of the variation in the data; the PCA model can be interpreted and the processes underlying the variation can be analysed. In metabolomics, a priori information about the data is often available, and various forms of this information can be used in an unsupervised data analysis with weighted PCA (WPCA). A WPCA model gives a view of the data that differs from the one obtained with PCA and adds to the interpretation of the information in a metabolomics dataset.
RESULTS: A method is presented to translate spectra of repeated measurements into weights describing the experimental error. These weights are used in the data analysis with WPCA. The resulting WPCA model accounts for the non-uniform experimental error and therefore focuses more on the natural variation in the data.
AVAILABILITY: MATLAB M-files for the algorithm used in this research are available at http://www-its.chem.uva.nl/research/pac/Software/pcaw.zip.
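
A bare-bones outline of diagonal-weight WPCA under the assumption that the experimental error is estimated as a pooled standard deviation per variable from the replicate groups; the MATLAB code at the URL above is the authoritative implementation, and this sketch is not derived from it.

```python
import numpy as np

def weighted_pca(X, replicate_sets, n_components=2):
    """Diagonal-weight WPCA sketch: estimate the (non-uniform) experimental
    error per variable from repeated measurements, weight the mean-centred data
    by the inverse error, and decompose with an SVD."""
    # pooled standard deviation per variable over the replicate groups
    err = np.sqrt(np.mean([np.var(R, axis=0, ddof=1) for R in replicate_sets], axis=0))
    w = 1.0 / np.maximum(err, 1e-12)                 # guard against zero error
    Xw = (X - X.mean(axis=0)) * w
    U, s, Vt = np.linalg.svd(Xw, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]
    loadings = Vt[:n_components] / w                 # back on the original scale
    return scores, loadings
```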


Subjects
Algorithms, Gene Expression Profiling/methods, Magnetic Resonance Spectroscopy/methods, Biological Models, Protein Interaction Mapping/methods, Proteins/metabolism, Signal Transduction/physiology, Animals, Computer Simulation, Macaca mulatta, Principal Component Analysis/methods, Time Factors, Urinalysis/methods
8.
Chemistry; 9(16): 3876-81, 2003 Aug 18.
Article in English | MEDLINE | ID: mdl-12916112

ABSTRACT

Combinatorial chemistry and high-throughput experimentation (HTE) have revolutionized the pharmaceutical industry, but can chemists truly repeat this success in the fields of catalysis and materials science? We propose to bridge the traditional "discovery" and "optimization" stages in HTE by enabling parallel kinetic analysis of an array of chemical reactions. We present the theoretical basis for extracting concentration profiles from reaction arrays and derive the optimal criteria for following (pseudo)first-order reactions in time in parallel systems. We use the information vector f and introduce, in this context, the information-gain ratio chi(r) to quantify the amount of useful information that can be obtained by measuring the extent of a specified reaction r in the array at any given time. Our method is general and independent of the analysis technique, but it is more effective if the analysis is performed online. The feasibility of this new approach is demonstrated in the fast kinetic analysis of the carbon-sulfur coupling between 3-chlorophenylhydrazonopropane dinitrile and beta-mercaptoethanol. The theory agrees well with the results obtained from 31 repeated C-S coupling experiments.
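
As a small illustration of following (pseudo)first-order reactions in parallel, the sketch below fits one rate constant per reaction in the array by linear regression on the log-concentrations; the information vector f and the information-gain ratio chi(r) from the abstract are not reproduced here.

```python
import numpy as np

def fit_pseudo_first_order(times, conc):
    """Per-reaction pseudo-first-order fit for an array of parallel reactions:
    ln C(t) = ln C0 - k t, estimated by linear regression. `conc` holds one row
    of concentration measurements per reaction in the array."""
    ks = []
    for c in np.atleast_2d(conc):
        slope, _ = np.polyfit(times, np.log(c), 1)
        ks.append(-slope)
    return np.array(ks)
```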

9.
Analyst; 128(1): 98-102, 2003 Jan.
Article in English | MEDLINE | ID: mdl-12572811

ABSTRACT

Many high-quality products are produced batchwise. One characteristic of a batch process is its recipe-driven nature: by repeating the recipe in an identical manner, the desired end-product is obtained. In practice, however, process differences still occur, caused for instance by a change of feedstock supplier or by impurities in the process. As a result, the end-product quality may vary or unsafe process situations may arise, and hence there is a need to monitor industrial batch processes. An industrial process is usually monitored by process measurements such as pressures and temperatures. Owing to technical developments, spectroscopy is nowadays increasingly used for process monitoring. Spectroscopic measurements have the advantage of giving direct chemical insight into the process. Multivariate statistical process control (MSPC) is a statistical way of monitoring the behaviour of a process. Combining spectroscopic measurements with MSPC makes it possible to detect process perturbations or deviations from normal operating conditions in a very simple manner. An application to batch process monitoring is presented: it is shown how a calibration model is developed and used according to the principles of MSPC, and statistical control charts are constructed and used to detect batches with a process upset.
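
A generic sketch of PCA-based MSPC control statistics (Hotelling T^2 and Q) computed from spectra of batches under normal operating conditions; the percentile-based control limits and the NumPy implementation are assumptions, not necessarily the charts used in the article.

```python
import numpy as np

def mspc_statistics(X_normal, X_new, n_components=3):
    """Build a PCA model on measurements from normal batches and compute
    Hotelling T^2 and Q (squared prediction error) for new measurements.
    Control limits are taken as the 99th percentile of the training statistics."""
    mu = X_normal.mean(axis=0)
    U, s, Vt = np.linalg.svd(X_normal - mu, full_matrices=False)
    P = Vt[:n_components].T                                   # loadings
    lam = (s[:n_components] ** 2) / (X_normal.shape[0] - 1)   # score variances

    def stats(X):
        T = (X - mu) @ P                                      # scores
        t2 = np.sum(T ** 2 / lam, axis=1)                     # Hotelling T^2
        E = (X - mu) - T @ P.T                                # residual part
        q = np.sum(E ** 2, axis=1)                            # Q / SPE
        return t2, q

    t2_lim, q_lim = [np.percentile(v, 99) for v in stats(X_normal)]
    return stats(X_new), (t2_lim, q_lim)
```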

10.
Anal Chem; 75(23): 6701-7, 2003 Dec 01.
Article in English | MEDLINE | ID: mdl-16465720

ABSTRACT

The application of robotic systems to the study of complex reaction kinetics is considered, using the cascade reaction A --> B --> C as a working example. Practical problems in calculating the rate constants k1 and k2 for the reactions A --> B and B --> C from concentration measurements of CA, CB, or CC are discussed in the light of the symmetry and invertibility of the rate equations. A D-optimal analysis is used to determine the points in time and the species that will give the best (i.e., most accurate) results. When exact data are used, the most robust solution results from measuring the pair of concentrations (CA, CC). The system's information function is computed using numerical methods. This function is then used to estimate the amount of information obtainable from a given cascade reaction at any given time. The theoretical findings are compared with experimental results from a set of two-stage cascade experiments monitored using UV-visible spectroscopy. Finally, the pros and cons of using a single reaction sample to estimate both k1 and k2 are discussed.
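
A rough numerical illustration of the D-optimal idea for the cascade A --> B --> C: compute the sensitivities of the measured pair (CA, CC) to (k1, k2) and pick the pair of sampling times that maximises det(J^T J). The nominal rate constants, the time grid, and the finite-difference step are illustrative assumptions, not values from the article.

```python
import numpy as np
from itertools import combinations

def cascade(t, k1, k2, a0=1.0):
    """Concentrations for A -> B -> C (first order in each step, k1 != k2)."""
    ca = a0 * np.exp(-k1 * t)
    cb = a0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
    return ca, cb, a0 - ca - cb

def d_optimal_times(k1, k2, t_grid):
    """Pick the pair of sampling times maximising det(J^T J), with J the
    numerical sensitivity of the measured pair (CA, CC) to (k1, k2)."""
    eps = 1e-6
    def responses(k1_, k2_, ts):
        ca, _, cc = cascade(ts, k1_, k2_)
        return np.concatenate([ca, cc])
    best, best_pair = -np.inf, None
    for pair in combinations(t_grid, 2):
        ts = np.array(pair)
        J = np.column_stack([
            (responses(k1 + eps, k2, ts) - responses(k1 - eps, k2, ts)) / (2 * eps),
            (responses(k1, k2 + eps, ts) - responses(k1, k2 - eps, ts)) / (2 * eps),
        ])
        crit = np.linalg.det(J.T @ J)                 # D-optimality criterion
        if crit > best:
            best, best_pair = crit, pair
    return best_pair

print(d_optimal_times(k1=0.05, k2=0.02, t_grid=np.linspace(1, 200, 60)))
```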
