Results 1 - 4 of 4
1.
SAR QSAR Environ Res; 24(9): 711-31, 2013.
Article in English | MEDLINE | ID: mdl-23767783

ABSTRACT

Quantitative structure-activity relationship (QSAR) models have been widely used to study the permeability of chemicals or solutes through skin. Among the various QSAR models, Abraham's linear free-energy relationship (LFER) model is often employed. However, when the experimental conditions are complex, it is not always appropriate to use Abraham's LFER model with a single set of regression coefficients. In this paper, we propose an expanded model in which one set of partial slopes is defined for each experimental condition, where conditions are defined according to solvent: water, synthetic oil, semi-synthetic oil, or soluble oil. This model not only accounts for experimental conditions but also improves the ability to conduct rigorous hypothesis testing. To more adequately evaluate the predictive power of the QSAR model, we modified the usual leave-one-out internal validation strategy to employ a leave-one-solute-out strategy and accordingly adjusted the Q²(LOO) statistic. Skin permeability was shown to have the rank order: water > synthetic > semi-synthetic > soluble oil. In addition, fitted relationships between permeability and solute characteristics differ according to solvent. We demonstrated that the expanded model (r² = 0.70) improved both the model fit and the predictive power when compared with the simple model (r² = 0.21).
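
As a concrete illustration of the validation scheme described in this abstract, the sketch below fits an Abraham-style linear model with solvent-specific partial slopes and computes Q² under a leave-one-solute-out scheme. This is not the authors' code; the data file, column names, and the exact Q² convention are assumptions.

```python
# A minimal sketch (not the authors' code) of leave-one-solute-out validation.
# An Abraham-style linear model with solvent-specific slopes is refit with all
# records for one solute held out at a time, and Q2 is computed from the pooled
# held-out errors. The data file, column names, and descriptor labels are assumptions.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("skin_permeability.csv")        # hypothetical dataset
descriptors = ["E", "S", "A", "B", "V"]          # Abraham LFER descriptors
y = df["log_kp"]                                 # log permeability coefficient

# Expanded model: a separate intercept and separate partial slopes for each
# solvent (water, synthetic, semi-synthetic, soluble oil), built as
# descriptor-by-solvent interaction columns.
solvent = pd.get_dummies(df["solvent"], prefix="solv", dtype=float)
X = solvent.copy()
for d in descriptors:
    for s in solvent.columns:
        X[f"{d}_x_{s}"] = df[d] * solvent[s]

press = 0.0
for solute in df["solute"].unique():             # leave one SOLUTE out
    held_out = df["solute"] == solute
    model = LinearRegression(fit_intercept=False)  # solvent dummies act as intercepts
    model.fit(X[~held_out], y[~held_out])
    press += np.sum((y[held_out] - model.predict(X[held_out])) ** 2)

q2_loo = 1.0 - press / np.sum((y - y.mean()) ** 2)
print(f"Q2 (leave-one-solute-out) = {q2_loo:.3f}")
```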


Subjects
Complex Mixtures/pharmacokinetics, Quantitative Structure-Activity Relationship, Skin/drug effects, Permeability
2.
SAR QSAR Environ Res; 24(2): 135-56, 2013.
Article in English | MEDLINE | ID: mdl-23157374

ABSTRACT

Quantitative structure-activity relationship (QSAR) models are being used increasingly in skin permeation studies. The main idea of QSAR modelling is to quantify the relationship between biological activities and chemical properties, and thus to predict the activity of chemical solutes. As a key step, the selection of a representative and structurally diverse training set is critical to the predictive power of a QSAR model. Early QSAR models selected training sets in a subjective way, and solutes in the training set were relatively homogeneous. More recently, statistical methods such as D-optimal design or space-filling design have been applied, but such methods are not always ideal. This paper describes a comprehensive procedure for selecting training sets from a large candidate set of 4534 solutes. A newly proposed 'Baynes' rule', a modification of Lipinski's 'rule of five', was used to screen out solutes that were not qualified for the study. U-optimality was used as the selection criterion. A principal component analysis showed that the selected training set was representative of the chemical space. Amenability to gas chromatography was verified. A model built using the training set was shown to have greater predictive power than a model built using a previous dataset [1].
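
The selection workflow can be sketched in a few lines: a Lipinski-style property screen followed by selection of a diverse subset, with a principal component check of coverage. In the sketch below, a greedy maximin-distance rule stands in for the paper's U-optimality criterion and the classic rule-of-five cutoffs stand in for the modified 'Baynes' rule'; the file name and descriptor columns are hypothetical.

```python
# Simplified sketch: Lipinski-style screen, then a structurally diverse subset.
# Greedy maximin distance is a stand-in for the paper's U-optimality criterion;
# the thresholds shown are the classic rule-of-five values, not 'Baynes' rule'.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

cand = pd.read_csv("candidate_solutes.csv")      # hypothetical candidate set

screened = cand[(cand["MW"] <= 500) & (cand["logP"] <= 5) &
                (cand["HBD"] <= 5) & (cand["HBA"] <= 10)].reset_index(drop=True)

# Standardize descriptors so no single property dominates the distances.
desc = ["MW", "logP", "HBD", "HBA"]
Z = ((screened[desc] - screened[desc].mean()) / screened[desc].std()).to_numpy()

def greedy_maximin(points, k):
    """Greedily pick k rows that keep the minimum pairwise distance large."""
    chosen = [int(np.argmax(np.linalg.norm(points - points.mean(0), axis=1)))]
    dists = np.linalg.norm(points - points[chosen[0]], axis=1)
    while len(chosen) < k:
        nxt = int(np.argmax(dists))              # farthest from current selection
        chosen.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(points - points[nxt], axis=1))
    return chosen

training_idx = greedy_maximin(Z, k=50)
training_set = screened.iloc[training_idx]

# Quick coverage check in principal-component space, echoing the PCA step
# mentioned in the abstract.
pcs = PCA(n_components=2).fit_transform(Z)
print("candidate PC1 range:", pcs[:, 0].min(), pcs[:, 0].max())
print("training PC1 range:", pcs[training_idx, 0].min(), pcs[training_idx, 0].max())
```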


Subjects
Chemistry/methods, Inorganic Chemicals/pharmacokinetics, Organic Chemicals/pharmacokinetics, Permeability, Quantitative Structure-Activity Relationship, Skin/drug effects, Inorganic Chemicals/chemistry, Statistical Models, Organic Chemicals/chemistry
3.
Biometrics; 57(3): 922-30, 2001 Sep.
Article in English | MEDLINE | ID: mdl-11550946

ABSTRACT

Pooling experiments are used as a cost-effective approach for screening chemical compounds as part of the drug discovery process in pharmaceutical companies. When a biologically potent pool is found, the goal is to decode the pool, i.e., to determine which of the individual compounds are potent. We propose augmenting the data on pooled testing with information on the chemical structure of compounds in order to complete the decoding process. This proposal is based on the well-known relationship between biological potency of a compound and its chemical structure. Application to real data from a drug discovery process at GlaxoSmithKline reveals a 100% increase in hit rate, namely, the number of potent compounds identified divided by the number of tests required.
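
To make the structure-guided decoding idea concrete, the toy simulation below (not the authors' algorithm or data) decodes each positive pool by retesting its members in descending order of a simulated structure-based score and reports the resulting hit rate, defined as in the abstract as potent compounds identified per test.

```python
# Toy illustration only: when a pool assays positive, its members are retested
# in descending order of a structure-based activity score instead of arbitrary
# order. For simplicity, decoding stops at the first confirmed hit in each pool.
import numpy as np

rng = np.random.default_rng(0)
n_compounds, pool_size = 1000, 10
score = rng.normal(size=n_compounds)                  # toy 1-D structure score
p_potent = 1.0 / (1.0 + np.exp(-(2.0 * score - 5.0))) # potency more likely at high score
potent = rng.random(n_compounds) < p_potent

pools = np.arange(n_compounds).reshape(-1, pool_size)
tests = len(pools)                                    # one assay per pool
hits = 0
for pool in pools:
    if potent[pool].any():                            # pool assay is positive
        for c in pool[np.argsort(-score[pool])]:      # structure-guided retest order
            tests += 1
            if potent[c]:
                hits += 1
                break

print(f"hit rate = {hits / tests:.3f} (potent compounds identified per test)")
```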


Subjects
Biometry, Drug Design, Preclinical Drug Evaluation/statistics & numerical data, Biological Assay/statistics & numerical data, Likelihood Functions, Statistical Models, Molecular Structure
4.
Lifetime Data Anal; 5(2): 173-83, 1999 Jun.
Article in English | MEDLINE | ID: mdl-10408183

ABSTRACT

In testing product reliability, there is often a critical cutoff level that determines whether a specimen is classified as "failed." One consequence is that the number of degradation measurements collected varies from specimen to specimen. Information about this random sample size should be included in the model, and our study shows that it can be influential in estimating model parameters. Two-stage least squares (LS) and maximum modified likelihood (MML) estimation, which both assume fixed sample sizes, are commonly used to estimate parameters in the repeated-measurements models typically applied to degradation data. However, the LS estimate is not consistent when sample sizes are random. This article derives the likelihood for the random-sample-size model and suggests using maximum likelihood (ML) for parameter estimation. Our simulation studies show that ML estimates have smaller biases and variances than the LS and MML estimates. All estimation methods improve considerably when the number of specimens increases from 5 to 10. A data set from a semiconductor application is used to illustrate the methods.
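
To make the random-sample-size mechanism concrete, the toy simulation below (not the paper's estimators or data) measures each specimen until its degradation path crosses the failure cutoff, so the number of observations per specimen is itself random; a naive per-specimen least-squares fit is included only as the fixed-sample-size baseline that the paper argues against. Parameter names and values are illustrative.

```python
# Toy simulation of the data structure described above: each specimen degrades
# linearly with a specimen-specific rate, is measured at fixed intervals, and
# measurement stops once the path crosses the failure cutoff, so the number of
# observations per specimen is random. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_specimens, cutoff, dt = 10, 10.0, 1.0
mean_rate, rate_sd, noise_sd = 1.0, 0.3, 0.2

records = []
for i in range(n_specimens):
    rate = max(rng.normal(mean_rate, rate_sd), 0.1)   # guard against non-degrading toys
    t, y = 0.0, 0.0
    while y < cutoff:                                 # stop at first crossing -> random n_i
        t += dt
        y = rate * t + rng.normal(0.0, noise_sd)
        records.append((i, t, y))
records = np.array(records)

sizes = [int(np.sum(records[:, 0] == i)) for i in range(n_specimens)]
print("observations per specimen:", sizes)            # varies from specimen to specimen

# Naive per-specimen least-squares slopes, which ignore the stopping rule;
# the abstract notes that fixed-sample-size LS is not consistent here, so this
# is shown only as the baseline being criticized.
ls_rates = [np.linalg.lstsq(records[records[:, 0] == i, 1:2],
                            records[records[:, 0] == i, 2], rcond=None)[0][0]
            for i in range(n_specimens)]
print("naive LS estimate of the mean degradation rate:", float(np.mean(ls_rates)))
```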


Subjects
Computer Simulation, Manufactured Products/statistics & numerical data, Theoretical Models, Least-Squares Analysis, Likelihood Functions, Quality Control, Sampling Studies, Semiconductors/statistics & numerical data