1.
Ther Innov Regul Sci ; 57(2): 316-320, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36289189

ABSTRACT

The two-trials paradigm plays a prominent role in drug development and has been widely and controversially discussed. Its purpose is to ensure replicability or substantiation of study results. This note investigates a simple generalization of the paradigm to more than two trials that preserves the project-wise type-I error rate and power.
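The note's generalization is not detailed in the abstract. As a minimal sketch, assuming k independent trials each tested one-sided, the per-trial level can be relaxed so that requiring all k trials to be significant preserves the project-wise type-I error rate of the classical rule (two trials at one-sided 0.025); k = 3 and the 90% per-trial power benchmark below are illustrative assumptions, not values from the paper.

```python
from scipy.stats import norm

alpha = 0.025                  # one-sided level per trial in the classical rule
k = 3                          # number of trials in the generalized rule (assumption)

# "All k trials significant" with independent trials has project-wise
# type-I error level**k, so level = (alpha**2)**(1/k) preserves the
# classical project-wise rate alpha**2:
per_trial_alpha = (alpha ** 2) ** (1 / k)
print(per_trial_alpha)         # ~0.0855 for k = 3

# To also preserve the project-wise power of two trials at 90% power each,
# every one of the k trials needs power (0.9**2)**(1/k):
per_trial_power = (0.9 ** 2) ** (1 / k)
z_total = norm.ppf(1 - per_trial_alpha) + norm.ppf(per_trial_power)
print(per_trial_power, z_total)  # z_total drives the per-trial sample size
```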

2.
Stat Med ; 40(18): 4068-4076, 2021 Aug 15.
Article in English | MEDLINE | ID: mdl-33928668

ABSTRACT

Replicability of results is regarded as the cornerstone of science. Recent research seems to raise doubts about whether this requirement is generally fulfilled. Often, replicability of results is defined as repeating a statistically significant result. However, since significance may not imply scientific relevance, dual-criterion study designs that take both aspects into account have been proposed and investigated over the last decade. Originally developed for proof-of-concept trials, the design could be appropriate for phase III trials as well. In fact, a dual-criterion design has been requested for COVID-19 vaccine applications by major health authorities. In this article, the replicability of dual-criterion designs is investigated. It turns out that the probability of replicating a significant and relevant result can fall as low as 0.5. The replication probability increases if the effect estimator exceeds the minimum relevant effect in the original study by an extra amount.
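The 0.5 floor can be checked with a minimal Monte Carlo sketch, assuming a normally distributed effect estimator with known standard error; the values of the minimum relevant effect and the standard error are illustrative, not taken from the article.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
delta = 0.3                # minimum relevant effect (illustrative assumption)
se = 0.1                   # standard error of the estimate (illustrative assumption)
z_alpha = norm.ppf(0.975)  # two-sided 5% significance

# Dual criterion: significant (est/se > z_alpha) AND relevant (est > delta).
# Here delta/se = 3 > z_alpha, so relevance is the binding criterion.
# If the original estimate landed exactly on delta and the true effect
# equals that estimate, a replicate's estimator is symmetric around delta:
est = rng.normal(delta, se, size=1_000_000)
replicated = (est / se > z_alpha) & (est > delta)
print(replicated.mean())   # close to 0.5, the floor noted in the abstract
```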


Subjects
COVID-19 Vaccines, COVID-19, Humans, Probability, Research Design, SARS-CoV-2
3.
PLoS One ; 13(10): e0205971, 2018.
Article in English | MEDLINE | ID: mdl-30335831

ABSTRACT

Identifying subgroups of treatment responders through the different phases of clinical trials has the potential to increase success in drug development. Recent developments in subgroup analysis consider subgroups defined in terms of the predicted individual treatment effect, i.e., the difference between the predicted outcome under treatment and the predicted outcome under control for each individual, which in turn may depend on multiple biomarkers. In this work, we study the properties of different modelling strategies for estimating the predicted individual treatment effect. We explore linear models and compare different estimation methods, such as maximum likelihood and the Lasso with and without randomized response. For the latter, we implement confidence intervals based on the selective inference framework to account for the model selection stage. We illustrate the methods on a dataset of a treatment for Alzheimer disease (normal response) and on a dataset of a treatment for prostate cancer (survival outcome). We also evaluate via simulations the performance of using the predicted individual treatment effect to identify subgroups where a novel treatment leads to better outcomes than a control treatment.
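A minimal sketch of the point-estimation step, assuming toy simulated biomarkers and arm-wise sparse linear models fitted with scikit-learn's Lasso; the randomized-response variant and the selective-inference confidence intervals studied in the paper are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 500, 10
X = rng.normal(size=(n, p))          # biomarkers (toy data)
trt = rng.integers(0, 2, size=n)     # 1 = treatment, 0 = control
# outcome with a biomarker-dependent treatment effect (assumed model)
y = X[:, 0] + trt * (0.5 + X[:, 1]) + rng.normal(size=n)

# One sparse linear model per arm; the predicted individual treatment
# effect is the difference of the two predictions at the same covariates.
m1 = Lasso(alpha=0.05).fit(X[trt == 1], y[trt == 1])
m0 = Lasso(alpha=0.05).fit(X[trt == 0], y[trt == 0])
pite = m1.predict(X) - m0.predict(X)

# Candidate subgroup: patients whose predicted benefit exceeds a threshold.
subgroup = pite > 0.5
print(subgroup.mean())
```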


Subjects
Clinical Trials as Topic, Precision Medicine, Alzheimer Disease/therapy, Computer Simulation, Confidence Intervals, Databases as Topic, Humans, Male, Prostatic Neoplasms/therapy, Sensitivity and Specificity, Time Factors, Treatment Outcome
4.
Biom J ; 58(5): 1217-28, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27230820

ABSTRACT

The interest in individualized medicines and upcoming or renewed regulatory requests to assess treatment effects in subgroups of confirmatory trials require statistical methods that account for selection uncertainty and selection bias after a search for meaningful subgroups has been performed. The challenge is to judge the strength of the apparent findings after mining the same data to discover them. In this paper, we describe a resampling approach that replicates the subgroup-finding process many times. The replicates are used to adjust the effect estimates for selection bias and to provide variance estimators that account for selection uncertainty. A simulation study provides evidence of the performance of the method, and an example from oncology illustrates its use.
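A minimal sketch of one such resampling scheme, assuming a bootstrap over a toy one-biomarker subgroup search; the subgroup rule, the sample size, and the bias-correction step are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_boot = 400, 2000
marker = rng.integers(0, 2, size=n)       # binary biomarker (toy data)
effect = rng.normal(0.2, 1.0, size=n)     # individual treatment contrasts

def best_subgroup(m, e):
    # the "search": pick the marker level with the larger mean effect
    means = [e[m == g].mean() for g in (0, 1)]
    g = int(np.argmax(means))
    return g, means[g]

g_sel, naive = best_subgroup(marker, effect)

# Replicate the finding process on bootstrap resamples; the bias estimate is
# the average excess of the bootstrap-selected estimate over that subgroup's
# estimate in the original data.
bias = 0.0
for _ in range(n_boot):
    idx = rng.integers(0, n, size=n)
    g_b, est_b = best_subgroup(marker[idx], effect[idx])
    bias += est_b - effect[marker == g_b].mean()
bias /= n_boot

print(naive, naive - bias)   # naive vs. selection-bias-adjusted estimate
```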


Subjects
Models, Theoretical, Precision Medicine/methods, Computer Simulation, Humans, Neoplasms/mortality, Uncertainty
5.
Clin Trials ; 13(3): 338-43, 2016 Jun.
Article in English | MEDLINE | ID: mdl-26768555

ABSTRACT

BACKGROUND: High response under placebo is a concern in clinical studies, particularly in psychiatry. Discontinuation of placebo responders identified during a placebo run-in is often recommended to avoid trial failures in the presence of high placebo effects. Evidence for the benefit of this approach is ambiguous. PURPOSE: We investigate under which conditions a placebo lead-in can be beneficial in the context of continuous data, assuming that the data in the placebo run-in and the treatment stage follow a bivariate normal distribution. Placebo responders are defined as patients whose effect during the placebo lead-in exceeds a pre-defined threshold on the absolute value, on the absolute or relative change from baseline, or on a combination thereof. RESULTS: Data are less variable under either placebo or test treatment after placebo responders have been removed. Whether the effect of test over placebo increases or decreases after enrichment for placebo non-responders depends on the parameters of the distribution, in particular the covariance structure, and on the threshold in the definition of placebo responders. LIMITATIONS: The results apply to the continuous case; the binary and ordinal cases are not studied. The findings explain to some extent the ambiguity in assessments of the usefulness of placebo lead-in periods in clinical trials; however, beyond the clear statement on variability reduction, it is not straightforward to judge upfront whether a placebo lead-in is useful. Concerns relating to the conduct and interpretation of results of such trials are mentioned.
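A minimal simulation sketch of the bivariate-normal setting; the run-in/treatment-stage correlation and the responder threshold are illustrative assumptions. It reproduces the variance reduction stated in the results and lets one inspect how the post-enrichment mean shifts.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
rho = 0.6                         # run-in / treatment-stage correlation (assumption)
cov = [[1.0, rho], [rho, 1.0]]

# columns: response during the placebo run-in, response in the treatment stage
runin, stage = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
threshold = 1.0                   # placebo-responder cutoff (assumption)
keep = runin <= threshold         # enrich for placebo non-responders

print("variance, all:", stage.var(), " enriched:", stage[keep].var())
print("mean, all:    ", stage.mean(), " enriched:", stage[keep].mean())
```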


Subjects
Placebo Effect, Research Design, Humans, Psychiatry
6.
Stat Methods Med Res ; 24(4): 420-33, 2015 Aug.
Article in English | MEDLINE | ID: mdl-24501227

ABSTRACT

This paper addresses some aspects of the analysis of cross-over trials with missing or incomplete data. A literature review on the topic reveals that many proposals provide correct results under the missing-completely-at-random assumption, while only some consider the more general missing-at-random situation. It is argued that mixed-effects models have a role in this context, recovering some of the missing intra-subject information from the inter-subject information, in particular when missingness is ignorable. Finally, sensitivity analyses to deal with more general missingness mechanisms are presented.
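A minimal sketch of a random-intercept analysis of an incomplete 2x2 cross-over with statsmodels; the data-generating values and the missing-at-random mechanism are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 100                                   # subjects per sequence (toy size)
subj = np.repeat(np.arange(2 * n), 2)     # two periods per subject
period = np.tile([1, 2], 2 * n)
trt = np.where((subj < n) == (period == 1), 1, 0)   # sequences AB then BA
u = rng.normal(0, 1, 2 * n)               # subject-specific random effect
y = 0.5 * trt + 0.2 * (period == 2) + u[subj] + rng.normal(0, 1, 4 * n)
df = pd.DataFrame({"subject": subj, "period": period, "trt": trt, "y": y})

# period-2 responses go missing at random, depending on the period-1 response
p1 = df[df.period == 1].set_index("subject")["y"]
mar = (df.period == 2) & (rng.random(len(df)) < 0.4 * (p1.loc[df.subject].values > 0))
dfc = df[~mar]

# the random intercept lets incomplete subjects contribute their observed
# period, recovering inter-subject information when missingness is ignorable
fit = smf.mixedlm("y ~ trt + C(period)", dfc, groups=dfc["subject"]).fit()
print(fit.params["trt"])
```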


Subjects
Cross-Over Studies, Data Interpretation, Statistical, Clinical Trials as Topic
7.
Ther Innov Regul Sci ; 47(4): 455-459, 2013 Jul.
Article in English | MEDLINE | ID: mdl-30235517

ABSTRACT

The debate about what constitutes a valid analysis of clinical trial data is longstanding. While the intention-to-treat (ITT) principle seems to be widely accepted in the context of controlled clinical trials aiming to show superiority of an experimental treatment over a control, the best choice for a noninferiority trial is still under discussion. In this article, it is argued that the definition of analysis sets and the purpose of ITT and per-protocol analyses proposed in the International Conference on Harmonisation biostatistics guideline E9 should be revised to allow for more appropriate analyses, given that statistical methodology has advanced since the guideline was issued.

8.
Stat Med ; 30(30): 3475-87, 2011 Dec 30.
Article in English | MEDLINE | ID: mdl-21953285

ABSTRACT

The design of a comparative clinical trial involves a method of allocating treatments to patients. Usually, this assignment is performed to achieve several objectives: to minimize selection and accidental bias, to achieve balanced treatment assignment in order to maximize the power of the comparison, and, most importantly, to obtain the basis for a valid statistical inference. In this paper, we are concerned exclusively with the last point. In our investigation, we assume that measurements can be decomposed into a patient-specific effect, a treatment effect, and a measurement error. If patients can be considered to be randomly drawn from a population, the randomization method does not affect the analysis. In fact, under this so-called population model, randomization would be unnecessary to obtain a valid inference. However, when individuals cannot be considered randomly selected, the patient effects may become fixed but unknown constants. In this case, randomization is necessary to obtain valid statistical analyses, and it cannot be precluded that the randomization method has an impact on the results. This paper elaborates that the impact can be substantial even for a two-sample comparison when a standard t-test is used for data analysis. We provide some theoretical results as well as simulations.
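A minimal simulation sketch of this phenomenon, assuming fixed patient effects that follow a linear trend; the trend, sample sizes, and the deterministic comparator scheme are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(5)
n = 50                                    # patients per arm
trend = np.linspace(0, 3, 2 * n)          # fixed, unknown patient effects

def rejection_rate(assign, n_sim=10_000):
    rej = 0
    for _ in range(n_sim):
        y = trend + rng.normal(0, 1, 2 * n)   # no treatment effect at all
        g = assign()
        rej += ttest_ind(y[g == 1], y[g == 0]).pvalue < 0.05
    return rej / n_sim

alternating = lambda: np.tile([1, 0], n)                   # deterministic scheme
complete = lambda: rng.permutation(np.repeat([1, 0], n))   # complete randomization

# Alternation balances the trend across groups but the trend still inflates
# the pooled variance, so the t-test becomes conservative; complete
# randomization keeps the rejection rate near the nominal 5%.
print("alternating:", rejection_rate(alternating))
print("randomized: ", rejection_rate(complete))
```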


Subjects
Random Allocation, Randomized Controlled Trials as Topic/statistics & numerical data, Biostatistics, Data Interpretation, Statistical, Humans, Models, Statistical
9.
Pharm Stat ; 10(3): 196-202, 2011.
Article in English | MEDLINE | ID: mdl-21574240

ABSTRACT

In many morbidity/mortality studies, composite endpoints are considered. Although the primary interest is to demonstrate that an intervention delays death, the expected death rate is often so low that studies focusing on survival exclusively are not feasible. Components of the composite endpoint are chosen such that their occurrence is predictive of time to death. Therefore, if the time to non-fatal events is censored by death, censoring is no longer independent. As a consequence, the analysis of the components of a composite endpoint cannot reasonably be performed using classical methods for the analysis of survival times, such as Kaplan-Meier estimates or log-rank tests. In this paper we visualize the impact of disregarding dependent censoring in the analysis and discuss practicable alternatives for the analysis of morbidity/mortality studies. Using simulations, we provide evidence that copula-based methods have the potential to deliver practically unbiased estimates of the hazards of components of a composite endpoint. At the same time, they require minimal assumptions, which is important since not all assumptions are verifiable in the presence of censoring. There are thus alternative ways to analyze morbidity/mortality studies more appropriately by accounting for the dependencies among the components of composite endpoints. Despite the limitations mentioned, these alternatives can at minimum serve as sensitivity analyses to check the robustness of the currently used methods.
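A minimal sketch of the bias from disregarding dependent censoring, assuming data generated from a Gaussian copula with exponential margins (an illustrative choice, not necessarily the paper's setup) and a hand-rolled Kaplan-Meier estimator that wrongly treats death as independent censoring.

```python
import numpy as np
from scipy.stats import norm, expon

rng = np.random.default_rng(6)
n = 50_000
rho = 0.7   # dependence between the non-fatal event and death (assumption)

# Gaussian copula: correlated standard normals mapped to exponential margins
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
t_event = expon.ppf(norm.cdf(z[:, 0]), scale=5.0)   # time to non-fatal event
t_death = expon.ppf(norm.cdf(z[:, 1]), scale=8.0)   # time to death

obs = np.minimum(t_event, t_death)                  # death censors the event
event_seen = t_event < t_death

def km_survival(times, events, t):
    # Kaplan-Meier survival at t, treating censoring as independent
    order = np.argsort(times)
    times, events = times[order], events[order].astype(float)
    at_risk = len(times) - np.arange(len(times))
    surv = np.cumprod(1.0 - events / at_risk)
    idx = np.searchsorted(times, t, side="right") - 1
    return surv[idx] if idx >= 0 else 1.0

t0 = 5.0
print("true P(event-free at t0):", (t_event > t0).mean())
print("naive Kaplan-Meier:      ", km_survival(obs, event_seen, t0))
```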


Subjects
Cardiovascular Diseases/epidemiology, Models, Statistical, Research Design/statistics & numerical data, Cardiovascular Diseases/mortality, Clinical Trials as Topic, Humans, Kaplan-Meier Estimate, Likelihood Functions, Models, Theoretical, Mortality, Probability, Statistics as Topic
10.
J Biopharm Stat ; 21(2): 252-62, 2011 Mar.
Article in English | MEDLINE | ID: mdl-21390999

ABSTRACT

A modeling framework is described for the specific setting of clinical trials in which there is only a single post-randomization response measurement, which may itself be missing or, for clinical reasons, may be taken before the end of the trial. Such settings involve three simultaneous processes: the outcome itself, the time to measurement, and the occurrence of missing values. A simple latent-variable structure within a multivariate Gaussian distribution is used to model them. The full model allows the missing values to be missing not at random, and therefore the estimability of certain parameters depends on unverifiable assumptions. We use a simulation study to assess the behavior of the maximum likelihood estimators from the model; we then compare and contrast with a simpler last-observation-carried-forward (LOCF) approach that ignores both the time to response and the missingness process and is commonly used in practice in such settings. The proposed approach is illustrated using data from a trial on the treatment of congestive heart failure, in which the response measurements were obtained by echocardiography.
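The full latent-variable likelihood is beyond a short sketch, but the toy simulation below, assuming a latent severity that drives both the outcome and the missingness (the time-to-measurement process is omitted), illustrates why the LOCF comparator can mislead: here it attenuates the treatment effect.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000
latent = rng.normal(size=n)            # latent severity (assumed construct)
trt = rng.integers(0, 2, size=n)
baseline = latent + rng.normal(0, 0.5, n)
outcome = 0.4 * trt + latent + rng.normal(0, 0.5, n)

# sicker patients are more likely to miss the final measurement
missing = rng.random(n) < 1 / (1 + np.exp(-latent))

# LOCF: replace a missing outcome by the baseline value
y_locf = np.where(missing, baseline, outcome)
locf_effect = y_locf[trt == 1].mean() - y_locf[trt == 0].mean()
true_effect = outcome[trt == 1].mean() - outcome[trt == 0].mean()
print(true_effect, locf_effect)        # LOCF dilutes the true effect of 0.4
```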


Subjects
Clinical Trials as Topic, Models, Statistical, Treatment Outcome, Cardiotonic Agents/therapeutic use, Computer Simulation, Heart Failure/diagnostic imaging, Heart Failure/drug therapy, Humans, Patient Dropouts, Time Factors, Ultrasonography
11.
Pharm Stat ; 9(2): 162-7, 2010.
Article in English | MEDLINE | ID: mdl-19718773

ABSTRACT

The Hodges-Lehmann estimator was originally developed as a non-parametric estimator of a shift parameter. As it is widely used in statistical applications, we investigate what it estimates when the shift model does not hold. It is shown that, for data whose distributions are symmetric about their medians, the Hodges-Lehmann estimator based on the Wilcoxon rank-sum test estimates the difference between the medians of the distributions. This result generally does not hold when the symmetry assumption is violated.
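A minimal numerical check, computing the two-sample Hodges-Lehmann estimator as the median of all pairwise differences; the symmetric (normal) and skewed (exponential) examples are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 2_000

def hodges_lehmann(x, y):
    # two-sample Hodges-Lehmann: median of all pairwise differences y_j - x_i
    return np.median(np.subtract.outer(y, x))

# symmetric distributions (even with unequal spreads): HL tracks the
# difference of the medians
x, y = rng.normal(0.0, 1.0, n), rng.normal(1.0, 2.0, n)
print(hodges_lehmann(x, y), np.median(y) - np.median(x))      # both near 1.0

# skewed distributions: HL and the difference of medians drift apart
xs, ys = rng.exponential(1.0, n), rng.exponential(2.0, n)
print(hodges_lehmann(xs, ys), np.median(ys) - np.median(xs))  # ~0.58 vs ~0.69
```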


Subjects
Data Interpretation, Statistical, Models, Statistical, Survival Analysis, Statistics, Nonparametric