Results 1 - 20 of 21
1.
Clin Trials ; : 17407745241251851, 2024 Jun 02.
Article in English | MEDLINE | ID: mdl-38825839
2.
Clin Trials ; : 17407745241251568, 2024 Jun 02.
Article in English | MEDLINE | ID: mdl-38825841

ABSTRACT

There has been growing interest in covariate adjustment in the analysis of randomized controlled trials in recent years. For instance, the US Food and Drug Administration recently issued guidance that emphasizes the importance of distinguishing between conditional and marginal treatment effects. Although these effects may sometimes coincide in the context of linear models, this is not typically the case in other settings, and this distinction is often overlooked in clinical trial practice. Considering these developments, this article provides a review of when and how to use covariate adjustment to enhance precision in randomized controlled trials. We describe the differences between conditional and marginal estimands and stress the necessity of aligning statistical analysis methods with the chosen estimand. In addition, we highlight the potential misalignment of commonly used methods in estimating marginal treatment effects. We therefore advocate for the use of the standardization approach, as it can improve efficiency by leveraging the information contained in baseline covariates while remaining robust to model misspecification. Finally, we present practical considerations that have arisen in our respective consultations to further clarify the advantages and limitations of covariate adjustment.
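The standardization (g-computation) approach advocated here can be sketched in a few lines: fit a working outcome model that includes baseline covariates, predict each participant's outcome under both treatment assignments, and contrast the averages of these predictions. The snippet below is a minimal illustration on simulated trial data; the variable names and the linear working model are assumptions of this sketch, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                             # baseline covariate
a = rng.integers(0, 2, size=n)                     # randomized treatment
y = 1.0 + 2.0 * a + 1.5 * x + rng.normal(size=n)   # outcome; true effect 2.0

# Fit a working outcome model Y ~ A + X by ordinary least squares.
design = np.column_stack([np.ones(n), a, x])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)

# Standardization: predict for everyone under A=1 and under A=0, then average.
mu1 = np.column_stack([np.ones(n), np.ones(n), x]) @ beta
mu0 = np.column_stack([np.ones(n), np.zeros(n), x]) @ beta
marginal_effect = mu1.mean() - mu0.mean()
print(round(marginal_effect, 2))  # close to the true effect of 2.0
```

Because treatment is randomized, the resulting marginal effect estimate remains consistent even if the covariate part of the working model is misspecified; the baseline covariates serve only to improve precision.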

3.
Am J Epidemiol ; 192(10): 1772-1780, 2023 10 10.
Article in English | MEDLINE | ID: mdl-37338999

ABSTRACT

Randomized trials offer a powerful strategy for estimating the effect of a treatment on an outcome. However, interpretation of trial results can be complicated when study subjects do not take the treatment to which they were assigned; this is referred to as nonadherence. Prior authors have described instrumental variable approaches to analyze trial data with nonadherence; under their approaches, the initial assignment to treatment is used as an instrument. However, their approaches require the assumption that initial assignment to treatment has no direct effect on the outcome except via the actual treatment received (i.e., the exclusion restriction), which may be implausible. We propose an approach to identification of a causal effect of treatment in a trial with 1-sided nonadherence without assuming the exclusion restriction. The proposed approach leverages the study subjects initially assigned to control status as an unexposed reference population; we then employ a bespoke instrumental variable analysis, where the key assumption is "partial exchangeability" of the association between a covariate and an outcome in the treatment and control arms. We provide a formal description of the conditions for identification of causal effects, illustrate the method using simulations, and provide an empirical application.


Subjects
Clinical Trials as Topic, Patient Compliance, Humans, Causality
4.
Biometrics ; 79(4): 3096-3110, 2023 12.
Article in English | MEDLINE | ID: mdl-37349873

ABSTRACT

The problem of how to best select variables for confounding adjustment forms one of the key challenges in the evaluation of exposure effects in observational studies, and has been the subject of vigorous recent activity in causal inference. A major drawback of routine procedures is that there is no finite sample size at which they are guaranteed to deliver exposure effect estimators and associated confidence intervals with adequate performance. In this work, we consider this problem when inferring conditional causal hazard ratios from observational studies under the assumption of no unmeasured confounding. The major complication that we face with survival data is that the key confounding variables may not be those that explain the censoring mechanism. In this paper, we overcome this problem using a novel and simple procedure that can be implemented using off-the-shelf software for penalized Cox regression. In particular, we propose tests of the null hypothesis that the exposure has no effect on the considered survival endpoint, which are uniformly valid under standard sparsity conditions. Simulation results show that the proposed methods yield valid inferences even when covariates are high-dimensional.


Subjects
Software, Bias, Computer Simulation, Proportional Hazards Models, Sample Size
5.
Arch Gynecol Obstet ; 308(4): 1085-1091, 2023 10.
Article in English | MEDLINE | ID: mdl-36738316

ABSTRACT

Administration of antenatal corticosteroids (ACS) for accelerating foetal lung maturation in threatened preterm birth is one of the cornerstones of prevention of neonatal mortality and morbidity. To identify the optimal timing of ACS administration, most studies have compared subgroups based on treatment-to-delivery intervals. Such subgroup analysis of the first placebo-controlled randomised controlled trial indicated that a one- to seven-day interval between ACS administration and birth resulted in the lowest rates of neonatal respiratory distress syndrome. This efficacy window was largely confirmed by a series of subgroup analyses of subsequent trials and observational studies and strongly influenced obstetric management. However, these subgroup analyses suffer from a methodological flaw that often seems to be overlooked and potentially has important consequences for drawing valid conclusions. In this commentary, we point out that studies comparing treatment outcomes between subgroups that are retrospectively identified at birth (i.e. after randomisation) may not only be plagued by post-randomisation confounding bias but, more importantly, may not adequately inform decision making before birth, when the projected duration of the interval is still unknown. We suggest two more formal interpretations of these subgroup analyses, using a counterfactual framework for causal inference, and demonstrate that each of these interpretations can be linked to a different hypothetical trial. However, given the infeasibility of these trials, we argue that none of these rescue interpretations are helpful for clinical decision making. As a result, guidelines based on these subgroup analyses may have led to suboptimal clinical practice. As an alternative to these flawed subgroup analyses, we suggest a more principled approach that clearly formulates the question about optimal timing of ACS treatment in terms of the protocol of a future randomised study. Even if this 'target trial' is never conducted, its protocol may still provide important guidance to avoid repeating common design flaws when conducting observational 'real world' studies using statistical methods for causal inference.


Subjects
Premature Birth, Respiratory Distress Syndrome, Newborn, Infant, Newborn, Pregnancy, Female, Humans, Premature Birth/prevention & control, Premature Birth/drug therapy, Retrospective Studies, Corticosteroids/therapeutic use, Respiratory Distress Syndrome, Newborn/prevention & control, Infant Mortality, Randomized Controlled Trials as Topic
7.
Stat Med ; 41(16): 3211-3228, 2022 07 20.
Article in English | MEDLINE | ID: mdl-35578779

ABSTRACT

Intercurrent (post-treatment) events occur frequently in randomized trials, and investigators often express interest in treatment effects that suitably take account of these events. Contrasts that naively condition on intercurrent events do not have a straightforward causal interpretation, and the practical relevance of other commonly used approaches is debated. In this work, we discuss how to formulate and choose an estimand, beyond the marginal intention-to-treat effect, from the point of view of a decision maker and drug developer. In particular, we argue that careful articulation of a practically useful research question should reflect either decision making at this point in time or future drug development. Indeed, an estimand of substantive interest is simply a formalization of the (plain English) description of a research question. A common feature of estimands that are practically useful is that they correspond to possibly hypothetical but well-defined interventions in identifiable (sub)populations. To illustrate our points, we consider five examples that were recently used to motivate consideration of principal stratum estimands in clinical trials. In all of these examples, we propose alternative causal estimands, such as conditional effects, sequential regime effects, and separable effects, that correspond to explicit research questions of substantial interest.


Subjects
Statistical Models, Research Design, Causality, Data Interpretation, Statistical, Humans, Randomized Controlled Trials as Topic
9.
Stat Med ; 40(16): 3779-3790, 2021 07 20.
Article in English | MEDLINE | ID: mdl-33942919

ABSTRACT

Using data from observational studies to estimate the causal effect of a time-varying exposure, repeatedly measured over time, on an outcome of interest requires careful adjustment for confounding. Standard regression adjustment for observed time-varying confounders is unsuitable, as it can eliminate part of the causal effect and induce bias. Inverse probability weighting, g-computation, and g-estimation have been proposed as being more suitable methods. G-estimation has some advantages over the other two methods, but until recently there has been a lack of flexible g-estimation methods for a survival time outcome. The recently proposed Structural Nested Cumulative Survival Time Model (SNCSTM) is such a method. Efficient estimation of the parameters of this model required bespoke software. In this article we show how the SNCSTM can be fitted efficiently via g-estimation using standard software for fitting generalised linear models. The ability to implement g-estimation for a survival outcome using standard statistical software greatly increases the potential uptake of this method. We illustrate the use of this method of fitting the SNCSTM by reanalyzing data from the UK Cystic Fibrosis Registry, and provide example R code to facilitate the use of this approach by other researchers.


Subjects
Statistical Models, Bias, Causality, Humans, Linear Models, Probability
10.
Stat Med ; 40(18): 4108-4121, 2021 08 15.
Article in English | MEDLINE | ID: mdl-33978249

ABSTRACT

The analysis of randomized trials with time-to-event endpoints is nearly always plagued by the problem of censoring. In practice, such analyses typically invoke the assumption of noninformative censoring. While this assumption usually becomes more plausible as more baseline covariates are being adjusted for, such adjustment also raises concerns. Prespecification of which covariates will be adjusted for (and how) is difficult, thus prompting the use of data-driven variable selection procedures, which may prevent valid inferences from being drawn. Covariate adjustment moreover raises concerns about model misspecification, and each change in the adjustment set also changes the censoring assumption and the treatment effect estimand. In this article, we discuss these concerns and propose a simple variable selection strategy designed to produce a valid test of the null in large samples. The proposal can be implemented using off-the-shelf software for (penalized) Cox regression, and is empirically found to work well in simulation studies and real data analyses.


Subjects
Randomized Controlled Trials as Topic, Computer Simulation, Humans
11.
Eur Heart J ; 41(35): 3325-3333, 2020 09 14.
Article in English | MEDLINE | ID: mdl-33011775

ABSTRACT

AIMS: Cardiovascular disease (CVD) risk prediction models are used in Western European countries, but less so in Eastern European countries where rates of CVD can be two to four times higher. We recalibrated the SCORE prediction model for three Eastern European countries and evaluated the impact of adding seven behavioural and psychosocial risk factors to the model. METHODS AND RESULTS: We developed and validated models using data from the prospective HAPIEE cohort study with 14 598 participants from Russia, Poland, and the Czech Republic (derivation cohort, median follow-up 7.2 years, 338 fatal CVD cases) and Estonian Biobank data with 4632 participants (validation cohort, median follow-up 8.3 years, 91 fatal CVD cases). The first model (recalibrated SCORE) used the same risk factors as in the SCORE model. The second model (HAPIEE SCORE) added education, employment, marital status, depression, body mass index, physical inactivity, and antihypertensive use. Discrimination of the original SCORE model (C-statistic 0.78 in the derivation and 0.83 in the validation cohorts) was improved in recalibrated SCORE (0.82 and 0.85) and HAPIEE SCORE (0.84 and 0.87) models. After dichotomizing risk at the clinically meaningful threshold of 5%, and when comparing the final HAPIEE SCORE model against the original SCORE model, the net reclassification improvement was 0.07 [95% confidence interval (CI) 0.02-0.11] in the derivation cohort and 0.14 (95% CI 0.04-0.25) in the validation cohort. CONCLUSION: Our recalibrated SCORE may be more appropriate than the conventional SCORE for some Eastern European populations. The addition of seven quick, non-invasive, and cheap predictors further improved prediction accuracy.
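The net reclassification improvement reported above can be computed directly once each model's predicted risks are dichotomized at the clinical threshold. Below is a minimal sketch of a single-threshold NRI at 5%; the function name and the toy data are illustrative assumptions, not values from the study.

```python
import numpy as np

def nri_at_threshold(risk_old, risk_new, event, thr=0.05):
    """Net reclassification improvement for a single risk threshold."""
    old_high = risk_old >= thr
    new_high = risk_new >= thr
    up = new_high & ~old_high      # reclassified upward by the new model
    down = ~new_high & old_high    # reclassified downward by the new model
    ev, ne = event == 1, event == 0
    nri_events = up[ev].mean() - down[ev].mean()       # want events moved up
    nri_nonevents = down[ne].mean() - up[ne].mean()    # want non-events moved down
    return nri_events + nri_nonevents

# Toy check: the new model moves one event above, and one non-event below, 5%.
risk_old = np.array([0.04, 0.06, 0.06, 0.02])
risk_new = np.array([0.06, 0.06, 0.04, 0.02])
event    = np.array([1, 1, 0, 0])
print(nri_at_threshold(risk_old, risk_new, event))  # 0.5 + 0.5 = 1.0
```

In practice the two risk vectors would come from the original and recalibrated models evaluated on the same cohort, with confidence intervals obtained by bootstrapping.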


Subjects
Cardiovascular Diseases, Cardiovascular Diseases/epidemiology, Cohort Studies, Czech Republic, Heart Disease Risk Factors, Humans, Poland, Prospective Studies, Risk Assessment, Risk Factors, Russian Federation
12.
Biometrics ; 76(4): 1190-1200, 2020 12.
Article in English | MEDLINE | ID: mdl-32002989

ABSTRACT

After variable selection, standard inferential procedures for regression parameters may not be uniformly valid; there is no finite-sample size at which a standard test is guaranteed to approximately attain its nominal size. This problem is exacerbated in high-dimensional settings, where variable selection becomes unavoidable. This has prompted a flurry of activity in developing uniformly valid hypothesis tests for a low-dimensional regression parameter (e.g., the causal effect of an exposure A on an outcome Y) in high-dimensional models. So far there has been limited focus on model misspecification, although this is inevitable in high-dimensional settings. We propose tests of the null that are uniformly valid under sparsity conditions weaker than those typically invoked in the literature, assuming working models for the exposure and outcome are both correctly specified. When one of the models is misspecified, by amending the procedure for estimating the nuisance parameters, our tests continue to be valid; hence, they are doubly robust. Our proposals are straightforward to implement using existing software for penalized maximum likelihood estimation and do not require sample splitting. We illustrate them in simulations and an analysis of data obtained from the Ghent University intensive care unit.


Subjects
Computer Simulation, Causality, Humans, Sample Size
13.
Stat Methods Med Res ; 29(3): 677-694, 2020 03.
Article in English | MEDLINE | ID: mdl-31385558

ABSTRACT

The problem of how to best select variables for confounding adjustment forms one of the key challenges in the evaluation of exposure or treatment effects in observational studies. Routine practice is often based on stepwise selection procedures that use hypothesis testing, change-in-estimate assessments or the lasso, which have all been criticised for, amongst other things, not giving sufficient priority to the selection of confounders. This has prompted vigorous recent activity in developing procedures that prioritise the selection of confounders, while preventing the selection of so-called instrumental variables that are associated with exposure, but not outcome (after adjustment for the exposure). A major drawback of all these procedures is that there is no finite sample size at which they are guaranteed to deliver treatment effect estimators and associated confidence intervals with adequate performance. This is the result of the estimator jumping back and forth between different selected models, and standard confidence intervals ignoring the resulting model selection uncertainty. In this paper, we develop insight into this by evaluating the finite-sample distribution of the exposure effect estimator in linear regression, under a number of the aforementioned confounder selection procedures. We show that by making clever use of propensity scores, a simple and generic solution is obtained in the context of generalized linear models, which overcomes this concern (under weaker conditions than competing proposals). Specifically, we propose to use separate regularized regressions for the outcome and propensity score models in order to construct a doubly robust 'g-estimator'; when these models are sufficiently sparse and correctly specified, standard confidence intervals for the g-estimator implicitly incorporate the uncertainty induced by the variable selection procedure.
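As a rough sketch of this recipe, separate regularized regressions for the propensity score and the outcome (here fitted among controls) can feed a closed-form doubly robust 'g-estimator'. The simulation, the model choices, and the use of scikit-learn estimators below are all illustrative assumptions of this sketch, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegressionCV

rng = np.random.default_rng(3)
n, p = 2000, 20
L = rng.normal(size=(n, p))                                  # covariates; only two matter
a = rng.binomial(1, 1 / (1 + np.exp(-(L[:, 0] - L[:, 1]))))  # confounded exposure
y = 1.0 * a + L[:, 0] + L[:, 1] + rng.normal(size=n)         # outcome; true effect 1.0

# Two separate regularized working models.
ps_fit = LogisticRegressionCV(penalty="l1", solver="liblinear", cv=5).fit(L, a)
e = ps_fit.predict_proba(L)[:, 1]                       # propensity score e(L)
m = LassoCV(cv=5).fit(L[a == 0], y[a == 0]).predict(L)  # outcome model E[Y | A=0, L]

# Doubly robust g-estimator: psi solves sum((A - e(L)) * (Y - psi*A - m(L))) = 0,
# and is consistent if either working model is correctly specified.
psi = np.sum((a - e) * (y - m)) / np.sum((a - e) * a)
print(round(psi, 2))  # close to the true effect of 1.0
```

The double robustness is what lets standard confidence intervals absorb the uncertainty from the lasso's variable selection, provided both models are sufficiently sparse.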


Subjects
Propensity Score, Causality, Computer Simulation, Confidence Intervals, Sample Size
14.
Biometrics ; 76(2): 472-483, 2020 06.
Article in English | MEDLINE | ID: mdl-31562652

ABSTRACT

Accounting for time-varying confounding when assessing the causal effects of time-varying exposures on survival time is challenging. Standard survival methods that incorporate time-varying confounders as covariates generally yield biased effect estimates. Estimators using weighting by inverse probability of exposure can be unstable when confounders are highly predictive of exposure or the exposure is continuous. Structural nested accelerated failure time models (AFTMs) require artificial recensoring, which can cause estimation difficulties. Here, we introduce the structural nested cumulative survival time model (SNCSTM). This model assumes that intervening to set exposure at time t to zero has an additive effect on the subsequent conditional hazard given exposure and confounder histories when all subsequent exposures have already been set to zero. We show how to fit it using standard software for generalized linear models and describe two more efficient, double robust, closed-form estimators. All three estimators avoid the artificial recensoring of AFTMs and the instability of estimators that use weighting by the inverse probability of exposure. We examine the performance of our estimators using a simulation study and illustrate their use on data from the UK Cystic Fibrosis Registry. The SNCSTM is compared with a recently proposed structural nested cumulative failure time model, and several advantages of the former are identified.


Subjects
Statistical Models, Survival Analysis, Biometry, Computer Simulation, Confidence Intervals, Confounding Factors, Epidemiologic, Cystic Fibrosis/drug therapy, Cystic Fibrosis/mortality, Deoxyribonucleases/therapeutic use, Humans, Linear Models, Proportional Hazards Models, Registries/statistics & numerical data, Time Factors, United Kingdom/epidemiology
15.
Biometrics ; 75(1): 100-109, 2019 03.
Article in English | MEDLINE | ID: mdl-30133696

ABSTRACT

The estimation of conditional treatment effects in an observational study with a survival outcome typically involves fitting a hazards regression model adjusted for a high-dimensional covariate. Standard estimation of the treatment effect is then not entirely satisfactory, as the misspecification of the effect of this covariate may induce a large bias. Such misspecification is a particular concern when inferring the hazard difference, because it is difficult to postulate additive hazards models that guarantee non-negative hazards over the entire observed covariate range. We therefore consider a novel class of semiparametric additive hazards models which leave the effects of covariates unspecified. The efficient score under this model is derived. We then propose two different estimation approaches for the hazard difference (and hence also the relative chance of survival), both of which yield estimators that are doubly robust. The approaches are illustrated using simulation studies and data on right heart catheterization and mortality from the SUPPORT study.


Subjects
Data Interpretation, Statistical, Proportional Hazards Models, Bias, Cardiac Catheterization/mortality, Cardiac Catheterization/statistics & numerical data, Computer Simulation, Humans, Observational Studies as Topic, Survival Analysis
16.
Am J Epidemiol ; 187(5): 1079-1084, 2018 05 01.
Article in English | MEDLINE | ID: mdl-29538720

ABSTRACT

G-estimation is a flexible, semiparametric approach for estimating exposure effects in epidemiologic studies. It has several underappreciated advantages over other propensity score-based methods popular in epidemiology, which we review in this article. However, it is rarely used in practice, due to a lack of off-the-shelf software. To rectify this, we show a simple trick for obtaining G-estimators of causal risk ratios using existing generalized estimating equations software. We extend the procedure to more complex settings with time-varying confounders.
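The article's trick targets causal risk ratios via generalized estimating equations software; as a self-contained illustration of the underlying g-estimation logic, here is the additive-effect analogue in plain numpy (simulated data; the linear working propensity model is an assumption of this sketch). The estimator picks the effect psi that makes the "blipped-down" outcome Y - psi*A uncorrelated with the residual of the exposure given the confounder.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
l = rng.normal(size=n)                      # confounder
a = rng.binomial(1, 1 / (1 + np.exp(-l)))   # exposure depends on the confounder
y = 1.5 * a + l + rng.normal(size=n)        # outcome; true causal effect 1.5

# Working propensity model: regress A on (1, L) and take the fitted values.
d = np.column_stack([np.ones(n), l])
e = d @ np.linalg.lstsq(d, a, rcond=None)[0]

# G-estimation: choose psi so that Y - psi*A is uncorrelated with A - e(L).
psi = np.sum((a - e) * y) / np.sum((a - e) * a)
print(round(psi, 2))  # close to the true effect of 1.5
```

Because the propensity residuals are orthogonal to the confounder by construction, the estimating equation removes the confounding that a naive regression of Y on A would absorb.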


Subjects
Biometry/methods, Odds Ratio, Statistics as Topic/methods, Confounding Factors, Epidemiologic, Humans, Software
17.
Biostatistics ; 19(4): 426-443, 2018 10 01.
Article in English | MEDLINE | ID: mdl-29028924

ABSTRACT

Mendelian randomization studies employ genotypes as experimental handles to infer the effect of genetically modified exposures (e.g. vitamin D exposure) on disease outcomes (e.g. mortality). The statistical analysis of these studies makes use of the standard instrumental variables framework. Many of these studies focus on elderly populations, thereby ignoring the problem of left truncation, which arises due to the selection of study participants being conditional upon surviving up to the time of study onset. Such selection, in general, invalidates the assumptions on which the instrumental variables analysis rests. We show that Mendelian randomization studies of adult or elderly populations will therefore, in general, return biased estimates of the exposure effect when the considered genotype affects mortality; in contrast, standard tests of the causal null hypothesis that the exposure does not affect the mortality rate remain unbiased, even when they ignore this problem of left truncation. To eliminate "survivor bias" or "truncation bias" from the effect of exposure on mortality, we next propose various simple strategies under a semi-parametric additive hazard model. We examine the performance of the proposed methods in simulation studies and use them to infer the effect of vitamin D on all-cause mortality based on the Monica10 study with the genetic variant filaggrin as instrumental variable.
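For orientation, the standard instrumental variables analysis these studies build on (before any survivor-bias correction) reduces, for a single instrument, to the Wald ratio estimator. Below is a minimal simulated sketch; the effect sizes and variable names are illustrative assumptions, not values from the Monica10 data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50000
z = rng.binomial(1, 0.3, size=n)            # genetic variant (instrument)
u = rng.normal(size=n)                      # unmeasured confounder
x = 0.5 * z + u + rng.normal(size=n)        # exposure, e.g. a vitamin D level
y = 0.8 * x + u + rng.normal(size=n)        # outcome; true causal effect 0.8

# Wald (ratio) instrumental variable estimator: cov(Z, Y) / cov(Z, X).
beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]
print(round(beta_iv, 2))  # close to the true effect of 0.8
```

The article's point is that selecting participants on survival up to study entry generally invalidates the assumptions behind this ratio when the genotype affects mortality, even though tests of the causal null remain valid under such left truncation.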


Subjects
Bias, Biostatistics/methods, Mendelian Randomization Analysis, Statistical Models, Adult, Aged, Filaggrin Proteins, Humans, Intermediate Filament Proteins/genetics, Middle Aged, Vitamin D/blood
19.
J Antimicrob Chemother ; 72(3): 914-922, 2017 03 01.
Article in English | MEDLINE | ID: mdl-27999063

ABSTRACT

Objectives: To investigate the predictors of general practitioner (GP) consultation and antibiotic use in those developing sore throat. Methods: We conducted a prospective population-based cohort study on 4461 participants in two rounds (2010-11) from 1897 households. Results: Participants reported 2193 sore throat illnesses, giving a community sore throat incidence of 1.57/person-year. 13% of sore throat illnesses led to a GP consultation and 56% of these consultations led to antibiotic use. Participants most likely to have sore throats included women and children (e.g. school compared with retirement age); adjusted incidence rate ratio (aIRR) of 1.33 and 1.52, respectively. Participants with sore throat were more likely to consult their GP if they were preschool compared with retirement age [adjusted OR (aOR) 3.22], had more days of sore throat (aOR 1.11), reported more severe pain (aOR 4.24) or reported fever (aOR 3.82). Antibiotics were more often used by chronically ill individuals (aOR 1.78), those reporting severe pain (aOR 4.14), those reporting fever (aOR 2.58) or children with earache (aOR 1.85). Among those who consulted, males and adults who reported feeling anxious were more likely to use antibiotics; aOR 1.87 and 5.36, respectively. Conclusions: Only 1 in 10 people who have a sore throat see a doctor and more than half of those attending get antibiotics. Further efforts to curb antibiotic use should focus on reducing initial GP consultations through public information promoting safe self-management, targeted at groups identified above as most likely to attend with sore throats.


Subjects
Anti-Bacterial Agents/therapeutic use, Drug Prescriptions, Pharyngitis/drug therapy, Self Report, Adolescent, Adult, Aged, Child, Child, Preschool, Chronic Disease, Female, Humans, Infant, Infant, Newborn, Male, Middle Aged, Pain, Prospective Studies, Young Adult
20.
Biores Open Access ; 4(1): 160-3, 2015.
Article in English | MEDLINE | ID: mdl-26309792

ABSTRACT

Human albumin is the most abundant protein in serum and a valuable biomarker for monitoring a variety of diseases. In this study we investigated the relationship between serum albumin concentrations and the initiation of highly active antiretroviral therapy (HAART). Serum albumin concentrations in 70 HIV-infected patients of diverse ethnicities, free of other confounding comorbidities, were analyzed over a period of 8 years in South East London, United Kingdom. Serum albumin data were collected, on average, every 4-6 weeks during routine visits. Serum albumin was measured before starting HAART and again at the first clinic visit after commencing HAART; these measurements were compared with those of a control group of untreated individuals. Based on our analyses, we conclude that serum albumin concentrations increase significantly after the initiation of therapy.
