Results 1 - 9 of 9
1.
Stat Med ; 41(2): 407-432, 2022 Jan 30.
Article in English | MEDLINE | ID: mdl-34713468

ABSTRACT

The main purpose of many medical studies is to estimate the effect of a treatment or exposure on an outcome. However, it is not always possible to randomize study participants to a particular treatment, so observational study designs may be used instead. Observational studies pose major challenges, one of which is confounding. Confounding is commonly controlled by direct adjustment for measured confounders, although this approach can be suboptimal because of modeling assumptions and misspecification. Recent advances in the field of causal inference have addressed confounding by building on classical standardization methods. These advances have progressed quickly, however, with a relative paucity of computationally oriented applied tutorials, contributing to some confusion about the use of these methods among applied researchers. In this tutorial, we show the computational implementation of different causal inference estimators from a historical perspective in which new estimators were developed to overcome the limitations of earlier ones (i.e., the nonparametric and parametric g-formula, inverse probability weighting, double-robust estimators, and data-adaptive estimators). We illustrate the implementation of the different methods using an empirical example from the Connors study in intensive care medicine and, most importantly, provide reproducible and commented code in Stata, R, and Python for researchers to adapt to their own observational studies. The code can be accessed at https://github.com/migariane/Tutorial_Computational_Causal_Inference_Estimators.
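
To make the progression concrete, here is a minimal Python sketch of the two simplest estimators the tutorial covers, the parametric g-formula and inverse probability weighting, for a single binary treatment. This is our illustration on simulated data, not the tutorial's published code; the variable names and the data-generating model are assumptions.

```python
# Minimal sketch, assuming simulated data. Estimand: average treatment
# effect (ATE) of binary treatment A on binary outcome Y, confounded by W.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
expit = lambda x: 1.0 / (1.0 + np.exp(-x))

W = rng.normal(size=n)                      # measured confounder
A = rng.binomial(1, expit(W))               # treatment depends on W
Y = rng.binomial(1, expit(-1 + A + W))      # outcome depends on A and W

# Parametric g-formula: fit an outcome model, then standardize by
# predicting under A=1 and A=0 for every subject and averaging.
om = LogisticRegression(C=1e6).fit(np.column_stack([A, W]), Y)
p1 = om.predict_proba(np.column_stack([np.ones(n), W]))[:, 1]
p0 = om.predict_proba(np.column_stack([np.zeros(n), W]))[:, 1]
ate_gformula = np.mean(p1 - p0)

# Inverse probability weighting: fit a propensity model for A given W and
# weight each subject by the inverse probability of the treatment received.
ps = LogisticRegression(C=1e6).fit(W.reshape(-1, 1), A).predict_proba(W.reshape(-1, 1))[:, 1]
ate_ipw = np.mean(A * Y / ps) - np.mean((1 - A) * Y / (1 - ps))

print(f"g-formula ATE: {ate_gformula:.3f}  IPW ATE: {ate_ipw:.3f}")
```

Both estimators agree here because both working models are correctly specified for the simulated data; the double-robust and data-adaptive estimators the tutorial builds up to combine the two model fits so that only one of them needs to be correct.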


Subject(s)
Models, Statistical; Research Design; Causality; Computer Simulation; Humans; Probability; Propensity Score
2.
Int J Epidemiol ; 51(2): 641-667, 2022 May 9.
Article in English | MEDLINE | ID: mdl-34480556

ABSTRACT

BACKGROUND: Questions remain about the effect on mortality of physical activity and sedentary behaviour over time. We summarized the evidence from studies that assessed exposure at multiple time points and critiqued the analytic approaches used. METHODS: A search was performed on MEDLINE, Embase, Emcare, Scopus and Web of Science up to January 2021 for studies of repeatedly assessed physical activity or sedentary behaviour in relation to all-cause or cause-specific mortality. Relative risks were extracted from individual studies. Each study was assessed for risk of bias across multiple domains. RESULTS: We identified 64 eligible studies (57 on physical activity, 6 on sedentary behaviour, 1 on both). Cox regression with either a time-fixed exposure history (n = 45) or time-varying covariates (n = 13) was the most frequently used method. Only four studies used g-methods, which are designed to adjust for time-varying confounding. Risk of bias arose primarily from inadequate adjustment for time-varying confounders, participant selection, exposure classification and changes from measured exposure. Despite heterogeneity in methods, most studies found that being consistently or increasingly active over adulthood was associated with lower all-cause and cardiovascular-disease mortality compared with being always inactive. Few studies examined changes in physical activity and cancer mortality, or the effects of changes in sedentary behaviour on mortality outcomes. CONCLUSIONS: Accumulating more evidence from longitudinal data while addressing the methodological challenges would provide greater insight into the health effects of initiating or maintaining a more active and less sedentary lifestyle.


Subject(s)
Cardiovascular Diseases; Sedentary Behavior; Adult; Bias; Cause of Death; Exercise; Humans
3.
BMC Public Health ; 19(1): 1733, 2019 Dec 26.
Article in English | MEDLINE | ID: mdl-31878916

ABSTRACT

BACKGROUND: Adherence to a traditional Mediterranean diet has been associated with lower mortality and cardiovascular disease risk. The relative importance of diet compared with other lifestyle factors, and the effects of dietary patterns over time, remain unknown. METHODS: We used the parametric g-formula, accounting for time-dependent confounding, to assess the relative importance of diet compared with other lifestyle factors and the effects of dietary patterns over time. We included healthy Melbourne Collaborative Cohort Study participants attending a visit during 1995-1999. Questionnaires assessed diet and physical activity at each of three study waves. Deaths were identified by linkage to national registries. We estimated mortality risk over approximately 14 years (1995-2011). RESULTS: Of 22,213 participants, 2163 (9.7%) died during a median follow-up of 13.6 years. Sustained high physical activity and adherence to a Mediterranean-style diet resulted in an estimated reduction in all-cause mortality of 1.82 per 100 people (95% confidence interval (CI): 0.03, 3.6). The population attributable fraction was 13% (95% CI: 4, 23%) for sustained high physical activity, 7% (95% CI: -3, 17%) for sustained adherence to a Mediterranean-style diet, and 18% (95% CI: 0, 36%) for their combination. CONCLUSIONS: A small reduction in mortality may be achieved by sustained elevated physical activity levels in healthy middle-aged adults, but there may be comparatively little gain from increasing adherence to a Mediterranean-style diet.
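
As an illustration of the method named above, here is a hedged Python sketch of a two-wave parametric g-formula with a Monte Carlo simulation step, comparing a sustained ("always exposed") strategy with a never-exposed one. The variable names, parametric models, and simulated data are ours, not the study's.

```python
# A hedged sketch, not the study's code: two-wave parametric g-formula with
# Monte Carlo simulation of the time-varying confounder under a sustained
# intervention. L = time-varying confounder, A = exposure, Y = death.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
expit = lambda x: 1.0 / (1.0 + np.exp(-x))

L1 = rng.normal(size=n)
A1 = rng.binomial(1, expit(-L1))
L2 = 0.5 * L1 - 0.4 * A1 + rng.normal(size=n)     # affected by prior exposure
A2 = rng.binomial(1, expit(-L2 + A1))
Y = rng.binomial(1, expit(-2 + 0.8 * L2 - 0.5 * A2 - 0.3 * A1))

# Step 1: parametric models for each node given its observed past.
m_L2 = LinearRegression().fit(np.column_stack([L1, A1]), L2)
sd_resid = np.std(L2 - m_L2.predict(np.column_stack([L1, A1])), ddof=1)
m_Y = LogisticRegression(C=1e6).fit(np.column_stack([L2, A2, A1]), Y)

# Step 2: simulate forward under the intervention "set A1=a1, A2=a2".
def simulated_risk(a1, a2):
    a1_col, a2_col = np.full(n, a1), np.full(n, a2)
    L2_sim = m_L2.predict(np.column_stack([L1, a1_col])) + rng.normal(0, sd_resid, n)
    return m_Y.predict_proba(np.column_stack([L2_sim, a2_col, a1_col]))[:, 1].mean()

rd = simulated_risk(0, 0) - simulated_risk(1, 1)   # never vs always exposed
print(f"estimated risk difference per 100 people: {100 * rd:.2f}")
```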


Subject(s)
Diet, Mediterranean/statistics & numerical data; Exercise; Mortality/trends; Aged; Australia/epidemiology; Cohort Studies; Epidemiologic Methods; Female; Humans; Male; Middle Aged; Surveys and Questionnaires
4.
Stat Med ; 38(24): 4888-4911, 2019 Oct 30.
Article in English | MEDLINE | ID: mdl-31436859

ABSTRACT

Longitudinal targeted maximum likelihood estimation (LTMLE) has very rarely been used to estimate dynamic treatment effects in the context of time-dependent confounding affected by prior treatment when faced with long follow-up times, multiple time-varying confounders, and complex associational relationships simultaneously. Reasons for this include the potential computational burden, technical challenges, restricted modeling options for long follow-up times, and limited practical guidance in the literature. However, LTMLE has desirable asymptotic properties, i.e., it is doubly robust, and can yield valid inference when used in conjunction with machine learning. It also has the advantage of easy-to-calculate analytic standard errors, in contrast to the g-formula, which requires bootstrapping. We use a topical and sophisticated question from HIV treatment research to show that LTMLE can be used successfully in complex realistic settings, and we compare results to competing estimators. Our example illustrates the following practical challenges common to many epidemiological studies: (1) long follow-up time (30 months); (2) gradually declining sample size; (3) limited support for some intervention rules of interest; (4) a high-dimensional set of potential adjustment variables, increasing both the need for and the challenge of integrating appropriate machine learning methods; and (5) consideration of collider bias. Our analyses, as well as simulations, shed new light on the application of LTMLE in complex and realistic settings: we show that (1) LTMLE can yield stable and good estimates, even when confronted with small samples and limited modeling options; (2) machine learning with a small set of simple learners (if more complex ones cannot be fitted) can outperform a single complex model tailored to incorporate prior clinical knowledge; and (3) performance can vary considerably depending on interventions and their support in the data, so critical quality checks should accompany every LTMLE analysis. We provide guidance for the practical application of LTMLE.
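
The targeting idea behind (L)TMLE is easiest to see in the point-treatment case. Below is a hedged Python sketch of point-treatment TMLE for the average treatment effect, including the influence-curve-based analytic standard error the abstract refers to. A faithful longitudinal TMLE is considerably more involved; all names and the simulated data here are our assumptions.

```python
# Hedged sketch of *point-treatment* TMLE (the paper's LTMLE handles repeated
# treatments and confounders; this shows only the core targeting step).
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5_000
expit = lambda x: 1.0 / (1.0 + np.exp(-x))
logit = lambda p: np.log(p / (1 - p))

W = rng.normal(size=n)
A = rng.binomial(1, expit(0.5 * W))
Y = rng.binomial(1, expit(-1 + A + W))

# Initial outcome regression Q(A, W) and propensity score g(W).
Q = LogisticRegression(C=1e6).fit(np.column_stack([A, W]), Y)
g = LogisticRegression(C=1e6).fit(W.reshape(-1, 1), A).predict_proba(W.reshape(-1, 1))[:, 1]
QA = Q.predict_proba(np.column_stack([A, W]))[:, 1]
Q1 = Q.predict_proba(np.column_stack([np.ones(n), W]))[:, 1]
Q0 = Q.predict_proba(np.column_stack([np.zeros(n), W]))[:, 1]

# Targeting step: fluctuate the initial fit along the "clever covariate"
# H = A/g - (1-A)/(1-g) via a one-parameter logistic regression with offset.
H = A / g - (1 - A) / (1 - g)
eps = sm.GLM(Y, H.reshape(-1, 1), offset=logit(QA),
             family=sm.families.Binomial()).fit().params[0]

Q1s = expit(logit(Q1) + eps / g)         # updated prediction under A=1
Q0s = expit(logit(Q0) - eps / (1 - g))   # updated prediction under A=0
ate = np.mean(Q1s - Q0s)

# Influence-curve-based analytic standard error (no bootstrap needed).
QAs = expit(logit(QA) + eps * H)
ic = H * (Y - QAs) + (Q1s - Q0s) - ate
se = ic.std(ddof=1) / np.sqrt(n)
print(f"TMLE ATE: {ate:.3f} (analytic SE: {se:.3f})")
```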


Subject(s)
Anti-HIV Agents/administration & dosage; HIV Infections/drug therapy; Likelihood Functions; Causality; Child; Computer Simulation; Confounding Factors, Epidemiologic; HIV Infections/epidemiology; Humans; Sample Size
5.
Gac. méd. espirit ; 21(2): 146-160, May-Aug. 2019.
Article in Spanish | LILACS | ID: biblio-1090436 (also indexed in CUMED, ID: cum-76893)

ABSTRACT

BACKGROUND: Causality studies must provide accurate results, which depends on their adequacy; hence the need to know the methods that ensure the validity of such research. OBJECTIVE: To systematize current methods for the study of causality in medicine, including study design, the requirements that ensure validity, and the methods for meeting those requirements. DEVELOPMENT: A bibliographic review of biomedical databases was carried out, and the most current, comprehensive, and scientifically rigorous literature was selected, from which a critical synthesis was organized and supplemented with the authors' experience. Techniques are presented for detecting and handling confounding and interaction, and for ensuring comparability between groups. Notable techniques include Mendelian randomization, the susceptibility score, g-methods, marginal and nested structural models, fuzzy logic, and implicative statistical analysis. CONCLUSIONS: Despite advances in statistical methods, the researcher remains responsible for ensuring that no residual confounding is present and for distinguishing between the statistically significant and the clinically acceptable.


Subject(s)
Humans; Reproducibility of Results; Data Interpretation, Statistical; Biomedical Research/statistics & numerical data; Case-Control Studies; Regression Analysis; Models, Structural
6.
Stat Med ; 37(14): 2252-2266, 2018 Jun 30.
Article in English | MEDLINE | ID: mdl-29682776

ABSTRACT

Many modern estimators require bootstrapping to calculate confidence intervals because either no analytic standard error is available or the distribution of the parameter of interest is nonsymmetric. It remains unclear, however, how to obtain valid bootstrap inference when multiple imputation is used to address missing data. We present four methods that are intuitively appealing and easy to implement, each combining bootstrap estimation with multiple imputation. We show that three of the four approaches yield valid inference, but that the performance of the methods varies with respect to the number of imputed data sets and the extent of missingness. Simulation studies reveal the behavior of our approaches in finite samples. A topical analysis from HIV treatment research, which determines the optimal timing of antiretroviral treatment initiation in young children, demonstrates the practical implications of the four methods in a sophisticated and realistic setting. This analysis suffers from missing data and uses the g-formula for inference, a method for which no analytic standard errors are available.
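
As a concrete illustration of one intuitively appealing combination (bootstrap the incomplete data first, then multiply impute within each bootstrap sample and pool the estimates), here is a hedged Python sketch; the simple normal-model imputation step and the simulated data are ours, not the paper's.

```python
# A hedged sketch, not the paper's code: "bootstrap, then multiply impute".
# Resample the incomplete data, impute M times within each bootstrap sample,
# pool the M estimates, and take percentile intervals over the B replicates.
import numpy as np

rng = np.random.default_rng(3)
n = 500
x = rng.normal(loc=2.0, size=n)
x[rng.random(n) < 0.3] = np.nan        # 30% missing completely at random

def impute_once(v, rng):
    """One stochastic imputation: draw fill-ins from a normal fitted to observed values."""
    obs = v[~np.isnan(v)]
    out = v.copy()
    out[np.isnan(out)] = rng.normal(obs.mean(), obs.std(ddof=1), np.isnan(v).sum())
    return out

B, M = 200, 5
boot_estimates = []
for _ in range(B):
    sample = x[rng.integers(0, n, n)]                            # bootstrap resample
    ests = [impute_once(sample, rng).mean() for _ in range(M)]   # M imputations
    boot_estimates.append(np.mean(ests))                         # pool within replicate

lo, hi = np.percentile(boot_estimates, [2.5, 97.5])
print(f"point estimate: {np.mean(boot_estimates):.3f}, 95% CI: ({lo:.3f}, {hi:.3f})")
```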


Subject(s)
Biometry/methods; Models, Statistical; Anti-Retroviral Agents; Computer Simulation; HIV Infections/drug therapy; Humans
7.
Curr Epidemiol Rep ; 4(4): 288-297, 2017 Dec.
Article in English | MEDLINE | ID: mdl-29204332

ABSTRACT

PURPOSE OF REVIEW: Pharmacoepidemiologists are often interested in estimating the effects of dynamic treatment strategies, in which treatment is modified based on patients' evolving characteristics. For such problems, appropriate control of both baseline and time-varying confounders is critical. Conventional methods that control confounding by including time-varying treatments and confounders in an outcome regression model may not yield estimates with a causal interpretation, even when all baseline and time-varying confounders are measured. This problem occurs when time-varying confounders are themselves affected by past treatment. We review alternative analytic approaches that can produce valid inferences in the presence of such confounding. We focus on the parametric g-formula and inverse probability weighting of marginal structural models, two examples of Robins' g-methods. RECENT FINDINGS: Unlike standard outcome regression methods, the parametric g-formula and inverse probability weighting of marginal structural models can estimate the effects of dynamic treatment strategies and appropriately control for measured time-varying confounders affected by prior treatment. Few applications of g-methods exist in the pharmacoepidemiology literature, primarily because of the common use of administrative claims data, which typically lack detailed measurements of time-varying information, and the limited availability of, or familiarity with, tools to help perform the relatively complex analysis. These barriers may be overcome with the increasing availability of data sources containing more detailed time-varying information and of more accessible learning tools and software. SUMMARY: With appropriate data and study design, g-methods can improve our ability to make causal inferences on dynamic treatment strategies from observational data in pharmacoepidemiology.
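
The core computational object in the inverse-probability-weighting approach is the stabilized weight, a product over time of treatment probabilities. Here is a hedged Python sketch for a two-interval treatment with a confounder affected by prior treatment; the data-generating model and variable names are illustrative assumptions, not from the review.

```python
# Hedged sketch of stabilized inverse-probability-of-treatment weights for a
# two-interval treatment, the building block of an IPW-fitted marginal
# structural model (MSM). Simulated data and names are ours.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 10_000
expit = lambda x: 1.0 / (1.0 + np.exp(-x))

L1 = rng.normal(size=n)
A1 = rng.binomial(1, expit(L1))
L2 = 0.6 * L1 + 0.5 * A1 + rng.normal(size=n)   # confounder affected by past treatment
A2 = rng.binomial(1, expit(L2 + 0.5 * A1))
Y = -1.0 * (A1 + A2) + L2 + rng.normal(size=n)  # continuous outcome

def p_received(X, A):
    """P(A = observed value | X) from a logistic model."""
    p = LogisticRegression(C=1e6).fit(X, A).predict_proba(X)[:, 1]
    return np.where(A == 1, p, 1 - p)

# Denominator conditions on confounders; numerator only on past treatment.
den = p_received(L1.reshape(-1, 1), A1) * p_received(np.column_stack([L2, A1]), A2)
num = np.where(A1 == 1, A1.mean(), 1 - A1.mean()) * p_received(A1.reshape(-1, 1), A2)
sw = num / den                                   # stabilized weights

# MSM: weighted regression of Y on cumulative treatment.
X = np.column_stack([np.ones(n), A1 + A2])
sqrt_w = np.sqrt(sw)
beta = np.linalg.lstsq(X * sqrt_w[:, None], Y * sqrt_w, rcond=None)[0]
print(f"MSM effect per treated interval: {beta[1]:.3f}")
```

Note that the outcome model regresses on treatment only; the confounders enter solely through the weights, which is what allows L2 to be handled even though it is affected by A1.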

8.
Curr Environ Health Rep ; 4(3): 364-372, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28712046

ABSTRACT

PURPOSE OF REVIEW: We offer an in-depth discussion of the time-varying confounding and selection bias mechanisms that give rise to the healthy worker survivor effect (HWSE). RECENT FINDINGS: In this update of an earlier review, we distinguish between the mechanisms collectively known as the HWSE and the statistical bias that can result. This discussion highlights the importance of identifying both the target parameter and the target population for any research question in occupational epidemiology. Target parameters can correspond to hypothetical workplace interventions; we explore whether these target parameters' true values reflect the etiologic effect of an exposure on an outcome or the potential impact of enforcing an exposure limit in a more realistic setting. If a cohort includes workers hired before the start of follow-up, HWSE mechanisms can limit the transportability of the estimates to other target populations. We summarize recent publications that applied g-methods to control for the HWSE, focusing on their target parameters, target populations, and hypothetical interventions.


Subject(s)
Healthy Worker Effect; Occupational Diseases/epidemiology; Occupational Exposure/analysis; Survivors; Bias; Humans; Selection Bias
9.
Int J Epidemiol ; 46(2): 756-762, 2017 Apr 1.
Article in English | MEDLINE | ID: mdl-28039382

ABSTRACT

Robins' generalized methods (g methods) provide consistent estimates of contrasts (e.g. differences, ratios) of potential outcomes under a less restrictive set of identification conditions than do standard regression methods (e.g. linear, logistic, Cox regression). Uptake of g methods by epidemiologists has been hampered by limitations in understanding both conceptual and technical details. We present a simple worked example that illustrates basic concepts, while minimizing technical complications.
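
In the same minimal spirit as that worked example (though with toy numbers of our own, not the paper's), the nonparametric g-formula reduces to standardization: estimate stratum-specific risks and average them over the marginal distribution of the confounder.

```python
# A toy illustration of our own (not the paper's worked example): the
# nonparametric g-formula as standardization over a binary confounder L.
import pandas as pd

df = pd.DataFrame({
    "L": [0, 0, 0, 0, 1, 1, 1, 1],
    "A": [0, 0, 1, 1, 0, 0, 1, 1],
    "Y": [0, 1, 0, 0, 1, 1, 1, 0],
})

risk = df.groupby(["L", "A"])["Y"].mean()     # Pr(Y=1 | A=a, L=l)
pL = df["L"].value_counts(normalize=True)     # Pr(L=l)

# Counterfactual risks: sum over l of Pr(Y=1 | A=a, L=l) * Pr(L=l)
risk_treated = sum(risk[(l, 1)] * pL[l] for l in pL.index)
risk_untreated = sum(risk[(l, 0)] * pL[l] for l in pL.index)
print(f"standardized risk difference: {risk_treated - risk_untreated:.3f}")
```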


Subject(s)
Causality; Epidemiologic Research Design; Models, Statistical; Computer Simulation; Confounding Factors, Epidemiologic; Humans