Results 1 - 6 of 6
1.
Am J Epidemiol ; 191(12): 2084-2097, 2022 11 19.
Article in English | MEDLINE | ID: mdl-35925053

ABSTRACT

We estimated the degree to which language used in the high-profile medical/public health/epidemiology literature implied causality, using language linking exposures to outcomes and action recommendations; examined disconnects between language and recommendations; identified the most common linking phrases; and estimated how strongly linking phrases imply causality. We searched for and screened 1,170 articles from 18 high-profile journals (65 per journal) published from 2010 to 2019. Based on written framing and systematic guidance, 3 reviewers rated the degree of causality implied in abstracts and full text for exposure/outcome linking language and action recommendations. Reviewers rated the causal implication of exposure/outcome linking language as none (no causal implication) in 13.8%, weak in 34.2%, moderate in 33.2%, and strong in 18.7% of abstracts. The implied causality of action recommendations was higher than that of the linking sentences for 44.5% of articles and commensurate for 40.3%. The most common linking word in abstracts was "associate" (45.7%). Reviewers' ratings of linking word roots were highly heterogeneous; over half of reviewers rated "association" as having at least some causal implication. This research undercuts the assumption that avoiding "causal" words leads to clarity of interpretation in medical research.


Subjects
Biomedical Research, Language, Humans, Causality
2.
Biom J ; 64(8): 1389-1403, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34993990

ABSTRACT

In causal studies, near-violation of the positivity assumption may occur by chance, because of sample-to-sample fluctuation, even though the assumption holds in the population. This mostly happens when the exposure prevalence is low or when the sample size is small. We aimed to compare the robustness of g-computation (GC), inverse probability weighting (IPW), truncated IPW, targeted maximum likelihood estimation (TMLE), and truncated TMLE in this situation, using simulations and one real application. We also tested different extrapolation situations for the subgroup with a positivity violation. The results illustrated that near-violation of positivity affected all methods. We demonstrated the robustness of GC and TMLE-based methods. Truncation helped limit the bias in near-violation situations, but at the cost of bias under normal conditions. The application illustrated the variability of the results between the methods and the importance of choosing the most appropriate one. In conclusion, when near-violation of the positivity assumption is suspected, methods based on outcome regression should be preferred over propensity score-based methods.
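The truncation trade-off described in this abstract can be illustrated with a minimal, self-contained sketch (Python with scikit-learn; purely illustrative, not the paper's implementation): estimated propensity scores are bounded away from 0 and 1 before weighting, which stabilises the IPW estimate under near-violation of positivity at the price of some bias. The simulated data, the 0.05/0.95 truncation bounds, and the Hajek-style estimator are all assumptions of this sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=(n, 1))
# Low exposure prevalence -> propensity scores near 0 (near-violation of positivity)
p_a = 1.0 / (1.0 + np.exp(-(-2.5 + 2.0 * x[:, 0])))
a = rng.binomial(1, p_a)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-1.0 + 1.0 * a + 0.5 * x[:, 0]))))

# Estimated propensity scores from a logistic working model
ps = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]

def ipw_ate(a, y, ps):
    # Hajek-style IPW estimate of E[Y(1)] - E[Y(0)]: normalised weights,
    # bounded for a binary outcome
    w1 = a / ps
    w0 = (1 - a) / (1 - ps)
    return np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)

# Truncation: bound the propensity scores away from 0 and 1
ps_trunc = np.clip(ps, 0.05, 0.95)
print(ipw_ate(a, y, ps), ipw_ate(a, y, ps_trunc))
```

The 0.05/0.95 bounds are a common ad hoc choice; as the abstract notes, tighter bounds reduce variance under near-violation but introduce bias when positivity actually holds.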


Subjects
Statistical Models, Likelihood Functions, Causality, Propensity Score, Bias, Computer Simulation
3.
Stat Methods Med Res ; 31(4): 706-718, 2022 04.
Article in English | MEDLINE | ID: mdl-34861799

ABSTRACT

In time-to-event settings, g-computation and doubly robust estimators are based on discrete-time data. However, many biological processes evolve continuously over time. In this paper, we extend the g-computation and doubly robust standardisation procedures to a continuous-time context. We compare their performance to the well-known inverse-probability-weighting estimator for estimating the hazard ratio and the difference in restricted mean survival times, using a simulation study. Under correct model specification, all methods are unbiased, but g-computation and doubly robust standardisation are more efficient than inverse probability weighting. We also analyse two real-world datasets to illustrate the practical implementation of these approaches. We have updated the R package RISCA to facilitate the use and dissemination of these methods.
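The standardisation step behind the restricted mean survival time comparison can be shown in a toy form (an illustrative Python sketch, not the RISCA implementation from the paper): assuming an exponential survival model with known parameters, the RMST under each treatment level is computed in closed form for every subject and then averaged over the covariate distribution. All parameter values here are assumptions of the sketch.

```python
import numpy as np

# Toy g-computation for the restricted mean survival time (RMST) difference.
# Assumes an exponential survival model with known parameters (no fitting),
# purely to show the standardisation step in closed form.
rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=n)            # one baseline covariate
beta_a, beta_x, base_hazard = -0.5, 0.3, 0.1
tau = 10.0                        # restriction time

def rmst_exponential(lam, tau):
    # RMST = integral of S(t) = exp(-lam * t) over [0, tau]
    return (1.0 - np.exp(-lam * tau)) / lam

# Standardise: set everyone to treated, then to untreated, and average
lam_treated = base_hazard * np.exp(beta_a + beta_x * x)
lam_control = base_hazard * np.exp(beta_x * x)
rmst_diff = np.mean(rmst_exponential(lam_treated, tau)
                    - rmst_exponential(lam_control, tau))
print(rmst_diff)
```

In practice the hazard model would be fitted to data rather than assumed known, but the averaging over counterfactual hazards is the same standardisation idea the abstract extends to continuous time.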


Subjects
Statistical Models, Computer Simulation, Probability, Reference Standards
4.
Ann Intern Med ; 174(10): 1385-1394, 2021 10.
Article in English | MEDLINE | ID: mdl-34424731

ABSTRACT

BACKGROUND: The HLA evolutionary divergence (HED), a continuous metric quantifying the peptidic differences between 2 homologous HLA alleles, reflects the breadth of the immunopeptidome presented to T lymphocytes. OBJECTIVE: To assess the potential effect of donor or recipient HED on liver transplant rejection. DESIGN: Retrospective cohort study. SETTING: Liver transplant units. PATIENTS: 1154 adults and 113 children who had a liver transplant between 2004 and 2018. MEASUREMENTS: Liver biopsies were done 1, 2, 5, and 10 years after the transplant and in case of liver dysfunction. Donor-specific anti-HLA antibodies (DSAs) were measured in children at the time of biopsy. The HED was calculated using the physicochemical Grantham distance for class I (HLA-A or HLA-B) and class II (HLA-DRB1 or HLA-DQB1) alleles. The influence of HED on the incidence of liver lesions was analyzed through the inverse probability weighting approach based on covariate balancing generalized propensity scores. RESULTS: In adults, class I HED of the donor was associated with acute rejection (hazard ratio [HR], 1.09 [95% CI, 1.03 to 1.16]), chronic rejection (HR, 1.20 [CI, 1.10 to 1.31]), and ductopenia of 50% or more (HR, 1.33 [CI, 1.09 to 1.62]) but not with other histologic lesions. In children, class I HED of the donor was also associated with acute rejection (HR, 1.16 [CI, 1.03 to 1.30]) independent of the presence of DSAs. There was no effect of either donor class II HED or recipient class I or class II HED on the incidence of liver lesions in adults and children. LIMITATION: The DSAs were measured only in children. CONCLUSION: Class I HED of the donor predicts acute or chronic rejection of liver transplant. This novel and accessible prognostic marker could inform donor selection and guide immunosuppression. PRIMARY FUNDING SOURCE: Institut National de la Santé et de la Recherche Médicale.


Subjects
Graft Rejection/genetics, HLA Antigens/genetics, Liver Transplantation/adverse effects, Adult, Alleles, Biomarkers, Biopsy, Preschool Child, Molecular Evolution, Female, Graft Rejection/etiology, Humans, Infant, Liver/pathology, Male, Middle Aged, Retrospective Studies, Risk Factors, Time Factors
5.
Sci Rep ; 11(1): 1435, 2021 01 14.
Article in English | MEDLINE | ID: mdl-33446866

ABSTRACT

In clinical research, there is a growing interest in the use of propensity score-based methods to estimate causal effects. G-computation is an alternative because of its high statistical power. Machine learning is also increasingly used because of its possible robustness to model misspecification. In this paper, we aimed to propose an approach that combines machine learning and g-computation when both the outcome and the exposure are binary, and that can handle small samples. We evaluated the performance of several methods, including penalized logistic regressions, a neural network, a support vector machine, boosted classification and regression trees, and a super learner, through simulations. We proposed six scenarios characterised by various sample sizes, numbers of covariates, and relationships between covariates, exposure status, and outcome. We also illustrated the application of these methods by estimating the efficacy of barbiturates prescribed during the first 24 h of an episode of intracranial hypertension. For estimating the individual outcome probabilities in the two counterfactual worlds of g-computation, the super learner tended to outperform the other approaches in terms of both bias and variance, especially for small sample sizes. The support vector machine also performed well, but its mean bias was slightly higher than that of the super learner. In the investigated scenarios, g-computation combined with the super learner was a well-performing method for drawing causal inference, even from small samples.
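The g-computation step this abstract builds on — fitting an outcome model and averaging its predictions over the two counterfactual worlds — can be sketched as follows. This is an illustrative Python sketch with a plain logistic regression standing in for the super learner; the simulated data and variable names are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 200  # a small sample, as in the paper's setting of interest
x = rng.normal(size=(n, 2))
a = rng.binomial(1, 0.4, size=n)
logit = -0.5 + 0.8 * a + 0.5 * x[:, 0] - 0.3 * x[:, 1]
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Fit the outcome model Q(a, x) = P(Y = 1 | A = a, X = x)
X_design = np.column_stack([a, x])
q = LogisticRegression().fit(X_design, y)

# Predict each subject's outcome in the two counterfactual worlds
X1 = np.column_stack([np.ones(n), x])   # everyone exposed
X0 = np.column_stack([np.zeros(n), x])  # everyone unexposed
ate = q.predict_proba(X1)[:, 1].mean() - q.predict_proba(X0)[:, 1].mean()
print(ate)
```

The paper's contribution is precisely in what replaces the logistic regression here: a super learner (an ensemble of candidate learners) used to estimate the individual counterfactual outcome probabilities before averaging.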

6.
Sci Rep ; 10(1): 9219, 2020 06 08.
Article in English | MEDLINE | ID: mdl-32514028

ABSTRACT

Controlling for confounding bias is crucial in causal inference. Several methods are currently used to mitigate confounding bias, and each requires a set of covariates that remains difficult to choose, especially across the different methods. We conducted a simulation study to compare the performance obtained with four different sets of covariates (those causing the outcome, those causing the treatment allocation, those causing both the outcome and the treatment allocation, and all the covariates) and four methods: g-computation, inverse probability of treatment weighting, full matching, and targeted maximum likelihood estimation. Our simulations considered a binary treatment, a binary outcome, and baseline confounders. They suggest that including all the covariates causing the outcome led to the lowest bias and variance, particularly for g-computation. Including all the covariates did not decrease the bias but substantially reduced the power. We applied these methods to two clinically relevant real-world examples, illustrating their practical importance. We propose the R package RISCA to encourage the use of g-computation in causal inference.
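The role of the covariate set can be made concrete with a small simulation sketch (illustrative Python, not the paper's code): three covariates respectively cause only the outcome, only the treatment, or both, and g-computation is run with different adjustment sets. All names and parameter values here are assumptions of the sketch.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5000
x_out = rng.normal(size=n)    # causes the outcome only
x_trt = rng.normal(size=n)    # causes the treatment only
x_conf = rng.normal(size=n)   # causes both (a true confounder)
a = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.7 * x_trt + 0.7 * x_conf))))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.5 * a + 0.7 * x_out + 0.7 * x_conf))))

def gcomp(covs):
    # g-computation ATE with a logistic outcome model adjusting for `covs`
    Xd = np.column_stack([a] + covs)
    m = LogisticRegression().fit(Xd, y)
    X1 = np.column_stack([np.ones(n)] + covs)
    X0 = np.column_stack([np.zeros(n)] + covs)
    return m.predict_proba(X1)[:, 1].mean() - m.predict_proba(X0)[:, 1].mean()

print(gcomp([x_conf]))                 # confounders only
print(gcomp([x_out, x_conf]))          # all covariates causing the outcome
print(gcomp([x_out, x_trt, x_conf]))   # all covariates
```

All three adjustment sets block the confounding path through `x_conf`; the abstract's point is that including the outcome-causing covariate reduces variance, while including the treatment-only covariate does not help and can reduce power.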
