Results 1 - 20 of 175
1.
medRxiv ; 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38712282

ABSTRACT

Propensity score adjustment addresses confounding by balancing covariates across treatment groups through matching, stratification, inverse probability weighting, etc. Diagnostics ensure that the adjustment has been effective. A common technique is to check whether the standardized mean difference for each relevant covariate is less than a threshold such as 0.1. For small sample sizes, the probability of falsely rejecting the validity of a study because of chance imbalance, when no underlying imbalance exists, approaches 1. We propose an alternative diagnostic that checks whether the standardized mean difference statistically significantly exceeds the threshold. Through simulation and real-world data, we find that this diagnostic achieves a better trade-off between type 1 error rate and power than both standard nominal-threshold tests and forgoing balance testing, for sample sizes from 250 to 4000 and for 20 to 100,000 covariates. In network studies, meta-analysis of effect estimates must be accompanied by meta-analysis of the diagnostics, or else systematic confounding may overwhelm the estimated effect. Our procedure for statistically testing balance at both the database level and the meta-analysis level achieves the best balance of type 1 error rate and power. Our procedure supports the review of large numbers of covariates, enabling more rigorous diagnostics.
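The proposed diagnostic can be illustrated with a small sketch. The function names are hypothetical, and the normal approximation to the sampling distribution of the standardized mean difference is an assumption for illustration, not the paper's exact procedure:

```python
import math

def smd(x, y):
    """Standardized mean difference between two samples (pooled SD)."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled_sd = math.sqrt((vx + vy) / 2)
    return (mx - my) / pooled_sd

def smd_exceeds_threshold(x, y, threshold=0.1, alpha=0.05):
    """One-sided z-test of H0: |SMD| <= threshold.

    Returns True only when the observed SMD statistically
    significantly exceeds the threshold, i.e. balance is rejected."""
    d = abs(smd(x, y))
    nx, ny = len(x), len(y)
    # Approximate standard error of the SMD (Hedges & Olkin style)
    se = math.sqrt((nx + ny) / (nx * ny) + d ** 2 / (2 * (nx + ny)))
    z = (d - threshold) / se
    # One-sided p-value via the standard normal survival function
    p = 0.5 * math.erfc(z / math.sqrt(2))
    return p < alpha
```

Under this rule a small-sample chance excursion of the SMD above 0.1 no longer triggers rejection unless the excess is statistically significant.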

2.
J Comput Graph Stat ; 33(1): 289-302, 2024.
Article in English | MEDLINE | ID: mdl-38716090

ABSTRACT

Large-scale observational health databases are increasingly popular for conducting comparative effectiveness and safety studies of medical products. However, the increasing number of patients poses computational challenges when fitting survival regression models in such studies. In this paper, we use graphics processing units (GPUs) to parallelize the computational bottlenecks of massive sample-size survival analyses. Specifically, we develop and apply time- and memory-efficient single-pass parallel scan algorithms for Cox proportional hazards models and forward-backward parallel scan algorithms for Fine-Gray models, for analyses with and without a competing risk, using a cyclic coordinate descent optimization approach. We demonstrate that GPUs accelerate the fitting of these complex models in large databases by orders of magnitude compared with traditional multi-core CPU parallelism. Our implementation enables efficient large-scale observational studies involving millions of patients and thousands of patient characteristics, and is available in the open-source R package Cyclops (Suchard et al., 2013).
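The core trick behind a single-pass scan for the Cox partial likelihood can be sketched as follows, assuming no tied event times. The numpy formulation and function name are illustrative, not the Cyclops GPU implementation:

```python
import numpy as np

def cox_log_likelihood(times, events, eta):
    """Cox partial log-likelihood via one backward cumulative scan.

    times  : event/censoring times
    events : 1 if the subject had the event, else 0
    eta    : linear predictor X @ beta

    The risk-set denominators sum_{j: t_j >= t_i} exp(eta_j) are
    obtained for all subjects at once with a single reversed
    cumulative sum over time-sorted data, replacing an O(n^2)
    double loop with an O(n log n) sort plus O(n) scan."""
    order = np.argsort(times)              # sort ascending by time
    eta_s = eta[order]
    ev_s = events[order].astype(bool)
    # Suffix sums: denom[i] = sum of exp(eta_s[i:])
    denom = np.cumsum(np.exp(eta_s)[::-1])[::-1]
    return float(np.sum(eta_s[ev_s] - np.log(denom[ev_s])))
```

The cumulative sum is exactly the kind of scan primitive that parallelizes well on GPUs, which is the point exploited by the paper.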

3.
J Clin Hypertens (Greenwich) ; 26(4): 425-430, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38501749

ABSTRACT

Previous work comparing safety and effectiveness outcomes for new initiators of angiotensin-converting enzyme inhibitors (ACEi) and thiazides demonstrated more favorable outcomes for thiazides, although the cohort definitions allowed addition of a second antihypertensive medication after a week of monotherapy. Here, we modify the monotherapy definition, imposing exit from the cohorts upon addition of another antihypertensive medication. We determine hazard ratios (HRs) for 55 safety and effectiveness outcomes across six databases and compare the results to the earlier findings. We find that, for all primary outcomes, the statistically significant differences in effectiveness between ACEi and thiazides were not replicated (HRs: 1.11, 1.06, and 1.12 for acute myocardial infarction, hospitalization with heart failure, and stroke, respectively). While statistical significance is similarly lost for several safety outcomes, the safety profile of thiazides remains more favorable. Our results indicate a less striking difference in effectiveness between thiazides and ACEi and reflect some sensitivity to the monotherapy cohort definition modification.


Subjects
Angiotensin-Converting Enzyme Inhibitors, Hypertension, Humans, Angiotensin-Converting Enzyme Inhibitors/adverse effects, Antihypertensive Agents/adverse effects, Diuretics/adverse effects, Hypertension/drug therapy, Sodium Chloride Symporter Inhibitors/adverse effects, Thiazides/adverse effects
5.
Stud Health Technol Inform ; 310: 966-970, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269952

ABSTRACT

The Health-Analytics Data to Evidence Suite (HADES) is an open-source software collection developed by Observational Health Data Sciences and Informatics (OHDSI). It executes directly against healthcare data, such as electronic health records and administrative claims, that have been converted to the Observational Medical Outcomes Partnership (OMOP) Common Data Model. Using advanced analytics, HADES performs characterization, population-level causal effect estimation, and patient-level prediction, potentially across a federated data network, allowing patient-level data to remain local while only aggregate statistics are shared. Designed to run across a wide array of technical environments, including different operating systems and database platforms, HADES uses continuous integration with a large set of unit tests to maintain reliability. HADES implements OHDSI best practices and is used in almost all published OHDSI studies, including some that have directly informed regulatory decisions.


Subjects
Data Science, Electronic Health Records, Humans, Databases, Factual, Reproducibility of Results, Software, Observational Studies as Topic
6.
J Am Med Inform Assoc ; 31(3): 583-590, 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38175665

ABSTRACT

IMPORTANCE: The Observational Health Data Sciences and Informatics (OHDSI) network is the largest distributed data network in the world, encompassing more than 331 data sources with 2.1 billion patient records across 34 countries. It enables large-scale observational research by standardizing the data into a common data model (the Observational Medical Outcomes Partnership [OMOP] CDM) and requires a comprehensive, efficient, and reliable ontology system to support data harmonization. MATERIALS AND METHODS: We created the OHDSI Standardized Vocabularies: a common reference ontology mandatory for all data sites in the network. It comprises imported and de novo-generated ontologies containing concepts and the relationships between them, along with the praxis of converting source data to the OMOP CDM based on these. It enables harmonization through domains assigned according to clinical categories, comprehensive coverage of entities within each domain, support for commonly used international coding schemes, and standardization of semantically equivalent concepts. RESULTS: The OHDSI Standardized Vocabularies comprise over 10 million concepts from 136 vocabularies. They are used by hundreds of groups and several large data networks. More than 8600 users have performed 50 000 downloads of the system. This open-source resource has proven to address a key impediment to large-scale observational research: the dependence on the context of source data representation. With that, it has enabled efficient phenotyping, covariate construction, patient-level prediction, population-level estimation, and standard reporting. DISCUSSION AND CONCLUSION: OHDSI has made available a comprehensive, open vocabulary system that is unmatched in its ability to support global observational research. We encourage researchers to use it and contribute their use cases to this dynamic resource.


Subjects
Data Science, Medical Informatics, Humans, Vocabulary, Databases, Factual, Electronic Health Records
7.
Stat Med ; 43(2): 395-418, 2024 01 30.
Article in English | MEDLINE | ID: mdl-38010062

ABSTRACT

Postmarket safety surveillance is an integral part of mass vaccination programs. Typically relying on sequential analysis of real-world health data as they accrue, safety surveillance is challenged by sequential multiple testing and by biases induced by residual confounding in observational data. The current standard approach based on the maximized sequential probability ratio test (MaxSPRT) fails to satisfactorily address these practical challenges and it remains a rigid framework that requires prespecification of the surveillance schedule. We develop an alternative Bayesian surveillance procedure that addresses both aforementioned challenges using a more flexible framework. To mitigate bias, we jointly analyze a large set of negative control outcomes that are adverse events with no known association with the vaccines in order to inform an empirical bias distribution, which we then incorporate into estimating the effect of vaccine exposure on the adverse event of interest through a Bayesian hierarchical model. To address multiple testing and improve on flexibility, at each analysis timepoint, we update a posterior probability in favor of the alternative hypothesis that vaccination induces higher risks of adverse events, and then use it for sequential detection of safety signals. Through an empirical evaluation using six US observational healthcare databases covering more than 360 million patients, we benchmark the proposed procedure against MaxSPRT on testing errors and estimation accuracy, under two epidemiological designs, the historical comparator and the self-controlled case series. We demonstrate that our procedure substantially reduces Type 1 error rates, maintains high statistical power and fast signal detection, and provides considerably more accurate estimation than MaxSPRT. Given the extensiveness of the empirical study which yields more than 7 million sets of results, we present all results in a public R ShinyApp. 
As an effort to promote open science, we provide full implementation of our method in the open-source R package EvidenceSynthesis.
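A heavily simplified sketch of one core idea, absorbing an empirical bias distribution estimated from negative control outcomes into the signal probability, may help. The paper's actual method is a full Bayesian hierarchical model; the flat-prior normal approximation and helper names below are assumptions for illustration only:

```python
import math

def bias_distribution(negative_control_estimates):
    """Fit a normal empirical bias distribution N(mu, tau^2) to
    log-hazard-ratio estimates for negative control outcomes.
    Their true log HR is 0, so any systematic shift reflects bias."""
    n = len(negative_control_estimates)
    mu = sum(negative_control_estimates) / n
    tau2 = sum((b - mu) ** 2 for b in negative_control_estimates) / n
    return mu, math.sqrt(tau2)

def posterior_signal_probability(log_hr, se, mu, tau):
    """Approximate P(true log HR > 0) after absorbing the bias
    distribution: effect ~ N(log_hr - mu, se^2 + tau^2) under a
    flat prior. A signal could be declared when this exceeds,
    say, 0.95 at some analysis timepoint."""
    total_sd = math.sqrt(se ** 2 + tau ** 2)
    z = (log_hr - mu) / total_sd
    return 0.5 * math.erfc(-z / math.sqrt(2))  # standard normal CDF at z
```

Because the posterior probability is simply updated at each look, no surveillance schedule needs to be prespecified, in contrast to MaxSPRT.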


Subjects
Adverse Drug Reaction Reporting Systems, Product Surveillance, Postmarketing, Vaccines, Humans, Bayes Theorem, Bias, Probability, Vaccines/adverse effects
8.
J Am Med Inform Assoc ; 31(1): 209-219, 2023 12 22.
Article in English | MEDLINE | ID: mdl-37952118

ABSTRACT

OBJECTIVE: Health data standardized to a common data model (CDM) simplifies and facilitates research. This study examines the factors that make standardizing observational health data to the Observational Medical Outcomes Partnership (OMOP) CDM successful. MATERIALS AND METHODS: Twenty-five data partners (DPs) from 11 countries received funding from the European Health Data Evidence Network (EHDEN) to standardize their data. Three surveys, DataQualityDashboard results, and statistics from the conversion process were analyzed qualitatively and quantitatively. Our measures of success were the total number of days to transform source data into the OMOP CDM and participation in network research. RESULTS: The health data converted to the CDM represented more than 133 million patients. 100%, 88%, and 84% of DPs completed Surveys 1, 2, and 3, respectively. The median duration of the 6 key extract, transform, and load (ETL) processes ranged from 4 to 115 days. Of the 25 DPs, 21 were considered applicable for analysis, of which 52% standardized their data on time and 48% participated in an international collaborative study. DISCUSSION: This study shows that the consistent workflow used by EHDEN proves appropriate to support the successful standardization of observational data across Europe. Across the 25 successful transformations, we confirmed that getting the right people for the ETL is critical and that vocabulary mapping requires specific expertise and tool support. Additionally, we learned that teams that proactively prepared for data governance issues were able to avoid considerable delays, improving their ability to finish on time. CONCLUSION: This study provides guidance for future DPs to standardize to the OMOP CDM and participate in distributed networks. We demonstrate that the Observational Health Data Sciences and Informatics community must continue to evaluate and provide guidance and support for what ultimately forms the backbone of how community members generate evidence.


Subjects
Global Health, Medicine, Humans, Databases, Factual, Europe, Electronic Health Records
9.
BMJ Med ; 2(1): e000651, 2023.
Article in English | MEDLINE | ID: mdl-37829182

ABSTRACT

Objective: To assess the uptake of second line antihyperglycaemic drugs among patients with type 2 diabetes mellitus who are receiving metformin. Design: Federated pharmacoepidemiological evaluation in LEGEND-T2DM. Setting: 10 US and seven non-US electronic health record and administrative claims databases in the Observational Health Data Sciences and Informatics network in eight countries from 2011 to the end of 2021. Participants: 4.8 million patients (≥18 years) across US and non-US based databases with type 2 diabetes mellitus who had received metformin monotherapy and had initiated second line treatments. Exposure: The exposure used to evaluate each database was calendar year trends, with the years in the study that were specific to each cohort. Main outcome measures: The outcome was the incidence of second line antihyperglycaemic drug use (ie, glucagon-like peptide-1 receptor agonists, sodium-glucose cotransporter-2 inhibitors, dipeptidyl peptidase-4 inhibitors, and sulfonylureas) among individuals who were already receiving treatment with metformin. The relative drug class level uptake across cardiovascular risk groups was also evaluated. Results: 4.6 million patients were identified in US databases, 61 382 from Spain, 32 442 from Germany, 25 173 from the UK, 13 270 from France, 5580 from Scotland, 4614 from Hong Kong, and 2322 from Australia. During 2011-21, the combined proportional initiation of the cardioprotective antihyperglycaemic drugs (glucagon-like peptide-1 receptor agonists and sodium-glucose cotransporter-2 inhibitors) increased across all data sources, with the combined initiation of these drugs as second line drugs in 2021 ranging from 35.2% to 68.2% in the US databases, 15.4% in France, 34.7% in Spain, 50.1% in Germany, and 54.8% in Scotland.
From 2016 to 2021, in some US and non-US databases, uptake of glucagon-like peptide-1 receptor agonists and sodium-glucose cotransporter-2 inhibitors increased more markedly among populations with no cardiovascular disease than among patients with established cardiovascular disease. No data source provided evidence of a greater increase in the uptake of these two drug classes in populations with cardiovascular disease than in those without. Conclusions: Despite the increase in overall uptake of cardioprotective antihyperglycaemic drugs as second line treatments for type 2 diabetes mellitus, their uptake was lower in patients with cardiovascular disease than in people with no cardiovascular disease over the past decade. A strategy is needed to ensure that medication use is concordant with guideline recommendations to improve outcomes of patients with type 2 diabetes mellitus.

10.
Drug Saf ; 46(12): 1335-1352, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37804398

ABSTRACT

INTRODUCTION: Individual case reports are the main asset in pharmacovigilance signal management. Signal validation is the first stage after signal detection and aims to determine if there is sufficient evidence to justify further assessment. Throughout signal management, a prioritization of signals is continually made. Routinely collected health data can provide relevant contextual information but are primarily used at a later stage in pharmacoepidemiological studies to assess communicated signals. OBJECTIVE: The aim of this study was to examine the feasibility and utility of analysing routine health data from a multinational distributed network to support signal validation and prioritization and to reflect on key user requirements for these analyses to become an integral part of this process. METHODS: Statistical signal detection was performed in VigiBase, the WHO global database of individual case safety reports, targeting generic manufacturer drugs and 16 prespecified adverse events. During a 5-day study-a-thon, signal validation and prioritization were performed using information from VigiBase, regulatory documents and the scientific literature alongside descriptive analyses of routine health data from 10 partners of the European Health Data and Evidence Network (EHDEN). Databases included in the study were from the UK, Spain, Norway, the Netherlands and Serbia, capturing records from primary care and/or hospitals. RESULTS: Ninety-five statistical signals were subjected to signal validation, of which eight were considered for descriptive analyses in the routine health data. Design, execution and interpretation of results from these analyses took up to a few hours for each signal (of which 15-60 minutes were for execution) and informed decisions for five out of eight signals. 
The impact of insights from the routine health data varied and included possible alternative explanations, potential public health and clinical impact, and feasibility of follow-up pharmacoepidemiological studies. Three signals were selected for signal assessment; two of these decisions were supported by insights from the routine health data. Standardization of analytical code, availability of adverse event phenotypes including bridges between different source vocabularies, and governance around the access and use of routine health data were identified as important aspects for future development. CONCLUSIONS: Analyses of routine health data from a distributed network to support signal validation and prioritization are feasible within the given time limits and can inform decision making. The cost-benefit of integrating these analyses at this stage of signal management requires further research.


Subjects
Drug-Related Side Effects and Adverse Reactions, Pharmacovigilance, Humans, Adverse Drug Reaction Reporting Systems, Drug-Related Side Effects and Adverse Reactions/epidemiology, Databases, Factual, Netherlands
11.
J Biomed Inform ; 145: 104476, 2023 09.
Article in English | MEDLINE | ID: mdl-37598737

ABSTRACT

OBJECTIVE: We developed and evaluated a novel one-shot distributed algorithm for evidence synthesis in distributed research networks with rare outcomes. MATERIALS AND METHODS: Fed-Padé, motivated by a classic mathematical tool, Padé approximants, reconstructs the multi-site data likelihood via a Padé approximant whose key parameters can be computed distributively. Thanks to the simplicity of the [2,2] Padé approximant, Fed-Padé demands only an extremely simple computation and a low communication cost from data partners. Specifically, each data partner only needs to compute and share the log-likelihood and its first 4 gradients evaluated at an initial estimator. We evaluated the performance of our algorithm with extensive simulation studies and four observational healthcare databases. RESULTS: Our simulation studies revealed that a [2,2] Padé approximant can well reconstruct the multi-site likelihood, so that Fed-Padé produces nearly identical estimates to the pooled analysis. Across all simulation scenarios considered, the median relative bias and rate of instability of Fed-Padé are both <0.1%, whereas meta-analysis estimates have bias up to 50% and instability up to 75%. Furthermore, the confidence intervals derived from the Fed-Padé algorithm showed better coverage of the truth than confidence intervals based on the meta-analysis. In real data analysis, Fed-Padé has a relative bias of <1% for all three comparisons for risks of acute liver injury and decreased libido, whereas the meta-analysis estimates have a substantially higher bias (around 10%). CONCLUSION: The Fed-Padé algorithm is nearly lossless, stable, communication-efficient, and easy to implement for models with rare outcomes. It provides an extremely suitable and convenient approach for synthesizing evidence in distributed research networks with rare outcomes.
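A sketch of how a [2,2] Padé approximant is assembled from five Taylor coefficients c_k = f^(k)(θ0)/k! may clarify the mechanics. This is the one-dimensional case only, with illustrative function names; in the distributed setting, each site's log-likelihood derivatives sum across sites, so the aggregated c_k can be computed without sharing patient-level data:

```python
import numpy as np

def pade22(c):
    """[2,2] Pade approximant from Taylor coefficients c0..c4.

    Returns (a, b) such that f(x0 + h) is approximated by
    (a0 + a1*h + a2*h**2) / (1 + b1*h + b2*h**2)."""
    c0, c1, c2, c3, c4 = c
    # Denominator coefficients solve the 2x2 linear system that
    # cancels the h^3 and h^4 terms of the Taylor expansion:
    #   c3 + c2*b1 + c1*b2 = 0
    #   c4 + c3*b1 + c2*b2 = 0
    A = np.array([[c2, c1], [c3, c2]])
    b1, b2 = np.linalg.solve(A, [-c3, -c4])
    # Numerator coefficients then follow by matching h^0..h^2
    a = (c0, c1 + c0 * b1, c2 + c1 * b1 + c0 * b2)
    return a, (b1, b2)

def pade22_eval(a, b, h):
    """Evaluate the rational approximant at step h from x0."""
    num = a[0] + a[1] * h + a[2] * h ** 2
    den = 1.0 + b[0] * h + b[1] * h ** 2
    return num / den
```

The rational form is what lets a [2,2] approximant track a log-likelihood far more faithfully than a fourth-order Taylor polynomial of the same information content.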


Subjects
Algorithms, Machine Learning, Computer Simulation, Meta-Analysis as Topic
12.
Drug Saf ; 46(8): 797-807, 2023 08.
Article in English | MEDLINE | ID: mdl-37328600

ABSTRACT

INTRODUCTION: Vaccine safety surveillance commonly includes a serial testing approach, with a sensitive method for 'signal generation' and a specific method for 'signal validation.' The extent to which serial testing in real-world studies improves or hinders overall performance in terms of sensitivity and specificity remains unknown. METHODS: We assessed the overall performance of serial testing using three administrative claims databases and one electronic health record database. We compared type I and II errors before and after empirical calibration for the historical comparator design, the self-controlled case series (SCCS), and the serial combination of those designs, against six vaccine exposure groups with 93 negative control and 279 imputed positive control outcomes. RESULTS: The historical comparator design mostly had fewer type II errors than SCCS. SCCS had fewer type I errors than the historical comparator. Before empirical calibration, the serial combination increased specificity and decreased sensitivity. Type II errors mostly exceeded 50%. After empirical calibration, type I errors returned to nominal; sensitivity was lowest when the methods were combined. CONCLUSION: While the serial combination produced fewer false-positive signals than the most specific method alone, it generated more false-negative signals than the most sensitive method alone. Using a historical comparator design followed by an SCCS analysis yielded decreased sensitivity in evaluating safety signals relative to a one-stage SCCS approach. While the current use of serial testing in vaccine surveillance may provide a practical paradigm for signal identification and triage, single epidemiological designs should be explored as valuable approaches to detecting signals.
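Why serial combination trades sensitivity for specificity can be seen from a back-of-envelope calculation that assumes the two tests err independently, a simplification the paper's empirical results do not rely on:

```python
def serial_combination(sens1, spec1, sens2, spec2):
    """Operating characteristics of a two-stage serial test that
    raises a signal only when BOTH methods flag it, under an
    (idealized) independence assumption between the two tests."""
    # A true signal must be detected by both stages
    sensitivity = sens1 * sens2
    # A false signal must slip through both stages to survive
    specificity = 1.0 - (1.0 - spec1) * (1.0 - spec2)
    return sensitivity, specificity
```

Because sensitivities multiply while false-positive rates multiply, the combination can only lower sensitivity and raise specificity relative to either stage alone, matching the pattern reported above.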


Subjects
Vaccines, Humans, Vaccines/adverse effects, Sensitivity and Specificity, Research Design, Databases, Factual, Electronic Health Records
13.
Stat Med ; 42(5): 619-631, 2023 02 28.
Article in English | MEDLINE | ID: mdl-36642826

ABSTRACT

Post-approval safety surveillance of medical products using observational healthcare data can help identify safety issues beyond those found in pre-approval trials. When testing sequentially as data accrue, maximum sequential probability ratio testing (MaxSPRT) is a common approach to maintaining nominal type 1 error. However, the true type 1 error may still deviate from the specified one because of systematic error due to the observational nature of the analysis. This systematic error may persist even after controlling for known confounders. Here we propose to address this issue by combining MaxSPRT with empirical calibration. In empirical calibration, we assume uncertainty about the systematic error in our analysis, a source of uncertainty commonly overlooked in practice. We infer a probability distribution of systematic error by relying on a large set of negative controls: exposure-outcome pairs where no causal effect is believed to exist. Integrating this distribution into our test statistics has previously been shown to restore type 1 error to nominal. Here we show how we can calibrate the critical value central to MaxSPRT. We evaluate this novel approach using simulations and real electronic health records, using H1N1 vaccinations during the 2009-2010 season as an example. Results show that combining empirical calibration with MaxSPRT restores nominal type 1 error. In our real-world example, adjusting for systematic error using empirical calibration has a larger impact than, and hence is just as essential as, adjusting for sequential testing using MaxSPRT. We recommend performing both, using the method described here.
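A stylized version of a calibrated critical value illustrates the idea, assuming a normal systematic error distribution N(mu, sigma^2) on the log-HR scale and treating the critical value as a plain z-threshold. The paper calibrates MaxSPRT log-likelihood-ratio critical values, so this is only an illustration of the principle:

```python
import math

def calibrated_critical_value(crit, mu, sigma, se):
    """Shift and widen a z-scale critical value so that type 1
    error stays nominal when the estimate carries systematic
    error ~ N(mu, sigma^2) on top of sampling error.

    crit  : uncalibrated critical z-value
    mu    : mean systematic error (from negative controls)
    sigma : SD of systematic error (from negative controls)
    se    : standard error of the current log-HR estimate

    Derivation: under H0 the estimate b ~ N(mu, sigma^2 + se^2),
    so requiring P(b / se > c_cal) = P(Z > crit) gives
    c_cal = (mu + crit * sqrt(sigma^2 + se^2)) / se."""
    return (mu + crit * math.sqrt(sigma ** 2 + se ** 2)) / se
```

With no systematic error (mu = sigma = 0) the calibrated value collapses back to the nominal one; any positive bias or bias dispersion pushes the signalling bar upward.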


Subjects
Influenza A Virus, H1N1 Subtype, Humans, Calibration, Probability, Delivery of Health Care, Electronic Health Records
15.
Front Pharmacol ; 13: 945592, 2022.
Article in English | MEDLINE | ID: mdl-36188566

ABSTRACT

Purpose: Alpha-1 blockers, often used to treat benign prostatic hyperplasia (BPH), have been hypothesized to prevent COVID-19 complications by minimising cytokine storm release. The proposed treatment based on this hypothesis currently lacks support from reliable real-world evidence, however. We leverage an international network of large-scale healthcare databases to generate comprehensive evidence in a transparent and reproducible manner. Methods: In this international cohort study, we deployed electronic health records from Spain (SIDIAP) and the United States (Department of Veterans Affairs, Columbia University Irving Medical Center, IQVIA OpenClaims, Optum DOD, Optum EHR). We assessed the association between alpha-1 blocker use and the risks of three COVID-19 outcomes (diagnosis, hospitalization, and hospitalization requiring intensive services) using a prevalent-user active-comparator design. We estimated hazard ratios using state-of-the-art techniques to minimize potential confounding, including large-scale propensity score matching/stratification and negative control calibration. We pooled database-specific estimates through random effects meta-analysis. Results: Our study included 2.6 million users of alpha-1 blockers and 0.46 million users of alternative BPH medications. We observed no significant difference in their risks for any of the COVID-19 outcomes, with meta-analytic HR estimates of 1.02 (95% CI: 0.92-1.13) for diagnosis, 1.00 (95% CI: 0.89-1.13) for hospitalization, and 1.15 (95% CI: 0.71-1.88) for hospitalization requiring intensive services. Conclusion: We found no evidence of the hypothesized reduction in risks of the COVID-19 outcomes from prevalent use of alpha-1 blockers; further research is needed to identify effective therapies for this novel disease.

16.
J Biomed Inform ; 134: 104204, 2022 10.
Article in English | MEDLINE | ID: mdl-36108816

ABSTRACT

Confounding remains one of the major challenges to causal inference with observational data. This problem is paramount in medicine, where we would like to answer causal questions from large observational datasets like electronic health records (EHRs) and administrative claims. Modern medical data typically contain tens of thousands of covariates. Such a large set carries hope that many of the confounders are directly measured, and further hope that others are indirectly measured through their correlation with measured covariates. How can we exploit these large sets of covariates for causal inference? To help answer this question, this paper examines the performance of the large-scale propensity score (LSPS) approach on causal analysis of medical data. We demonstrate that LSPS may adjust for indirectly measured confounders by including tens of thousands of covariates that may be correlated with them. We present conditions under which LSPS removes bias due to indirectly measured confounders, and we show that LSPS may avoid bias when inadvertently adjusting for variables (like colliders) that otherwise can induce bias. We demonstrate the performance of LSPS with both simulated medical data and real medical data.
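A minimal sketch of the LSPS workflow: fit a regularized propensity model on a large covariate set, then stratify on the fitted score. An L2 penalty and plain gradient descent stand in here for the large-scale regularized regression used in practice, and all names are illustrative:

```python
import numpy as np

def fit_lsps(X, treatment, l2=1.0, lr=0.1, iters=500):
    """Fit an L2-regularized logistic propensity model by gradient
    descent over a (potentially very wide) covariate matrix X and
    return the fitted propensity scores. In real LSPS analyses the
    model is typically lasso-regularized and fit with specialized
    software; this is a self-contained stand-in."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        ps = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (ps - treatment) / n + l2 * beta / n
        beta -= lr * grad
    return 1.0 / (1.0 + np.exp(-X @ beta))

def stratify(ps, n_strata=5):
    """Assign each subject to a propensity-score stratum by quantile,
    within which treated and comparator subjects are compared."""
    qs = np.quantile(ps, np.linspace(0, 1, n_strata + 1)[1:-1])
    return np.searchsorted(qs, ps)
```

Including thousands of covariates in `X`, rather than a handpicked confounder list, is what allows the score to absorb indirectly measured confounders through their correlates.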


Subjects
Confounding Factors, Epidemiologic, Bias, Causality, Propensity Score
17.
J Biomed Inform ; 135: 104177, 2022 11.
Article in English | MEDLINE | ID: mdl-35995107

ABSTRACT

PURPOSE: Phenotype algorithms are central to performing analyses using observational data. These algorithms translate the clinical idea of a health condition into an executable set of rules allowing for queries of data elements from a database. PheValuator, a software package in the Observational Health Data Sciences and Informatics (OHDSI) tool stack, provides a method to assess the performance characteristics of these algorithms, namely, sensitivity, specificity, and positive and negative predictive value. It uses machine learning to develop predictive models for determining a probabilistic gold standard of subjects for assessment of cases and non-cases of health conditions. PheValuator was developed to complement or even replace the traditional approach of algorithm validation, i.e., by expert assessment of subject records through chart review. Results in our first PheValuator paper suggest a systematic underestimation of the PPV compared to previous results using chart review. In this paper we evaluate modifications made to the method designed to improve its performance. METHODS: The major changes to PheValuator included allowing all diagnostic conditions, clinical observations, drug prescriptions, and laboratory measurements to be included as predictors within the modeling process whereas in the prior version there were significant restrictions on the included predictors. We also have allowed for the inclusion of the temporal relationships of the predictors in the model. To evaluate the performance of the new method, we compared the results from the new and original methods against results found from the literature using traditional validation of algorithms for 19 phenotypes. We performed these tests using data from five commercial databases. 
RESULTS: In the assessment aggregating all phenotype algorithms, the median difference between the PheValuator estimate and the gold standard estimate for PPV was reduced from -21 (IQR -34, -3) in Version 1.0 to 4 (IQR -3, 15) in Version 2.0. We found a median difference in specificity of 3 (IQR 1, 4.25) for Version 1.0 and 3 (IQR 1, 4) for Version 2.0. The median difference between the two versions of PheValuator and the gold standard for estimates of sensitivity was reduced from -39 (-51, -20) to -16 (-34, -6). CONCLUSION: PheValuator 2.0 produces estimates of the performance characteristics of phenotype algorithms that are significantly closer to estimates from traditional validation through chart review than those of Version 1.0. With this tool in researchers' toolkits, methods such as quantitative bias analysis may now be used to improve the reliability and reproducibility of research studies using observational data.


Subjects
Algorithms, Machine Learning, Reproducibility of Results, Databases, Factual, Phenotype
18.
Pharmacoepidemiol Drug Saf ; 31(12): 1242-1252, 2022 12.
Article in English | MEDLINE | ID: mdl-35811396

ABSTRACT

PURPOSE: Propensity score matching (PSM) is subject to limitations associated with limited degrees of freedom and covariate overlap. Cardinality matching (CM), an optimization algorithm, overcomes these limitations by matching directly on the marginal distribution of covariates. This study compared the performance of PSM and CM. METHODS: Comparative cohort study of new users of angiotensin-converting enzyme inhibitor (ACEI) and β-blocker monotherapy identified from a large U.S. administrative claims database. One-to-one matching was conducted through PSM using nearest-neighbor matching (caliper = 0.15) and CM permitting a maximum standardized mean difference (SMD) of 0, 0.01, 0.05, and 0.10 between comparison groups. Matching covariates included 37 patient demographic and clinical characteristics. Observed covariates included patient demographics and all observed prior conditions, drug exposures, and procedures. Residual confounding was assessed based on the expected absolute systematic error of negative control outcome experiments. PSM and CM were compared in terms of post-match patient retention, matching and observed covariate balance, and residual confounding within 10%, 1%, 0.25%, and 0.125% sample groups. RESULTS: The eligible study population included 182 235 patients (ACEI: 129 363; β-blocker: 56 872). CM achieved superior patient retention and matching covariate balance in all analyses. After PSM, 1.6% and 28.2% of matching covariates were imbalanced in the 10% and 0.125% sample groups, respectively. No significant difference in observed covariate balance was observed between matching techniques. CM permitting a maximum SMD <0.05 was associated with improved residual bias compared with PSM. CONCLUSION: We recommend CM with more stringent balance criteria as an alternative to PSM when matching on a set of clinically relevant covariates.


Subjects
Algorithms; Humans; Propensity Score; Cohort Studies; Bias; Databases, Factual
19.
Front Pharmacol ; 13: 893484, 2022.
Article in English | MEDLINE | ID: mdl-35873596

ABSTRACT

Background: Routinely collected healthcare data such as administrative claims and electronic health records (EHR) can complement clinical trials and spontaneous reports to detect previously unknown vaccine risks, but uncertainty remains about how well alternative epidemiologic designs detect and declare a true risk early. Methods: Using three claims databases and one EHR database, we evaluate several variants of the case-control, comparative cohort, historical comparator, and self-controlled designs against historical vaccinations, using real negative control outcomes (outcomes with no evidence to suggest that they could be caused by the vaccines) and simulated positive control outcomes. Results: Most methods show large type 1 error, often identifying false positive signals. The cohort method appears either positively or negatively biased, depending on the choice of comparator index date. Empirical calibration using effect-size estimates for negative control outcomes can bring type 1 error closer to nominal, often at the cost of increasing type 2 error. After calibration, the self-controlled case series (SCCS) design most rapidly detects small true effect sizes, while the historical comparator performs well for strong effects. Conclusion: When applying any method for vaccine safety surveillance, we recommend considering the potential for systematic error, especially due to confounding, which for many designs appears to be substantial. Adjusting for age and sex alone is likely not sufficient to address differences between the vaccinated and unvaccinated, and for the cohort method the choice of index date is important for the comparability of the groups. Analysis of negative control outcomes allows both quantification of the systematic error and, if desired, subsequent empirical calibration to restore type 1 error to its nominal value. To detect weaker signals, one may have to accept a higher type 1 error.

20.
Drug Saf ; 45(7): 791-807, 2022 07.
Article in English | MEDLINE | ID: mdl-35810265

ABSTRACT

INTRODUCTION: Hip fractures among older people are a major public health issue; they can impact quality of life and increase mortality within the year after they occur. A recent observational study found an increased risk of hip fracture in new users of tramadol compared with codeine. These drugs have somewhat different indications: tramadol is indicated for moderate to severe pain and can be used for an extended period; codeine is indicated for mild to moderate pain and cough suppression. OBJECTIVE: In this observational study, we compared the risk of hip fracture in new users of tramadol or codeine, using multiple databases and analytical methods. METHODS: Using data from the Clinical Practice Research Datalink and three US claims databases, we compared the risk of hip fracture after exposure to tramadol or codeine in subjects aged 50-89 years. To ensure comparability, large-scale propensity scores were used to adjust for confounding. RESULTS: We observed a calibrated hazard ratio of 1.10 (95% calibrated confidence interval 0.99-1.21) in the Clinical Practice Research Datalink database, and a pooled estimate across the US databases yielded a calibrated hazard ratio of 1.06 (95% calibrated confidence interval 0.97-1.16). CONCLUSIONS: Our results did not demonstrate a statistically significant difference in hip fracture risk between subjects treated for pain with tramadol and those treated with codeine.
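The pooled estimate across the US databases implies combining per-database hazard ratios; a standard way to do this is inverse-variance (fixed-effect) pooling on the log scale. The helper below is a hypothetical sketch, not the study's actual meta-analysis code, and assumes each standard error can be recovered from the reported 95% CI bounds:

```python
from math import exp, log, sqrt

def pool_fixed_effect(hazard_ratios, conf_intervals):
    """Inverse-variance fixed-effect pooling of hazard ratios on the log
    scale. Each standard error is recovered from the 95% CI as
    (log(upper) - log(lower)) / (2 * 1.96)."""
    weights, weighted_logs = [], []
    for hr, (lo, hi) in zip(hazard_ratios, conf_intervals):
        se = (log(hi) - log(lo)) / (2 * 1.96)
        w = 1.0 / se ** 2          # weight = inverse variance
        weights.append(w)
        weighted_logs.append(w * log(hr))
    pooled = sum(weighted_logs) / sum(weights)
    pooled_se = sqrt(1.0 / sum(weights))
    return exp(pooled), (exp(pooled - 1.96 * pooled_se),
                         exp(pooled + 1.96 * pooled_se))
```

Pooling two identical studies leaves the point estimate unchanged but narrows the confidence interval, reflecting the larger combined sample.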


Subjects
Hip Fractures; Tramadol; Aged; Analgesics, Opioid/adverse effects; Codeine/adverse effects; Hip Fractures/chemically induced; Hip Fractures/drug therapy; Hip Fractures/epidemiology; Humans; Pain/drug therapy; Quality of Life; Tramadol/adverse effects