Results 1 - 20 of 56
1.
JAMA Cardiol ; 8(8): 744-754, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37342056

ABSTRACT

Importance: Recent studies have produced inconsistent findings regarding the outcomes of the percutaneous microaxial left ventricular assist device (LVAD) during acute myocardial infarction with cardiogenic shock (AMICS). Objective: To compare the percutaneous microaxial LVAD vs alternative treatments among patients presenting with AMICS using observational analyses of administrative data. Design, Setting, and Participants: This comparative effectiveness research study used Medicare fee-for-service claims of patients admitted with AMICS undergoing percutaneous coronary intervention from October 1, 2015, through December 31, 2019. Treatment strategies were compared using (1) inverse probability of treatment weighting to estimate the effect of different baseline treatments in the overall population; (2) instrumental variable analysis to determine the effectiveness of the percutaneous microaxial LVAD among patients whose treatment was influenced by cross-sectional institutional practice patterns; (3) an instrumented difference-in-differences analysis to determine the effectiveness of treatment among patients whose treatment was influenced by longitudinal changes in institutional practice patterns; and (4) a grace period approach to determine the effectiveness of initiating the percutaneous microaxial LVAD within 2 days of percutaneous coronary intervention. Analysis took place between March 2021 and December 2022. Interventions: Percutaneous microaxial LVAD vs alternative treatments (including medical therapy and intra-aortic balloon pump). Main Outcomes and Measures: Thirty-day all-cause mortality and readmissions. Results: Of 23 478 patients, 14 264 (60.8%) were male and the mean (SD) age was 73.9 (9.8) years. In the inverse probability of treatment weighting analysis and grace period approaches, treatment with percutaneous microaxial LVAD was associated with a higher risk-adjusted 30-day mortality (risk difference, 14.9%; 95% CI, 12.9%-17.0%). 
However, patients receiving the percutaneous microaxial LVAD had a higher frequency of factors associated with severe illness, suggesting possible confounding by measures of illness severity not available in the data. In the instrumental variable analysis, 30-day mortality was also higher with percutaneous microaxial LVAD, but patient and hospital characteristics differed across levels of the instrumental variable, suggesting possible confounding by unmeasured variables (risk difference, 13.5%; 95% CI, 3.9%-23.2%). In the instrumented difference-in-differences analysis, the association between the percutaneous microaxial LVAD and mortality was imprecise, and differences in trends in characteristics between hospitals with different percutaneous microaxial LVAD use suggested potential assumption violations. Conclusions: In observational analyses comparing the percutaneous microaxial LVAD to alternative treatments among patients with AMICS, the percutaneous microaxial LVAD was associated with worse outcomes in some analyses, while in other analyses, the association was too imprecise to draw meaningful conclusions. However, the distribution of patient and institutional characteristics between treatment groups or groups defined by institutional differences in treatment use, including changes in use over time, combined with clinical knowledge of illness severity factors not captured in the data, suggested violations of key assumptions that are needed for valid causal inference with different observational analyses. Randomized clinical trials of mechanical support devices will allow valid comparisons across candidate treatment strategies and help resolve ongoing controversies.
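The first of the four approaches above, inverse probability of treatment weighting, can be sketched in a few lines. This is an illustrative toy with hypothetical binary data, not the study's implementation; the stratum-proportion `propensity` function stands in for a fitted propensity score model.

```python
# Toy inverse probability of treatment weighting (IPTW) sketch.
# Hypothetical rows: (x, t, d) = baseline covariate, treatment
# (1 = percutaneous microaxial LVAD), and 30-day death indicator.
patients = [
    (0, 0, 0), (0, 0, 1), (0, 1, 1), (0, 0, 0),
    (1, 1, 1), (1, 0, 1), (1, 1, 0), (1, 1, 1),
]

def propensity(x, data):
    """Empirical P(T=1 | X=x); stands in for a fitted PS model."""
    treated_flags = [t for (xi, t, _) in data if xi == x]
    return sum(treated_flags) / len(treated_flags)

def iptw_risk_difference(data):
    """Weighted 30-day mortality risk difference (treated - untreated)."""
    num_t = den_t = num_c = den_c = 0.0
    for x, t, d in data:
        ps = propensity(x, data)
        w = 1 / ps if t else 1 / (1 - ps)  # use stabilized weights in real analyses
        if t:
            num_t += w * d
            den_t += w
        else:
            num_c += w * d
            den_c += w
    return num_t / den_t - num_c / den_c

print(round(iptw_risk_difference(patients), 3))  # weighted risk difference
```

Weighting each patient by the inverse probability of the treatment actually received creates a pseudo-population in which the measured covariate no longer predicts treatment; the instrumental-variable and difference-in-differences analyses address confounding differently and are not shown.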


Subject(s)
Heart-Assist Devices , Myocardial Infarction , Humans , Male , Aged , United States/epidemiology , Female , Shock, Cardiogenic/etiology , Shock, Cardiogenic/therapy , Shock, Cardiogenic/mortality , Heart-Assist Devices/adverse effects , Cross-Sectional Studies , Medicare , Myocardial Infarction/complications , Myocardial Infarction/therapy , Myocardial Infarction/physiopathology
2.
PLoS One ; 18(2): e0281365, 2023.
Article in English | MEDLINE | ID: mdl-36763574

ABSTRACT

BACKGROUND: As diagnostic tests for COVID-19 were broadly deployed under Emergency Use Authorization, there emerged a need to understand the real-world utilization and performance of serological testing across the United States. METHODS: Six health systems contributed electronic health records and/or claims data, jointly developed a master protocol, and used it to execute the analysis in parallel. We used descriptive statistics to examine demographic, clinical, and geographic characteristics of serology testing among patients with RNA positive for SARS-CoV-2. RESULTS: Across datasets, we observed 930,669 individuals with positive RNA for SARS-CoV-2. Of these, 35,806 (4%) were serotested within 90 days, and 15% of serology tests occurred <14 days after the positive RNA test. The proportion of people with a history of cardiovascular disease, obesity, chronic lung, or kidney disease, or presenting with shortness of breath or pneumonia, appeared higher among those serotested compared with those who were not. Even in a population of people with active infection, race/ethnicity data were largely missing (>30%) in some datasets, limiting our ability to examine differences in serological testing by race. In datasets where race/ethnicity information was available, we observed a greater proportion of White individuals among those serotested; however, the time between RNA and serology tests appeared shorter in Black than in White individuals. Test manufacturer data were available in half of the datasets contributing to the analysis. CONCLUSION: Our results inform the underlying context of serotesting during the first year of the COVID-19 pandemic and the differences observed between claims and EHR data sources, a critical first step toward understanding the real-world accuracy of serological tests. 
Incomplete reporting of race/ethnicity data and a limited ability to link test manufacturer data, lab results, and clinical data challenge the ability to assess the real-world performance of SARS-CoV-2 tests in different contexts and the overall U.S. response to current and future disease pandemics.


Subject(s)
COVID-19 , SARS-CoV-2 , Humans , United States/epidemiology , SARS-CoV-2/genetics , COVID-19/diagnosis , COVID-19/epidemiology , RNA , Pandemics , COVID-19 Testing
3.
Pharmacoepidemiol Drug Saf ; 32(7): 735-751, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36779261

ABSTRACT

PURPOSE: To evaluate the impact of increased federal restrictions on hydrocodone combination product (HCP) utilization, misuse, abuse, and overdose death. METHODS: We assessed utilization, misuse, abuse, and overdose death trends involving hydrocodone versus select opioid analgesics (OAs) and heroin using descriptive and interrupted time-series (ITS) analyses during the nine quarters before and after the October 2014 rescheduling of HCPs from a less restrictive (CIII) to a more restrictive (CII) category. RESULTS: Hydrocodone dispensing declined >30% over the study period, and declines accelerated after rescheduling. ITS analyses showed that immediately postrescheduling, quarterly hydrocodone dispensing decreased by 177M dosage units while codeine, oxycodone, and morphine dispensing increased by 49M, 62M, and 4M dosage units, respectively. Postrescheduling, hydrocodone-involved misuse/abuse poison center (PC) case rates had a statistically significant immediate drop but a deceleration of preperiod declines. There were small level increases in codeine-involved PC misuse/abuse and overdose death rates immediately after HCP's rescheduling, but these were smaller than the level decreases in rates for hydrocodone. Heroin-involved PC case rates and overdose death rates increased across the study period, with exponential increases in PC case rates beginning in 2015. CONCLUSIONS: HCP rescheduling was associated with accelerated declines in hydrocodone dispensing, only partially offset by smaller increases in codeine, oxycodone, and morphine dispensing. The net impact on hydrocodone- and other OA-involved misuse/abuse and fatal overdose was unclear. We did not detect an immediate impact on heroin abuse or overdose death rates; however, the dynamic nature of the crisis and data limitations present challenges to causal inference.
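The interrupted time-series logic can be illustrated by fitting separate linear trends before and after the rescheduling quarter and comparing them. The quarterly counts below are invented for illustration and far smaller than the study's dispensing volumes; a full segmented-regression model with autocorrelation adjustment would be used in practice.

```python
# Illustrative interrupted time-series sketch: fit separate linear trends
# to the quarters before and after an intervention and compare them.

def fit_line(ts, ys):
    """Ordinary least squares for y = a + b*t; returns (a, b)."""
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    b = (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
         / sum((t - mt) ** 2 for t in ts))
    return my - b * mt, b

def its_effects(series, cut):
    """Immediate level change and slope change at quarter `cut`
    (the first post-intervention quarter)."""
    pre = [(t, y) for t, y in enumerate(series) if t < cut]
    post = [(t, y) for t, y in enumerate(series) if t >= cut]
    a0, b0 = fit_line(*zip(*pre))
    a1, b1 = fit_line(*zip(*post))
    level_change = (a1 + b1 * cut) - (a0 + b0 * cut)
    return level_change, b1 - b0

quarterly = [800, 790, 781, 770, 761, 750,   # pre-rescheduling decline
             560, 545, 529, 515, 500, 486]   # drop, then steeper decline
level, slope = its_effects(quarterly, cut=6)
print(round(level), round(slope, 2))
```

A negative level change captures the immediate drop at rescheduling, and a negative slope change captures the acceleration of the preexisting decline, mirroring the two quantities the ITS analyses report.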


Subject(s)
Drug Overdose , Hydrocodone , Humans , Oxycodone/adverse effects , Heroin , Practice Patterns, Physicians' , Analgesics, Opioid , Codeine/adverse effects , Drug Overdose/epidemiology , Drug Overdose/prevention & control , Drug Overdose/drug therapy , Morphine/adverse effects
4.
J Interv Card Electrophysiol ; 66(4): 997-1004, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35334060

ABSTRACT

Post-market evaluation is important to ensure the ongoing safety and effectiveness of cardiovascular implantable electronic device (CIED) leads. The Twenty-First Century Cures Act and subsequent Food and Drug Administration (FDA) Guidance provide an opportunity to leverage real-world data sources for this purpose. The past 4 years have seen the development of EP PASSION: a multi-stakeholder, collaborative effort between the FDA, CIED manufacturers, Heart Rhythm Society, and academics. Using real-world data, EP PASSION enables longitudinal evaluation of the long-term safety of CIED leads, addressing limitations of current approaches to generate evidence that informs regulatory, clinical, and manufacturer decision-making. This state-of-the-art article describes the impetus for and launch of EP PASSION, the lessons learned, its current state, the current analytic approach, and the strengths and limitations of leveraging extant data sources for post-market lead evaluation. We also compare EP PASSION to traditional post-approval studies and describe possible future directions.


Subject(s)
Cardiac Electrophysiology , Defibrillators, Implantable , Humans , Lung , Registries
5.
Cancer Epidemiol Biomarkers Prev ; 31(10): 1890-1895, 2022 10 04.
Article in English | MEDLINE | ID: mdl-35839466

ABSTRACT

BACKGROUND: Evaluations of cancer etiology and of the safety and effectiveness of cancer treatments are predicated on large numbers of patients with sufficient baseline and follow-up data. To assess the feasibility of using the FDA Sentinel System's electronic healthcare data for surveillance of malignancy onset and examination of product safety, this study examined patterns of enrollment surrounding new-onset cancers. METHODS: Using a retrospective cohort of patients based on administrative claims, we identified incident events of 19 cancers among 292.5 million health plan members from January 2000 to February 2020 using International Classification of Diseases (ICD) diagnosis codes. Annual incident cases were stratified by sex, age, medical and drug coverage, and insurer type. Descriptive statistics were calculated for observable time prior to and following diagnosis. RESULTS: We identified 10,697,573 incident cancer events among members with medical coverage. When drug coverage was additionally required, the number of incident cancers was reduced by 41%. Medicare data contributed 61% of cases, with duration trends similar to those of other insurers. Mean duration of follow-up prior to diagnosis ranged from 4.0 to 4.6 years, whereas follow-up post diagnosis ranged from 1.1 to 3.3 years. Approximately one third of patients (36.1%) had at least 2 years of observation both prior to and following diagnosis. CONCLUSIONS: The FDA Sentinel System's electronic healthcare data may be useful for characterizing relatively short-latency cancer risk, examining cancer drug utilization and safety after diagnosis, and conducting surveillance for acute adverse events among patients with cancers. IMPACT: As a national distributed system with electronic health data, the Sentinel System provides an opportunity for rapid pharmacoepidemiologic assessments relevant to oncology.


Subject(s)
Medicare , Neoplasms , Aged , Computer Communication Networks , Delivery of Health Care , Electronics , Humans , Neoplasms/epidemiology , Retrospective Studies , United States/epidemiology
6.
Pharmacoepidemiol Drug Saf ; 29(12): 1540-1549, 2020 12.
Article in English | MEDLINE | ID: mdl-33146896

ABSTRACT

Epidemiology and pharmacoepidemiology frequently employ Real-World Data (RWD) from healthcare teams to inform research. These data sources usually include signs, symptoms, tests, and treatments, but may lack important information such as a patient's diet, adherence, or quality of life. By harnessing digital tools, a new source of evidence, Patient (or Citizen/Person) Generated Health Data (PGHD), is becoming more readily available. This review focuses on the advantages of, and considerations in, using PGHD for pharmacoepidemiological research. New and corroborative types of data can be collected directly from patients using digital devices, both passively and actively. Practical issues such as patient engagement, data linking, validation, and analysis are among the important considerations in the use of PGHD. In our increasingly patient-centric world, PGHD incorporated into more traditional Real-World Data sources offers innovative opportunities to expand our understanding of the complex factors involved in health and the safety and effectiveness of disease treatments. Pharmacoepidemiologists have a unique role in realizing the potential of PGHD by ensuring that robust methodology, governance, and analytical techniques underpin its use to generate meaningful research results.


Subject(s)
Patient Generated Health Data , Pharmacoepidemiology , Humans , Patient Participation , Quality of Life
7.
Epidemiology ; 31(1): 82-89, 2020 01.
Article in English | MEDLINE | ID: mdl-31569120

ABSTRACT

Estimating hazard ratios (HR) presents challenges for propensity score (PS)-based analyses of cohorts with differential depletion of susceptibles. When the treatment effect is not null, cohorts that were balanced at baseline tend to become unbalanced on baseline characteristics over time as "susceptible" individuals drop out of the population at risk differentially across treatment groups due to having outcome events. This imbalance in baseline covariates causes marginal (population-averaged) HRs to diverge from conditional (covariate-adjusted) HRs over time and systematically move toward the null. Methods that condition on a baseline PS yield HR estimates that fall between the marginal and conditional HRs when these diverge. Unconditional methods that match on the PS or weight by a function of the PS can estimate the marginal HR consistently but are prone to misinterpretation when the marginal HR diverges toward the null. Here, we present results from a series of simulations to help analysts gain insight on these issues. We propose a novel approach that uses time-dependent PSs to consistently estimate conditional HRs, regardless of whether susceptibles have been depleted differentially. Simulations show that adjustment for time-dependent PSs can adjust for covariate imbalances over time that are caused by depletion of susceptibles. Updating the PS is unnecessary when outcome incidence is so low that depletion of susceptibles is negligible. But if incidence is high, and covariates and treatment affect risk, then covariate imbalances arise as susceptibles are depleted, and PS-based methods can consistently estimate the conditional HR only if the PS is periodically updated.


Subject(s)
Cohort Studies , Propensity Score , Proportional Hazards Models , Research Design , Humans , Time Factors
8.
Pharmacoepidemiol Drug Saf ; 28(6): 879-886, 2019 06.
Article in English | MEDLINE | ID: mdl-31020732

ABSTRACT

PURPOSE: Bootstrapping can account for uncertainty in propensity score (PS) estimation and matching processes in 1:1 PS-matched cohort studies. While theory suggests that the classical bootstrap can fail to produce proper coverage, the practical impact of this theoretical limitation in settings typical of pharmacoepidemiology is not well studied. METHODS: In a plasmode-based simulation study, we compared performance of the standard parametric approach, which ignores uncertainty in PS estimation and matching, with two bootstrapping methods. The first method accounted only for uncertainty introduced during the matching process (the observation resampling approach). The second method accounted for uncertainty introduced during both the PS estimation and matching processes (the PS reestimation approach). Variance was estimated based on percentile and empirical standard errors, and treatment effect estimation was based on the median and mean of the estimated treatment effects across 1000 bootstrap resamples. Two treatment prevalence scenarios (5% and 29%) across two treatment effect scenarios (hazard ratio of 1.0 and 2.0) were evaluated in 500 simulated cohorts of 10 000 patients each. RESULTS: We observed that 95% confidence intervals from the bootstrapping approaches, but not the standard approach, resulted in inaccurate coverage rates (98%-100% for the observation resampling approach, 99%-100% for the PS reestimation approach, and 95%-96% for the standard approach). Treatment effect estimation based on the bootstrapping approaches resulted in lower bias than the standard approach (less than 1.4% vs 4.1%) at 5% treatment prevalence; however, performance was equivalent at 29% treatment prevalence. CONCLUSION: Use of bootstrapping led to variance overestimation and inconsistent coverage, while coverage remained more consistent with parametric estimation.
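A minimal sketch of the "PS reestimation" bootstrap, assuming a single binary covariate so the propensity score can be re-estimated as a stratum proportion in each resample; the exact-match step, risk-difference effect measure, and tiny hypothetical cohort are simplifications of the study's plasmode setup.

```python
import random

def estimate_ps(data):
    """Re-estimate the PS in this dataset: empirical P(T=1 | X=x)."""
    ps = {}
    for x in {row[0] for row in data}:
        flags = [t for (xi, t, _) in data if xi == x]
        ps[x] = sum(flags) / len(flags)
    return ps

def match_and_estimate(data):
    """1:1 exact match on the re-estimated PS, then the mean
    outcome difference across matched pairs."""
    ps = estimate_ps(data)
    treated = [(ps[x], y) for x, t, y in data if t == 1]
    control = [(ps[x], y) for x, t, y in data if t == 0]
    diffs = []
    for p, y1 in treated:
        for i, (p0, y0) in enumerate(control):
            if p0 == p:              # exact PS match (toy setting)
                diffs.append(y1 - y0)
                control.pop(i)       # match without replacement
                break
    return sum(diffs) / len(diffs)

def bootstrap_ci(data, reps=500, seed=42):
    """Percentile CI; PS estimation and matching repeat per resample."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(reps):
        sample = [rng.choice(data) for _ in data]
        try:
            estimates.append(match_and_estimate(sample))
        except ZeroDivisionError:    # resample yielded no matched pairs
            continue
    estimates.sort()
    n = len(estimates)
    return estimates[int(0.025 * n)], estimates[int(0.975 * n)]

# Hypothetical cohort: (x, t, y) = covariate, treatment, outcome.
cohort = ([(0, 1, 1)] * 5 + [(0, 0, 1)] * 4 + [(0, 0, 0)] * 6 +
          [(1, 1, 1)] * 8 + [(1, 1, 0)] * 4 + [(1, 0, 1)] * 3 +
          [(1, 0, 0)] * 2)
lo_ci, hi_ci = bootstrap_ci(cohort)
print(match_and_estimate(cohort), lo_ci, hi_ci)
```

The "observation resampling" variant would instead reuse the full-cohort PS inside the resampling loop; the contrast between the two is exactly what the simulation study probes.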


Subject(s)
Cohort Studies , Outcome Assessment, Health Care/methods , Research Design , Administration, Oral , Anticoagulants/therapeutic use , Atrial Fibrillation/drug therapy , Computer Simulation , Data Interpretation, Statistical , Humans , Monte Carlo Method , Outcome Assessment, Health Care/statistics & numerical data , Propensity Score , Proportional Hazards Models
9.
Pharmacoepidemiol Drug Saf ; 28(5): 649-656, 2019 05.
Article in English | MEDLINE | ID: mdl-30747473

ABSTRACT

PURPOSE: Develop a flexible analytic tool for the Food and Drug Administration's (FDA's) Sentinel System to assess adherence to safe use recommendations with two capabilities: characterize adherence to patient monitoring recommendations for a drug, and characterize concomitant medication use before, during, and/or after drug therapy. METHODS: We applied the tool in the Sentinel Distributed Database to assess adherence to the labeled recommendation that patients treated with dronedarone undergo electrocardiogram (ECG) testing no less often than every 3 months. Measures of length of treatment, time to first ECG, number of ECGs, and time between ECGs were assessed. We also assessed concomitant use of contraception among female users of mycophenolate per label recommendations (concomitancy from 4 weeks before initiation through 6 weeks after discontinuation of mycophenolate). Unadjusted results were stratified by age, month-year, and sex. RESULTS: We identified 21 457 new episodes of dronedarone use lasting ≥90 days (July 2009 to September 2015); 86% had ≥1 ECG, and 22% met the recommendation of an ECG no less often than every 3 months. We identified 21 942 new episodes of mycophenolate use among females 12 to 55 years of age (January 2006 to September 2015); 16% had ≥1 day of concomitant contraception dispensed, and 12% had concomitant contraception use for ≥50% of the period from 4 weeks before initiation through 6 weeks after mycophenolate discontinuation; younger females had more concomitancy. These results may be underestimates because the analyses are limited to claims data. CONCLUSIONS: We developed a tool for use in databases formatted to the Sentinel Common Data Model that can assess adherence to safe use recommendations involving patient monitoring and concomitant drug use over time.
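The monitoring check at the core of such a tool reduces to an interval test over a treatment episode. Below is a minimal sketch, assuming 91 days as a stand-in for "every 3 months" and treating the episode start and end as bounding checkpoints; both are simplifying assumptions, not the tool's actual specification.

```python
def meets_ecg_recommendation(start, end, ecg_days, max_gap=91):
    """True if no gap between successive checkpoints exceeds max_gap
    days, with the episode start and end as bounding checkpoints."""
    in_episode = sorted(d for d in ecg_days if start <= d <= end)
    checkpoints = [start] + in_episode + [end]
    return all(b - a <= max_gap
               for a, b in zip(checkpoints, checkpoints[1:]))

# Episode of 180 days: ECGs at days 80 and 160 satisfy the rule;
# a single ECG at day 80 leaves a 100-day unmonitored tail.
print(meets_ecg_recommendation(0, 180, [80, 160]))
print(meets_ecg_recommendation(0, 180, [80]))
```

The concomitancy capability is analogous: overlap the days supplied of the concomitant drug with a window anchored on the index drug's episode and report the covered proportion.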


Subject(s)
Adverse Drug Reaction Reporting Systems/organization & administration , Anti-Arrhythmia Agents/administration & dosage , Dronedarone/administration & dosage , Drug Monitoring/methods , Mycophenolic Acid/administration & dosage , Anti-Arrhythmia Agents/adverse effects , Contraception/statistics & numerical data , Databases, Factual , Dronedarone/adverse effects , Drug Interactions , Electrocardiography , Humans , Medication Adherence , Mycophenolic Acid/adverse effects , United States , United States Food and Drug Administration
10.
Lancet Child Adolesc Health ; 3(1): 15-22, 2019 01.
Article in English | MEDLINE | ID: mdl-30455109

ABSTRACT

BACKGROUND: Serious and fatal deferasirox-induced kidney injury has been reported in paediatric patients. This study aimed to investigate the effects of deferasirox dose and serum ferritin concentrations on kidney function and the effect of impaired kidney function on dose-normalised deferasirox minimum plasma concentration (Cmin). METHODS: We did a case-control analysis using pooled data from ten clinical studies. We identified transfusion-dependent patients with thalassaemia, aged 2-15 years, who were receiving deferasirox and had available baseline and follow-up serum creatinine and ferritin measurements. Cases of acute kidney injury (AKI) were defined according to an estimated glomerular filtration rate (eGFR) threshold of 90 mL/min per 1·73 m2 or less (if baseline eGFR was ≥100 mL/min per 1·73 m2), an eGFR of 60 mL/min per 1·73 m2 or less (if baseline eGFR was <100 mL/min per 1·73 m2), or an eGFR decrease from baseline of at least 25%. Cases were matched to control visits (eGFR ≥120 mL/min per 1·73 m2) on age, sex, study site, and time since drug initiation. We calculated rate ratios for AKI using conditional logistic regression, and evaluated the effect of eGFR changes on Cmin. FINDINGS: Among 1213 deferasirox-treated paediatric patients, 162 cases of AKI and 621 matched control visits were identified. Patients with AKI had a mean 50·2% (SD 15·5) decrease in eGFR from baseline, compared with a 6·9% (29·8) decrease in controls. A significantly increased risk for AKI (rate ratio 1·26, 95% CI 1·08-1·48, p=0·00418) was observed per 5 mg/kg per day increase in deferasirox dispersible tablet dose (equivalent to a 3·5 mg/kg per day dose of film-coated tablets or granules), above the typical starting dose (20 mg/kg per day). An increased risk (1·25, 1·01-1·56, p=0·0400) for AKI was also observed per 250 µg/L decrease in serum ferritin, starting from 1250 µg/L. 
High-dose deferasirox (dispersible tablet dose >30 mg/kg per day) resulted in an increased risk (4·47, 1·25-15·95, p=0·0209) for AKI when serum ferritin was less than 1000 µg/L. Decreases in eGFR were associated with increased Cmin. INTERPRETATION: Deferasirox can cause AKI in a dose-dependent manner. The increased AKI risk with high-dose deferasirox and lower serum ferritin concentration is consistent with overchelation as a causative factor. Small decreases in eGFR correlate with increased deferasirox Cmin, especially in younger patients. Physicians should closely monitor renal function and serum ferritin, use the lowest effective dose to maintain acceptable body iron burden, and interrupt deferasirox treatment when AKI or volume depletion are suspected. FUNDING: None.
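The study's AKI case definition is a pure threshold rule on baseline and follow-up eGFR and translates directly into code. The function below restates the three criteria from the abstract (units: mL/min per 1·73 m²); the example values are hypothetical.

```python
def is_aki_case(baseline_egfr, followup_egfr):
    """AKI per the study definition: follow-up eGFR <= 90 if baseline
    >= 100, follow-up <= 60 if baseline < 100, or a >= 25% decrease."""
    if baseline_egfr >= 100 and followup_egfr <= 90:
        return True
    if baseline_egfr < 100 and followup_egfr <= 60:
        return True
    return followup_egfr <= 0.75 * baseline_egfr  # >=25% drop from baseline

print(is_aki_case(110, 90), is_aki_case(95, 65), is_aki_case(120, 95))
# -> True True False
```

Note that the second example qualifies through the percentage-decrease criterion (65 is a 31.6% drop from 95) even though it also meets the absolute threshold, while the third falls short of every criterion.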


Subject(s)
Acute Kidney Injury/blood , Deferasirox/therapeutic use , Ferritins/blood , Iron Chelating Agents/therapeutic use , Acute Kidney Injury/diagnosis , Acute Kidney Injury/etiology , Adolescent , Case-Control Studies , Child , Child, Preschool , Dose-Response Relationship, Drug , Female , Glomerular Filtration Rate , Humans , Male
11.
Epidemiology ; 29(6): 895-903, 2018 11.
Article in English | MEDLINE | ID: mdl-30074538

ABSTRACT

The tree-based scan statistic is a statistical data mining tool that has been used for signal detection with a self-controlled design in vaccine safety studies. This disproportionality statistic adjusts for multiple testing in evaluation of thousands of potential adverse events. However, many drug safety questions are not well suited for self-controlled analysis. We propose a method that combines tree-based scan statistics with propensity score-matched analysis of new initiator cohorts, a robust design for investigations of drug safety. We conducted plasmode simulations to evaluate performance. In multiple realistic scenarios, tree-based scan statistics in cohorts that were propensity score matched to adjust for confounding outperformed tree-based scan statistics in unmatched cohorts. In scenarios where confounding moved point estimates away from the null, adjusted analyses recovered the prespecified type 1 error while unadjusted analyses inflated type 1 error. In scenarios where confounding moved point estimates toward the null, adjusted analyses preserved power, whereas unadjusted analyses greatly reduced power. Although complete adjustment of true confounders had the best performance, matching on a moderately mis-specified propensity score substantially improved type 1 error and power compared with no adjustment. When there was true elevation in risk of an adverse event, there were often co-occurring signals for clinically related concepts. TreeScan with propensity score matching shows promise as a method for screening and prioritization of potential adverse events. It should be followed by clinical review and safety studies specifically designed to quantify the magnitude of effect, with confounding control targeted to the outcome of interest.


Subject(s)
Data Mining/methods , Drug-Related Side Effects and Adverse Reactions/epidemiology , Confounding Factors, Epidemiologic , Humans , Propensity Score , Software , Statistics as Topic
12.
J Crit Care ; 47: 192-197, 2018 10.
Article in English | MEDLINE | ID: mdl-30015289

ABSTRACT

PURPOSE: To estimate the incidence of Acute Respiratory Distress Syndrome (ARDS) and ARDS-related mortality rates. METHODS: We identified patients with a risk factor for ARDS in the National Inpatient Sample (NIS) (2006-2014). Using survey-weighted descriptive statistics, we estimated annual and overall proportions of ARDS cases. RESULTS: From over 69 million discharges, 1,151,969 ARDS discharges and 969,567 ARDS discharges with a risk factor were identified. Sepsis (46.8%), pneumonia (44.9%), and shock (44.4%) were the most common ARDS risk factors. Pancreatitis (3.4%), pulmonary contusion (1.4%), and drowning (0.2%) were the least frequently reported. Incidence rates increased from 180.7 (2006) to 220.8 (2011) and again from 182.8 (2012) to 193.4 (2014). Incidence of pneumonia-, shock-, and sepsis-associated ARDS increased steadily, while transfusion- and trauma-associated ARDS declined. Trends for gastric aspiration- and pancreatitis-related ARDS remained unchanged. Shock-, sepsis-, and transfusion-associated ARDS had higher mortality rates than the other risk factors. Except for transfusion- and trauma-associated ARDS, mortality rates for the other risk factors declined. CONCLUSION: Although increasing ARDS incidence was observed, mortality rates declined for most risk factors. Mortality for transfusion- and trauma-associated ARDS increased in the later study period; research is needed to examine reasons for the increase in in-hospital deaths associated with these risk factors.


Subject(s)
Respiratory Distress Syndrome/epidemiology , Adolescent , Adult , Age Distribution , Aged , Child , Comorbidity , Female , Humans , Incidence , Male , Middle Aged , Prospective Studies , Risk Factors , United States/epidemiology
13.
Am J Epidemiol ; 187(11): 2439-2448, 2018 11 01.
Article in English | MEDLINE | ID: mdl-29947726

ABSTRACT

Use of disease risk score (DRS)-based confounding adjustment when estimating treatment effects on multiple outcomes is not well studied. We designed an empirical cohort study to compare dabigatran initiators and warfarin initiators with respect to risks of ischemic stroke and major bleeding in 12 sequential monitoring periods (90 days each), using data from the Truven Marketscan database (Truven Health Analytics, Ann Arbor, Michigan). We implemented 2 approaches to combine DRS for multiple outcomes: 1) 1:1 matching on prognostic propensity scores (PPS), created using DRS for bleeding and stroke as independent variables in a propensity score (PS) model; and 2) simultaneous 1:1 matching on DRS for bleeding and stroke using Mahalanobis distance (M-distance), and compared their performance with that of traditional PS matching. M-distance matching appeared to produce more stable results in the early marketing period than both PPS and traditional PS matching; hazard ratios from unadjusted analysis, traditional PS matching, PPS matching, and M-distance matching after 4 periods were 0.72 (95% confidence interval (CI): 0.51, 1.03), 0.61 (95% CI: 0.31, 1.09), 0.55 (95% CI: 0.33, 0.91), and 0.78 (95% CI: 0.45, 1.34), respectively, for stroke and 0.65 (95% CI: 0.53, 0.80), 0.78 (95% CI: 0.60, 1.01), 0.75 (95% CI: 0.59, 0.96), and 0.78 (95% CI: 0.64, 0.95), respectively, for bleeding. In later periods, estimates were similar for traditional PS matching and M-distance matching but suggested potential residual confounding with PPS matching. These results suggest that M-distance matching may be a valid approach for extension of DRS-based confounding adjustments for multiple outcomes of interest.
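The study's second approach, simultaneous matching on two disease risk scores via Mahalanobis distance, can be sketched as nearest-neighbor matching with a covariance-weighted distance. The DRS values below are hypothetical, and greedy (rather than optimal) matching without a caliper is a simplifying assumption.

```python
def mahalanobis_matcher(treated, control):
    """Greedy 1:1 nearest-neighbor matching on Mahalanobis distance
    over two disease risk scores (here: bleeding, stroke)."""
    pts = treated + control
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    sxx = sum((p[0] - mx) ** 2 for p in pts) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in pts) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / (n - 1)
    det = sxx * syy - sxy * sxy        # invert the 2x2 covariance matrix
    a, b, c = syy / det, sxx / det, -sxy / det

    def dist2(u, v):
        dx, dy = u[0] - v[0], u[1] - v[1]
        return a * dx * dx + b * dy * dy + 2 * c * dx * dy

    pairs, pool = [], list(control)
    for t in treated:                  # each treated patient takes the
        best = min(pool, key=lambda cand: dist2(t, cand))
        pairs.append((t, best))        # closest remaining control
        pool.remove(best)
    return pairs

treated = [(0.10, 0.05), (0.30, 0.20)]            # (DRS_bleed, DRS_stroke)
control = [(0.12, 0.06), (0.28, 0.18), (0.50, 0.40)]
for t, c in mahalanobis_matcher(treated, control):
    print(t, "->", c)
```

Scaling by the inverse covariance lets the two scores contribute on comparable footing despite different variances and their mutual correlation, which is what distinguishes this from matching on each DRS separately or collapsing them into a single prognostic propensity score.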


Subject(s)
Confounding Factors, Epidemiologic , Epidemiologic Research Design , Risk Assessment/methods , Anticoagulants/administration & dosage , Computer Simulation , Dabigatran/administration & dosage , Data Interpretation, Statistical , Hemorrhage/chemically induced , Humans , Propensity Score , Stroke/prevention & control , Warfarin/administration & dosage
14.
J Manag Care Spec Pharm ; 24(7): 700-709, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29952703

ABSTRACT

BACKGROUND: The FDA issued 2 main drug safety communications (DSCs) on the cardiovascular safety of tiotropium in March 2008 (warning of a potential increased stroke risk) and January 2010 (informing of an absence of a significant increased stroke risk or cardiovascular events based on findings from a large trial). OBJECTIVE: To describe the effect of the FDA DSCs on medication dispensing of tiotropium in a large U.S. claims database. METHODS: Initiation of tiotropium products among patients with chronic obstructive pulmonary disease (COPD) aged 40 years and older was determined monthly from 2006-2012 using medication dispensing from the IMS Lifelink Health Plan Claims Database. Similarly, monthly initiation of products containing long-acting beta-agonists (LABAs) was calculated to explore product switching. The effect of the 2008 and 2010 FDA DSCs was measured using interrupted time-series analysis. Subgroups of patients with greater cardiovascular risk were also examined. RESULTS: A decreasing trend in initiation of tiotropium-containing products was present before the initial 2008 DSC. The decline in tiotropium initiation continued until January 2010, accompanied by an increased initiation of LABA-containing products in patients with COPD. In the presence of the existing decreasing trend, the initial DSC was followed by an immediate 2.8% (P = 0.02) further reduction in tiotropium initiation. Tiotropium initiation increased 2.5% (P = 0.03) immediately after the 2010 DSC, reducing the overall decline in rate and stabilizing (flattening) the trend. No significant changes in dispensing level or trend were observed among COPD patients with cardiovascular comorbidity. CONCLUSIONS: Cardiovascular safety concerns may have affected tiotropium initiation as indicated by the decrease in tiotropium dispensing shown immediately following the initial DSC. The effect was alleviated as concerns lessened following the most recent DSC. 
DISCLOSURES: This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. The authors are employed by the FDA and have no conflict of interest relevant to the content of this study. The views expressed herein do not necessarily represent the views of the FDA.


Subject(s)
Bronchodilator Agents/adverse effects , Pulmonary Disease, Chronic Obstructive/drug therapy , Stroke/prevention & control , Tiotropium Bromide/adverse effects , United States Food and Drug Administration/organization & administration , Administration, Inhalation , Administrative Claims, Healthcare/statistics & numerical data , Adrenergic beta-2 Receptor Agonists/therapeutic use , Adult , Aged , Drug Prescriptions/statistics & numerical data , Drug Substitution/statistics & numerical data , Drug Substitution/trends , Drug Therapy, Combination/methods , Drug Therapy, Combination/statistics & numerical data , Female , Health Communication , Humans , Interrupted Time Series Analysis , Male , Middle Aged , Stroke/chemically induced , United States
15.
Parkinsonism Relat Disord ; 53: 46-52, 2018 08.
Article in English | MEDLINE | ID: mdl-29759929

ABSTRACT

BACKGROUND: An increased incidence of prostate cancer was observed in Parkinson's disease (PD) patients treated with entacapone during a pre-approval randomized clinical trial; the association has not been robustly investigated in the U.S. ambulatory setting. OBJECTIVE: To investigate whether entacapone is associated with prostate cancer and to assess whether any association is correlated with advanced disease at the time of cancer diagnosis. METHODS: Using data from the Department of Veterans Affairs healthcare system, new-user cohorts were created of PD patients treated with add-on entacapone or add-on dopamine agonists/monoamine oxidase B inhibitors between January 2000 and December 2014. Patients were followed on-treatment for occurrence of prostate cancer, identified via linkage to the VA cancer registry. RESULTS: Mean follow-up time was 3.1 and 4.0 years in the entacapone and control cohorts, respectively. There were 17,666 subjects meeting study criteria (mean age 74 [SD 8.6] years); the entacapone-treated group comprised 5,257 subjects. Twenty-three prostate cancer cases occurred in the entacapone cohort and ninety-seven in the control cohort. The overall incidence of prostate cancer was 1.8 per 1,000 person-years at risk. There was no difference in prostate cancer risk between the cohorts with increased duration of entacapone intake (adjusted HR: 1.08; 95% confidence interval: 0.46-2.51 for cumulative exposure of ≥2 years). Time since starting drug therapy and cumulative dose (mg) also did not suggest a difference in prostate cancer risk between cohorts. CONCLUSIONS: Prolonged therapy with entacapone was not associated with increased prostate cancer incidence; however, findings suggest a higher severity of prostate cancer at diagnosis.


Subject(s)
Antiparkinson Agents/adverse effects , Catechols/adverse effects , Nitriles/adverse effects , Parkinson Disease/drug therapy , Prostatic Neoplasms/chemically induced , Registries , Veterans , Adult , Aged , Aged, 80 and over , Databases, Factual , Follow-Up Studies , Humans , Incidence , Male , Middle Aged , Parkinson Disease/epidemiology , Prostatic Neoplasms/epidemiology , Registries/statistics & numerical data , Retrospective Studies , Risk , Severity of Illness Index , United States , United States Department of Veterans Affairs/statistics & numerical data , Veterans/statistics & numerical data
16.
Am J Epidemiol ; 187(8): 1799-1807, 2018 08 01.
Article in English | MEDLINE | ID: mdl-29554199

ABSTRACT

Postapproval drug safety studies often use propensity scores (PSs) to adjust for a large number of baseline confounders. These studies may involve examining whether treatment safety varies across subgroups. There are many ways a PS could be used to adjust for confounding in subgroup analyses. These methods have trade-offs that are not well understood. We conducted a plasmode simulation to compare relative performance of 5 methods involving PS matching for subgroup analysis, including methods frequently used in applied literature whose performance has not been previously directly compared. These methods varied as to whether the overall PS, subgroup-specific PS, or no rematching was used in subgroup analysis as well as whether subgroups were fully nested within the main analytical cohort. The evaluated PS subgroup matching methods performed similarly in terms of balance, bias, and precision in 12 simulated scenarios varying size of the cohort, prevalence of exposure and outcome, strength of relationships between baseline covariates and exposure, the true effect within subgroups, and the degree of confounding within subgroups. Each had strengths and limitations with respect to other performance metrics that could inform choice of method.
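The central contrast the simulation examines — an overall PS that is mis-specified within a subgroup versus a subgroup-specific PS refit inside it — can be illustrated with a minimal sketch. All variables and effect sizes below are hypothetical, and the PS is estimated by simple stratification on a binary confounder rather than the regression models a real analysis would use:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
subgroup = rng.binomial(1, 0.3, n)   # hypothetical subgroup (e.g. an age stratum)
x = rng.binomial(1, 0.5, n)          # binary baseline confounder
# Treatment depends on the confounder more strongly inside the subgroup,
# so a PS fit once on the whole cohort is mis-specified within it.
p_treat = 0.2 + 0.3 * x + 0.2 * x * subgroup
treat = rng.binomial(1, p_treat)

def empirical_ps(x, treat):
    """PS as the empirical treatment rate within each stratum of x."""
    ps = np.empty(len(x), dtype=float)
    for v in np.unique(x):
        mask = x == v
        ps[mask] = treat[mask].mean()
    return ps

ps_overall = empirical_ps(x, treat)          # fit once on everyone
in_sub = subgroup == 1
ps_sub = empirical_ps(x[in_sub], treat[in_sub])  # refit inside the subgroup

# For confounder-positive subgroup members, the overall PS averages over
# both subgroups and understates the true treatment probability (~0.7):
print("overall PS:", round(ps_overall[in_sub][x[in_sub] == 1].mean(), 3),
      "| subgroup-specific PS:", round(ps_sub[x[in_sub] == 1].mean(), 3))
```

Matching subgroup members on the overall score would pair patients whose true treatment probabilities differ, which is the source of the residual imbalance the methods papers quantify.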


Subject(s)
Product Surveillance, Postmarketing/methods , Propensity Score , Research Design , Adrenergic Antagonists/adverse effects , Aged , Aged, 80 and over , Angioedema/chemically induced , Angiotensin-Converting Enzyme Inhibitors/adverse effects , Computer Simulation , Female , Humans , Male , Middle Aged , Models, Statistical , United States , United States Food and Drug Administration
17.
Pharmacoepidemiol Drug Saf ; 27(3): 299-306, 2018 03.
Article in English | MEDLINE | ID: mdl-29349833

ABSTRACT

PURPOSE: The purpose of the study was to evaluate registries' contributions to postmarket safety assessments and to identify potential factors for enhancing implementation and utilization of registries in regulatory decision-making. METHODS: Registry documents (e.g., protocols, reports) submitted to the FDA were identified up to January 2016 through an extensive, systematic review of internal records and resources. We characterized nonpregnancy drug exposure registries based on prespecified design elements, performance, and regulatory impact. RESULTS: A total of 65 registries were identified: 56 open and 9 closed. Among open registries, 20% were pending, 14% delayed, and 16% ongoing for ≤3 years. Most registries (82%) examined safety issues that originally arose from clinical trials; the most frequently investigated safety issues included infections, gastrointestinal dysfunction, and liver toxicity. Although 74% of registries ascertained baseline health conditions and monitored concomitant medication use, fewer (45%) considered drug exposure duration or dosage. Thirty-seven percent of non-pending registries had enrollment below sample size expectations. Seventeen registries published findings in journals or conference proceedings, 13 of them open registries. Three closed registries generated results that contributed to product label changes. High-performing registries scored higher on design metrics related to sample size considerations (76% versus 62%), adequate analysis plans (53% versus 35%), and interim report submission (76% versus 65%). There was a significant difference in the proportion of registries with clear primary objectives between high-performing and non-high-performing registries (100% versus 78%). CONCLUSIONS: This study suggests that clear objectives, patient accrual/retention efforts, adequate analysis plans, and interim reports contribute to the performance of drug exposure registries.


Subject(s)
Documentation/standards , Drug Approval , Product Surveillance, Postmarketing/standards , Registries/standards , United States Food and Drug Administration/standards , Guidelines as Topic , Sample Size , United States , United States Food and Drug Administration/legislation & jurisprudence
18.
Clin Toxicol (Phila) ; 56(7): 656-663, 2018 07.
Article in English | MEDLINE | ID: mdl-29260900

ABSTRACT

CONTEXT: Recent restrictions in access to and availability of dextromethorphan (DXM) cough and cold medications may correlate with changes in abuse exposures. OBJECTIVE: To extend and update existing knowledge about DXM abuse, we describe recent trends and patterns in calls to poison control centers involving DXM abuse, by demographics, geography, common brands, and medical outcomes. METHODS: We utilized data from the National Poison Data System (NPDS), maintained by the American Association of Poison Control Centers (AAPCC), which captures data on calls to U.S. poison centers on a near real-time basis. We analyzed demographic, geographic, brand, and medical outcome data for single-substance DXM cough and cold product intentional abuse exposure calls in multiple age groups reported to NPDS from 2000 to 2015. RESULTS: The annual rate of single-substance DXM intentional abuse calls tripled from 2000 to 2006 and subsequently plateaued from 2006 to 2015. The highest abuse call rate was observed among adolescents 14-17 years old, with a mean of 1,761 calls per year, corresponding to an annual rate of 103.6 calls per million population. From 2006 to 2015, the rate of single-substance DXM abuse calls among adolescents 14-17 years decreased by 56.3%, from 143.8 to 80.9 calls per million population. CONCLUSION: DXM intentional abuse exposure call rates have declined among adolescents 14-17 years since their peak in 2006. The observed decline in DXM abuse call rates corresponds to a period of growing public health efforts to curtail abuse of over-the-counter (OTC) DXM-containing products, particularly among adolescents. Further evaluation of state-level sales and abuse trends among adolescents would be valuable to better understand how restricted availability of OTC DXM cough and cold products and other efforts may affect abuse rates.


Subject(s)
Antitussive Agents , Data Systems , Dextromethorphan , Substance-Related Disorders/epidemiology , Adolescent , Adult , Child , Child, Preschool , Female , Humans , Male , Poison Control Centers , Time Factors , United States/epidemiology , Young Adult
19.
Am J Epidemiol ; 187(4): 786-792, 2018 04 01.
Article in English | MEDLINE | ID: mdl-29036565

ABSTRACT

In a retrospective cohort study of patients enrolled in the UK Clinical Practice Research Datalink during 2000-2013, we evaluated long-term risks of death, stroke, and acute myocardial infarction (AMI) in adults prescribed clarithromycin. Patients were outpatients aged 40-85 years, who were prescribed clarithromycin (n = 287,748), doxycycline (n = 267,729), or erythromycin (n = 442,999), or Helicobacter pylori eradication therapy with a proton pump inhibitor, amoxicillin, and either clarithromycin (n = 27,639) or metronidazole (n = 14,863). We analyzed time to death, stroke, or AMI with Cox proportional hazards regression. The long-term hazard ratio for death following 1 clarithromycin versus 1 doxycycline prescription was 1.29 (95% confidence interval (CI): 1.21, 1.25), increasing to 1.62 (95% CI: 1.43, 1.84) for ≥5 prescriptions of clarithromycin versus ≥5 prescriptions for doxycycline. Erythromycin showed smaller risks in comparison with doxycycline. Stroke and AMI incidences were also increased after clarithromycin but with smaller hazard ratios than for mortality. For H. pylori eradication, the hazard ratio for mortality following clarithromycin versus metronidazole regimens was 1.09 (95% CI: 1.00, 1.18) overall, and it was higher (hazard ratio = 1.65, 95% CI: 0.88, 3.08) following ≥2 prescriptions in subjects not on statins at baseline. Outpatient clarithromycin use was associated with long-term mortality increases, with evidence for a similar, smaller increase with erythromycin.
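A Cox proportional hazards fit of the kind this study uses can be sketched by maximizing the Breslow partial likelihood directly. The data below are simulated with a built-in hazard ratio of 1.3 and are purely illustrative (a single binary exposure and no censoring, unlike the real analysis):

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
n = 4000
x = rng.binomial(1, 0.5, n)   # 1 = exposed, 0 = comparator (hypothetical)
true_hr = 1.3
# Exponential event times; the exposed hazard is true_hr times baseline.
time = rng.exponential(1.0 / (0.1 * true_hr ** x))
x_sorted = x[np.argsort(time)]

def neg_log_partial_likelihood(beta):
    """Breslow partial likelihood (every subject has an event)."""
    eta = beta * x_sorted
    # Risk set at the i-th event time = subjects i..n-1 once sorted by
    # time, so a reverse cumulative sum gives the denominators.
    denom = np.cumsum(np.exp(eta)[::-1])[::-1]
    return -(eta - np.log(denom)).sum()

res = minimize_scalar(neg_log_partial_likelihood,
                      bounds=(-2.0, 2.0), method="bounded")
hr_hat = np.exp(res.x)
print(f"estimated hazard ratio: {hr_hat:.2f}")
```

The estimate recovers the simulated hazard ratio; a real analysis like the one above would add censoring indicators, additional covariates, and tied-event handling, which standard survival libraries provide.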


Subject(s)
Anti-Bacterial Agents/adverse effects , Clarithromycin/adverse effects , Mortality/trends , Myocardial Infarction/mortality , Stroke/mortality , Adult , Aged , Aged, 80 and over , Anti-Bacterial Agents/therapeutic use , Clarithromycin/therapeutic use , Doxycycline/adverse effects , Drug Therapy, Combination , Erythromycin/adverse effects , Female , Helicobacter Infections/drug therapy , Humans , Male , Middle Aged , Proportional Hazards Models , Proton Pump Inhibitors/therapeutic use , Retrospective Studies , Time Factors , United Kingdom
20.
Pharmacoepidemiol Drug Saf ; 26(12): 1507-1512, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28984001

ABSTRACT

PURPOSE: When evaluating safety signals, there is often interest in understanding safety in all patients for whom compared treatments are reasonable alternatives, as well as in specific subgroups of interest. There are numerous ways that propensity score (PS) matching can be implemented for subgroup analyses. METHODS: We conducted a systematic literature review of methods papers that compared the performance of alternative methods to implement PS-matched subgroup analyses and examined how frequently different PS matching methods have been used for subgroup analyses in applied studies. RESULTS: We identified 5 methods papers reporting small improvements in covariate balance and bias with use of a subgroup-specific PS instead of a mis-specified overall PS within subgroups. Applied research papers frequently used PS for subgroups in ways not evaluated in methods papers. Thirty-three percent used PS to match in the overall cohort and broke the matched sets for subgroup analysis without further adjustment. CONCLUSIONS: While the performance of several alternative ways to use PS matching in subgroup analyses has been evaluated in the methods literature, these evaluations do not include the methods most commonly used to implement PS-matched subgroup analyses in applied studies. There is a need to better understand the relative performance of commonly used methods for PS matching in subgroup analyses, particularly within settings encountered during active surveillance, where there may be low exposure prevalence, infrequent outcomes, and multiple subgroups of interest.


Subject(s)
Peer Review , Propensity Score , Research Design/standards , Research/standards , Humans , Monte Carlo Method