Results 1 - 20 of 30
1.
Article in English | MEDLINE | ID: mdl-35329019

ABSTRACT

The COVID-19 pandemic that began at the end of 2019 has caused hundreds of millions of infections and millions of deaths worldwide, threatening human health and profoundly affecting the global economy and people's lifestyles. The United States is among the countries most severely affected by the disease. Evidence shows that the spread of COVID-19 was significantly underestimated in the early stages, which prevented governments from promptly adopting effective interventions to curb the spread of the disease. This paper adopts a Bayesian hierarchical model to study the under-reporting of COVID-19 at the state level in the United States as of the end of April 2020. The model examines the effects of different covariates on the under-reporting rate and the true incidence rate, and accounts for spatial dependence. In addition to under-reporting (false negatives), we also explore the impact of over-reporting (false positives). Adjusting for misclassification requires additional parameters that are not directly identified by the observed data, so informative priors are required. We discuss prior elicitation and include R functions that convert expert information into an appropriate prior distribution.
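The paper's own prior-elicitation R functions are not reproduced here; as a rough Python sketch of the general idea, the snippet below converts a hypothetical expert statement about a reporting probability (a best guess of 0.5 and roughly 5% prior probability of values below 0.3) into a Beta prior by matching the prior mean and a lower percentile. All numbers are invented for illustration.

```python
from scipy.optimize import brentq
from scipy.stats import beta

def beta_prior_from_expert(mean, lower, prob_below=0.05):
    """Return (a, b) for a Beta prior with the given mean whose prob_below-quantile equals `lower`."""
    def gap(concentration):
        a, b = mean * concentration, (1 - mean) * concentration
        return beta.cdf(lower, a, b) - prob_below
    k = brentq(gap, 0.1, 1e4)            # search over the concentration a + b
    return mean * k, (1 - mean) * k

# Hypothetical elicitation: best guess 0.5 for the reporting probability,
# with values below 0.3 judged unlikely (about 5% prior probability).
a, b = beta_prior_from_expert(mean=0.5, lower=0.3)
print(round(a, 2), round(b, 2))
```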


Subject(s)
COVID-19 , Bayes Theorem , COVID-19/epidemiology , Humans , Pandemics/prevention & control , United States/epidemiology
2.
Pharm Stat ; 20(2): 245-255, 2021 03.
Article in English | MEDLINE | ID: mdl-33025743

ABSTRACT

The use of Bayesian methods to support pharmaceutical product development has grown in recent years. In clinical statistics, the drive to provide faster access for patients to medical treatments has led to a heightened focus by industry and regulatory authorities on innovative clinical trial designs, including those that apply Bayesian methods. In nonclinical statistics, Bayesian applications have also made advances. However, they have been embraced far more slowly in the nonclinical area than in the clinical counterpart. In this article, we explore some of the reasons for this slower rate of adoption. We also present the results of a survey conducted for the purpose of understanding the current state of Bayesian application in nonclinical areas and for identifying areas of priority for the DIA/ASA-BIOP Nonclinical Bayesian Working Group. The survey explored current usage, hurdles, perceptions, and training needs for Bayesian methods among nonclinical statisticians. Based on the survey results, a set of recommendations is provided to help guide the future advancement of Bayesian applications in nonclinical pharmaceutical statistics.


Subject(s)
Pharmaceutical Preparations , Research Personnel , Bayes Theorem , Drug Evaluation, Preclinical , Forecasting , Humans
3.
Pharmacoepidemiol Drug Saf ; 29(10): 1219-1227, 2020 10.
Article in English | MEDLINE | ID: mdl-32929830

ABSTRACT

PURPOSE: We review statistical methods for assessing the possible impact of bias due to unmeasured confounding in real-world data analysis and provide detailed recommendations for choosing among the methods. METHODS: By updating an earlier systematic review, we summarize modern statistical best practices for evaluating and correcting for potential bias due to unmeasured confounding in estimating causal treatment effects from non-interventional studies. RESULTS: We suggest a hierarchical structure for assessing unmeasured confounding. First, for initial sensitivity analyses, we strongly recommend applying a recently developed method, the E-value, which is straightforward to apply and does not require prior knowledge or assumptions about the unmeasured confounder(s). When some such knowledge is available, the E-value could be supplemented by the rule-out or array method at this step. If these initial analyses suggest results may not be robust to unmeasured confounding, subsequent analyses could be conducted using more specialized statistical methods, which we categorize based on whether they require access to external data on the suspected unmeasured confounder(s), internal data, or no data. Other factors for choosing the subsequent sensitivity analysis methods are also introduced and discussed, including the types of unmeasured confounders and whether the subsequent sensitivity analysis is intended to provide a corrected causal treatment effect. CONCLUSION: Various analytical methods have been proposed to address unmeasured confounding, but little research has discussed a structured approach to selecting appropriate methods in practice. In providing practical suggestions for choosing appropriate initial and, potentially, more specialized subsequent sensitivity analyses, we hope to facilitate the widespread reporting of such sensitivity analyses in non-interventional studies. The suggested approach also has the potential to inform pre-specification of sensitivity analyses before executing the analysis, thereby increasing transparency and limiting selective study reporting.
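For reference, the E-value mentioned above has a simple closed form on the risk-ratio scale (VanderWeele and Ding's formula); the short sketch below computes it for an arbitrary example estimate.

```python
import math

def e_value(rr):
    """E-value for a point estimate on the risk-ratio scale (VanderWeele & Ding)."""
    rr = 1 / rr if rr < 1 else rr          # protective estimates are inverted first
    return rr + math.sqrt(rr * (rr - 1))

# Example: an observed risk ratio of 1.8 would require an unmeasured confounder
# associated with both treatment and outcome by a risk ratio of about 3.0 to explain it away.
print(round(e_value(1.8), 2))
```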


Subject(s)
Confounding Factors, Epidemiologic , Data Interpretation, Statistical , Research Design , Bias , Causality , Humans
4.
Comput Psychiatr ; 2: 1-10, 2018 Feb.
Article in English | MEDLINE | ID: mdl-30090859

ABSTRACT

Schizophrenia is a debilitating, serious mental illness characterized by a complex array of symptoms of varying severity and duration. Patients may seek treatment only intermittently, which contributes to the challenge of diagnosing the disorder, and misdiagnosis may bias results and reduce study validity. We therefore developed a statistical model to assess the risk of 1-year hospitalization for patients diagnosed with schizophrenia, accounting for underreporting of schizophrenia in administrative databases. A retrospective study design identified patients seeking care during 2010 within an integrated health care system from the Health Maintenance Organization Research Network located in the southwestern United States. A Bayesian analysis addressed the problem of underdiagnosed schizophrenia with a statistical measurement error model assuming varying rates of underreporting; results were then compared with classical multivariable logistic regression. Assuming no underreporting, the relative odds of hospitalization associated with schizophrenia were 87% greater (OR = 1.87, CI [1.08, 3.23]). Effect sizes and interval estimates for the association between hospitalization and schizophrenia were smaller under the Bayesian approach that accounted for underdiagnosis, suggesting that less severe patients may be underrepresented in studies of schizophrenia. The analytical approach has useful applications in other contexts in which patients with a given condition may be underreported in administrative records.

5.
Stat Med ; 37(17): 2599-2615, 2018 07 30.
Article in English | MEDLINE | ID: mdl-29766536

ABSTRACT

In the pharmaceutical industry, the shelf life of a drug product is determined by data gathered from stability studies and is intended to provide consumers with a high degree of confidence that the drug retains its strength, quality, and purity under appropriate storage conditions. In this paper, we focus on liquid drug formulations and propose a Bayesian approach to estimate a drug product's shelf life, where prior knowledge gained from the accelerated study conducted during the drug development stage is used to inform the long-term study. Classical and nonlinear Arrhenius regression models are considered for the accelerated conditions, and two examples are given where posterior results from the accelerated study are used to construct priors for a long-term stability study.
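As background for the accelerated-to-long-term extrapolation described above, the sketch below shows the deterministic Arrhenius relationship with invented parameter values; the paper's Bayesian treatment, in which accelerated-study posteriors become priors for the long-term study, is not reproduced here.

```python
import math

R_GAS = 8.314                           # gas constant, J / (mol * K)

def degradation_rate(temp_c, ln_A, Ea):
    """First-order degradation rate constant at temperature temp_c (Celsius), Arrhenius form."""
    return math.exp(ln_A - Ea / (R_GAS * (temp_c + 273.15)))

# Hypothetical values: activation energy 83 kJ/mol; ln_A fixes the time unit of the rate.
ln_A, Ea = 25.0, 83_000.0
k25 = degradation_rate(25.0, ln_A, Ea)
# Time to a 5% potency loss under first-order kinetics, in the same time unit as 1/k25.
shelf_life = math.log(100 / 95) / k25
print(f"{shelf_life:.1f}")
```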


Subject(s)
Bayes Theorem , Drug Stability , Nonlinear Dynamics , Regression Analysis , Chemistry, Pharmaceutical , Computer Simulation , Humans
6.
Comput Math Methods Med ; 2018: 3212351, 2018.
Article in English | MEDLINE | ID: mdl-29681994

ABSTRACT

Covariate misclassification is well known to yield biased estimates in single-level regression models, but its impact on hierarchical count models has been less studied. A fully Bayesian approach to modeling both the misclassified covariate and the hierarchical response is proposed, and models with a single diagnostic test and with multiple diagnostic tests are considered. Simulation studies show that the proposed model appropriately accounts for the misclassification, reducing bias and improving the performance of interval estimators. A real-data example further demonstrates the consequences of ignoring the misclassification: the naive model indicated that witnessing spousal abuse between one's parents had a significant, positive association with a woman's number of children, whereas accounting for the misclassification reversed the direction of the association and rendered it nonsignificant. Because ignoring misclassification in standard linear and generalized linear models is well known to produce biased results, we provide an approach that extends misclassification modeling to the important setting of hierarchical generalized linear models.
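The key identity in such models links the probability of a positive diagnostic test to the true exposure prevalence through the test's sensitivity and specificity; a minimal sketch with illustrative values is given below.

```python
# P(test positive) = Se * pi + (1 - Sp) * (1 - pi), where pi is the true exposure prevalence.
def p_test_positive(pi, sensitivity, specificity):
    return sensitivity * pi + (1 - specificity) * (1 - pi)

# Illustrative values only: a test with 80% sensitivity and 90% specificity
# applied to a population with 30% true prevalence yields 31% apparent positives.
print(round(p_test_positive(pi=0.30, sensitivity=0.80, specificity=0.90), 3))
```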


Subject(s)
Bayes Theorem , Models, Statistical , Bias , Computational Biology , Computer Simulation , Health Surveys/statistics & numerical data , Humans , India , Poisson Distribution , Regression Analysis
7.
Pharmacoepidemiol Drug Saf ; 27(4): 373-382, 2018 04.
Article in English | MEDLINE | ID: mdl-29383840

ABSTRACT

PURPOSE: Observational pharmacoepidemiological studies can provide valuable information on the effectiveness or safety of interventions in the real world, but one major challenge is the existence of unmeasured confounder(s). While many analytical methods have been developed for dealing with this challenge, they appear under-utilized, perhaps due to the complexity and varied requirements for implementation. Thus, there is an unmet need to improve understanding of the appropriate course of action to address unmeasured confounding under a variety of research scenarios. METHODS: We implemented a stepwise search strategy to find articles discussing the assessment of unmeasured confounding in electronic literature databases. Identified publications were reviewed and characterized by the applicable research settings and the information required to implement each method. We further used this information to develop a best practice recommendation to help guide the selection of appropriate analytical methods for assessing the potential impact of unmeasured confounding. RESULTS: Over 100 papers were reviewed, and 15 methods were identified. We used a flowchart to illustrate the best practice recommendation, which was driven by two critical components: (1) availability of information on the unmeasured confounders; and (2) goals of the unmeasured confounding assessment. Key factors for implementation of each method were summarized in a checklist to assist researchers in implementing these methods. CONCLUSION: When assessing comparative effectiveness or safety in observational research, the impact of unmeasured confounding should not be ignored. Instead, we suggest quantitatively evaluating the impact of unmeasured confounding and provide a best practice recommendation for selecting appropriate analytical methods.


Subject(s)
Confounding Factors, Epidemiologic , Observational Studies as Topic/methods , Pharmacoepidemiology/methods , Research Design , Data Interpretation, Statistical , Humans
8.
PLoS One ; 13(1): e0190422, 2018.
Article in English | MEDLINE | ID: mdl-29304143

ABSTRACT

Cost-effectiveness models are commonly utilized to determine the combined clinical and economic impact of one treatment compared to another. However, most methods for sample size determination of cost-effectiveness studies assume fully observed costs and effectiveness outcomes, which presents challenges for survival-based studies in which censoring exists. We propose a Bayesian method for the design and analysis of cost-effectiveness data in which costs and effectiveness may be censored, and the sample size is approximated for both power and assurance. We explore two parametric models and demonstrate the flexibility of the approach to accommodate a variety of modifications to study assumptions.
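To illustrate the distinction drawn above between power and assurance, the sketch below uses a deliberately simplified, uncensored normal-outcome setting rather than the paper's censored cost-effectiveness model; the design values and the prior on the effect are invented.

```python
import numpy as np
from scipy.stats import norm

n_per_arm, sd, alpha = 100, 1.0, 0.05
z_crit = norm.ppf(1 - alpha / 2)
se_diff = sd * np.sqrt(2 / n_per_arm)        # standard error of the between-arm difference

def power(delta):
    """Probability of rejecting H0 in a two-sided z-test when the true difference is delta."""
    return 1 - norm.cdf(z_crit - delta / se_diff) + norm.cdf(-z_crit - delta / se_diff)

rng = np.random.default_rng(7)
prior_draws = rng.normal(0.3, 0.15, 100_000)  # prior belief about the true effect
print("power at delta = 0.3:", round(power(0.3), 3))          # conditional on one assumed effect
print("assurance:", round(float(np.mean(power(prior_draws))), 3))  # power averaged over the prior
```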


Subject(s)
Bayes Theorem , Cost-Benefit Analysis , Humans , Likelihood Functions , Survival Analysis
9.
J Biopharm Stat ; 27(1): 159-174, 2017.
Article in English | MEDLINE | ID: mdl-26891342

ABSTRACT

Validation of pharmaceutical manufacturing processes is a regulatory requirement and plays a key role in the assurance of drug quality, safety, and efficacy. The FDA guidance on process validation recommends a life-cycle approach which involves process design, qualification, and verification. The European Medicines Agency makes similar recommendations. The main purpose of process validation is to establish scientific evidence that a process is capable of consistently delivering a quality product. A major challenge faced by manufacturers is the determination of the number of batches to be used for the qualification stage. In this article, we present a Bayesian assurance and sample size determination approach where prior process knowledge and data are used to determine the number of batches. An example is presented in which potency uniformity data is evaluated using a process capability metric. By using the posterior predictive distribution, we simulate qualification data and make a decision on the number of batches required for a desired level of assurance.
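A minimal sketch of the posterior-predictive idea described above, assuming normally distributed potency values, illustrative specification limits, a simple pooled capability calculation, and placeholder posterior draws standing in for the output of a model fitted to development data:

```python
import numpy as np

rng = np.random.default_rng(1)
LSL, USL, PPK_TARGET = 95.0, 105.0, 1.0       # assumed spec limits and capability target

# Placeholder posterior draws for the process mean and standard deviation.
mu_draws = rng.normal(100.0, 0.3, 5000)
sd_draws = np.abs(rng.normal(1.0, 0.1, 5000))

def assurance(n_batches, n_per_batch=10):
    """Probability (over the posterior) that simulated qualification data meet the Ppk target."""
    passes = 0
    for mu, sd in zip(mu_draws, sd_draws):
        x = rng.normal(mu, sd, size=n_batches * n_per_batch)   # posterior predictive data
        ppk = min(USL - x.mean(), x.mean() - LSL) / (3 * x.std(ddof=1))
        passes += ppk >= PPK_TARGET
    return passes / len(mu_draws)

for n in (3, 5, 10):
    print(n, round(assurance(n), 3))
```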


Subject(s)
Bayes Theorem , Technology, Pharmaceutical , Chemistry, Pharmaceutical , Quality Control , Sample Size
10.
PDA J Pharm Sci Technol ; 71(2): 88-98, 2017.
Article in English | MEDLINE | ID: mdl-27789802

ABSTRACT

For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the well-known DT, z, and F0 values that are used in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these values to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion.
LAY ABSTRACT: For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the critical process parameters that are evaluated in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these parameters to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion.
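For context, the F0 value referred to above is the equivalent exposure time at 121.1 °C implied by a recorded temperature profile; the sketch below computes it with the conventional z = 10 °C reference and an invented set of one-minute readings.

```python
def f0(temps_c, dt_min, z=10.0, t_ref=121.1):
    """Equivalent minutes at 121.1 C for a temperature profile sampled every dt_min minutes."""
    return sum(10 ** ((t - t_ref) / z) for t in temps_c) * dt_min

# Hypothetical one-minute readings during an exposure phase.
profile = [118.0, 120.5, 121.3, 121.4, 121.2, 119.0]
print(round(f0(profile, dt_min=1.0), 2))
```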


Subject(s)
Bayes Theorem , Drug Industry/standards , Models, Statistical , Quality Control , Steam , Sterilization/standards , Drug Industry/statistics & numerical data , Sterilization/statistics & numerical data
11.
Pharmacoepidemiol Drug Saf ; 25(9): 982-92, 2016 09.
Article in English | MEDLINE | ID: mdl-27396534

ABSTRACT

PURPOSE: Observational studies are frequently used to assess the effectiveness of medical interventions in routine clinical practice. However, the use of observational data for comparative effectiveness is challenged by selection bias and the potential for unmeasured confounding. This is especially problematic for analyses using a health care administrative database, in which key clinical measures are often not available. This paper provides an approach to conducting sensitivity analyses to investigate the impact of unmeasured confounding in observational studies. METHODS: In a real-world osteoporosis comparative effectiveness study, the bone mineral density (BMD) score, an important predictor of fracture risk and a factor in the selection of osteoporosis treatments, is unavailable in the database, and the lack of baseline BMD could lead to significant selection bias. We implemented Bayesian twin-regression models, which simultaneously model both the observed outcome and the unobserved unmeasured confounder, using information from external sources. A sensitivity analysis was also conducted to assess the robustness of our conclusions to changes in such external data. RESULTS: The use of Bayesian modeling in this study suggests that the lack of baseline BMD did have a strong impact on the analysis, reversing the direction of the estimated effect (odds ratio of fracture incidence at 24 months: 0.40 vs. 1.36, with/without adjusting for unmeasured baseline BMD). CONCLUSIONS: The Bayesian twin-regression models provide a flexible sensitivity analysis tool to quantitatively assess the impact of unmeasured confounding in observational studies.


Subject(s)
Bone Density Conservation Agents/therapeutic use , Observational Studies as Topic/methods , Osteoporosis/drug therapy , Research Design , Aged , Bayes Theorem , Bone Density/drug effects , Comparative Effectiveness Research/methods , Confounding Factors, Epidemiologic , Female , Humans , Middle Aged , Regression Analysis
12.
Pharm Stat ; 13(1): 94-100, 2014.
Article in English | MEDLINE | ID: mdl-24446072

ABSTRACT

Unmeasured confounding is a common problem in observational studies. Failing to account for unmeasured confounding can result in biased point estimators and poor performance of hypothesis tests and interval estimators. We provide examples of the impacts of unmeasured confounding on cost-effectiveness analyses using observational data along with a Bayesian approach to correct estimation. Assuming validation data are available, we propose a Bayesian approach to correct cost-effectiveness studies for unmeasured confounding. We consider the cases where both cost and effectiveness are assumed to have a normal distribution and when costs are gamma distributed and effectiveness is normally distributed. Simulation studies were conducted to determine the impact of ignoring the unmeasured confounder and to determine the size of the validation data required to obtain valid inferences.


Subject(s)
Bayes Theorem , Data Interpretation, Statistical , Computer Simulation , Confounding Factors, Epidemiologic , Cost-Benefit Analysis , Humans , Models, Statistical
13.
Pharm Stat ; 13(1): 13-24, 2014.
Article in English | MEDLINE | ID: mdl-23897858

ABSTRACT

Safety assessment is essential throughout medical product development. Awareness of the importance of safety trials has increased recently, in part because of US Food and Drug Administration guidance on thorough assessment of cardiovascular risk in the treatment of type 2 diabetes. Bayesian methods hold great promise for improving the conduct of safety trials. In this paper, the safety subteam of the Drug Information Association Bayesian Scientific Working Group evaluates challenges associated with current methods for designing and analyzing safety trials and provides an overview of several suggested opportunities for Bayesian methods that may increase the efficiency of safety trials, along with relevant case examples.


Subject(s)
Bayes Theorem , Clinical Trials as Topic , Drug-Related Side Effects and Adverse Reactions , Research Design , Humans , Meta-Analysis as Topic , Risk Assessment , Sample Size
14.
J Biopharm Stat ; 23(4): 790-803, 2013.
Article in English | MEDLINE | ID: mdl-23786161

ABSTRACT

In clinical trials, multiple outcomes are often collected in order to simultaneously assess effectiveness and safety. We develop a Bayesian procedure for determining the required sample size in a regression model where a continuous efficacy variable and a binary safety variable are observed. The sample size determination procedure is simulation based. The model accounts for correlation between the two variables. Through examples we demonstrate that savings in total sample size are possible when the correlation between these two variables is sufficiently high.


Subject(s)
Bayes Theorem , Clinical Trials as Topic/statistics & numerical data , Models, Statistical , Treatment Outcome , Algorithms , Clinical Trials as Topic/methods , Computer Simulation , Confidence Intervals , Humans , Regression Analysis , Sample Size
15.
Value Health ; 16(2): 259-66, 2013.
Article in English | MEDLINE | ID: mdl-23538177

ABSTRACT

The quantitative assessment of the potential influence of unmeasured confounders in the analysis of observational data is rare, despite reliance on the "no unmeasured confounders" assumption. In a recent comparison of costs of care between two treatments for type 2 diabetes using a health care claims database, propensity score matching was implemented to adjust for selection bias, though information on baseline glycemic control was not available for the propensity model. Data on this potential "unmeasured confounder" were obtained from a linked laboratory file for a small subset of the original sample. Using this information, we demonstrate how Bayesian modeling, propensity score calibration, and multiple imputation can perform sensitivity analyses that quantitatively assess the potential impact of unmeasured confounding. Bayesian regression models were developed to use the internal validation data as informative prior distributions for all parameters, retaining information on the correlation between the confounder and other covariates. While the assumptions supporting propensity score calibration were not met in this sample, Bayesian modeling and multiple imputation provided consistent results, suggesting that the lack of data on the unmeasured confounder did not have a strong impact on the original analysis, owing to the weak correlation between the confounder and the cost outcome variable. Bayesian modeling with informative priors and multiple imputation may therefore be useful tools for sensitivity analyses of unmeasured confounding in these situations; further research is needed, however, to understand the operating characteristics of these methods in a variety of settings.


Subject(s)
Diabetes Mellitus, Type 2/drug therapy , Diabetes Mellitus, Type 2/economics , Drug Costs/statistics & numerical data , Insurance Claim Review/economics , Research Design/standards , Bayes Theorem , Clinical Laboratory Techniques/statistics & numerical data , Comorbidity , Confidence Intervals , Confounding Factors, Epidemiologic , Costs and Cost Analysis , Diabetes Complications/economics , Diabetes Complications/epidemiology , Diabetes Mellitus, Type 2/epidemiology , Female , Humans , Insurance Claim Review/statistics & numerical data , Insurance, Pharmaceutical Services/economics , Insurance, Pharmaceutical Services/statistics & numerical data , Male , Middle Aged , Multivariate Analysis , Propensity Score , Retrospective Studies , United States/epidemiology
16.
Cancer Epidemiol ; 37(2): 121-6, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23290580

ABSTRACT

BACKGROUND: Recent research suggests that the Bayesian paradigm may be useful for modeling biases in epidemiological studies, such as those due to misclassification and missing data. We used Bayesian methods to perform sensitivity analyses for assessing the robustness of study findings to the potential effect of these two important sources of bias. METHODS: We used data from a study of the joint associations of radiotherapy and smoking with primary lung cancer among breast cancer survivors. Bayesian methods provided an operational way to combine validation data and expert opinion to account for misclassification of the two risk factors and for missing data. For comparative purposes, we considered a "full model" that allowed for both misclassification and missing data, along with alternative models that considered only misclassification or missing data, and the naïve model that ignored both sources of bias. RESULTS: We identified noticeable differences between the four models with respect to the posterior distributions of the odds ratios that described the joint associations of radiotherapy and smoking with primary lung cancer. Despite those differences, we found that the general conclusions regarding the pattern of associations were the same regardless of the model used. Overall, our results indicate a nonsignificantly decreased lung cancer risk due to radiotherapy among nonsmokers, and a mildly increased risk among smokers. CONCLUSIONS: We described easy-to-implement Bayesian methods to perform sensitivity analyses for assessing the robustness of study findings to misclassification and missing data.


Subject(s)
Bayes Theorem , Bias , Breast Neoplasms/epidemiology , Confounding Factors, Epidemiologic , Lung Neoplasms/epidemiology , Models, Theoretical , Adult , Aged , Aged, 80 and over , Breast Neoplasms/classification , Case-Control Studies , Female , Humans , Lung Neoplasms/classification , Middle Aged , Risk Factors , Survivors , Validation Studies as Topic
17.
J Biopharm Stat ; 23(1): 129-45, 2013.
Article in English | MEDLINE | ID: mdl-23331227

ABSTRACT

Meta-analysis is common in health care research. Here we are interested in methods for analyzing time-to-event data, particularly their performance when the event rate is low. We consider three methods based on the Cox proportional hazards model, including a Bayesian approach, and compare them formally in a simulation study in which we model two treatments and consider several scenarios.


Subject(s)
Meta-Analysis as Topic , Research Design , Statistics as Topic/methods , Bayes Theorem , Clinical Trials as Topic/methods , Computer Simulation/trends , Humans , Proportional Hazards Models , Time Factors
18.
Cancer Epidemiol ; 36(2): 153-60, 2012 Apr.
Article in English | MEDLINE | ID: mdl-21856264

ABSTRACT

OBJECTIVES: A model is proposed to estimate and compare cervical cancer screening test properties for third world populations when only subjects with a positive screen receive the gold standard test. Two fallible screening tests, VIA and VILI, are compared. METHODS: We extend the model of Berry et al. [1] to the multi-site case in order to pool information across sites and form better estimates of the prevalences of cervical cancer, the true positive rates (TPRs), and the false positive rates (FPRs). For 10 centers in five African countries and India involving more than 52,000 women, Bayesian methods were applied with the gold standard results for subjects who screened negative on both tests treated as missing; the Bayesian methods suitably correct for these missing screen-negative subjects. The study included gold standard verification for all cases, making it possible to validate the model-based accuracy estimates, obtained using only the outcomes of women with a positive VIA or VILI result (ignoring verification of doubly negative screening results), against the observed full-data outcomes. RESULTS: Across the sites, estimates of the sensitivity of VIA ranged from 0.792 to 0.917, while for VILI sensitivities ranged from 0.929 to 0.977. False positive estimates ranged from 0.056 to 0.256 for VIA and from 0.085 to 0.269 for VILI. The pooled estimates of the TPR of VIA and VILI are 0.871 and 0.968, respectively, compared to the full-data values of 0.816 and 0.918. Similarly, the pooled estimates of the FPR of VIA and VILI are 0.134 and 0.146, respectively, compared to the full-data values of 0.144 and 0.146. Overall, VILI had a statistically significantly higher sensitivity, but no statistically significant difference in the false positive rates could be detected. CONCLUSION: Hierarchical Bayesian methods provide a straightforward approach to estimating screening test properties and prevalences, and to performing comparisons, in screening studies where screen-negative subjects do not receive the gold standard test. The hierarchical random-effects model, which analyzes the sites simultaneously, yielded improved estimates compared with the single-site analyses, with better TPR estimates and FPR estimates nearly identical to the full-data values. Furthermore, VILI showed higher TPRs but similar FPRs compared with VIA.


Subject(s)
Bayes Theorem , Early Detection of Cancer , Uterine Cervical Neoplasms/epidemiology , Africa/epidemiology , False Positive Reactions , Female , Humans , India/epidemiology , Prevalence , Sensitivity and Specificity , Uterine Cervical Neoplasms/diagnosis
19.
Comput Methods Programs Biomed ; 104(2): 271-7, 2011 Nov.
Article in English | MEDLINE | ID: mdl-21146897

ABSTRACT

Because of the high cost and time constraints for clinical trials, researchers often need to determine the smallest sample size that provides accurate inferences for a parameter of interest. Although most experimenters have employed frequentist sample-size determination methods, the Bayesian paradigm offers a wide variety of sample-size determination methodologies. Bayesian sample-size determination methods are becoming increasingly popular in clinical trials because of their flexibility and easily interpreted inferences. Recently, Bayesian approaches have been used to determine the sample size for a single Poisson rate parameter in a clinical trial setting. In this paper, we extend these results to the comparison of two Poisson rates and develop methods for sample-size determination for hypothesis testing in a Bayesian context. We have created functions in R to determine the parameters for the conjugate gamma prior and calculate the sample size for the average length criterion and average power methods. We also provide two examples that implement our sample-size determination methods using clinical data.
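The paper's R functions are not reproduced here; as a rough Python sketch of the simulation logic behind an average length criterion for two Poisson rates, the snippet below uses illustrative gamma priors and design values to estimate the average 95% credible-interval length of the rate difference at several candidate sample sizes.

```python
import numpy as np

rng = np.random.default_rng(0)
a1, b1, a2, b2 = 2.0, 1.0, 2.0, 1.0      # gamma(shape, rate) priors (illustrative)
rate1, rate2 = 2.0, 1.5                  # design values used to generate the data
N_SIM, N_POST = 500, 2000

def avg_ci_length(n_per_arm):
    """Average 95% credible-interval length for the rate difference at a given sample size."""
    lengths = []
    for _ in range(N_SIM):
        y1 = rng.poisson(rate1, n_per_arm).sum()
        y2 = rng.poisson(rate2, n_per_arm).sum()
        # Conjugacy: each posterior is gamma(shape + sum(y), rate + n); numpy uses scale = 1/rate.
        diff = (rng.gamma(a1 + y1, 1 / (b1 + n_per_arm), N_POST)
                - rng.gamma(a2 + y2, 1 / (b2 + n_per_arm), N_POST))
        lo, hi = np.percentile(diff, [2.5, 97.5])
        lengths.append(hi - lo)
    return float(np.mean(lengths))

for n in (25, 50, 100, 200):
    print(n, round(avg_ci_length(n), 2))
```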


Subject(s)
Bayes Theorem , Poisson Distribution , Models, Theoretical , Sample Size
20.
Ann Epidemiol ; 20(7): 562-7, 2010 Jul.
Article in English | MEDLINE | ID: mdl-20538200

ABSTRACT

PURPOSE: To quantify the impact of ignoring misclassification of a response variable and measurement error in a covariate on statistical power, and to develop software for sample size and power analysis that accounts for these flaws in epidemiologic data. METHODS: A Monte Carlo simulation-based procedure is developed to illustrate the differences in design requirements and inferences between analytic methods that properly account for misclassification and measurement error and those that do not, in regression models for cross-sectional and cohort data. RESULTS: We found that failure to account for these flaws in epidemiologic data can lead to a substantial reduction in statistical power, of over 25% in some cases. The proposed method reduced bias by as much as ten-fold compared with naive estimates obtained by ignoring misclassification and mismeasurement. CONCLUSIONS: We recommend as routine practice that researchers account for errors in measurement of both response and covariate data when determining sample size, performing power calculations, or analyzing data from epidemiological studies.
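A minimal sketch of the kind of Monte Carlo comparison described above, restricted to the response-misclassification part and a simple two-group test; the group sizes, event probabilities, sensitivity, and specificity are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n, p0, p1 = 300, 0.20, 0.32          # per-group size and true event probabilities (assumed)
se, sp = 0.85, 0.95                  # assumed sensitivity and specificity of the recorded response
z_crit = norm.ppf(0.975)

def power(misclassified, n_sim=4000):
    """Empirical power of a two-sample proportion z-test, with or without response misclassification."""
    rejections = 0
    for _ in range(n_sim):
        y0 = rng.random(n) < p0
        y1 = rng.random(n) < p1
        if misclassified:
            # A true event is missed with probability 1 - se; a non-event is flagged with probability 1 - sp.
            flip0 = np.where(y0, rng.random(n) > se, rng.random(n) > sp)
            flip1 = np.where(y1, rng.random(n) > se, rng.random(n) > sp)
            y0, y1 = y0 ^ flip0, y1 ^ flip1
        ph0, ph1 = y0.mean(), y1.mean()
        pooled = (ph0 + ph1) / 2
        z = (ph1 - ph0) / np.sqrt(2 * pooled * (1 - pooled) / n)
        rejections += abs(z) > z_crit
    return rejections / n_sim

print("power, error-free response:", round(power(False), 3))
print("power, misclassified response:", round(power(True), 3))
```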


Subject(s)
Bias , Cohort Studies , Cross-Sectional Studies , Humans , Logistic Models , Models, Statistical , Monte Carlo Method , Reproducibility of Results , Research Design/standards , Sample Size , Selection Bias