Results 1 - 20 of 30
2.
Pharm Stat ; 21(5): 1005-1021, 2022 09.
Article in English | MEDLINE | ID: mdl-35373454

ABSTRACT

Pharmaceutical companies regularly need to make decisions about drug development programs based on the limited knowledge from early-stage clinical trials. In this situation, eliciting the judgements of experts is an attractive approach for synthesising evidence on the unknown quantities of interest. When calculating the probability of success for a drug development program, multiple quantities of interest, such as the effect of a drug on different endpoints, should not be treated as unrelated. We discuss two approaches for establishing a multivariate distribution for several related quantities within the SHeffield ELicitation Framework (SHELF). The first approach elicits experts' judgements about a quantity of interest conditional on knowledge about another one. For the second approach, we first elicit marginal distributions for each quantity of interest. Then, for each pair of quantities, we elicit the concordance probability that both lie on the same side of their respective elicited medians. This allows us to specify a copula to obtain the joint distribution of the quantities of interest. We show how these approaches were used in an elicitation workshop that was performed to assess the probability of success of the registrational program of an asthma drug. The judgements of the experts, which were obtained prior to completion of the pivotal studies, were well aligned with the final trial results.
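As a minimal sketch of the second approach (not the authors' implementation; the two marginals and the elicited concordance value below are hypothetical), a Gaussian copula can be calibrated by converting the concordance probability c into a correlation through the bivariate-normal orthant identity c = 1/2 + arcsin(rho)/pi:

```python
import numpy as np
from scipy import stats

def copula_corr_from_concordance(c):
    """Map an elicited concordance probability c = P(both quantities lie on the
    same side of their medians) to a Gaussian-copula correlation, inverting the
    bivariate-normal orthant identity c = 1/2 + arcsin(rho)/pi."""
    return np.sin(np.pi * (c - 0.5))

# Hypothetical elicited marginals for two endpoints, and a concordance probability
marginal_1 = stats.norm(loc=0.30, scale=0.10)
marginal_2 = stats.norm(loc=0.15, scale=0.05)
c = 0.75

rho = copula_corr_from_concordance(c)          # ~0.71 for c = 0.75

# Sample the joint distribution: Gaussian copula + elicited marginals
rng = np.random.default_rng(1)
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=100_000)
u = stats.norm.cdf(z)                          # uniform marginals
theta1, theta2 = marginal_1.ppf(u[:, 0]), marginal_2.ppf(u[:, 1])

# Feedback check: the empirical concordance should reproduce the elicited value
conc = np.mean((theta1 > marginal_1.median()) == (theta2 > marginal_2.median()))
print(rho, conc)
```

Re-computing the empirical concordance from the sampled pairs gives the experts a direct feedback check during the workshop.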


Subject(s)
Asthma , Drug Development , Asthma/drug therapy , Humans , Pharmaceutical Preparations , Probability
3.
Eur Child Adolesc Psychiatry ; 31(8): 1-10, 2022 Aug.
Article in English | MEDLINE | ID: mdl-33825947

ABSTRACT

The lack of consensus measures for monitoring core change in Autism Spectrum Disorder (ASD), or response to interventions, makes it difficult to demonstrate intervention efficacy on ASD core symptoms. No universally accepted outcome measures have been developed for measuring change in core symptoms. However, the CARS (Childhood Autism Rating Scale) is one of the outcomes recommended in the EMA Guideline on the clinical development of medicinal products for the treatment of ASD. Unfortunately, there is currently no consensus on the response definition for CARS among individuals with ASD. The aim of this elicitation process was to determine an appropriate definition of a response on the CARS-2 scale for interventions in patients with ASD. An elicitation process was conducted following the Sheffield Elicitation Framework (SHELF). Five experts in the field of ASD and two experts in expert knowledge elicitation participated in a 1-day elicitation workshop. Experts in ASD were trained in advance in the SHELF elicitation process and received a dossier of scientific evidence concerning the topic. The response definition was set as the mean clinically relevant improvement averaged over all patients, levels of functioning, age groups and clinicians. Based on the scientific evidence and expert judgment, a normal probability distribution with expected value 4.03 and standard deviation 0.664 was agreed to represent the state of knowledge about this response. Considering the remaining uncertainty of the estimation and the available literature, a CARS-2 improvement of 4.5 points was defined as the threshold for concluding that a response to an intervention has occurred. A CARS-2 improvement of 4.5 points could thus be used to evaluate the meaningfulness of interventions in individuals. This initial finding represents an important new benchmark and may aid decision makers in evaluating the efficacy of interventions in ASD.
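As a quick check on how the chosen threshold relates to the elicited distribution (our arithmetic, not a calculation reported in the abstract):

\[
z = \frac{4.5 - 4.03}{0.664} \approx 0.71, \qquad \Phi(0.71) \approx 0.76,
\]

so the 4.5-point threshold sits at roughly the 76th percentile of the elicited normal distribution, deliberately above the central estimate of 4.03 to allow for the remaining estimation uncertainty.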


Subject(s)
Autism Spectrum Disorder , Autistic Disorder , Autism Spectrum Disorder/diagnosis , Autistic Disorder/diagnosis , Child , Consensus , Humans , Outcome Assessment, Health Care
5.
Biometrics ; 76(2): 578-587, 2020 06.
Article in English | MEDLINE | ID: mdl-32142163

ABSTRACT

Determining the sample size of an experiment can be challenging, even more so when incorporating external information via a prior distribution. Such information is increasingly used to reduce the size of the control group in randomized clinical trials. Knowing the amount of prior information, expressed as an equivalent prior effective sample size (ESS), clearly facilitates trial designs. Various methods to obtain a prior's ESS have been proposed recently. They have been justified by the fact that they give the standard ESS for one-parameter exponential families. However, despite being based on similar information-based metrics, they may lead to surprisingly different ESS for nonconjugate settings, which complicates many designs with prior information. We show that current methods fail a basic predictive consistency criterion, which requires the expected posterior-predictive ESS for a sample of size N to be the sum of the prior ESS and N. The expected local-information-ratio ESS is introduced and shown to be predictively consistent. It corrects the ESS of current methods, as shown for normally distributed data with a heavy-tailed Student-t prior and exponential data with a generalized Gamma prior. Finally, two applications are discussed: the prior ESS for the control group derived from historical data and the posterior ESS for hierarchical subgroup analyses.
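In symbols, the predictive consistency criterion described here reads (our restatement):

\[
\mathbb{E}_{Y_{1:N}}\!\left[\operatorname{ESS}\bigl(p(\theta \mid Y_{1:N})\bigr)\right]
= \operatorname{ESS}\bigl(p(\theta)\bigr) + N,
\]

with the expectation taken over the prior-predictive distribution of the data. In the conjugate Beta-binomial case the identity is automatic: a Beta(a, b) prior has ESS a + b, and every posterior after N observations is Beta(a + r, b + N - r) with ESS a + b + N whatever the outcome r. The article's point is that outside such conjugate settings (e.g. a heavy-tailed Student-t prior with normal data) the earlier information-based definitions violate this identity, while the expected local-information-ratio ESS satisfies it.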


Subject(s)
Models, Statistical , Randomized Controlled Trials as Topic/methods , Randomized Controlled Trials as Topic/statistics & numerical data , Sample Size , Analysis of Variance , Biometry , Data Interpretation, Statistical , Humans , Proof of Concept Study
6.
Am Stat ; 73(1): 56-68, 2019.
Article in English | MEDLINE | ID: mdl-31057338

ABSTRACT

This article resulted from our participation in the session on the "role of expert opinion and judgment in statistical inference" at the October 2017 ASA Symposium on Statistical Inference. We present a strong, unified statement on roles of expert judgment in statistics with processes for obtaining input, whether from a Bayesian or frequentist perspective. Topics include the role of subjectivity in the cycle of scientific inference and decisions, followed by a clinical trial and a greenhouse gas emissions case study that illustrate the role of judgments and the importance of basing them on objective information and a comprehensive uncertainty assessment. We close with a call for increased proactivity and involvement of statisticians in study conceptualization, design, conduct, analysis, and communication.

7.
Psychometrika ; 80(3): 601-7, 2015 Sep.
Article in English | MEDLINE | ID: mdl-25813464

ABSTRACT

Wu and Browne (Psychometrika, 79, 2015) have proposed an innovative approach to modeling discrepancy between a covariance structure model and the population that the model is intended to represent. Their contribution is related to ongoing developments in the field of Uncertainty Quantification (UQ) on modeling and quantifying effects of model discrepancy. We provide an overview of basic principles of UQ and some relevant developments and we examine the Wu-Browne work in that context. We view the Wu-Browne contribution as a seminal development providing a foundation for further work on the critical problem of model discrepancy in statistical modeling in psychological research.


Subject(s)
Likelihood Functions , Models, Statistical , Psychometrics , Humans
8.
J Off Stat ; 31(4): 537-544, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26949283

ABSTRACT

Demographic forecasts are inherently uncertain. Nevertheless, an appropriate description of this uncertainty is a key underpinning of informed decision making. In recent decades various methods have been developed to describe the uncertainty of future populations and their structures, but the uptake of such tools amongst the practitioners of official population statistics has been lagging behind. In this letter we revisit the arguments for the practical uses of uncertainty assessments in official population forecasts, and address their implications for decision making. We discuss essential challenges, both for the forecasters and forecast users, and make recommendations for the official statistics community.

9.
Biometrics ; 70(4): 1023-32, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25355546

ABSTRACT

Historical information is always relevant for clinical trial design. Additionally, if incorporated in the analysis of a new trial, historical data allow the number of subjects to be reduced. This decreases costs and trial duration, facilitates recruitment, and may be more ethical. Yet, under prior-data conflict, an overly optimistic use of historical data may be inappropriate. We address this challenge by deriving a Bayesian meta-analytic-predictive prior from historical data, which is then combined with the new data. This prospective approach is equivalent to a meta-analytic-combined analysis of historical and new data if parameters are exchangeable across trials. The prospective Bayesian version requires a good approximation of the meta-analytic-predictive prior, which is not available analytically. We propose two- or three-component mixtures of standard priors, which allow for good approximations and, for the one-parameter exponential family, straightforward posterior calculations. Moreover, since one of the mixture components is usually vague, mixture priors will often be heavy-tailed and therefore robust. Further robustness and a more rapid reaction to prior-data conflicts can be achieved by adding an extra weakly-informative mixture component. Use of historical prior information is particularly attractive for adaptive trials, as the randomization ratio can then be changed in case of prior-data conflict. Both frequentist operating characteristics and posterior summaries for various data scenarios show that these designs have desirable properties. We illustrate the methodology for a phase II proof-of-concept trial with historical controls from four studies. Robust meta-analytic-predictive priors alleviate prior-data conflicts; they should encourage better and more frequent use of historical data in clinical trials.
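The conjugate-mixture machinery is straightforward to sketch for binary data: each Beta component updates conjugately, and the mixture weights update in proportion to each component's marginal likelihood, which is what lets a vague component take over under prior-data conflict. A minimal sketch with invented numbers (production implementations such as the RBesT R package are far more general):

```python
import numpy as np
from scipy.special import betaln

def beta_mixture_posterior(w, a, b, r, n):
    """Posterior of a mixture-of-Beta prior after observing r responders in n
    patients. Components update conjugately to Beta(a_k + r, b_k + n - r);
    weights update by the beta-binomial marginal likelihood of each component
    (the binomial coefficient is common to all components and cancels)."""
    w, a, b = map(np.asarray, (w, a, b))
    log_w = np.log(w) + betaln(a + r, b + n - r) - betaln(a, b)
    w_post = np.exp(log_w - log_w.max())
    return w_post / w_post.sum(), a + r, b + n - r

# Hypothetical robust MAP-style prior: an informative component from historical
# controls (mean 0.3) plus a weakly-informative Beta(1, 1) component.
w, a, b = [0.8, 0.2], [15.0, 1.0], [35.0, 1.0]

print(beta_mixture_posterior(w, a, b, r=6, n=20)[0])   # data match history:
                                                       # informative part dominates
print(beta_mixture_posterior(w, a, b, r=14, n=20)[0])  # conflict: weight shifts
                                                       # to the vague component
```

The second call illustrates the rapid reaction to conflict: an observed rate of 0.7 is implausible under the historical component, so its posterior weight collapses and the vague component dominates.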


Subject(s)
Algorithms , Bayes Theorem , Meta-Analysis as Topic , Models, Statistical , Randomized Controlled Trials as Topic , Clinical Trials, Phase II as Topic , Computer Simulation , Data Interpretation, Statistical , Humans , Pattern Recognition, Automated/methods , Prognosis , Sample Size
10.
Value Health ; 15(5): 656-63, 2012.
Article in English | MEDLINE | ID: mdl-22867774

ABSTRACT

OBJECTIVES: To assess the accuracy and precision of inverse probability weighted (IPW) least squares regression analysis for censored cost data. METHODS: Using the Surveillance, Epidemiology, and End Results (SEER)-Medicare database, we identified 1500 breast cancer patients who died and had complete cost information within the database. Patients were followed for up to 48 monthly partitions after diagnosis, and their actual total cost was calculated in each partition. We then simulated patterns of administrative and dropout censoring, and also added censoring to patients receiving chemotherapy to simulate comparing a newer to an older intervention. For each censoring simulation, we performed 1000 IPW regression analyses (bootstrap, sampling with replacement), calculated the average value of each coefficient in each partition, and summed the coefficients for each regression parameter to obtain the cumulative values from 1 to 48 months. RESULTS: The cumulative 48-month average cost was $67,796 (95% confidence interval [CI] $58,454-$78,291) with no censoring, $66,313 (95% CI $54,975-$80,074) with administrative censoring, and $66,765 (95% CI $54,510-$81,843) with administrative plus dropout censoring. In multivariate analysis, chemotherapy was associated with an increased cost of $25,325 (95% CI $17,549-$32,827) with no censoring, compared with $28,937 (95% CI $20,510-$37,088) with administrative censoring and $29,593 (95% CI $20,564-$39,399) with administrative plus dropout censoring. Adding censoring to the chemotherapy group resulted in less accurate IPW estimates. This was ameliorated, however, by applying IPW within treatment groups. CONCLUSION: IPW is a consistent estimator of population mean costs if the weight is correctly specified. If the censoring distribution depends on some covariates, a model that accommodates this dependency must be correctly specified in IPW to obtain accurate estimates.
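The core of the IPW idea is easy to sketch: weight each fully observed subject's cost by the inverse Kaplan-Meier probability of remaining uncensored at their follow-up time. The following is a hypothetical illustration of the simple (non-partitioned) estimator, not a reconstruction of the SEER-Medicare analysis:

```python
import numpy as np

def ipw_mean_cost(time, cost, censored):
    """Simple inverse-probability-weighted estimator of mean cost under right
    censoring: each uncensored subject is weighted by 1 / K(t-), where K is
    the Kaplan-Meier survival function of the censoring distribution evaluated
    just before the subject's follow-up time. Assumes distinct times; a real
    analysis must handle ties and covariate-dependent censoring."""
    order = np.argsort(time)
    time, cost, censored = time[order], cost[order], censored[order]
    n = len(time)
    K_before, K = np.empty(n), 1.0
    for i in range(n):
        K_before[i] = K                      # K(t-) for subject i
        if censored[i]:                      # a censoring event reduces K
            K *= (n - i - 1) / (n - i)
    return np.mean(np.where(censored, 0.0, cost / K_before))

# Hypothetical cohort: full lifetime costs observed only for uncensored deaths.
rng = np.random.default_rng(0)
n = 2000
full_cost = rng.gamma(shape=2.0, scale=20_000, size=n)  # lifetime cost
death = rng.exponential(24, size=n)                     # months to death
cens = rng.exponential(60, size=n)                      # months to censoring (~29%)
censored = cens < death
time = np.minimum(death, cens)
obs_cost = full_cost * time / death                     # cost accrues over lifetime

print(full_cost.mean())                          # target population mean
print(ipw_mean_cost(time, obs_cost, censored))   # IPW estimate from censored data
```

When censoring depends on covariates, as in the chemotherapy comparison above, K must be estimated within strata or from a covariate model, which is exactly the paper's concluding caveat.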


Subject(s)
Antineoplastic Agents/economics , Breast Neoplasms/economics , Health Care Costs/statistics & numerical data , Models, Statistical , Aged , Aged, 80 and over , Antineoplastic Agents/therapeutic use , Breast Neoplasms/pathology , Breast Neoplasms/therapy , Female , Follow-Up Studies , Humans , Least-Squares Analysis , Medicare/statistics & numerical data , Multivariate Analysis , Regression Analysis , SEER Program/statistics & numerical data , Time Factors , United States
11.
Pharmacoeconomics ; 30(2): 103-18, 2012 Feb 01.
Article in English | MEDLINE | ID: mdl-21967155

ABSTRACT

BACKGROUND: Granulocyte-colony stimulating factor (G-CSF) reduces the risk of severe neutropenia associated with chemotherapy, but its cost implications following chemotherapy are unknown. OBJECTIVE: Our objective was to examine associations between G-CSF use and medical costs after initial adjuvant chemotherapy in early-stage (stage I-III) breast cancer (ESBC). METHODS: Women diagnosed with ESBC from 1999 to 2005, who had an initial course of chemotherapy beginning within 180 days of diagnosis and including ≥1 highly myelosuppressive agent, were identified from the Surveillance, Epidemiology, and End Results (SEER)-Medicare database. Medicare claims were used to describe the initial chemotherapy regimen according to the classes of agents used: anthracycline ([A]: doxorubicin or epirubicin); cyclophosphamide (C); taxane ([T]: paclitaxel or docetaxel); and fluorouracil (F). Patients were classified into four study groups according to their G-CSF use: (i) primary prophylaxis, if the first G-CSF claim was within 5 days of the start of the first chemotherapy cycle; (ii) secondary prophylaxis, if the first claim was within 5 days of the start of the second or subsequent cycles; (iii) G-CSF treatment, if the first claim occurred outside of prophylactic use; and (iv) no G-CSF. Patients were described by age, race, year of diagnosis, stage, grade, estrogen (ER) and progesterone (PR) receptor status, National Cancer Institute (NCI) Co-morbidity Index, chemotherapy regimen and G-CSF use. Total direct medical costs ($US, year 2009 values) to Medicare were estimated from 4 weeks after the last chemotherapy administration up to 48 months. Medical costs included those for ESBC treatment and all other medical services received after chemotherapy. Least squares regression, using inverse probability weighting (IPW) to account for censoring within the cohort, was used to evaluate adjusted associations between G-CSF use and costs. RESULTS: A total of 7026 patients were identified, with an average age of 72 years, of whom 63% had stage II disease and 59% were ER and/or PR positive. Compared with no G-CSF, those receiving G-CSF primary prophylaxis were more likely to have stage III disease (30% vs. 16%; p < 0.0001), to be diagnosed in 2003-5 (87% vs. 26%; p < 0.0001), and to receive dose-dense AC-T (26% vs. 1%; p < 0.0001), while they were less likely to receive an F-based regimen (12% vs. 42%; p < 0.0001). Overall, the estimated average direct medical cost over 48 months after initial chemotherapy was $US 42,628. In multivariate analysis, stage II or III diagnosis (compared with stage I), NCI Co-morbidity Index score 1 or ≥2 (compared with 0), or FAC or standard AC-T (each compared with AC) were associated with significantly higher IPW 48-month costs. Adjusting for patient demographic and clinical factors, costs in the G-CSF primary prophylaxis group were not significantly different from those not receiving primary prophylaxis (the other three study groups combined). In an analysis that included four separate study groups, G-CSF treatment was associated with significantly greater costs (incremental cost = $US 2938; 95% CI 285, 5590) than no G-CSF. CONCLUSIONS: Direct medical costs after initial chemotherapy were not statistically different between those receiving G-CSF primary prophylaxis and those receiving no G-CSF, after adjusting for potential confounders.


Subject(s)
Breast Neoplasms/drug therapy , Chemotherapy, Adjuvant/adverse effects , Granulocyte Colony-Stimulating Factor/economics , Granulocyte Colony-Stimulating Factor/therapeutic use , Health Care Costs/statistics & numerical data , Neutropenia/drug therapy , Neutropenia/prevention & control , Age Factors , Aged , Aged, 80 and over , Antineoplastic Combined Chemotherapy Protocols/adverse effects , Antineoplastic Combined Chemotherapy Protocols/therapeutic use , Breast Neoplasms/diagnosis , Breast Neoplasms/economics , Costs and Cost Analysis , Female , Filgrastim , Hospitalization/statistics & numerical data , Humans , Kaplan-Meier Estimate , Least-Squares Analysis , Long-Term Care/economics , Medicare , Neutropenia/chemically induced , Neutropenia/economics , Polyethylene Glycols , Racial Groups/statistics & numerical data , Recombinant Proteins/therapeutic use , Retrospective Studies , SEER Program , United States
12.
Pharm Stat ; 10(5): 427-32, 2011.
Article in English | MEDLINE | ID: mdl-21928323

ABSTRACT

In organ transplantation, placebo-controlled clinical trials are not possible for ethical reasons, and hence non-inferiority trials are used to evaluate new drugs. Patients with a transplanted kidney typically receive three to four immunosuppressant drugs to prevent organ rejection. In the described case of a non-inferiority trial for one of these immunosuppressants, the dose is changed, and another is replaced by an investigational drug. This test regimen is compared with the active control regimen. Justification for the non-inferiority margin is challenging, as the putative placebo has never been studied in a clinical trial. We propose the use of a random-effects meta-regression, where each immunosuppressant component of the regimen enters as a covariate. This allows us to make inference on the difference between the putative placebo and the active control. From this, various methods can then be used to derive the non-inferiority margin; a hybrid of the 95/95 and synthesis approaches is suggested. Data from 51 trials with a total of 17,002 patients were used in the meta-regression. Our approach was motivated by a recent large confirmatory trial in kidney transplantation. The results and the methodological documents of this evaluation were submitted to the Food and Drug Administration (FDA), which accepted our proposed non-inferiority margin and rationale.
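A bare-bones sketch of the machinery (hypothetical data and design matrix; the published analysis may differ in estimator details): fit a weighted least-squares meta-regression with a DerSimonian-Laird-type moment estimate of between-trial heterogeneity, then contrast the fitted values for the putative placebo and the active control regimens.

```python
import numpy as np

def re_meta_regression(y, se, X):
    """Random-effects meta-regression: y = trial effect estimates, se = their
    standard errors, X = design matrix encoding the immunosuppressant
    components of each regimen. Uses a DerSimonian-Laird-style moment
    estimator of the between-trial variance tau^2."""
    v = se**2
    W = np.diag(1.0 / v)                                  # fixed-effect weights
    beta_fe = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    resid = y - X @ beta_fe
    Q = float(resid @ W @ resid)                          # heterogeneity statistic
    n, p = X.shape
    P = W - W @ X @ np.linalg.solve(X.T @ W @ X, X.T @ W)
    tau2 = max(0.0, (Q - (n - p)) / np.trace(P))          # moment estimate
    W_re = np.diag(1.0 / (v + tau2))                      # random-effects weights
    cov = np.linalg.inv(X.T @ W_re @ X)
    beta = cov @ X.T @ W_re @ y
    return beta, cov, tau2

# Invented data: columns of X = intercept + indicators of two regimen components.
y = np.array([-0.10, -0.05, 0.02, -0.12, 0.08, -0.03])
se = np.array([0.05, 0.06, 0.04, 0.07, 0.05, 0.06])
X = np.array([[1, 1, 1], [1, 1, 0], [1, 0, 1],
              [1, 1, 1], [1, 0, 0], [1, 1, 0]], dtype=float)

beta, cov, tau2 = re_meta_regression(y, se, X)
contrast = np.array([0.0, 1.0, 1.0])          # active control minus putative placebo
diff = contrast @ beta                        # basis for the non-inferiority margin
se_diff = np.sqrt(contrast @ cov @ contrast)
```

The estimated difference and its standard error then feed a 95/95 or synthesis calculation of the margin.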


Subject(s)
Kidney Transplantation/statistics & numerical data , Models, Statistical , Randomized Controlled Trials as Topic/methods , Research Design/statistics & numerical data , Bayes Theorem , Confidence Intervals , Control Groups , Drugs, Investigational/adverse effects , Drugs, Investigational/metabolism , Drugs, Investigational/pharmacology , Humans , Immunosuppressive Agents/adverse effects , Immunosuppressive Agents/metabolism , Immunosuppressive Agents/pharmacology , Likelihood Functions , Meta-Analysis as Topic , Placebos , Randomized Controlled Trials as Topic/statistics & numerical data , Treatment Outcome , United States , United States Food and Drug Administration/statistics & numerical data
13.
Health Econ ; 20(8): 897-916, 2011 Aug.
Article in English | MEDLINE | ID: mdl-20799344

ABSTRACT

We review statistical methods for analysing healthcare resource use and costs, their ability to address skewness, excess zeros, multimodality and heavy right tails, and their suitability for general use. We aim to provide guidance on analysing resource use and costs, focusing on randomised trials, although the methods often have wider applicability. Twelve broad categories of methods were identified: (I) methods based on the normal distribution, (II) methods following transformation of data, (III) single-distribution generalized linear models (GLMs), (IV) parametric models based on skewed distributions outside the GLM family, (V) models based on mixtures of parametric distributions, (VI) two (or multi)-part and Tobit models, (VII) survival methods, (VIII) non-parametric methods, (IX) methods based on truncation or trimming of data, (X) data components models, (XI) methods based on averaging across models, and (XII) Markov chain methods. Based on this review, our recommendations are that, first, simple methods are preferred in large samples where the near-normality of sample means is assured. Second, in somewhat smaller samples, relatively simple methods, able to deal with one or two of the above data characteristics, may be preferable, but checking sensitivity to assumptions is necessary. Finally, some more complex methods hold promise, but are relatively untried; their implementation requires substantial expertise and they are not currently recommended for wider applied work.


Subject(s)
Data Interpretation, Statistical , Health Care Costs , Health Resources/statistics & numerical data , Linear Models , Markov Chains , Randomized Controlled Trials as Topic
14.
Stat Med ; 29(15): 1622-34, 2010 Jul 10.
Article in English | MEDLINE | ID: mdl-20209481

ABSTRACT

Cost-effectiveness analysis of alternative medical treatments relies on having a measure of effectiveness, and many regard the quality adjusted life year (QALY) to be the current 'gold standard.' In order to compute QALYs, we require a suitable system for describing a person's health state, and a utility measure to value the quality of life associated with each possible state. There are a number of different health state descriptive systems, and we focus here on one known as the EQ-5D. Data for estimating utilities for different health states have a number of features that mean care is necessary in statistical modelling. There is interest in the extent to which valuations of health may differ between different countries and cultures, but few studies have compared preference values of health states obtained from different countries. This article applies a nonparametric model to estimate and compare EQ-5D health state valuation data obtained from two countries using Bayesian methods. The data set comprises the US and UK EQ-5D valuation studies, in which a sample of 42 states defined by the EQ-5D was valued by representative samples of the general population of each country using the time trade-off technique. We estimate a utility function across both countries that explicitly accounts for the differences between them, using the data from both countries. The article discusses the implications of these results for future applications of the EQ-5D and for further work in this field.


Subject(s)
Attitude to Health , Bayes Theorem , Health Status Indicators , Models, Statistical , Age Factors , Algorithms , Cost-Benefit Analysis , Cross-Cultural Comparison , Female , Health Status , Humans , Male , Quality of Life , Quality-Adjusted Life Years , Sex Factors , Statistics, Nonparametric , United Kingdom/epidemiology , United States/epidemiology
15.
Pharm Stat ; 8(4): 371-89, 2009.
Article in English | MEDLINE | ID: mdl-19340851

ABSTRACT

The development of a new drug is a major undertaking, and it is important to consider carefully the key decisions in the development process. Decisions are made in the presence of uncertainty, and outcomes such as the probability of successful drug registration depend on the clinical development programme. The Rheumatoid Arthritis Drug Development Model was developed to support key decisions for drugs in development for the treatment of rheumatoid arthritis. It is configured to simulate Phase 2b and 3 trials based on the efficacy of new drugs at the end of Phase 2a, evidence about the efficacy of existing treatments, and expert opinion regarding key safety criteria. The model evaluates the performance of different development programmes with respect to the duration of disease of the target population, Phase 2b and 3 sample sizes, the dose(s) of the experimental treatment, the choice of comparator, the duration of the Phase 2b clinical trial, the primary efficacy outcome, and decision criteria for successfully passing Phases 2b and 3. It uses Bayesian clinical trial simulation to calculate the probability of successful drug registration based on the uncertainty about parameters of interest, thereby providing a more realistic assessment of the likely outcomes of individual trials and sequences of trials for the purpose of decision making. In this case study, the results show that, depending on the trial design, the new treatment has assurances of successful drug registration in the range 0.044-0.142 for an ACR20 outcome and 0.057-0.213 for an ACR50 outcome.
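The central computation, 'assurance', is the probability of trial success averaged over a prior on the treatment effect rather than evaluated at a single assumed effect. A minimal sketch for one two-arm trial with a normal endpoint (all numbers hypothetical; the published model simulates full multi-phase programmes with safety and efficacy criteria):

```python
import numpy as np
from scipy import stats

def assurance(n_per_arm, prior_mean, prior_sd, sd=1.0, alpha=0.025,
              n_sims=100_000, seed=2):
    """Monte Carlo assurance for a two-arm trial with known outcome SD:
    draw a 'true' effect from the prior (e.g. the Phase 2a posterior),
    simulate the trial estimate, and count one-sided significant results."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(prior_mean, prior_sd, size=n_sims)  # true effect draws
    se = sd * np.sqrt(2.0 / n_per_arm)                     # SE of mean difference
    est = rng.normal(theta, se)                            # simulated estimates
    return np.mean(est / se > stats.norm.ppf(1 - alpha))

print(assurance(n_per_arm=150, prior_mean=0.2, prior_sd=0.15))
```

Averaging over effect uncertainty pulls the success probability toward 0.5 relative to power at the prior mean, and requiring success across several phases and decision criteria compounds this, which helps explain whole-programme assurances as low as those quoted above.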


Subject(s)
Clinical Trials, Phase II as Topic/statistics & numerical data , Clinical Trials, Phase III as Topic/statistics & numerical data , Drug Discovery/statistics & numerical data , Algorithms , Antirheumatic Agents/therapeutic use , Arthritis, Rheumatoid/drug therapy , Bayes Theorem , Computer Simulation/statistics & numerical data , Humans , Models, Statistical , Treatment Outcome
16.
BMC Neurol ; 9: 1, 2009 Jan 06.
Article in English | MEDLINE | ID: mdl-19126193

ABSTRACT

BACKGROUND: Risk-sharing schemes represent an innovative and important approach to the problems of rationing and achieving cost-effectiveness in high-cost or controversial health interventions. This study aimed to assess the feasibility of risk-sharing schemes, looking at long-term clinical outcomes, to determine the price at which high-cost treatments would be acceptable to the NHS. METHODS: This case study of the first NHS risk-sharing scheme, a long-term prospective cohort study of beta interferon and glatiramer acetate in multiple sclerosis (MS) patients in 71 specialist MS centres in UK NHS hospitals, recruited adults with relapsing forms of MS meeting Association of British Neurologists (ABN) criteria for disease-modifying therapy. Outcome measures were: success of recruitment and follow-up over the first three years, analysis of baseline and initial follow-up data, and the prospect of estimating the long-term cost-effectiveness of these treatments. RESULTS: Centres consented 5560 patients. Of the 4240 patients who had been in the study for at least one year, annual review data were available for 3730 (88.0%). Of the patients who had been in the study for at least two years and three years, subsequent annual review data were available for 2055 (78.5%) and 265 (71.8%) patients respectively. Baseline characteristics and a small but statistically significant progression of disease were similar to those reported in previous pivotal studies. CONCLUSION: Successful recruitment, follow-up and early data analysis suggest that risk-sharing schemes should be able to deliver their objectives. However, important issues of analysis, and political and commercial conflicts of interest, still need to be addressed.


Subject(s)
Interferon-beta/therapeutic use , Multiple Sclerosis/drug therapy , Multiple Sclerosis/economics , Outcome Assessment, Health Care/economics , Peptides/therapeutic use , Risk Sharing, Financial , Adult , Cost-Benefit Analysis , Female , Follow-Up Studies , Glatiramer Acetate , Health Care Costs , Humans , Immunologic Factors/therapeutic use , Immunosuppressive Agents/therapeutic use , Male , Middle Aged , Organizational Case Studies , Prospective Studies , United Kingdom
17.
Article in English | MEDLINE | ID: mdl-18400115

ABSTRACT

Pharmaceutical regulators and healthcare reimbursement authorities operate in different intellectual paradigms and adopt very different decision rules. As a result, drugs that have been licensed are often not available to all patients who could benefit, because reimbursement authorities judge that the cost of the therapies is greater than the health they produce. This creates uncertainty for pharmaceutical companies planning their research and development investment, as licensing is no longer a guarantee of market access. In this study, we propose that it would be consistent with the objectives of pharmaceutical regulators to use the Net Benefit Framework of reimbursement authorities to identify those therapies that should be subject to priority review, that it is feasible to do so, and that this would have several positive effects for patients, industry, and healthcare systems.


Subject(s)
Drug Approval/organization & administration , Insurance, Health, Reimbursement/economics , Technology Assessment, Biomedical/organization & administration , Cost-Benefit Analysis , Drug Approval/economics , Drug Industry/organization & administration , Humans , Public Health , Technology Assessment, Biomedical/economics
18.
Med Decis Making ; 27(4): 448-70, 2007.
Article in English | MEDLINE | ID: mdl-17761960

ABSTRACT

Partial expected value of perfect information (EVPI) calculations can quantify the value of learning about particular subsets of uncertain parameters in decision models. Published case studies have used different computational approaches. This article examines the computation of partial EVPI estimates via Monte Carlo sampling algorithms. The mathematical definition contains two nested expectations, which must be evaluated separately because of the need to compute a maximum between them. A generalized Monte Carlo sampling algorithm uses nested simulation, with an outer loop to sample parameters of interest and, conditional upon these, an inner loop to sample the remaining uncertain parameters. Alternative computation methods and shortcut algorithms are discussed, and mathematical conditions for their use considered. Maxima of Monte Carlo estimates of expectations are biased upward, and the authors show that the use of small samples results in biased EVPI estimates. Three case studies illustrate (1) the bias due to maximization and the inaccuracy of shortcut algorithms, (2) the case when correlated variables are present, and (3) the case when there is nonlinearity in net benefit functions. If relatively small correlation or nonlinearity is present, then the shortcut algorithm can be substantially inaccurate. Empirical investigation of the numbers of Monte Carlo samples suggests that fewer samples on the outer level and more on the inner level can be efficient, and that relatively small numbers of samples can sometimes suffice. Several remaining areas for methodological development are set out. A wider application of partial EVPI is recommended, both for greater understanding of decision uncertainty and for analyzing research priorities.
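The nested structure is easy to see in code. A toy sketch with invented net-benefit functions and independent parameters (in general the inner parameters must be sampled conditional on the outer ones):

```python
import numpy as np

def partial_evpi(n_outer=1_000, n_inner=1_000, seed=0):
    """Nested Monte Carlo estimate of partial EVPI for a toy 2-decision model:
    EVPI(theta) = E_theta[ max_d E_phi NB(d, theta, phi) ] - max_d E[ NB(d) ],
    where theta is the parameter of interest (outer loop) and phi collects
    the remaining uncertain parameters (inner loop)."""
    rng = np.random.default_rng(seed)

    def nb(d, theta, phi):                     # invented net-benefit functions
        return 1_000 * theta if d == 0 else 800 * theta + 300 * phi

    # Baseline: best decision under current (full) uncertainty
    theta = rng.normal(1.0, 0.3, size=n_outer * n_inner)
    phi = rng.normal(0.5, 0.4, size=n_outer * n_inner)
    baseline = max(nb(0, theta, phi).mean(), nb(1, theta, phi).mean())

    # Outer loop: theta learned perfectly; inner loop: expectation over phi
    terms = np.empty(n_outer)
    for i in range(n_outer):
        th = rng.normal(1.0, 0.3)
        ph = rng.normal(0.5, 0.4, size=n_inner)
        terms[i] = max(np.mean(nb(0, th, ph)), np.mean(nb(1, th, ph)))

    return terms.mean() - baseline

print(partial_evpi())
```

The max taken inside the outer loop is the source of the upward bias the article analyses: with a small inner sample, Monte Carlo error inflates the maximum, so the allocation of samples between the two loops matters.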


Subject(s)
Algorithms , Decision Support Techniques , Monte Carlo Method , Health Expenditures , Humans , Models, Statistical , Selection Bias
19.
Soc Sci Med ; 64(6): 1242-52, 2007 Mar.
Article in English | MEDLINE | ID: mdl-17157971

ABSTRACT

It has long been recognised that respondent characteristics can have an impact on the values they give to health states. This paper reports the findings from applying a non-parametric approach to estimate the covariates in a model of SF-6D health state values using Bayesian methods. The data set is the UK SF-6D valuation study, in which a sample of 249 states defined by the SF-6D (a derivative of the SF-36) was valued by a sample of the UK general population using standard gamble. Advantages of the nonparametric model are that it can be used to predict scores in populations with different distributions of characteristics, and that it allows an effect to vary by health state (whilst ensuring that the value for full health remains at unity). The results suggest an important age effect, with sex, class, education, employment and physical functioning probably having some effect, but the remaining covariates having no discernible effect. Adjusting for covariates in the UK sample made little difference to mean health state values. The paper discusses the implications of these results for policy.


Subject(s)
Attitude to Health , Health Status Indicators , Psychometrics/methods , Quality-Adjusted Life Years , Value of Life/economics , Adult , Age Factors , Bayes Theorem , Cost-Benefit Analysis , Female , Humans , Interviews as Topic , Male , Middle Aged , Models, Econometric , Risk Assessment , Sex Factors , United Kingdom
20.
J Health Econ ; 26(3): 597-612, 2007 May 01.
Article in English | MEDLINE | ID: mdl-17069909

ABSTRACT

This paper reports the findings from applying a new approach to modelling health state valuation data. The approach applies a nonparametric model to estimate SF-6D health state utility values using Bayesian methods. The data set is the UK SF-6D valuation study, in which a sample of 249 states defined by the SF-6D (a derivative of the SF-36) was valued by a representative sample of the UK general population using standard gamble. The paper presents the results from applying the nonparametric model and compares them with the original model, which was estimated using a conventional parametric random-effects approach. The two models are compared theoretically and in terms of empirical performance. The paper discusses the implications of these results for future applications of the SF-6D and for further work in this field.


Subject(s)
Bayes Theorem , Health Status Indicators , Quality of Life , Statistics, Nonparametric , Humans , Models, Statistical , Surveys and Questionnaires , United Kingdom