1.
Gastroenterology ; 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38971198

ABSTRACT

BACKGROUND & AIMS: Guidelines recommend use of risk stratification scores for patients presenting with gastrointestinal bleeding (GIB) to identify very-low-risk patients eligible for discharge from emergency departments. Machine learning models may outperform existing scores and can be integrated within the electronic health record (EHR) to provide real-time risk assessment without manual data entry. We present the first EHR-based machine learning model for GIB. METHODS: The training cohort comprised 2,546 patients and the internal-validation cohort 850 patients, all presenting with overt GIB (hematemesis, melena, hematochezia) to the emergency departments of 2 hospitals from 2014-2019. External validation was performed on 926 patients presenting to a different hospital with the same EHR from 2014-2019. The primary outcome was a composite of red-blood-cell transfusion, hemostatic intervention (endoscopic, interventional radiologic, or surgical), and 30-day all-cause mortality. We used structured data fields in the EHR available within 4 hours of presentation and compared the performance of machine learning models to the current guideline-recommended risk scores, the Glasgow-Blatchford Score (GBS) and the Oakland Score. The primary analysis was the area under the receiver-operating-characteristic curve (AUC). The secondary analysis was specificity at 99% sensitivity, to assess the proportion of patients correctly identified as very low risk. RESULTS: The machine learning model outperformed the GBS (AUC 0.92 vs. 0.89; p<0.001) and the Oakland score (AUC 0.92 vs. 0.89; p<0.001). At the very-low-risk threshold of 99% sensitivity, the machine learning model identified more very-low-risk patients: 37.9% vs. 18.5% for the GBS and 11.7% for the Oakland score (p<0.001 for both comparisons). CONCLUSIONS: An EHR-based machine learning model performs better than currently recommended clinical risk scores and identifies more very-low-risk patients eligible for discharge from the emergency department.
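As a concrete illustration of the secondary analysis above, here is a minimal Python sketch (with simulated data and a hypothetical risk-score array, not the study's model or cohort) of picking the probability threshold that preserves 99% sensitivity for the composite outcome and then reading off the specificity, i.e. the share of outcome-free patients who would be labelled very low risk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-ins for the model's predicted risk and the composite outcome
# (transfusion, hemostatic intervention, or 30-day mortality); illustrative only.
y = rng.integers(0, 2, size=2000)                      # 1 = outcome occurred
risk = np.clip(rng.normal(0.3 + 0.4 * y, 0.2), 0, 1)   # higher scores when outcome occurs

def specificity_at_sensitivity(y_true, scores, target_sens=0.99):
    """Largest threshold that keeps sensitivity >= target, and the resulting specificity."""
    pos_scores = np.sort(scores[y_true == 1])
    k = int(np.floor((1 - target_sens) * len(pos_scores)))  # positives allowed below threshold
    threshold = pos_scores[k]
    sens = np.mean(scores[y_true == 1] >= threshold)
    spec = np.mean(scores[y_true == 0] < threshold)          # outcome-free patients flagged low risk
    return threshold, sens, spec

thr, sens, spec = specificity_at_sensitivity(y, risk)
print(f"threshold={thr:.3f}  sensitivity={sens:.3f}  specificity={spec:.3f}")
```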

2.
Vaccine ; 39(2): 309-316, 2021 01 08.
Article in English | MEDLINE | ID: mdl-33334616

ABSTRACT

A vaccine for COVID-19 is urgently needed. Several vaccine trial designs may significantly accelerate vaccine testing and approval, but also increase risks to human subjects. Concerns about whether the public would see such designs as ethical represent an important roadblock to their implementation; accordingly, both the World Health Organization and numerous scholars have called for consulting the public regarding them. We answered these calls by conducting a cross-national survey (n = 5920) in Australia, Canada, Hong Kong, New Zealand, South Africa, Singapore, the United Kingdom, and the United States. The survey explained key differences between traditional vaccine trials and two accelerated designs: a challenge trial or a trial integrating a Phase II safety and immunogenicity trial into a larger Phase III efficacy trial. Respondents' answers to comprehension questions indicate that they largely understood the key differences and ethical trade-offs between the designs from our descriptions. We asked respondents whether they would prefer scientists to conduct traditional trials or one of these two accelerated designs. We found broad majorities prefer for scientists to conduct challenge trials (75%) and integrated trials (63%) over standard trials. Even as respondents acknowledged the risks, they perceived both accelerated trials as similarly ethical to standard trial designs. This high support is consistent across every geography and demographic subgroup we examined, including vulnerable populations. These findings may help assuage some of the concerns surrounding accelerated designs.


Subject(s)
COVID-19 Vaccines/administration & dosage; COVID-19/prevention & control; Decision Making; Pandemics/prevention & control; Research Design; SARS-CoV-2/immunology; Vaccination/psychology; Asia/epidemiology; Australia/epidemiology; COVID-19/epidemiology; COVID-19/psychology; COVID-19/virology; COVID-19 Vaccines/biosynthesis; COVID-19 Vaccines/supply & distribution; Choice Behavior; Clinical Trials as Topic; Female; Humans; Immunity, Innate/drug effects; Immunization Schedule; Immunogenicity, Vaccine; Male; North America/epidemiology; Patient Safety; Public Health; SARS-CoV-2/pathogenicity; Surveys and Questionnaires; Time Factors; United Kingdom/epidemiology; Vaccination/methods
3.
Proc Natl Acad Sci U S A ; 116(10): 4156-4165, 2019 03 05.
Article in English | MEDLINE | ID: mdl-30770453

ABSTRACT

There is growing interest in estimating and analyzing heterogeneous treatment effects in experimental and observational studies. We describe a number of metaalgorithms that can take advantage of any supervised learning or regression method in machine learning and statistics to estimate the conditional average treatment effect (CATE) function. Metaalgorithms build on base algorithms, such as random forests (RFs), Bayesian additive regression trees (BARTs), or neural networks, to estimate the CATE, a function that the base algorithms are not designed to estimate directly. We introduce a metaalgorithm, the X-learner, that is provably efficient when the number of units in one treatment group is much larger than in the other and can exploit structural properties of the CATE function. For example, if the CATE function is linear and the response functions in treatment and control are Lipschitz-continuous, the X-learner can still achieve the parametric rate under regularity conditions. We then introduce versions of the X-learner that use RF and BART as base learners. In extensive simulation studies, the X-learner performs favorably, although none of the metalearners is uniformly the best. In two persuasion field experiments from political science, we demonstrate how our X-learner can be used to target treatment regimes and to shed light on underlying mechanisms. A software package is provided that implements our methods.
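A minimal Python sketch of the X-learner idea with random-forest base learners, on simulated data. The constant weight g (the treated fraction) stands in for the propensity-score weighting discussed in the paper, and the data-generating process is invented for illustration; the released software package is the reference implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Simulated data with an unbalanced treatment group and a heterogeneous effect.
n, p = 2000, 5
X = rng.normal(size=(n, p))
tau_true = 1.0 + X[:, 0]                      # CATE varies with the first covariate
W = rng.binomial(1, 0.3, size=n)
Y = X[:, 1] + W * tau_true + rng.normal(size=n)

# Stage 1: separate outcome models for control and treated units.
mu0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[W == 0], Y[W == 0])
mu1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[W == 1], Y[W == 1])

# Stage 2: imputed individual effects, regressed on the covariates.
D1 = Y[W == 1] - mu0.predict(X[W == 1])       # treated: observed minus predicted control outcome
D0 = mu1.predict(X[W == 0]) - Y[W == 0]       # control: predicted treated outcome minus observed
tau1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[W == 1], D1)
tau0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[W == 0], D0)

# Stage 3: weighted combination of the two CATE estimates.
g = W.mean()                                  # simple stand-in for a propensity model
cate_hat = g * tau0.predict(X) + (1 - g) * tau1.predict(X)
print("mean absolute CATE error:", np.abs(cate_hat - tau_true).mean().round(3))
```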

5.
Proc Natl Acad Sci U S A ; 113(27): 7369-76, 2016 07 05.
Article in English | MEDLINE | ID: mdl-27382151

ABSTRACT

Inferences from randomized experiments can be improved by blocking: assigning treatment in fixed proportions within groups of similar units. However, the use of the method is limited by the difficulty in deriving these groups. Current blocking methods are restricted to special cases or run in exponential time; are not sensitive to clustering of data points; and are often heuristic, providing an unsatisfactory solution in many common instances. We present an algorithm that implements a widely applicable class of blocking, threshold blocking, that solves these problems. Given a minimum required group size and a distance metric, we study the blocking problem of minimizing the maximum distance between any two units within the same group. We prove this is a nondeterministic polynomial-time hard problem and derive an approximation algorithm that yields a blocking where the maximum distance is guaranteed to be, at most, four times the optimal value. This algorithm runs in O(n log n) time with O(n) space complexity. This makes it, to our knowledge, the first blocking method with an ensured level of performance that works in massive experiments. Whereas many commonly used algorithms form pairs of units, our algorithm constructs the groups flexibly for any chosen minimum size. This facilitates complex experiments with several treatment arms and clustered data. A simulation study demonstrates the efficiency and efficacy of the algorithm; tens of millions of units can be blocked using a desktop computer in a few minutes.
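The approximation algorithm in the paper builds blocks from a nearest-neighbour graph with a proven factor-4 guarantee; the Python sketch below is only a greedy simplification of threshold blocking (minimum block size k, blocks formed from each seed's nearest unassigned neighbours) on simulated covariates, without that guarantee.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))        # covariates used to measure similarity
k = 4                                # minimum required block size

dist = cdist(X, X)
unassigned = list(range(len(X)))
blocks = []
while len(unassigned) >= k:
    seed = unassigned[0]
    others = np.array([j for j in unassigned if j != seed])
    nearest = others[np.argsort(dist[seed, others])[: k - 1]]   # k-1 closest unassigned units
    block = [seed, *nearest.tolist()]
    blocks.append(block)
    unassigned = [j for j in unassigned if j not in block]
blocks[-1].extend(unassigned)        # fewer than k leftovers join the last block

max_within = max(dist[i, j] for b in blocks for i in b for j in b)
print(f"{len(blocks)} blocks; largest within-block distance = {max_within:.2f}")
```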

6.
Proc Natl Acad Sci U S A ; 113(27): 7383-90, 2016 07 05.
Article in English | MEDLINE | ID: mdl-27382153

ABSTRACT

We provide a principled way for investigators to analyze randomized experiments when the number of covariates is large. Investigators often use linear multivariate regression to analyze randomized experiments instead of simply reporting the difference of means between treatment and control groups. Their aim is to reduce the variance of the estimated treatment effect by adjusting for covariates. If there are a large number of covariates relative to the number of observations, regression may perform poorly because of overfitting. In such cases, the least absolute shrinkage and selection operator (Lasso) may be helpful. We study the resulting Lasso-based treatment effect estimator under the Neyman-Rubin model of randomized experiments. We present theoretical conditions that guarantee that the estimator is more efficient than the simple difference-of-means estimator, and we provide a conservative estimator of the asymptotic variance, which can yield tighter confidence intervals than the difference-of-means estimator. Simulation and data examples show that Lasso-based adjustment can be advantageous even when the number of covariates is less than the number of observations. Specifically, a variant using Lasso for selection and ordinary least squares (OLS) for estimation performs particularly well, and it chooses a smoothing parameter based on combined performance of Lasso and OLS.
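A rough Python sketch of the "Lasso for selection, OLS for estimation" variant on a simulated randomized experiment with many covariates. The selection step, design matrix, and smoothing-parameter choice here are simplifications, not the exact estimator or variance formula studied in the paper.

```python
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(0)

# Simulated experiment: many covariates relative to the sample size.
n, p = 200, 150
X = rng.normal(size=(n, p))
T = rng.binomial(1, 0.5, size=n)                         # randomized treatment
Y = 2.0 * T + X[:, :5] @ np.array([3, -2, 1, 1, -1]) + rng.normal(size=n)

# Unadjusted benchmark: simple difference of means.
diff_means = Y[T == 1].mean() - Y[T == 0].mean()

# Lasso on the covariates selects prognostic variables; OLS of Y on treatment
# plus the selected covariates then estimates the adjusted treatment effect.
lasso = LassoCV(cv=5, random_state=0).fit(X, Y)
selected = np.flatnonzero(lasso.coef_ != 0)
design = np.column_stack([T, X[:, selected]])
adjusted = LinearRegression().fit(design, Y).coef_[0]

print(f"difference of means: {diff_means:.2f}")
print(f"Lasso+OLS adjusted estimate: {adjusted:.2f} ({len(selected)} covariates kept)")
```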


Subject(s)
Randomized Controlled Trials as Topic; Statistics as Topic; Treatment Outcome
7.
Article in English | MEDLINE | ID: mdl-27340369

ABSTRACT

Difference-in-differences (DiD) estimators provide unbiased treatment effect estimates when, in the absence of treatment, the average outcomes for the treated and control groups would have followed parallel trends over time. This assumption is implausible in many settings. An alternative assumption is that the potential outcomes are independent of treatment status, conditional on past outcomes. This paper considers three methods that share this assumption: the synthetic control method, a lagged dependent variable (LDV) regression approach, and matching on past outcomes. Our motivating empirical study is an evaluation of a hospital pay-for-performance scheme in England, the best practice tariffs programme. The conclusions of the original DiD analysis are sensitive to the choice of approach. We conduct a Monte Carlo simulation study that investigates these methods' performance. While DiD produces unbiased estimates when the parallel trends assumption holds, the alternative approaches provide less biased estimates of treatment effects when it is violated. In these cases, the LDV approach produces the most efficient and least biased estimates.
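To make the contrast concrete, here is a small Python simulation (an invented data-generating process, not the best practice tariffs data) in which treatment assignment depends on the baseline outcome. The lagged dependent variable (LDV) regression conditions on the past outcome and recovers the effect, whereas DiD is biased because the parallel trends assumption fails.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Two-period data where units with higher baseline outcomes are more likely treated.
n = 1000
y_pre = 10 + rng.normal(size=n)
p_treat = 1 / (1 + np.exp(-(y_pre - 10)))
treated = rng.binomial(1, p_treat)
effect = 1.5
y_post = 0.8 * y_pre + effect * treated + rng.normal(size=n)   # mean reversion breaks parallel trends

# Difference-in-differences: difference in mean outcome change between groups.
change = y_post - y_pre
did = change[treated == 1].mean() - change[treated == 0].mean()

# LDV regression: post outcome on treatment indicator and the lagged outcome.
design = np.column_stack([treated, y_pre])
ldv = LinearRegression().fit(design, y_post).coef_[0]

print(f"DiD estimate: {did:.2f}   LDV estimate: {ldv:.2f}   true effect: {effect}")
```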

8.
Stat Methods Med Res ; 25(5): 2315-2336, 2016 10.
Article in English | MEDLINE | ID: mdl-24525488

ABSTRACT

Statistical approaches for estimating treatment effectiveness commonly model the endpoint, or the propensity score, using parametric regressions such as generalised linear models. Misspecification of these models can lead to biased parameter estimates. We compare two approaches that combine the propensity score and the endpoint regression, and can make weaker modelling assumptions, by using machine learning approaches to estimate the regression function and the propensity score. Targeted maximum likelihood estimation is a double-robust method designed to reduce bias in the estimate of the parameter of interest. Bias-corrected matching reduces bias due to covariate imbalance between matched pairs by using regression predictions. We illustrate the methods in an evaluation of different types of hip prosthesis on the health-related quality of life of patients with osteoarthritis. We undertake a simulation study, grounded in the case study, to compare the relative bias, efficiency and confidence interval coverage of the methods. We consider data generating processes with non-linear functional form relationships, normal and non-normal endpoints. We find that across the circumstances considered, bias-corrected matching generally reported less bias, but higher variance than targeted maximum likelihood estimation. When either targeted maximum likelihood estimation or bias-corrected matching incorporated machine learning, bias was much reduced, compared to using misspecified parametric models.
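As a sketch of the bias-corrected matching idea (not the exact estimator or the TMLE implementation from the paper), the Python snippet below matches treated units to their nearest controls on simulated covariates with a non-linear outcome surface, then uses a machine-learning outcome model fitted on the controls to subtract the bias from residual covariate differences within matched pairs.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Simulated observational data with a non-linear outcome function.
n = 2000
X = rng.normal(size=(n, 3))
T = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))            # confounded treatment
Y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 2.0 * T + rng.normal(size=n)

# 1:1 nearest-neighbour matching of treated units to controls on the covariates.
controls, treated = X[T == 0], X[T == 1]
nn = NearestNeighbors(n_neighbors=1).fit(controls)
match_idx = nn.kneighbors(treated, return_distance=False).ravel()

# Outcome regression on the controls supplies the bias correction.
mu0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(controls, Y[T == 0])
raw_diff = Y[T == 1] - Y[T == 0][match_idx]
correction = mu0.predict(treated) - mu0.predict(controls[match_idx])
att = np.mean(raw_diff - correction)
print(f"bias-corrected matching estimate of the effect on the treated: {att:.2f} (true 2.0)")
```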


Subject(s)
Likelihood Functions; Models, Statistical; Aged; Bias; Computer Simulation; Confidence Intervals; Data Interpretation, Statistical; Hip Prosthesis; Humans; Machine Learning; Male; Osteoarthritis/epidemiology; Osteoarthritis/surgery; Quality of Life; Treatment Outcome
9.
Health Serv Outcomes Res Methodol ; 15(3-4): 157-181, 2015.
Article in English | MEDLINE | ID: mdl-26380564

ABSTRACT

Various approaches have been used to select control groups in observational studies: (1) from within the intervention area; (2) from a convenience sample, or randomly chosen areas; (3) from areas matched on area-level characteristics; and (4) nationally. The consequences of the decision are rarely assessed but, as we show, the choice can have complex impacts on confounding at both the area and individual levels. We began by reanalyzing data collected for an evaluation of a rapid response service on rates of unplanned hospital admission. Balance on observed individual-level variables was better with external than local controls, after matching. Further, when important prognostic variables were omitted from the matching algorithm, imbalances on those variables were also minimized using external controls. Treatment effects varied markedly depending on the choice of control area, but in the case study the variation was minimal after adjusting for the characteristics of areas. We used simulations to assess relative bias and mean-squared error, as this could not be done in the case study. A particular feature of the simulations was unexplained variation in the outcome between areas. We found that the likely impact of unexplained variation for hospital admissions dwarfed the benefits of better balance on individual-level variables, leading us to prefer local controls in this instance. In other scenarios, in which there was less unexplained variation in the outcome between areas, bias and mean-squared error were optimized using external controls. We identify some general considerations relevant to the choice of control population in observational studies.

10.
BMJ ; 346: f1026, 2013 Feb 27.
Article in English | MEDLINE | ID: mdl-23447338

ABSTRACT

OBJECTIVE: To compare the cost effectiveness of the three most commonly chosen types of prosthesis for total hip replacement. DESIGN: Lifetime cost effectiveness model with parameters estimated from individual patient data obtained from three large national databases. SETTING: English National Health Service. PARTICIPANTS: Adults aged 55 to 84 undergoing primary total hip replacement for osteoarthritis. INTERVENTIONS: Total hip replacement using either cemented, cementless, or hybrid prostheses. MAIN OUTCOME MEASURES: Cost (£), quality of life (EQ-5D-3L, where 0 represents death and 1 perfect health), quality adjusted life years (QALYs), incremental cost effectiveness ratios, and the probability that each prosthesis type is the most cost effective at alternative thresholds of willingness to pay for a QALY gain. RESULTS: Lifetime costs were generally lowest with cemented prostheses, and postoperative quality of life and lifetime QALYs were highest with hybrid prostheses. For example, in women aged 70 mean costs were £6900 ($11 000; €8200) for cemented prostheses, £7800 for cementless prostheses, and £7500 for hybrid prostheses; mean postoperative EQ-5D scores were 0.78, 0.80, and 0.81, and the corresponding lifetime QALYs were 9.0, 9.2, and 9.3 years. The incremental cost per QALY for hybrid compared with cemented prostheses was £2500. If the threshold willingness to pay for a QALY gain exceeded £10 000, the probability that hybrid prostheses were most cost effective was about 70%. Hybrid prostheses had the highest probability of being the most cost effective in all subgroups, except in women aged 80, where cemented prostheses were most cost effective. CONCLUSIONS: Cemented prostheses were the least costly type for total hip replacement, but for most patient groups hybrid prostheses were the most cost effective. Cementless prostheses did not provide sufficient improvement in health outcomes to justify their additional costs.
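Using only the rounded figures quoted above for women aged 70, the incremental cost-effectiveness ratio and net monetary benefit can be reproduced in a few lines of Python; with these rounded inputs the ICER comes out near £2,000 per QALY rather than the reported £2,500, presumably because the published figure is based on unrounded model outputs.

```python
# Rounded lifetime costs (GBP) and QALYs for women aged 70, from the abstract above.
cost = {"cemented": 6900.0, "cementless": 7800.0, "hybrid": 7500.0}
qaly = {"cemented": 9.0, "cementless": 9.2, "hybrid": 9.3}

# Incremental cost-effectiveness ratio of hybrid versus cemented prostheses.
d_cost = cost["hybrid"] - cost["cemented"]
d_qaly = qaly["hybrid"] - qaly["cemented"]
print(f"ICER (hybrid vs cemented): GBP {d_cost / d_qaly:,.0f} per QALY")

# Net monetary benefit at a willingness-to-pay threshold of GBP 10,000 per QALY;
# the prosthesis with the highest NMB is the cost-effective choice at that threshold.
wtp = 10_000
nmb = {name: wtp * qaly[name] - cost[name] for name in cost}
print("highest net monetary benefit:", max(nmb, key=nmb.get), {k: round(v) for k, v in nmb.items()})
```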


Subject(s)
Arthroplasty, Replacement, Hip/economics; Bone Cements/therapeutic use; Osteoarthritis/economics; Prostheses and Implants/economics; Aged; Aged, 80 and over; Arthroplasty, Replacement, Hip/methods; Arthroplasty, Replacement, Hip/mortality; Cost-Benefit Analysis; Durapatite/therapeutic use; Female; Humans; Male; Markov Chains; Osteoarthritis/surgery; Quality of Life; Time Factors; Treatment Outcome; United Kingdom
11.
Int J Biostat ; 8(1): 25, 2012 Aug 07.
Article in English | MEDLINE | ID: mdl-22944721

ABSTRACT

Propensity score (Pscore) matching and inverse probability of treatment weighting (IPTW) can remove bias due to observed confounders, if the Pscore is correctly specified. Genetic Matching (GenMatch) matches on the Pscore and individual covariates using an automated search algorithm to balance covariates. This paper compares common ways of implementing Pscore matching and IPTW with GenMatch for balancing time-constant baseline covariates. The methods are considered when estimates of treatment effectiveness are required for patient subgroups, and the treatment allocation process differs by subgroup. We apply these methods in a prospective cohort study that estimates the effectiveness of Drotrecogin alfa (activated) for subgroups of patients with severe sepsis. In a simulation study we compare the methods when the Pscore is correctly specified, and then misspecified by ignoring the subgroup-specific treatment allocation. The simulations also consider poor overlap in baseline covariates, and different sample sizes. In the case study, GenMatch reports better covariate balance than IPTW or Pscore matching. In the simulations with correctly specified Pscores, good overlap and reasonable sample sizes, all methods report minimal bias. When the Pscore is misspecified, GenMatch reports the least imbalance and bias. With small sample sizes, IPTW is the most efficient approach, but all methods report relatively high bias of treatment effects. This study shows that overall GenMatch achieves the best covariate balance for each subgroup, and is more robust to Pscore misspecification than common alternative Pscore approaches.
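For reference, a minimal Python sketch of the IPTW component of this comparison: fit a propensity model, form stabilized weights, and take the weighted difference in means on simulated data. GenMatch itself relies on an evolutionary search over covariate weights (implemented in R's Matching package via rgenoud) and is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated cohort where treatment probability depends on baseline covariates.
n = 5000
X = rng.normal(size=(n, 4))
p_true = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
T = rng.binomial(1, p_true)
Y = X[:, 0] + 0.5 * X[:, 1] + 1.0 * T + rng.normal(size=n)

# Propensity score model and stabilized inverse-probability-of-treatment weights.
ps = LogisticRegression(max_iter=1000).fit(X, T).predict_proba(X)[:, 1]
w = np.where(T == 1, T.mean() / ps, (1 - T.mean()) / (1 - ps))

# Weighted difference in mean outcomes (a Hajek-style IPTW estimate of the effect).
ate = (np.average(Y[T == 1], weights=w[T == 1])
       - np.average(Y[T == 0], weights=w[T == 0]))
print(f"IPTW estimate: {ate:.2f} (true effect 1.0)")
```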


Subject(s)
Automation; Data Interpretation, Statistical; Outcome and Process Assessment, Health Care/statistics & numerical data; Propensity Score; Aged; Anti-Infective Agents/therapeutic use; Cohort Studies; Humans; Middle Aged; Monte Carlo Method; Prospective Studies; Protein C/therapeutic use; Recombinant Proteins/therapeutic use; Sepsis/drug therapy; Severity of Illness Index
12.
Med Decis Making ; 32(6): 750-63, 2012.
Article in English | MEDLINE | ID: mdl-22691446

ABSTRACT

Decision makers require cost-effectiveness estimates for patient subgroups. In nonrandomized studies, propensity score (PS) matching and inverse probability of treatment weighting (IPTW) can address overt selection bias, but only if they balance observed covariates between treatment groups. Genetic matching (GM) matches on the PS and individual covariates using an automated search algorithm to directly balance baseline covariates. This article compares these methods for estimating subgroup effects in cost-effectiveness analyses (CEA). The motivating case study is a CEA of a pharmaceutical intervention, drotrecogin alfa (DrotAA), for patient subgroups with severe sepsis (n = 2726). Here, GM reported better covariate balance than PS matching and IPTW. For the subgroup at a high level of baseline risk, the probability that DrotAA was cost-effective ranged from 30% (IPTW) to 90% (PS matching and GM), at a threshold of £20 000 per quality-adjusted life-year. We then compared the methods in a simulation study, in which initially the PS was correctly specified and then misspecified, for example, by ignoring the subgroup-specific treatment assignment. Relative performance was assessed as bias and root mean squared error (RMSE) in the estimated incremental net benefits. When the PS was correctly specified and inverse probability weights were stable, each method performed well; IPTW reported the lowest RMSE. When the subgroup-specific treatment assignment was ignored, PS matching and IPTW reported covariate imbalance and bias; GM reported better balance, less bias, and more precise estimates. We conclude that if the PS is correctly specified and the weights for IPTW are stable, each method can provide unbiased cost-effectiveness estimates. However, unlike IPTW and PS matching, GM is relatively robust to PS misspecification.
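The cost-effectiveness summaries reported here reduce to the incremental net benefit (the willingness-to-pay threshold times the incremental QALYs, minus the incremental cost), evaluated over simulation or bootstrap replicates. A short Python sketch with purely illustrative draws (not the DrotAA results) shows how the probability of cost-effectiveness at the £20,000 threshold is obtained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative replicates of incremental cost (GBP) and incremental QALYs for one
# subgroup; the means and spreads here are invented, not the study's estimates.
d_cost = rng.normal(5000, 2000, size=10_000)
d_qaly = rng.normal(0.35, 0.15, size=10_000)

wtp = 20_000                                  # willingness-to-pay threshold, GBP per QALY
inb = wtp * d_qaly - d_cost                   # incremental net benefit per replicate

print(f"mean incremental net benefit: GBP {inb.mean():,.0f}")
print(f"P(cost-effective at GBP {wtp:,}/QALY): {np.mean(inb > 0):.2f}")
```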


Subject(s)
Cost-Benefit Analysis; Algorithms; Automation; Humans; Monte Carlo Method; Probability; Quality-Adjusted Life Years
13.
Health Econ ; 21(6): 695-714, 2012 Jun.
Article in English | MEDLINE | ID: mdl-21633989

ABSTRACT

In cost-effectiveness analyses (CEA) that use randomized controlled trials (RCTs), covariates of prognostic importance may be imbalanced and warrant adjustment. In CEA that use non-randomized studies (NRS), the selection on observables assumption must hold for regression and matching methods to be unbiased. Even in restricted circumstances when this assumption is plausible, a key concern is how to adjust for imbalances in observed confounders. If the propensity score is misspecified, the covariates in the matched sample will be imbalanced, which can lead to conditional bias. To address covariate imbalance in CEA based on RCTs and NRS, this paper considers Genetic Matching. This matching method uses a search algorithm to directly maximize covariate balance. We compare Genetic and propensity score matching in Monte Carlo simulations and two case studies, CEA of pulmonary artery catheterization, based on an RCT and an NRS. The simulations show that Genetic Matching reduces the conditional bias and root mean squared error compared with propensity score matching. Genetic Matching achieves better covariate balance than the unadjusted analyses of the RCT data. In the NRS, Genetic Matching improves on the balance obtained from propensity score matching and gives substantively different estimates of incremental cost-effectiveness. We conclude that Genetic Matching can improve balance on measured covariates in CEA that use RCTs and NRS, but with NRS, this will be insufficient to reduce bias; the selection on observables assumption must also hold.
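Covariate balance in this literature is commonly diagnosed with standardized mean differences (SMDs); Genetic Matching's search directly optimizes balance statistics of this kind. The Python sketch below computes SMDs before and after plain 1:1 propensity score matching on simulated data; it illustrates the diagnostic only, not the Genetic Matching search itself.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Simulated non-randomized study with confounded treatment assignment.
n = 3000
X = rng.normal(size=(n, 3))
T = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))

def smd(x_t, x_c):
    """Standardized mean difference, the usual covariate balance diagnostic."""
    pooled_sd = np.sqrt((x_t.var(ddof=1) + x_c.var(ddof=1)) / 2)
    return (x_t.mean() - x_c.mean()) / pooled_sd

before = [smd(X[T == 1, j], X[T == 0, j]) for j in range(X.shape[1])]

# 1:1 nearest-neighbour matching on the estimated propensity score.
ps = LogisticRegression(max_iter=1000).fit(X, T).predict_proba(X)[:, 1]
nn = NearestNeighbors(n_neighbors=1).fit(ps[T == 0].reshape(-1, 1))
idx = nn.kneighbors(ps[T == 1].reshape(-1, 1), return_distance=False).ravel()
after = [smd(X[T == 1, j], X[T == 0][idx, j]) for j in range(X.shape[1])]

print("SMD before matching:", np.round(before, 2))
print("SMD after PS matching:", np.round(after, 2))
```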


Subject(s)
Clinical Trials as Topic/statistics & numerical data; Monte Carlo Method; Research Design; Catheterization, Swan-Ganz/economics; Clinical Trials as Topic/methods; Cost-Benefit Analysis/methods; Hospital Mortality; Humans; Propensity Score; Quality-Adjusted Life Years; Randomized Controlled Trials as Topic
14.
JAMA ; 306(15): 1659-68, 2011 Oct 19.
Article in English | MEDLINE | ID: mdl-21976615

ABSTRACT

CONTEXT: Extracorporeal membrane oxygenation (ECMO) can support gas exchange in patients with severe acute respiratory distress syndrome (ARDS), but its role has remained controversial. ECMO was used to treat patients with ARDS during the 2009 influenza A(H1N1) pandemic. OBJECTIVE: To compare the hospital mortality of patients with H1N1-related ARDS referred, accepted, and transferred for ECMO with matched patients who were not referred for ECMO. DESIGN, SETTING, AND PATIENTS: A cohort study in which ECMO-referred patients were defined as all patients with H1N1-related ARDS who were referred, accepted, and transferred to 1 of the 4 adult ECMO centers in the United Kingdom during the H1N1 pandemic in winter 2009-2010. The ECMO-referred patients and the non-ECMO-referred patients were matched using data from a concurrent, longitudinal cohort study (Swine Flu Triage study) of critically ill patients with suspected or confirmed H1N1. Detailed demographic, physiological, and comorbidity data were used in 3 different matching techniques (individual matching, propensity score matching, and GenMatch matching). MAIN OUTCOME MEASURE: Survival to hospital discharge analyzed according to the intention-to-treat principle. RESULTS: Of 80 ECMO-referred patients, 69 received ECMO (86.3%) and 22 died (27.5%) prior to discharge from the hospital. From a pool of 1756 patients, there were 59 matched pairs of ECMO-referred patients and non-ECMO-referred patients identified using individual matching, 75 matched pairs identified using propensity score matching, and 75 matched pairs identified using GenMatch matching. The hospital mortality rate was 23.7% for ECMO-referred patients vs 52.5% for non-ECMO-referred patients (relative risk [RR], 0.45 [95% CI, 0.26-0.79]; P = .006) when individual matching was used; 24.0% vs 46.7%, respectively (RR, 0.51 [95% CI, 0.31-0.81]; P = .008) when propensity score matching was used; and 24.0% vs 50.7%, respectively (RR, 0.47 [95% CI, 0.31-0.72]; P = .001) when GenMatch matching was used. The results were robust to sensitivity analyses, including amending the inclusion criteria and restricting the location where the non-ECMO-referred patients were treated. CONCLUSION: For patients with H1N1-related ARDS, referral and transfer to an ECMO center was associated with lower hospital mortality compared with matched non-ECMO-referred patients.
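The relative risks above can be back-calculated from the matched-pair counts implied by the abstract; the small Python check below does this for the individual-matching comparison. The confidence interval uses the standard large-sample formula for the log relative risk, so it differs slightly from the published interval, which presumably accounts for the matched design and exact counts.

```python
import numpy as np
from scipy import stats

# Counts implied by the individual-matching comparison: 59 matched pairs,
# hospital mortality 23.7% (ECMO-referred) vs 52.5% (non-ECMO-referred).
a, n1 = 14, 59        # deaths / patients, ECMO-referred
c, n0 = 31, 59        # deaths / patients, non-ECMO-referred

rr = (a / n1) / (c / n0)
se_log_rr = np.sqrt(1 / a - 1 / n1 + 1 / c - 1 / n0)    # large-sample SE of log(RR)
z = stats.norm.ppf(0.975)
lo, hi = np.exp(np.log(rr) + np.array([-z, z]) * se_log_rr)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
# Roughly 0.45 (0.27 to 0.76) here, versus the reported 0.45 (0.26 to 0.79).
```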


Subject(s)
Extracorporeal Membrane Oxygenation; Influenza A Virus, H1N1 Subtype; Influenza, Human/mortality; Patient Transfer; Respiratory Distress Syndrome/therapy; Adult; Case-Control Studies; Cohort Studies; Female; Hospital Mortality; Humans; Influenza, Human/complications; Influenza, Human/therapy; Intention to Treat Analysis; Male; Middle Aged; Pandemics; Referral and Consultation; Respiratory Distress Syndrome/etiology; Survival Analysis; United Kingdom/epidemiology; Young Adult
15.
Int J Biostat ; 7(1)2011.
Article in English | MEDLINE | ID: mdl-21931570

ABSTRACT

There is an active debate in the literature on censored data about the relative performance of model-based maximum likelihood estimators, IPCW-estimators, and a variety of double robust semiparametric efficient estimators. Kang and Schafer (2007) demonstrate the fragility of double robust and IPCW-estimators in a simulation study with positivity violations. They focus on a simple missing data problem with covariates where one desires to estimate the mean of an outcome that is subject to missingness. Responses by Robins et al. (2007), Tsiatis and Davidian (2007), Tan (2007), and Ridgeway and McCaffrey (2007) further explore the challenges faced by double robust estimators and offer suggestions for improving their stability. In this article, we join the debate by presenting targeted maximum likelihood estimators (TMLEs). We demonstrate that TMLEs that guarantee that the parametric submodel employed by the TMLE procedure respects the global bounds on the continuous outcomes are especially suitable for dealing with positivity violations because, in addition to being double robust and semiparametric efficient, they are substitution estimators. We demonstrate the practical performance of TMLEs relative to other estimators in the simulations designed by Kang and Schafer (2007) and in modified simulations with even greater estimation challenges.
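A stripped-down Python sketch of the TMLE recipe for this missing-data problem on simulated data: an initial outcome regression, a model for the probability of being observed, and a targeting step along the clever covariate 1/g. For simplicity it uses a linear fluctuation; the estimators advocated in the article instead use a logistic fluctuation of a bounded, rescaled outcome, which is exactly what makes them better suited to positivity violations.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)

# Simulated missing-data problem: Y is observed only when delta = 1, and the
# probability of observation depends on covariates.
n = 5000
X = rng.normal(size=(n, 4))
g_true = 1 / (1 + np.exp(-(1.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1])))
delta = rng.binomial(1, g_true)
Y_full = 2 + X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)
Y = np.where(delta == 1, Y_full, np.nan)                  # what the analyst sees

# Step 1: initial outcome regression fitted on the complete cases.
Q = LinearRegression().fit(X[delta == 1], Y[delta == 1]).predict(X)

# Step 2: model for the probability that the outcome is observed.
g = LogisticRegression(max_iter=1000).fit(X, delta).predict_proba(X)[:, 1]
g = np.clip(g, 0.01, 1.0)                                 # crude guard against tiny probabilities

# Step 3: targeting step, a linear fluctuation along the clever covariate 1/g.
H = 1 / g
resid = Y[delta == 1] - Q[delta == 1]
eps = np.sum(H[delta == 1] * resid) / np.sum(H[delta == 1] ** 2)
Q_star = Q + eps * H

print(f"true mean: {Y_full.mean():.3f}")
print(f"complete-case mean: {np.nanmean(Y):.3f}")
print(f"TMLE-style estimate: {Q_star.mean():.3f}")
```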


Subject(s)
Likelihood Functions; Statistics as Topic/methods
16.
Health Serv Res ; 43(4): 1204-22, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18355261

ABSTRACT

OBJECTIVE: To demonstrate cost-effectiveness analysis (CEA) for evaluating different reimbursement models. DATA SOURCES/STUDY SETTING: The CEA used an observational study comparing fee for service (FFS) versus capitation for Medicaid cases with severe mental illness (n=522). Under capitation, services were provided either directly (direct capitation [DC]) by not-for-profit community mental health centers (CMHC), or in a joint venture between CMHCs and a for-profit managed behavioral health organization (MBHO). STUDY DESIGN: A nonparametric matching method (genetic matching) was used to identify those cases that minimized baseline differences across the groups. Quality-adjusted life years (QALYs) were reported for each group. Incremental QALYs were valued at different thresholds for a QALY gained, and combined with cost estimates to plot cost-effectiveness acceptability curves. PRINCIPAL FINDINGS: QALYs were similar across reimbursement models. Compared with FFS, the MBHO model had incremental costs of -$1,991 and the probability that this model was cost-effective exceeded 0.90. The DC model had incremental costs of $4,694; the probability that this model was cost-effective compared with FFS was <0.10. CONCLUSIONS: A capitation model with a for-profit element was more cost-effective for Medicaid patients with severe mental illness than not-for-profit capitation or FFS models.
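A cost-effectiveness acceptability curve is simply the share of bootstrap (or simulation) replicates in which the incremental net benefit is positive, traced over a grid of willingness-to-pay thresholds. The Python sketch below uses invented replicate draws, not the Medicaid study's data, to show the computation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative bootstrap replicates of incremental cost (USD) and incremental QALYs
# for one reimbursement model versus FFS; the distributions are invented.
d_cost = rng.normal(-1991, 1500, size=5000)      # negative mean = cost saving
d_qaly = rng.normal(0.01, 0.05, size=5000)

thresholds = np.arange(0, 110_000, 10_000)       # willingness-to-pay grid, USD per QALY
for wtp in thresholds:
    p_ce = np.mean(wtp * d_qaly - d_cost > 0)    # P(incremental net benefit > 0)
    print(f"WTP ${wtp:>7,}/QALY   P(cost-effective) = {p_ce:.2f}")
```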


Subject(s)
Capitation Fee; Community Mental Health Services/economics; Fee-for-Service Plans/economics; Insurance, Psychiatric/economics; Managed Care Programs/economics; Quality-Adjusted Life Years; Adult; Aged; Female; Health Care Costs; Health Services Accessibility/economics; Health Services Research; Humans; Least-Squares Analysis; Logistic Models; Male; Medicaid/economics; Mental Disorders/economics; Mental Disorders/therapy; Models, Organizational; United States