2.
Drug Alcohol Rev ; 40(7): 1131-1142, 2021 Nov.
Article in English | MEDLINE | ID: mdl-33713037

ABSTRACT

INTRODUCTION: Wearable devices that obtain transdermal alcohol concentration (TAC) could become valuable research tools for monitoring alcohol consumption levels in naturalistic environments if the TAC they produce could be converted into quantitatively-meaningful estimates of breath alcohol concentration (eBrAC). Our team has developed mathematical models to produce eBrAC from TAC, but it is not yet clear how a variety of factors affect the accuracy of the models. Stomach content is one factor that is known to affect breath alcohol concentration (BrAC), but its effect on the BrAC-TAC relationship has not yet been studied. METHODS: We examine the BrAC-TAC relationship by having two investigators participate in four laboratory drinking sessions with varied stomach content conditions: (i) no meal, (ii) half and (iii) full meal before drinking, and (iv) full meal after drinking. BrAC and TAC were obtained every 10 min over the BrAC curve. RESULTS: Eating before drinking lowered BrAC and TAC levels, with greater variability in TAC across person-device pairings, but the BrAC-TAC relationship was not consistently altered by stomach content. The mathematical model calibration parameters, fit indices, and eBrAC curves and summary score outputs did not consistently vary based on stomach content, indicating that our models were able to produce eBrAC from TAC with similar accuracy despite variations in the shape and magnitude of the BrAC curves under different conditions. DISCUSSION AND CONCLUSIONS: This study represents the first examination of how stomach content affects our ability to model estimates of BrAC from TAC and indicates it is not a major factor.


Subject(s)
Alcohol Drinking , Gastrointestinal Contents , Breath Tests , Ethanol , Humans
3.
Seq Anal ; 39(1): 65-91, 2020.
Article in English | MEDLINE | ID: mdl-33776197

ABSTRACT

We propose a general and flexible procedure for testing multiple hypotheses about sequential (or streaming) data that simultaneously controls both the false discovery rate (FDR) and the false nondiscovery rate (FNR) under minimal assumptions about the data streams, which may differ in distribution and dimension and may be dependent. All that is needed is a test statistic for each data stream that controls its conventional type I and type II error probabilities; no information or assumptions are required about the joint distribution of the statistics or data streams. The procedure can be used with sequential, group-sequential, truncated, or other sampling schemes. It is a natural extension of Benjamini and Hochberg's (1995) widely used fixed-sample-size procedure to the domain of sequential data, with the added benefit of the simultaneous FDR and FNR control that sequential sampling affords. We prove the procedure's error control and give some tips for implementation in commonly encountered testing situations.
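As a point of reference for the sequential extension described in this abstract, the fixed-sample Benjamini-Hochberg step-up procedure it builds on can be sketched as follows. This is an illustrative implementation, not the authors' sequential procedure; the function name and example p-values are hypothetical.

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return indices of hypotheses rejected by the Benjamini-Hochberg
    step-up procedure at FDR level alpha."""
    m = len(p_values)
    # Sort p-values, remembering original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha.
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= rank / m * alpha:
            k_max = rank
    # Reject the hypotheses with the k_max smallest p-values.
    return sorted(order[:k_max])

rejected = benjamini_hochberg([0.001, 0.8, 0.012, 0.04, 0.6], alpha=0.05)
```

Note the step-up character: a p-value that fails its own threshold can still be rejected if a larger p-value passes a later one; the sequential procedure in the paper preserves this flavor while letting each stream stop sampling on its own schedule.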

4.
Pharmaceutics ; 13(1)2020 Dec 30.
Article in English | MEDLINE | ID: mdl-33396749

ABSTRACT

Population pharmacokinetic (PK) modeling has become a cornerstone of drug development and optimal patient dosing. This approach offers great benefits for datasets with sparse sampling, such as in pediatric patients, and can describe between-patient variability. While most current algorithms assume normal or log-normal distributions for PK parameters, we present a mathematically consistent nonparametric maximum likelihood (NPML) method for estimating multivariate mixing distributions without any assumption about the shape of the distribution, so that distributions of any shape can be handled for all PK parameters. Convexity theory shows that the NPML estimator is discrete, meaning that it has a finite number of points with nonzero probability; in fact, there are at most N points, where N is the number of observed subjects. The original infinite-dimensional NPML problem then becomes the finite-dimensional problem of finding the locations and probabilities of the support points. In the simplest case, each point essentially represents the set of PK parameters for one patient. The probabilities of the points are found by a primal-dual interior-point method; the locations of the support points are found by an adaptive grid method. Our method is able to handle high-dimensional and complex multivariate mixture models. An important application to population pharmacokinetics is discussed and a nontrivial example is treated. Our algorithm has been successfully applied in hundreds of published pharmacometric studies. In addition to population pharmacokinetics, this research also applies to empirical Bayes estimation and many other areas of applied mathematics, making this approach an important addition to the pharmacometric toolbox for drug development and optimal patient dosing.
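For intuition about the finite-dimensional problem the abstract describes, the weight-estimation step (probabilities of fixed support points) can be sketched with the classical EM multiplicative update. This is an illustration only: the paper solves this step with a primal-dual interior-point method, and the function name and toy likelihood matrix below are hypothetical.

```python
def npml_weights(likelihood_matrix, n_iter=200):
    """Estimate mixing probabilities for fixed support points in the
    NPML problem via the classical EM (multiplicative) update.
    likelihood_matrix[i][j] = p(subject i's data | support point j)."""
    n = len(likelihood_matrix)      # number of subjects
    k = len(likelihood_matrix[0])   # number of candidate support points
    w = [1.0 / k] * k               # start from uniform weights
    for _ in range(n_iter):
        new_w = [0.0] * k
        for row in likelihood_matrix:
            # Posterior responsibility of each support point for this subject.
            denom = sum(wj * lj for wj, lj in zip(w, row))
            for j in range(k):
                new_w[j] += w[j] * row[j] / denom
        w = [v / n for v in new_w]  # average responsibilities over subjects
    return w

# Two subjects, each matching one support point exactly.
weights = npml_weights([[1.0, 0.0], [0.0, 1.0]])
```

In this toy case each support point ends up carrying probability 1/2, matching the abstract's remark that in the simplest case each point essentially represents one patient's PK parameters.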

5.
J Environ Manage ; 192: 89-93, 2017 May 01.
Article in English | MEDLINE | ID: mdl-28142127

ABSTRACT

Section 303(d) of the United States' Clean Water Act stipulates that states must identify impaired water bodies, for which total maximum daily loads (TMDLs) of pollutant inputs are then developed. Decision-making procedures for listing, or delisting, water bodies as impaired under Clean Water Act 303(d) differ across states. In states such as California, whether or not a particular monitoring sample suggests that water quality is impaired can be regarded as a binary outcome variable, and California's current regulatory framework invokes a version of the exact binomial test to consolidate evidence across samples and assess whether the overall water body complies with the Clean Water Act. Here, we contrast the performance of California's exact binomial test with one potential alternative, the Sequential Probability Ratio Test (SPRT). The SPRT uses a sequential testing framework, testing samples as they become available and evaluating evidence as it emerges, rather than measuring all the samples and calculating a test statistic at the end of the data collection process. Through simulations and theoretical derivations, we demonstrate that the SPRT on average requires fewer samples to achieve Type I and Type II error rates comparable to those of the current fixed-sample binomial test. Policymakers might consider efficient alternatives to the current procedure, such as the SPRT.
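The SPRT for a binomial proportion compared above can be sketched directly from Wald's classical boundaries. This is a generic textbook implementation, not the article's simulation code; the function name, parameter values, and the water-quality framing of the 0/1 samples are illustrative assumptions.

```python
import math

def sprt_binomial(samples, p0, p1, alpha=0.05, beta=0.05):
    """Wald's Sequential Probability Ratio Test for a Bernoulli
    proportion, H0: p = p0 vs H1: p = p1 (with p1 > p0).
    `samples` is an iterable of 0/1 outcomes (e.g., whether a
    monitoring sample exceeds a water-quality threshold).
    Returns (decision, number_of_samples_used)."""
    upper = math.log((1 - beta) / alpha)   # cross above -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross below -> accept H0
    llr, n = 0.0, 0
    for n, x in enumerate(samples, start=1):
        # Log-likelihood ratio increment for one Bernoulli observation.
        llr += x * math.log(p1 / p0) + (1 - x) * math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
    return "continue", n

# A short run of exceedances already triggers an "impaired" decision.
decision, n_used = sprt_binomial([1] * 10, p0=0.1, p1=0.5)
```

The early stopping visible here is the source of the average sample-size savings the abstract reports: evidence is evaluated after every sample, so clear-cut water bodies resolve quickly in either direction.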


Subject(s)
Probability , Water , California , United States
6.
Scand Stat Theory Appl ; 31(1): 3-19, 2016 Mar 01.
Article in English | MEDLINE | ID: mdl-26985125

ABSTRACT

We present a unifying approach to multiple testing procedures for sequential (or streaming) data by giving sufficient conditions for a sequential multiple testing procedure to control the familywise error rate (FWER). Together, these conditions form a "rejection principle for sequential tests," which we then apply to some existing sequential multiple testing procedures to give a simplified understanding of their FWER control. Next, the principle is applied to derive two new sequential multiple testing procedures with provable FWER control, one for testing hypotheses in order and another for closed testing. Examples of these new procedures are given by applying them to a chromosome aberration data set and to finding the maximum safe dose of a treatment.

7.
Appl Psychol Meas ; 39(4): 278-292, 2015 Jun.
Article in English | MEDLINE | ID: mdl-29881008

ABSTRACT

A well-known stopping rule in adaptive mastery testing is to terminate the assessment once the examinee's ability confidence interval lies entirely above or below the cut-off score. This article proposes new procedures that seek to improve such a variable-length stopping rule by coupling it with curtailment and stochastic curtailment. Under the new procedures, test termination can occur earlier if the probability is high enough that the current classification decision remains the same should the test continue. Computation of this probability utilizes normality of an asymptotically equivalent version of the maximum likelihood ability estimate. In two simulation sets, the new procedures showed a substantial reduction in average test length while maintaining similar classification accuracy to the original method.
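The stochastic-curtailment idea above can be sketched with a generic conditional-power computation under a Brownian-motion approximation with drift set to the current trend. This is an assumption-laden illustration, not the article's procedure, which instead uses the asymptotic normality of an equivalent version of the maximum likelihood ability estimate; the function names and thresholds are hypothetical.

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def continuation_probability(z_now, frac_done, z_crit):
    """Probability that the final test statistic exceeds z_crit, given
    the current statistic z_now at information fraction frac_done,
    under a Brownian-motion approximation whose drift equals the
    currently observed trend (generic stochastic curtailment)."""
    t = frac_done
    b_now = z_now * math.sqrt(t)            # Brownian value at time t
    drift = b_now / t                        # current-trend drift estimate
    mean_final = b_now + drift * (1.0 - t)   # expected endpoint B(1)
    sd_final = math.sqrt(1.0 - t)
    return normal_cdf((mean_final - z_crit) / sd_final)

# Terminate the test early if this probability is high enough that the
# current pass/fail classification would not change, e.g. above 0.95.
```

The stopping logic then mirrors the abstract: a strong examinee well above the cut-off partway through the test yields a continuation probability near 1, so the remaining items add little classification information.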

8.
J Stat Plan Inference ; 153: 100-114, 2014 Oct 01.
Article in English | MEDLINE | ID: mdl-25092948

ABSTRACT

This paper addresses the following general scenario: A scientist wishes to perform a battery of experiments, each generating a sequential stream of data, to investigate some phenomenon. The scientist would like to control the overall error rate in order to draw statistically-valid conclusions from each experiment, while being as efficient as possible. The between-stream data may differ in distribution and dimension but also may be highly correlated, even duplicated exactly in some cases. Treating each experiment as a hypothesis test and adopting the familywise error rate (FWER) metric, we give a procedure that sequentially tests each hypothesis while controlling both the type I and II FWERs regardless of the between-stream correlation, and only requires arbitrary sequential test statistics that control the error rates for a given stream in isolation. The proposed procedure, which we call the sequential Holm procedure because of its inspiration from Holm's (1979) seminal fixed-sample procedure, shows simultaneous savings in expected sample size and less conservative error control relative to fixed sample, sequential Bonferroni, and other recently proposed sequential procedures in a simulation study.
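For reference, Holm's (1979) fixed-sample step-down procedure that inspired the sequential version can be sketched as follows. This is the classical procedure only, not the paper's sequential extension; the function name and example p-values are illustrative.

```python
def holm(p_values, alpha=0.05):
    """Holm's step-down procedure: return indices of rejected
    hypotheses, with familywise error rate at most alpha."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    rejected = []
    for step, idx in enumerate(order):
        # Compare the smallest remaining p-value to alpha / (m - step).
        if p_values[idx] <= alpha / (m - step):
            rejected.append(idx)
        else:
            break  # step-down: stop at the first failure
    return sorted(rejected)

rejects = holm([0.01, 0.04, 0.03, 0.005], alpha=0.05)
```

The step-down structure (thresholds that relax as hypotheses are rejected) is what the sequential Holm procedure carries over to streams, with each stream's test statistic standing in for a fixed-sample p-value.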

9.
Stat Med ; 33(16): 2718-35, 2014 Jul 20.
Article in English | MEDLINE | ID: mdl-24577750

ABSTRACT

Recently, there has been much work on early phase cancer designs that incorporate both toxicity and efficacy data, called phase I-II designs because they combine elements of both phases. However, they do not explicitly address the phase II hypothesis test of H0: p ≤ p0, where p is the probability of efficacy at the estimated maximum tolerated dose η from phase I and p0 is the baseline efficacy rate. Standard practice for phase II remains to treat p as a fixed, unknown parameter and to use Simon's two-stage design with all patients dosed at η. We propose a phase I-II design that addresses the uncertainty in the estimate p = p(η) in H0 by using sequential generalized likelihood theory. Combining this with a phase I design that incorporates efficacy data, the phase I-II design provides a common framework that can be used all the way from the first dose of phase I through the final accept/reject decision about H0 at the end of phase II, utilizing both toxicity and efficacy data throughout. Efficient group sequential testing is used in phase II that allows for early stopping to show treatment effect or futility. The proposed phase I-II design thus removes the artificial barrier between phase I and phase II and fulfills the objectives of searching for the maximum tolerated dose and testing if the treatment has an acceptable response rate to enter into a phase III trial.


Subject(s)
Antineoplastic Agents/therapeutic use , Clinical Trials, Phase I as Topic , Clinical Trials, Phase II as Topic , Cytotoxins/therapeutic use , Research Design , Clinical Trials, Phase I as Topic/methods , Clinical Trials, Phase I as Topic/statistics & numerical data , Clinical Trials, Phase II as Topic/methods , Clinical Trials, Phase II as Topic/statistics & numerical data , Dose-Response Relationship, Drug , Humans , Maximum Tolerated Dose
10.
J Pharmacokinet Pharmacodyn ; 40(2): 189-99, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23404393

ABSTRACT

Population pharmacokinetic (PK) modeling methods can be statistically classified as either parametric or nonparametric (NP). Each classification can be divided into maximum likelihood (ML) or Bayesian (B) approaches. In this paper we discuss the nonparametric case using both maximum likelihood and Bayesian approaches. We present two nonparametric methods for estimating the unknown joint population distribution of model parameter values in a pharmacokinetic/pharmacodynamic (PK/PD) dataset. The first method is the NP Adaptive Grid (NPAG). The second is the NP Bayesian (NPB) algorithm with a stick-breaking process to construct a Dirichlet prior. Our objective is to compare the performance of these two methods using a simulated PK/PD dataset. Our results showed excellent performance of NPAG and NPB in a realistically simulated PK study. This simulation allowed us to have benchmarks in the form of the true population parameters to compare with the estimates produced by the two methods, while incorporating challenges like unbalanced sample times and sample numbers as well as the ability to include the covariate of patient weight. We conclude that both NPML and NPB can be used in realistic PK/PD population analysis problems. The advantages of one versus the other are discussed in the paper. NPAG and NPB are implemented in R and freely available for download within the Pmetrics package from www.lapk.org.


Subject(s)
Algorithms , Bayes Theorem , Models, Biological , Computer Simulation , Humans
12.
Biometrics ; 67(2): 596-603, 2011 Jun.
Article in English | MEDLINE | ID: mdl-20731643

ABSTRACT

A general framework is proposed for Bayesian model based designs of Phase I cancer trials, in which a general criterion for coherence (Cheung, 2005, Biometrika 92, 863-873) of a design is also developed. This framework can incorporate both "individual" and "collective" ethics into the design of the trial. We propose a new design that minimizes a risk function composed of two terms, with one representing the individual risk of the current dose and the other representing the collective risk. The performance of this design, which is measured in terms of the accuracy of the estimated target dose at the end of the trial, the toxicity and overdose rates, and certain loss functions reflecting the individual and collective ethics, is studied and compared with existing Bayesian model based designs and is shown to have better performance than existing designs.


Subject(s)
Clinical Trials as Topic/standards , Drug Dosage Calculations , Neoplasms/drug therapy , Research Design/standards , Animals , Bayes Theorem , Clinical Trials as Topic/ethics , Clinical Trials as Topic/methods , Ethics , Risk Assessment
13.
Article in English | MEDLINE | ID: mdl-22256129

ABSTRACT

We develop methodology for hypothesis testing and model selection in nonhomogeneous Poisson processes, with an eye toward the application of modeling and variability detection in heartbeat data. Modeling the process's non-constant rate function using templates of simple basis functions, we develop the generalized likelihood ratio statistic for a given template and a multiple testing scheme to select a model from a family of templates. A dynamic programming algorithm inspired by network flows is used to compute the maximum likelihood template in a multiscale manner. In a numerical example, the proposed procedure is nearly as powerful as the super-optimal procedures that know the true template size and true partition, respectively. Extensions to general history-dependent point processes are discussed.
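A minimal illustration of the generalized likelihood ratio idea for the simplest template family: a constant rate versus a single change in rate, using event counts in equal-width bins. The function names and binned-count setup are assumptions for this sketch; the paper fits richer basis-function templates via dynamic programming rather than the brute-force scan below.

```python
import math

def poisson_loglik(count, rate, duration):
    """Log-likelihood of a constant-rate segment of a Poisson process:
    count * log(rate) - rate * duration (rate-free terms dropped)."""
    if count == 0:
        return -rate * duration
    return count * math.log(rate) - rate * duration

def glr_one_change(counts, bin_width=1.0):
    """Generalized likelihood ratio statistic for "one change in rate"
    vs "constant rate", maximized over the change point."""
    n = len(counts)
    total_time = n * bin_width
    total = sum(counts)
    # Null model: a single rate, fit by maximum likelihood.
    null = poisson_loglik(total, total / total_time, total_time)
    best = null
    for k in range(1, n):  # candidate change point after bin k
        left, right = sum(counts[:k]), sum(counts[k:])
        t_left, t_right = k * bin_width, (n - k) * bin_width
        alt = (poisson_loglik(left, left / t_left, t_left)
               + poisson_loglik(right, right / t_right, t_right))
        best = max(best, alt)
    return 2.0 * (best - null)
```

A sharp jump in binned counts produces a large statistic, while perfectly flat counts give zero; template selection in the paper amounts to comparing such maximized likelihoods across template families while correcting for multiplicity.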


Subject(s)
Algorithms , Models, Theoretical , Poisson Distribution , Heart Rate/physiology , Likelihood Functions
14.
Stat Med ; 27(10): 1593-611, 2008 May 10.
Article in English | MEDLINE | ID: mdl-18275090

ABSTRACT

Adaptive designs have been proposed for clinical trials in which the nuisance parameters or alternative of interest are unknown or likely to be misspecified before the trial. Although most previous works on adaptive designs and mid-course sample size re-estimation have focused on two-stage or group-sequential designs in the normal case, we consider here a new approach that involves at most three stages and is developed in the general framework of multiparameter exponential families. This approach not only maintains the prescribed type I error probability but also provides a simple but asymptotically efficient sequential test whose finite-sample performance, measured in terms of the expected sample size and power functions, is shown to be comparable to the optimal sequential design, determined by dynamic programming, in the simplified normal mean case with known variance and prespecified alternative, and superior to the existing two-stage designs and also to adaptive group-sequential designs when the alternative or nuisance parameters are unknown or misspecified.


Subject(s)
Clinical Trials as Topic/methods , Research Design , Sample Size , Effect Modifier, Epidemiologic , Humans , Models, Statistical
15.
Opt Express ; 11(5): 460-75, 2003 Mar 10.
Article in English | MEDLINE | ID: mdl-19461753

ABSTRACT

We compared the ability of three model observers (nonprewhitening matched filter with an eye filter, Hotelling and channelized Hotelling) in predicting the effect of JPEG and wavelet-Crewcode image compression on human visual detection of a simulated lesion in single frame digital x-ray coronary angiograms. All three model observers predicted the JPEG superiority present in human performance, although the nonprewhitening matched filter with an eye filter (NPWE) and the channelized Hotelling models were better predictors than the Hotelling model. The commonly used root mean square error and related peak signal to noise ratio metrics incorrectly predicted a JPEG inferiority. A particular image discrimination/perceptual difference model correctly predicted a JPEG advantage at low compression ratios but incorrectly predicted a JPEG inferiority at high compression ratios. In the second part of the paper, the NPWE model was used to perform automated simulated annealing optimization of the quantization matrix of the JPEG algorithm at 25:1 compression ratio. A subsequent psychophysical study resulted in improved human detection performance for images compressed with the NPWE optimized quantization matrix over the JPEG default quantization matrix. Together, our results show how model observers can be successfully used to perform automated evaluation and optimization of diagnostic performance in clinically relevant visual tasks using real anatomic backgrounds.
