Results 1 - 12 of 12
1.
Stat Methods Med Res; 32(7): 1377-1388, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37278182

ABSTRACT

Statistical sequential analysis of binary data is an important tool in clinical trials such as placebo-controlled trials, where a total of K individuals are randomly allocated into two groups, one of size κ1 receiving the treatment/drug and the other of size κ2 receiving placebo. The ratio z = κ2/κ1, the so-called "matching ratio," determines the expected proportion of adverse events coming from the treatment group among the κ1+κ2 individuals. Bernoulli-based designs are also used for monitoring the safety of post-licensed drugs and vaccines. For instance, in a self-control design, z is the ratio between the risk and the control time windows. Irrespective of the type of application, the choice of z is a critical design decision, as it determines the sample size, the statistical power, the expected sample size, and the expected time to signal of the sequential procedure. In this paper, we perform exact calculations to offer a statistical rule of thumb for the choice of z. All calculations and examples are carried out with the R Sequential package.
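
As a numerical aside (not part of the abstract): under a matching ratio z and relative risk RR, each adverse event falls in the treatment group with probability RR/(RR + z), which reduces to 1/(1 + z) under the null. A minimal sketch, with a function name of our choosing:

```python
def event_prob_treated(rr: float, z: float) -> float:
    """Probability that a given adverse event comes from the treated group
    under relative risk rr and matching ratio z = kappa2 / kappa1."""
    return rr / (rr + z)

# Under the null (rr = 1) with 1:1 matching, each event is a fair coin flip:
p_null = event_prob_treated(1.0, 1.0)   # 0.5
```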


Subject(s)
Research Design; Vaccines; Humans; Sample Size
2.
Stat Med; 42(18): 3283-3301, 2023 Aug 15.
Article in English | MEDLINE | ID: mdl-37221996

ABSTRACT

In postmarket drug and vaccine safety surveillance, when the number of adverse events follows a Poisson distribution, the ratio between the exposed and the unexposed person-time information is the random variable that governs the decision rule about the safety of the drug or vaccine. The probability distribution function of such a ratio is derived in this paper. Exact point and interval estimators for the relative risk are discussed, as well as statistical hypothesis testing. To the best of our knowledge, this is the first paper to provide an unbiased estimator for the relative risk based on the person-time ratio. The applicability of this new distribution is illustrated through a real data analysis aimed at detecting an increased risk of occurrence of myocarditis/pericarditis following mRNA COVID-19 vaccination in Manitoba, Canada.
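
The unbiased estimator derived in the paper is not reproduced here; as a baseline for comparison, the plain rate-ratio point estimate from counts and person-time can be sketched as follows (function name ours):

```python
def rate_ratio(events_exp, pt_exp, events_unexp, pt_unexp):
    """Plain rate-ratio (relative risk) point estimate: the incidence rate in
    the exposed group divided by the rate in the unexposed group."""
    return (events_exp / pt_exp) / (events_unexp / pt_unexp)

# e.g. 10 events in 100 exposed person-years vs 5 in 100 unexposed: RR = 2
```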


Subject(s)
COVID-19; Vaccines; Humans; Adverse Drug Reaction Reporting Systems; COVID-19 Vaccines; COVID-19/epidemiology; COVID-19/prevention & control; Vaccines/adverse effects; Likelihood Functions; Vaccination; Poisson Distribution
3.
Stat Methods Med Res; 31(12): 2323-2337, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36120901

ABSTRACT

In sequential testing with binary data, sample size and time to detect a signal are the key performance measures to optimize. While the former should be optimized in Phase III clinical trials, minimizing the latter is of major importance in post-market drug and vaccine safety surveillance of adverse events. The precision of the relative risk estimator on termination of the analysis is a meaningful design criterion as well. This paper presents a linear programming framework for finding the optimal alpha spending that minimizes the expected time to signal or, if needed, the expected sample size. The solution makes it possible (a) to bound the width of the confidence interval obtained at the end of the analysis, (b) to construct designs with outer signaling thresholds and inner non-signaling thresholds, and (c) to build sequential designs with variable Bernoulli probabilities. To illustrate, we use real data on the monitoring of adverse events following H1N1 vaccination. The numerical results are obtained using the R Sequential package.
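
The linear program itself is not reproduced here; the sketch below illustrates, for an assumed group size and signaling thresholds of our own choosing, the kind of exact performance calculation (power and expected sample size) that such an optimization works over:

```python
from math import comb

def exact_performance(p, group_size, thresholds):
    """Exact power and expected sample size for a toy group sequential
    binomial design: at look k, signal if cumulative successes >= thresholds[k].
    Computed by dynamic programming over non-signaled success counts."""
    state = {0: 1.0}          # state[s] = P(s successes so far, no signal yet)
    power, e_n, n_total = 0.0, 0.0, 0
    for b in thresholds:
        n_total += group_size
        new_state = {}
        for s, pr in state.items():
            for j in range(group_size + 1):
                pj = pr * comb(group_size, j) * p**j * (1 - p)**(group_size - j)
                t = s + j
                if t >= b:
                    power += pj
                    e_n += pj * n_total
                else:
                    new_state[t] = new_state.get(t, 0.0) + pj
        state = new_state
    e_n += sum(state.values()) * n_total   # non-signaling paths use full sample
    return power, e_n
```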


Subject(s)
Influenza A Virus, H1N1 Subtype; Influenza Vaccines; Vaccines; Confidence Intervals; Probability; Sample Size; Vaccines/adverse effects; Clinical Trials, Phase III as Topic; Influenza Vaccines/adverse effects
4.
Stat Pap (Berl); 63(2): 343-365, 2022.
Article in English | MEDLINE | ID: mdl-34092925

ABSTRACT

Conventional methods for testing independence between two Gaussian vectors require sample sizes greater than the number of variables in each vector. Adjustments are therefore needed in the high-dimensional situation, where the sample size is smaller than the number of variables in at least one of the compared vectors. Critically, the methods available in the literature fail to keep the Type I error probability below the nominal level, a fact we demonstrate through an intensive simulation study. To fill this gap, we introduce a valid randomized test based on the Kronecker delta covariance matrices estimator. As an empirical application, using a sample of companies listed on the Brazilian stock exchange, we test the independence between stock returns of different sectors in the context of the COVID-19 pandemic.
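
The paper's test relies on a Kronecker delta covariance estimator; as a generic illustration of a randomized independence test, the sketch below projects each vector onto a random direction and permutation-tests the covariance of the projections. This is our own toy construction, not the paper's procedure:

```python
import random

def randomized_independence_test(X, Y, n_perm=999, seed=7):
    """Toy randomized independence test: project each multivariate sample to
    1D along a random Gaussian direction, then permutation-test the absolute
    covariance of the two projections. Returns a permutation p-value."""
    rng = random.Random(seed)
    a = [rng.gauss(0, 1) for _ in range(len(X[0]))]
    b = [rng.gauss(0, 1) for _ in range(len(Y[0]))]
    x = [sum(ai * xi for ai, xi in zip(a, row)) for row in X]
    y = [sum(bi * yi for bi, yi in zip(b, row)) for row in Y]

    def abscov(u, v):
        n = len(u)
        mu, mv = sum(u) / n, sum(v) / n
        return abs(sum((ui - mu) * (vi - mv) for ui, vi in zip(u, v)) / n)

    obs = abscov(x, y)
    count = 0
    y_perm = y[:]
    for _ in range(n_perm):
        rng.shuffle(y_perm)          # break any dependence between x and y
        if abscov(x, y_perm) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)
```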

5.
Stat Med; 39(3): 340-351, 2020 Feb 10.
Article in English | MEDLINE | ID: mdl-31769079

ABSTRACT

Sequential analysis is used in clinical trials and postmarket drug safety surveillance to prospectively monitor efficacy and safety, so that benefits and problems are detected quickly while the multiple testing of repeated analyses is taken into account. When there are multiple outcomes, each one may be given a weight corresponding to its severity. This paper introduces an exact sequential analysis procedure for multiple weighted binomial endpoints; the analysis incorporates a drug's combined benefit and safety profile. It works with a variety of alpha spending functions for continuous, group, or mixed group-continuous sequential analysis. The binomial probabilities may vary over time and do not need to be known a priori. The new method was implemented in the free R Sequential package for both one- and two-tailed sequential analysis. An example is given examining myocardial infarction and major bleeding events in patients who initiated non-steroidal anti-inflammatory drugs.
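
As an illustration of the weighted-endpoint idea (our own sketch, not the paper's exact procedure), the distribution of a weighted sum of independent binomial endpoints can be computed by convolution:

```python
from math import comb

def weighted_binomial_pmf(params):
    """Exact pmf of sum_i w_i * X_i with independent X_i ~ Binomial(n_i, p_i),
    given params as a list of (n_i, p_i, w_i) tuples. Built by convolving one
    endpoint at a time into a dict mapping weighted totals to probabilities."""
    pmf = {0: 1.0}
    for n, p, w in params:
        new = {}
        for s, pr in pmf.items():
            for j in range(n + 1):
                pj = pr * comb(n, j) * p**j * (1 - p)**(n - j)
                new[s + w * j] = new.get(s + w * j, 0.0) + pj
        pmf = new
    return pmf
```

For instance, giving myocardial infarction weight 2 and bleeding weight 1 (weights illustrative) yields the exact distribution of the combined severity score.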


Subject(s)
Biometry/methods; Endpoint Determination/methods; Computer Simulation; Humans; Probability
6.
Sci Rep; 9(1): 1017, 2019 Jan 31.
Article in English | MEDLINE | ID: mdl-30705328

ABSTRACT

A social dilemma appears in the public goods problem, where the individual has to decide whether to contribute to a common resource. The total contributions to the common pool are increased by a synergy factor and evenly split among the members. The ideal outcome occurs if everyone contributes the maximum amount. However, regardless of what the others do, each individual is better off by contributing nothing. Yet, cooperation is largely observed in human society. Many mechanisms have been shown to promote cooperation in humans, alleviating, or even resolving, the social dilemma. One class of mechanisms that is under-explored is the spillover of experiences obtained from different environments. There is some evidence that positive experiences promote cooperative behaviour. Here, we address the question of how experiencing positive cooperative interactions (obtained in an environment where cooperation yields high returns) affects the level of cooperation in social dilemma interactions. In a laboratory experiment, participants played repeated public goods games (PGGs) with rounds alternating between positive interactions and social dilemma interactions. We show that, instead of promoting pro-social behaviour, the presence of positive interactions lowered the level of cooperation in the social dilemma interactions. Our analysis suggests that the high return obtained in the positive interactions sets a reference point that accentuates participants' perceptions that contributing in social dilemma interactions is a bad investment.
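
The payoff structure described above can be written down directly; a minimal sketch of the linear public goods game (parameter values in the test are illustrative):

```python
def pgg_payoffs(contributions, endowment, synergy):
    """Payoffs in a linear public goods game: each player keeps the part of
    the endowment they did not contribute, plus an equal share of the common
    pool after it is multiplied by the synergy factor."""
    pool = synergy * sum(contributions)
    share = pool / len(contributions)
    return [endowment - c + share for c in contributions]
```

With 4 players, endowment 10 and synergy 1.6, full cooperation pays 16 each, but a lone free-rider earns 22 while the others drop to 12, which is the dilemma.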


Subject(s)
Cooperative Behavior; Social Behavior; Game Theory; Humans; Probability; Time Factors
7.
Stat Med; 38(12): 2126-2138, 2019 May 30.
Article in English | MEDLINE | ID: mdl-30689224

ABSTRACT

Sequential hypothesis testing is now an important tool for postmarket drug and vaccine safety surveillance. When the number of adverse events accruing in time is assumed to follow a Poisson distribution, and the baseline Poisson rate is assessed only with uncertainty, the conditional maximized sequential probability ratio test, CMaxSPRT, is a formal solution. CMaxSPRT is based on comparing monitored data with historical matched data, and it was originally developed under a flat signaling threshold. This paper demonstrates that CMaxSPRT can be performed under non-flat thresholds as well. We frame the discussion in terms of the alpha spending approach. In addition, we offer a rule of thumb for choosing the shape of the signaling threshold that minimizes the expected time to signal and the expected sample size. An example involving surveillance for adverse events after influenza vaccination illustrates the method.
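
CMaxSPRT itself conditions on the historical data; for intuition only, the simpler unconditional Poisson MaxSPRT statistic (the log-likelihood ratio maximized over relative risks at least 1) can be sketched as:

```python
from math import log

def maxsprt_llr(c, mu):
    """Unconditional Poisson MaxSPRT log-likelihood ratio: observed count c
    against expected count mu under H0, maximized over relative risks >= 1.
    Monitoring signals when this statistic crosses a critical value."""
    if c <= mu:
        return 0.0               # the maximizing relative risk is 1
    return c * log(c / mu) - (c - mu)
```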


Subject(s)
Clinical Trials as Topic/methods; Poisson Distribution; Product Surveillance, Postmarketing/methods; Adverse Drug Reaction Reporting Systems; Computer Simulation; Humans; Influenza Vaccines/adverse effects; Sample Size
8.
Seq Anal; 38(1): 115-133, 2019.
Article in English | MEDLINE | ID: mdl-32153315

ABSTRACT

Sequential analysis is now commonly used for post-market drug and vaccine safety surveillance, and a Poisson stochastic process is typically used for rare adverse events. The conditional maximized sequential probability ratio test, CMaxSPRT, is a powerful tool when there is uncertainty in the estimated expected counts under the null hypothesis. This paper derives exact critical values for CMaxSPRT, as well as its statistical power and expected time to signal. This is done for both continuous and group sequential analysis, and for different rejection boundaries. It is also shown how to adjust for covariates in the sequential design. A table of critical values is provided for selected parameters and rejection boundaries, while new functions in the R Sequential package can be used for other calculations. In addition, the method is illustrated with data on the monitoring of adverse events following Pediarix vaccination.

9.
Stat Med; 37(1): 107-118, 2018 Jan 15.
Article in English | MEDLINE | ID: mdl-28948642

ABSTRACT

Type I error probability spending functions are commonly used for designing sequential analyses of binomial data in clinical trials, and their use is quickly emerging in near-continuous sequential analysis for post-market drug and vaccine safety surveillance. It is well known that, in clinical trials, it is important to minimize the expected sample size when the null hypothesis is not rejected. In post-market drug and vaccine safety surveillance, that is not the priority. There, especially when the surveillance involves the identification of potential signals, the meaningful statistical performance measure to minimize is the expected sample size when the null hypothesis is rejected. The present paper shows that, instead of the convex Type I error spending shape conventionally used in clinical trials, a concave shape is more suitable for post-market drug and vaccine safety surveillance. This is shown for both continuous and group sequential analysis.
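
One common way to realize concave versus convex shapes is the power family of spending functions, alpha(t) = alpha * t^rho; this sketch is illustrative and not taken from the paper:

```python
def power_spending(alpha, rho, fractions):
    """Cumulative Type I error spent at each information fraction t, using
    the power family alpha * t**rho: rho < 1 gives a concave shape (spends
    error early), rho > 1 a convex shape (spends error late)."""
    return [alpha * t**rho for t in fractions]

fracs = [0.25, 0.5, 0.75, 1.0]
concave = power_spending(0.05, 0.5, fracs)   # front-loaded spending
convex = power_spending(0.05, 3.0, fracs)    # back-loaded spending
```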


Subject(s)
Adverse Drug Reaction Reporting Systems/statistics & numerical data; Product Surveillance, Postmarketing/statistics & numerical data; Vaccines/adverse effects; Adverse Drug Reaction Reporting Systems/economics; Biostatistics; Data Interpretation, Statistical; Humans; Models, Statistical; Probability; Product Surveillance, Postmarketing/economics
10.
Methodol Comput Appl Probab; 20(2): 739-750, 2018 Jun.
Article in English | MEDLINE | ID: mdl-31889890

ABSTRACT

Statistical sequential hypothesis testing is meant to analyze cumulative data accruing in time. The methods can be divided into two types, group and continuous sequential approaches, and a natural question is whether one approach dominates the other in some sense. For Poisson stochastic processes, we prove that continuous sequential analysis is uniformly better than group sequential analysis under a comprehensive class of statistical performance measures. Hence, optimal solutions lie in the class of continuous designs. This paper also offers a pioneering study that compares classical Type I error spending functions in terms of the expected number of events to signal, carried out over a number of tuning-parameter scenarios. The results indicate that a log-exp shape for the Type I error spending function is the best choice in most of the evaluated scenarios.
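
For intuition on why continuous monitoring can dominate (a toy sketch with made-up LLR values, not the paper's proof, which holds under matched alpha): with the same threshold, checking after every event can never signal later than checking only at group looks:

```python
def first_signal(llrs, cv, group_size=1):
    """Event count at which monitoring signals when the running LLR sequence
    is only checked every `group_size` events (continuous monitoring is
    group_size = 1). Returns None if the threshold is never crossed at a
    checked look."""
    for i in range(group_size - 1, len(llrs), group_size):
        if llrs[i] >= cv:
            return i + 1
    return None

llrs = [0.2, 0.9, 1.4, 1.1, 2.3]   # illustrative running LLR values
```

Here continuous checking signals at event 3, while checking every 2 events misses the peak at event 3 and only signals at event 4.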

11.
Revstat Stat J; 15(3): 373-394, 2017 Jul.
Article in English | MEDLINE | ID: mdl-34393695

ABSTRACT

The CDC Vaccine Safety Datalink project has pioneered the use of near real-time post-market vaccine safety surveillance for the rapid detection of adverse events. With weekly analyses, continuous sequential methods are used, allowing investigators to evaluate the data near-continuously while still maintaining the correct overall alpha level. With continuous sequential monitoring, the null hypothesis may be rejected after only one or two adverse events are observed. In this paper, we explore continuous sequential monitoring when the null is not allowed to be rejected until a minimum number of events have been observed. We also evaluate continuous sequential analysis with a delayed start, where monitoring begins only after a certain sample size has been attained. Tables with exact critical values, statistical power, and average time to signal are provided. We show that, with the first option, it is possible to both increase the power and reduce the expected time to signal while keeping the alpha level the same. The second option is only useful if the start of the surveillance is delayed for logistical reasons, when there is a group of data available at the first analysis, followed by continuous or near-continuous monitoring thereafter.
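
As a toy binomial analogue of the minimum-events idea (our own construction, not the paper's setting), exact null signaling probabilities can be computed by dynamic programming, showing that forbidding rejection before a minimum number of events lowers the alpha actually spent:

```python
from math import log

def llr_binom(c, n, p0=0.5):
    """Binomial MaxSPRT-style log-likelihood ratio for c exposed events
    among n total, against null exposure probability p0."""
    if n == 0 or c / n <= p0:
        return 0.0
    val = c * log(c / (n * p0))
    if c < n:
        val += (n - c) * log((n - c) / (n * (1 - p0)))
    return val

def exact_alpha(cv, n_max, min_events, p0=0.5):
    """Exact null probability of signaling by n_max events under continuous
    monitoring that may only reject once min_events events have accrued."""
    state = {0: 1.0}     # state[c] = P(c exposed among n so far, no signal)
    alpha = 0.0
    for n in range(1, n_max + 1):
        new = {}
        for c, pr in state.items():
            for j in (0, 1):
                c2 = c + j
                p2 = pr * (p0 if j else 1 - p0)
                if n >= min_events and llr_binom(c2, n, p0) >= cv:
                    alpha += p2
                else:
                    new[c2] = new.get(c2, 0.0) + p2
        state = new
    return alpha
```

In this toy design, requiring at least 2 events before rejection cuts the null signaling probability substantially, alpha that could then be re-spent later.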

12.
Stat Med; 35(9): 1441-1453, 2016 Apr 30.
Article in English | MEDLINE | ID: mdl-26561330

ABSTRACT

Group sequential hypothesis testing is now widely used to analyze prospective data. If Monte Carlo simulation is used to construct the signaling threshold, the challenge is how to manage the Type I error probability of each of the multiple tests without losing control of the overall significance level. This paper introduces a valid method for managing the alpha spending at each of a sequence of Monte Carlo tests. The method also enables the use of a sequential simulation strategy within each Monte Carlo test, which is useful for saving computation time. The proposed procedure thus allows for sequential Monte Carlo tests within a sequential analysis, which is why it is called a "composite sequential" test. An upper bound on the potential power loss of the proposed method is derived. The composite sequential design is illustrated through an application to post-market vaccine safety surveillance data.
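
A classic ingredient behind sequential simulation in Monte Carlo testing is the Besag-Clifford early-stopping p-value, sketched below; this is the textbook idea, not the paper's composite procedure, and all names are ours:

```python
import random

def sequential_mc_pvalue(t_obs, simulate_null, h=10, m_max=999, rng=None):
    """Besag-Clifford style sequential Monte Carlo p-value: draw null
    replicates one at a time and stop as soon as h of them reach t_obs,
    returning h/m; otherwise return the usual (exceed+1)/(m_max+1).
    `simulate_null` draws one null test statistic from an rng."""
    rng = rng or random.Random(0)
    exceed = 0
    for m in range(1, m_max + 1):
        if simulate_null(rng) >= t_obs:
            exceed += 1
            if exceed == h:
                return h / m          # early stop saves simulation time
    return (exceed + 1) / (m_max + 1)
```

Unremarkable observed statistics stop after only a few dozen replicates, while extreme ones use the full budget and get a precise small p-value.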


Subject(s)
Monte Carlo Method; Product Surveillance, Postmarketing/methods; Vaccines/adverse effects; Data Interpretation, Statistical; Humans; Models, Statistical; Probability; Vaccines/therapeutic use