2.
AAPS J ; 26(3): 50, 2024 04 17.
Article in English | MEDLINE | ID: mdl-38632178

ABSTRACT

Comparative bioavailability studies often involve multiple groups of subjects for a variety of reasons, such as clinical capacity limitations. This raises questions about the validity of pooling data from these groups in the statistical analysis and whether a group-by-treatment interaction should be evaluated. We investigated the presence or absence of group-by-treatment interactions through both simulation techniques and a meta-study of well-controlled trials. Our findings reveal that the test can falsely detect an interaction when no true group-by-treatment interaction exists. Conversely, when a true group-by-treatment interaction does exist, it often goes undetected. In our meta-study, the detected group-by-treatment interactions were observed at approximately the level of the test and, thus, can be considered false positives. Testing for a group-by-treatment interaction is both misleading and uninformative. It often falsely identifies an interaction when none exists and fails to detect a real one. This occurs because the test is performed between subjects in crossover designs, whereas studies are powered to compare treatments within subjects. This work demonstrates the lack of utility of including a group-by-treatment interaction in the model when assessing single-site comparative bioavailability studies in which the clinical trial structure is divided into groups.
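The behaviour of an interaction test under the null can be illustrated with a toy simulation. The sketch below uses a simple two-way between-subject layout with invented group sizes and effect sizes (far simpler than the crossover designs the paper studies): with no true group-by-treatment interaction in the data-generating model, the test rejects at roughly its nominal level, i.e. every detection is a false positive.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def interaction_pvalue(y, group, treat):
    # Interaction F-test via nested least-squares fits:
    # full model has group, treatment, and their product; reduced drops the product.
    n = len(y)
    X_full = np.column_stack([np.ones(n), group, treat, group * treat])
    X_red = np.column_stack([np.ones(n), group, treat])
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return r @ r
    rss_f, rss_r = rss(X_full), rss(X_red)
    df1, df2 = 1, n - X_full.shape[1]
    F = (rss_r - rss_f) / df1 / (rss_f / df2)
    return stats.f.sf(F, df1, df2)

n_sim, alpha, hits = 2000, 0.05, 0
for _ in range(n_sim):
    group = np.repeat([0, 1], 24)   # two groups of 24 subjects
    treat = np.tile([0, 1], 24)     # test vs reference
    # treatment effect present, but NO group-by-treatment interaction
    y = 0.1 * treat + rng.normal(0, 0.25, 48)
    if interaction_pvalue(y, group, treat) < alpha:
        hits += 1
print(f"empirical false-positive rate: {hits / n_sim:.3f}")
```

The empirical rejection rate lands near the 5% level of the test, matching the meta-study observation that detected interactions occur at approximately the level of the test.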


Subject(s)
Research Design , Humans , Biological Availability , Cross-Over Studies
3.
J Pharm Pharm Sci ; 25: 285-296, 2022.
Article in English | MEDLINE | ID: mdl-36112990

ABSTRACT

PURPOSE: More than a decade ago the option to assess highly variable drugs / drug products by reference-scaled average bioequivalence was introduced in regulatory practice. Recommended approaches differ between jurisdictions and may lead to different conclusions even for the same data set. To our knowledge, the implemented methods have not been directly compared for their operating characteristics (Type I Error and power). METHODS: We performed Monte Carlo simulations to assess the consumer risk and the clinically relevant difference for the recommended regulatory settings. RESULTS: In all methods for reference-scaled average bioequivalence the Type I Error can be inflated, with a consequently compromised consumer risk. Furthermore, the clinically relevant difference could vary between studies performed with the same reference product. CONCLUSIONS: Only average bioequivalence with fixed, widened limits would both maintain the consumer risk and offer an unambiguously defined clinically not relevant difference. As long as such an approach is not implemented in regulatory practice, we recommend adjusting the level of the test α.
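The jurisdictional differences can be made concrete. The sketch below contrasts the EMA's capped expansion of the acceptance limits with the FDA-style uncapped reference scaling, using the commonly published formulas (treat the constants and cutoffs as assumptions of this sketch, not a regulatory reference):

```python
import math

def ema_abel_limits(cv_wr):
    """EMA-style average bioequivalence with expanding limits (ABEL)."""
    if cv_wr <= 0.30:                        # no expansion at or below CVwR 30%
        return 0.80, 1.25
    s_wr = math.sqrt(math.log(1 + cv_wr**2))     # CV -> within-subject SD
    s_wr = min(s_wr, math.sqrt(math.log(1 + 0.50**2)))  # expansion capped at CVwR 50%
    upper = math.exp(0.76 * s_wr)
    return 1 / upper, upper

def fda_scaled_limits(cv_wr):
    """Implied limits of FDA-style reference-scaled ABE (no upper cap)."""
    s_wr = math.sqrt(math.log(1 + cv_wr**2))
    if s_wr < 0.294:                         # below cutoff: unscaled ABE
        return 0.80, 1.25
    k = math.log(1.25) / 0.25                # regulatory constant
    upper = math.exp(k * s_wr)
    return 1 / upper, upper

for cv in (0.25, 0.35, 0.50, 0.60):
    print(f"CVwR {cv:.0%}: EMA {ema_abel_limits(cv)}  FDA {fda_scaled_limits(cv)}")
```

Because the EMA expansion stops at CVwR 50% while the FDA criterion keeps scaling, the same data set can be acceptable under one approach and not the other, which is exactly the kind of divergence the comparison addresses.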


Subject(s)
Therapeutic Equivalency
4.
Biom J ; 63(1): 122-133, 2021 01.
Article in English | MEDLINE | ID: mdl-33000873

ABSTRACT

Bioequivalence studies are the pivotal clinical trials submitted to regulatory agencies to support the marketing applications of generic drug products. Average bioequivalence (ABE) is used to determine whether the mean values of the pharmacokinetic measures determined after administration of the test and reference products are comparable. Two-stage 2×2 crossover adaptive designs (TSDs) are becoming increasingly popular because they allow assumptions about the clinically meaningful treatment effect and a reliable initial guess at the unknown within-subject variability. At an interim look, if ABE is not declared with the initial sample size, they allow it to be increased depending on the estimated variability, enrolling additional subjects at a second stage, or the study to be stopped for futility if bioequivalence is unlikely. This is crucial because both parameters must be clearly prespecified in protocols, and the strategy agreed with regulatory agencies in advance, with emphasis on controlling the overall type I error. We present an iterative method to adjust the significance levels at each stage which preserves the overall type I error for a wide set of scenarios that should include the true unknown variability value. Simulations showed adjusted significance levels higher than 0.0300 in most cases, with the type I error always below 5% and a power of at least 80%. TSDs work particularly well for coefficients of variation below 0.3, where the balance between power and the percentage of studies proceeding to stage 2 is especially favourable. Our approach might support discussions with regulatory agencies.
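The effect of running a stage at an adjusted significance level can be sketched with a standard TOST power approximation for a 2×2 crossover. The 0.0294 level below is the widely cited Potvin "Method B" adjustment, used here purely as an example of an adjusted level (the power formula is the usual noncentral-t approximation, not this paper's iterative method):

```python
import math
from scipy import stats

def tost_power(n, cv, gmr, alpha=0.05):
    """Approximate power of TOST in a 2x2 crossover (noncentral-t approximation)."""
    s = math.sqrt(math.log(1 + cv**2))    # within-subject SD on the log scale
    se = s * math.sqrt(2 / n)
    df = n - 2
    t_crit = stats.t.ppf(1 - alpha, df)
    d1 = (math.log(gmr) - math.log(0.80)) / se
    d2 = (math.log(gmr) - math.log(1.25)) / se
    return max(0.0, stats.nct.cdf(-t_crit, df, d2) - stats.nct.cdf(t_crit, df, d1))

def sample_size(cv, gmr, target=0.80, alpha=0.05):
    n = 12                                # common regulatory minimum
    while tost_power(n, cv, gmr, alpha) < target:
        n += 2                            # keep sequences balanced
    return n

print(sample_size(0.25, 0.95))                 # single-stage at alpha 0.05
print(sample_size(0.25, 0.95, alpha=0.0294))   # adjusted level, as in a TSD stage
```

The lower adjusted level widens the confidence interval at each stage, so more subjects are needed for the same power; this cost is what makes the careful choice of per-stage levels, as investigated in the paper, worthwhile.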


Subject(s)
Research Design , Cross-Over Studies , Humans , Sample Size , Therapeutic Equivalency
5.
Pharm Stat ; 20(2): 272-281, 2021 03.
Article in English | MEDLINE | ID: mdl-33063443

ABSTRACT

For the clinical development of a new drug, the determination of dose-proportionality is an essential part of the pharmacokinetic evaluations, which may provide early indications of non-linear pharmacokinetics and may help to identify sub-populations with divergent clearances. Prior to making any conclusions regarding dose-proportionality, the goodness-of-fit of the model must be assessed to evaluate the model performance. We propose the use of simulation-based visual predictive checks to improve the validity of dose-proportionality conclusions for complex designs. We provide an illustrative example and include a table to facilitate review by regulatory authorities.
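The standard power-model analysis underlying such assessments can be sketched in a few lines; the dose levels and data below are simulated, and the slope acceptance range is the commonly used criterion derived from the bioequivalence limits and the studied dose ratio (an assumption of this sketch, not necessarily the paper's criterion):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# hypothetical study: four dose levels, six subjects each, true slope b = 1
dose = np.repeat([10.0, 20.0, 40.0, 80.0], 6)
auc = 5.0 * dose * rng.lognormal(0.0, 0.10, dose.size)

# power model: ln(AUC) = ln(a) + b * ln(dose)
x, y = np.log(dose), np.log(auc)
res = stats.linregress(x, y)
t_crit = stats.t.ppf(0.95, dose.size - 2)
lo, hi = res.slope - t_crit * res.stderr, res.slope + t_crit * res.stderr

# acceptance range for the slope given the dose ratio r = 80/10
r = 80.0 / 10.0
b_lo = 1 + np.log(0.80) / np.log(r)
b_hi = 1 + np.log(1.25) / np.log(r)
print(f"slope 90% CI: [{lo:.3f}, {hi:.3f}], acceptance: [{b_lo:.3f}, {b_hi:.3f}]")
print("dose-proportional" if b_lo <= lo and hi <= b_hi else "inconclusive")
```

A visual predictive check of the kind the paper proposes would then overlay observed AUCs on simulation intervals generated from this fitted model, rather than relying on the slope CI alone.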


Subject(s)
Dose-Response Relationship, Drug , Computer Simulation , Humans
6.
AAPS J ; 22(2): 44, 2020 02 07.
Article in English | MEDLINE | ID: mdl-32034551

ABSTRACT

In order to help companies qualify and validate the software used to evaluate bioequivalence trials in a replicate design intended for average bioequivalence with expanding limits, this work aims to define datasets with known results. This paper releases 30 reference datasets into the public domain along with proposed consensus results. A proposal is made for results that should be used as validation targets. The datasets were evaluated by seven different software packages according to methods proposed by the European Medicines Agency. For the estimation of CVwR and Method A, all software packages produced results that are in agreement across all datasets. Due to different approximations of the degrees of freedom, slight differences were observed in two software packages for Method B in highly incomplete datasets. All software packages were suitable for the estimation of CVwR and Method A. For Method B, different methods for approximating the denominator degrees of freedom could lead to slight differences, which eventually could lead to contrary decisions in very rare borderline cases.
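The sensitivity to the denominator degrees of freedom is easy to demonstrate: with a fixed point estimate and standard error, two df approximations can put the upper 90% confidence limit on opposite sides of the 1.25 boundary. The numbers below are invented to sit on that borderline:

```python
import math
from scipy import stats

pe, se = 0.1365, 0.05           # point estimate and SE on the log scale
boundary = math.log(1.25)       # upper bioequivalence acceptance limit

for df in (20, 16):             # e.g. residual df vs. a smaller df approximation
    ci_upper = pe + stats.t.ppf(0.95, df) * se
    print(f"df={df}: upper 90% CL = {math.exp(ci_upper):.4f}",
          "PASS" if ci_upper <= boundary else "FAIL")
```

The larger df yields a slightly smaller t quantile and a passing interval, while the smaller df fails; this is the "contrary decisions in very rare borderline cases" the paper describes.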


Subject(s)
Clinical Trials as Topic , Datasets as Topic , Research Design , Software Validation , Therapeutic Equivalency , Data Accuracy , Humans , Reproducibility of Results
7.
Pharm Res ; 33(11): 2805-14, 2016 11.
Article in English | MEDLINE | ID: mdl-27480875

ABSTRACT

PURPOSE: To verify previously reported findings for the European Medicines Agency's method of Average Bioequivalence with Expanding Limits (ABEL) for assessing highly variable drugs, and to extend the assessment to other replicate designs across a wide range of sample sizes and CVs. To explore the properties of a new modified method which maintains the consumer risk ≤0.05 in all cases. METHODS: Monte-Carlo simulations of three different replicate designs covering a wide range of sample sizes and intra-subject variabilities were performed. RESULTS: At the switching variability of CVwR 30% the consumer risk is substantially inflated, up to 9.2%, which translates into a relative increase of up to 84%. The critical region of inflated type I errors ranges from approximately CVwR 25% up to 45%. The proposed method of iteratively adjusting α maintains the consumer risk at the desired level of ≤5% independently of design, variability, and sample size. CONCLUSIONS: Applying the European Medicines Agency's ABEL method at the nominal level of 0.05 inflates the type I error to an unacceptable degree, especially close to a CVwR of 30%. To control the type I error, nominal levels ≤0.05 should be employed. Iteratively adjusting α is suggested to find optimal levels of the test.
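The mechanism behind the inflation, with acceptance limits re-estimated from each study's observed swR while the true ratio sits at the conventional 1.25 boundary, can be reproduced qualitatively with a deliberately simplified simulation. The design constants and the normal/chi-square shortcuts below are assumptions of this sketch, not the paper's exact replicate-design set-up:

```python
import math
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

n, n_sim = 24, 20000
cv_wr = 0.30                                    # true variability at the switching point
s_wr = math.sqrt(math.log(1 + cv_wr**2))        # true within-subject SD (reference)
mu = math.log(1.25)                             # true GMR at the conventional limit
df = n - 2
cap = 0.76 * math.sqrt(math.log(1 + 0.50**2))   # ABEL expansion cap at CVwR 50%

# each simulated study: an estimated s_wr and an estimated treatment contrast
s_hat = s_wr * np.sqrt(rng.chisquare(df, n_sim) / df)
pe = rng.normal(mu, s_wr / math.sqrt(n), n_sim)
se_hat = s_hat / math.sqrt(n)
t_crit = stats.t.ppf(0.95, df)

cv_hat = np.sqrt(np.exp(s_hat**2) - 1)
limit = np.where(cv_hat > 0.30, np.minimum(0.76 * s_hat, cap), math.log(1.25))
passed = (pe - t_crit * se_hat >= -limit) & (pe + t_crit * se_hat <= limit)
print(f"empirical type I error: {passed.mean():.3f}")
```

Because the limits are random, studies that overestimate swR get expanded limits and studies that underestimate it get narrower confidence intervals, and both tails pass more often than 5%; the inflation disappears only when the limits are fixed or α is adjusted downward, as the paper concludes.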


Subject(s)
Computer Simulation , Therapeutic Equivalency , Biostatistics , Humans , Models, Biological , Monte Carlo Method , Research Design , Sample Size
8.
AAPS J ; 17(2): 400-4, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25488055

ABSTRACT

In order to help companies qualify and validate the software used to evaluate bioequivalence trials with two parallel treatment groups, this work aims to define datasets with known results. This paper puts a total of 11 datasets into the public domain along with proposed consensus results obtained via evaluations from six different software packages (R, SAS, WinNonlin, OpenOffice Calc, Kinetica, EquivTest). Insofar as possible, datasets were evaluated both with and without the assumption of equal variances for the construction of a 90% confidence interval. Not all software packages provide functionality for the assumption of unequal variances (EquivTest, Kinetica), and not all packages can handle datasets with more than 1000 subjects per group (WinNonlin). Where results could be obtained across all packages, one showed questionable results when datasets contained unequal group sizes (Kinetica). A proposal is made for the results that should be used as validation targets.
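The two evaluation modes, with and without the equal-variance assumption, amount to different standard errors and degrees of freedom for the same group contrast. The sketch below computes both on invented log-scale data, with the Welch-Satterthwaite df written out explicitly rather than taken from any one package:

```python
import math
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# hypothetical log-transformed PK responses, unequal group sizes and variances
test = rng.normal(0.00, 0.30, 48)
ref = rng.normal(0.05, 0.45, 36)

d = test.mean() - ref.mean()
v_t, v_r = test.var(ddof=1) / test.size, ref.var(ddof=1) / ref.size

# pooled (equal variances assumed)
sp2 = ((test.size - 1) * test.var(ddof=1) +
       (ref.size - 1) * ref.var(ddof=1)) / (test.size + ref.size - 2)
se_p = math.sqrt(sp2 * (1 / test.size + 1 / ref.size))
df_p = test.size + ref.size - 2

# Welch (unequal variances), Satterthwaite degrees of freedom
se_w = math.sqrt(v_t + v_r)
df_w = (v_t + v_r) ** 2 / (v_t**2 / (test.size - 1) + v_r**2 / (ref.size - 1))

for label, se, df in (("pooled", se_p, df_p), ("Welch", se_w, df_w)):
    t = stats.t.ppf(0.95, df)
    print(f"{label}: 90% CI of T/R = "
          f"[{math.exp(d - t * se):.4f}, {math.exp(d + t * se):.4f}]")
```

With unequal group sizes and variances the two intervals differ, which is why validation targets for both modes, where a package supports them, are useful.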


Subject(s)
Clinical Trials as Topic/methods , Computer Simulation , Software , Datasets as Topic/statistics & numerical data , Humans , Therapeutic Equivalency , Validation Studies as Topic
9.
AAPS J ; 16(6): 1292-7, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25212768

ABSTRACT

It is difficult to validate statistical software used to assess bioequivalence since very few datasets with known results are in the public domain, and the few that are published are of moderate size and balanced. The purpose of this paper is therefore to introduce reference datasets of varying complexity in terms of dataset size and characteristics (balance, range, outlier presence, residual error distribution) for 2-treatment, 2-period, 2-sequence bioequivalence studies and to report their point estimates and 90% confidence intervals which companies can use to validate their installations. The results for these datasets were calculated using the commercial packages EquivTest, Kinetica, SAS and WinNonlin, and the non-commercial package R. The results of three of these packages mostly agree, but imbalance between sequences seems to provoke questionable results with one package, which illustrates well the need for proper software validation.
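For reference, the core computation such datasets are meant to validate, a 2-treatment, 2-period, 2-sequence crossover evaluated on the log scale, fits in a few lines. The dataset below is simulated (balanced, no outliers), not one of the paper's reference datasets:

```python
import math
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n = 12                          # subjects per sequence (TR and RT)
sigma_w = 0.25                  # within-subject SD, log scale
tau = math.log(0.95)            # true log(T/R)
pi = 0.03                       # period effect, log scale

subj = rng.normal(0.0, 0.5, 2 * n)   # between-subject effects

# sequence TR: T in period 1, R in period 2; sequence RT reversed
tr = np.column_stack([subj[:n] + tau + rng.normal(0, sigma_w, n),
                      subj[:n] + pi + rng.normal(0, sigma_w, n)])
rt = np.column_stack([subj[n:] + rng.normal(0, sigma_w, n),
                      subj[n:] + pi + tau + rng.normal(0, sigma_w, n)])

d_tr = (tr[:, 0] - tr[:, 1]) / 2     # per-subject period half-differences
d_rt = (rt[:, 0] - rt[:, 1]) / 2     # (subject effects cancel within subject)
pe = d_tr.mean() - d_rt.mean()       # point estimate of log(T/R)
s2 = (d_tr.var(ddof=1) + d_rt.var(ddof=1)) / 2
se = math.sqrt(s2 * 2 / n)
t_crit = stats.t.ppf(0.95, 2 * n - 2)
lo, hi = math.exp(pe - t_crit * se), math.exp(pe + t_crit * se)
print(f"PE {math.exp(pe):.4f}, 90% CI [{lo:.4f}, {hi:.4f}]")
```

This half-difference formulation is algebraically equivalent to the usual crossover ANOVA for balanced data; imbalance between sequences is exactly where, as the paper found, package results can start to diverge.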


Subject(s)
Biological Availability , Computer Simulation , Datasets as Topic/statistics & numerical data , Models, Statistical , Confidence Intervals , Software