Results 1 - 20 of 83
1.
AAPS J ; 26(1): 15, 2024 01 24.
Article in English | MEDLINE | ID: mdl-38267593

ABSTRACT

On October 27-28, 2022, the US Food and Drug Administration (FDA) and the Center for Research on Complex Generics (CRCG) hosted a virtual public workshop titled "Best Practices for Utilizing Modeling Approaches to Support Generic Product Development." This report summarizes the presentations and panel discussions for a session titled "Development of Quantitative Comparative Approaches to Support Complex Generic Drug Development." This session featured speakers and panelists from both the generic industry and the FDA who described applications of advanced quantitative approaches for generic drug development and regulatory assessment within three main topics of interest: (1) API sameness assessment for complex generics, (2) particle size distribution assessment, and (3) dissolution profile similarity comparison. The key takeaways were that the analysis of complex data poses significant challenges to the application of conventional statistical bioequivalence methods, and that there are various opportunities to use data analytics approaches for developing and applying suitable equivalence assessment methods.


Subject(s)
Drug Development , Drugs, Generic , United States , Research Design , Therapeutic Equivalency , United States Food and Drug Administration
2.
J Biopharm Stat ; 34(1): 78-89, 2024 Jan 02.
Article in English | MEDLINE | ID: mdl-36710402

ABSTRACT

In vitro dissolution profiles have been shown to correlate with drug absorption and are often used as a metric for assessing in vitro bioequivalence between a test product and the corresponding reference product. Various methods have been developed to assess the similarity between two dissolution profiles. In particular, the similarity factor f2 has been reviewed and discussed extensively in many statistical articles. Although f2 lacks inferential statistical properties, the f2 estimate and its various modified versions are the most widely used metrics for comparing dissolution profiles. In this paper, we investigated the performance of the naive f2 estimate method, the bootstrap f2 confidence interval method, and the bias-corrected and accelerated (BCa) bootstrap f2 confidence interval method for comparing dissolution profiles. Our studies show that the naive f2 estimate method and the BCa bootstrap f2 confidence interval method are unable to control the type I error rate. The bootstrap f2 confidence interval method can control the type I error rate below a specified level, but it is highly conservative and substantially reduces the power of the test. To address these issues, we recommend a bias-corrected (BC) bootstrap f2 confidence interval method. The type I error rate, power, and sensitivity of the different f2 methods were compared through simulation. The recommended BC bootstrap f2 confidence interval method shows better type I error control than the naive f2 estimate method and the BCa bootstrap f2 confidence interval method, and it provides better power than the bootstrap f2 confidence interval method.
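
The standard f2 formula and a generic bias-corrected (BC) percentile bootstrap lower bound can be sketched as follows; the unit counts, simulated profiles, and bootstrap details are illustrative assumptions and may differ from the exact procedure evaluated in the article.

```python
import numpy as np
from scipy.stats import norm

def f2(ref, test):
    """Similarity factor f2 from mean dissolution profiles (standard definition)."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    msd = np.mean((ref - test) ** 2)          # mean squared difference over time points
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

def f2_bc_bootstrap_lower(ref_units, test_units, alpha=0.05, n_boot=5000, seed=1):
    """Bias-corrected (BC) percentile bootstrap lower bound for f2.

    ref_units, test_units: (n_units x n_timepoints) arrays of per-unit profiles.
    Generic BC bootstrap sketch; the article's exact procedure may differ.
    """
    rng = np.random.default_rng(seed)
    obs = f2(ref_units.mean(axis=0), test_units.mean(axis=0))
    boots = []
    for _ in range(n_boot):
        r = ref_units[rng.integers(0, len(ref_units), len(ref_units))]
        t = test_units[rng.integers(0, len(test_units), len(test_units))]
        boots.append(f2(r.mean(axis=0), t.mean(axis=0)))
    boots = np.asarray(boots)
    p = np.clip(np.mean(boots < obs), 1e-6, 1 - 1e-6)
    z0 = norm.ppf(p)                              # bias-correction constant
    lower_q = norm.cdf(2 * z0 + norm.ppf(alpha))  # BC-adjusted lower quantile
    return obs, np.quantile(boots, lower_q)

# Hypothetical example: 12 units per batch, 4 time points
rng = np.random.default_rng(0)
ref = 70 + 5 * rng.standard_normal((12, 4)) + np.array([0, 10, 15, 18])
tst = 70 + 5 * rng.standard_normal((12, 4)) + np.array([0, 8, 14, 18])
print(f2_bc_bootstrap_lower(ref, tst))
```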


Subject(s)
F Factor , Humans , Solubility , Therapeutic Equivalency , Bias
3.
J Biopharm Stat ; 31(2): 168-179, 2021 03.
Article in English | MEDLINE | ID: mdl-32873122

ABSTRACT

Baseline selection in concentration-QTc (C-QTc) modeling is not well studied in the literature. The time-matched baseline and the pre-dose baseline have commonly been used as covariates in C-QTc modeling for parallel and crossover studies, respectively. It has been shown that a C-QTc model using the time-matched baseline has a low chance of demonstrating assay sensitivity in a parallel study. To better understand the impact of baseline selection in C-QTc modeling, we examined the original and subsampled moxifloxacin and placebo data from more than 50 TQT studies submitted to the FDA with regard to assay sensitivity. Our analyses show that baseline selection (time-matched, pre-dose, or average) affects predictions from C-QTc modeling and that the impact depends on the study design (parallel or crossover). The impact on the categorical table of ΔQTc is unlikely to alter the interpretation of the outlier category (ΔQTc > 60 ms) that corresponds to the regulatory concern. The results presented here can guide C-QTc study design as well as baseline selection in C-QTc modeling.
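
The modeling choices discussed here can be illustrated with a minimal linear mixed-effects sketch (statsmodels); the simulated data, column names, and the pre-dose-baseline choice are assumptions for illustration, not the study's datasets or its exact model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated long-format C-QTc data (hypothetical): one row per subject x time point.
rng = np.random.default_rng(0)
n_subj, n_time = 24, 6
subject = np.repeat(np.arange(n_subj), n_time)
baseline = np.repeat(400 + 10 * rng.standard_normal(n_subj), n_time)  # pre-dose baseline QTc
conc = rng.gamma(2.0, 500.0, n_subj * n_time)                         # plasma concentration
subj_eff = np.repeat(5 * rng.standard_normal(n_subj), n_time)
dQTc = 2.0 + 0.004 * conc - 0.05 * (baseline - 400) + subj_eff + 6 * rng.standard_normal(n_subj * n_time)
df = pd.DataFrame({"subject": subject, "baseline": baseline, "conc": conc, "dQTc": dQTc})

# Random intercept per subject; the chosen baseline QTc entered as a fixed covariate.
fit = smf.mixedlm("dQTc ~ conc + baseline", data=df, groups=df["subject"]).fit(reml=True)
print(fit.summary())

# Model-predicted dQTc at a hypothetical geometric-mean Cmax, the quantity
# typically compared against the 10-ms threshold of regulatory concern.
cmax = 1500.0
pred = fit.params["Intercept"] + fit.params["conc"] * cmax + fit.params["baseline"] * df["baseline"].mean()
print("Predicted dQTc at Cmax:", round(pred, 2))
```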


Subject(s)
Electrocardiography , Long QT Syndrome , Biological Assay , Cross-Over Studies , Dose-Response Relationship, Drug , Fluoroquinolones , Heart Rate , Humans , Moxifloxacin , Research Design
4.
AAPS J ; 22(6): 137, 2020 10 25.
Article in English | MEDLINE | ID: mdl-33099695

ABSTRACT

Proper adhesion plays a critical role in maintaining a consistent, efficacious, and safe drug delivery profile for transdermal and topical delivery systems (TDS). As such, in vivo skin adhesion studies are recommended by regulatory agencies to support the approval of TDS in new drug applications (NDAs). A draft guidance for industry by the US Food and Drug Administration outlines a non-inferiority comparison between a test product and its reference product for generic TDS in abbreviated new drug applications (ANDAs). However, the statistical method is not applicable for evaluating adhesion of TDS for NDAs, because no reference product exists. In this article, we explore an alternative primary endpoint and a one-sided binomial test to evaluate in vivo adhesion of TDS in NDAs. Statistical considerations related to the proposed approach are discussed. To understand its potential use, the proposed approach is applied to data sets of in vivo adhesion studies from selected NDAs and ANDAs.
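
A minimal sketch of the kind of one-sided binomial comparison described above, using SciPy's exact binomial test; the endpoint definition, threshold p0, and subject counts are hypothetical, not values from the article.

```python
from scipy.stats import binomtest

# Hypothetical adhesion study: 60 subjects, 54 maintained adequate adhesion
# (e.g., >= 75% of the patch surface adhered) at every assessment during wear.
n_subjects, n_success = 60, 54

# One-sided test of H0: true proportion <= p0 versus H1: proportion > p0,
# where p0 = 0.80 is an assumed performance threshold, not a value from the article.
p0 = 0.80
result = binomtest(n_success, n_subjects, p=p0, alternative="greater")
print("Observed proportion:", n_success / n_subjects)
print("Exact one-sided p-value:", result.pvalue)
print("95% lower confidence bound:", result.proportion_ci(confidence_level=0.95, method="exact").low)
```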


Subject(s)
Drug Delivery Systems/methods , Models, Biological , Transdermal Patch/standards , Adhesiveness , Administration, Cutaneous , Drug Approval , Drug Delivery Systems/standards , Drug Evaluation, Preclinical/standards , Equivalence Trials as Topic , Guidelines as Topic , Humans , Skin Absorption/physiology , United States , United States Food and Drug Administration/standards
5.
J Biopharm Stat ; 30(2): 267-276, 2020 03.
Article in English | MEDLINE | ID: mdl-31237475

ABSTRACT

Percentiles are ubiquitous in statistics and play a significant role in day-to-day statistical applications. The FDA Guidance for Industry, Assay Development for Immunogenicity Testing of Therapeutic Protein Products (2016), recommends using a lower confidence limit of a percentile of the negative subject population as the cut point to guarantee a pre-specified false-positive rate with high confidence. Shen proposed and compared an exact t approach with several approximate approaches; however, the exact t approach can be limited by computational time and complexity. In this article, we propose the UMOVER method as a potential alternative for percentile estimation, with an application to screening and confirmatory cut point determination, because of its easy implementation and performance similar to the exact t approach. The applications and performance comparisons with different approaches are investigated and discussed. Furthermore, we extend the proposed method to the comparison of a percentile of the test product with a percentile of the reference product, followed by numerical studies.
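
A hedged sketch of the exact noncentral-t lower confidence limit for a normal-population percentile, the "exact t" reference approach cited in the abstract; the UMOVER construction itself is not reproduced here, and the 95th-percentile target and simulated signals are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm, nct

def percentile_lower_cl(x, p=0.95, conf=0.95):
    """Lower confidence limit for the 100*p-th percentile of a normal population
    (exact noncentral-t construction), used as a screening cut point so that the
    pre-specified false-positive rate is guaranteed with the stated confidence."""
    x = np.asarray(x, float)
    n = len(x)
    nc = norm.ppf(p) * np.sqrt(n)                       # noncentrality parameter
    k = nct.ppf(1.0 - conf, df=n - 1, nc=nc) / np.sqrt(n)
    return x.mean() + k * x.std(ddof=1)

# Hypothetical negative-control (drug-naive) assay signals
rng = np.random.default_rng(0)
signals = rng.normal(loc=1.0, scale=0.2, size=50)
print("Screening cut point:", round(percentile_lower_cl(signals, p=0.95, conf=0.95), 3))
```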


Subject(s)
Drugs, Generic , Endpoint Determination/statistics & numerical data , Statistics as Topic , Analysis of Variance , Drugs, Generic/therapeutic use , Endpoint Determination/methods , Humans , Statistics as Topic/methods , Therapeutic Equivalency
6.
J Biopharm Stat ; 29(5): 822-833, 2019.
Article in English | MEDLINE | ID: mdl-31486705

ABSTRACT

Non-inferiority comparisons between the binary response rates of a test and a reference treatment are often performed in clinical studies. The most common approach to assessing non-inferiority is to compare the difference between the estimated response rates with some margin. Previous methods use a variety of margins, including a fixed margin, a step-wise constant margin, and a piece-wise smooth margin, where the latter two are functions of the reference response rate. The fixed margin approach assumes that the margin can be determined from historical trials with a consistent difference between the reference treatment and placebo, which may not be available. The step-wise constant margin approach suffers from discontinuity in the power function, which can complicate sample size determination. Furthermore, many methods ignore the variability in margins that depend on the estimated reference response rate, leading to poor type I error control and power function approximation. In this study, we propose a variable margin approach to overcome the difficulties of the fixed and step-wise constant margin approaches. We discuss several test statistics and evaluate their performance through simulation studies.
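
A hedged sketch of a non-inferiority test whose margin is a smooth function of the reference rate, with a delta-method variance that accounts for the variability of the estimated margin; the margin function, its 20% coefficient, and the test statistic are illustrative assumptions, not the article's proposal.

```python
import numpy as np
from scipy.stats import norm

def ni_variable_margin(x_t, n_t, x_r, n_r, margin, dmargin, alpha=0.025):
    """Wald-type non-inferiority test with a margin depending on the reference rate.

    H0: p_t - p_r <= -margin(p_r)  vs  H1: p_t - p_r > -margin(p_r).
    The derivative dmargin enters the delta-method variance so that the
    variability of the estimated margin is not ignored.
    """
    pt, pr = x_t / n_t, x_r / n_r
    stat = pt - pr + margin(pr)
    var = pt * (1 - pt) / n_t + (dmargin(pr) - 1.0) ** 2 * pr * (1 - pr) / n_r
    z = stat / np.sqrt(var)
    return z, z > norm.ppf(1 - alpha)

# Hypothetical smooth margin: 20% of the reference response rate.
margin = lambda p: 0.2 * p
dmargin = lambda p: 0.2
print(ni_variable_margin(x_t=155, n_t=200, x_r=170, n_r=200, margin=margin, dmargin=dmargin))
```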


Subject(s)
Empirical Research , Endpoint Determination/statistics & numerical data , Equivalence Trials as Topic , Endpoint Determination/methods , Humans
7.
J Biopharm Stat ; 29(6): 1068-1081, 2019.
Article in English | MEDLINE | ID: mdl-30829123

ABSTRACT

For the reference-scaled equivalence hypothesis, which was introduced to reduce the deficiencies of current practice in analytical equivalence assessment, the Wald test with a constrained maximum likelihood estimate (CMLE) of the standard error was proposed to improve efficiency when the sample sizes of the test and reference product lots are small and the variances are unequal. However, simulations show that the type I error rate of the Wald test with the CMLE standard error falls below the nominal significance level. We propose a modified Wald test with the CMLE standard error (MWCMLE), which replaces the maximum likelihood estimate of the reference standard deviation with the sample estimate, resulting in further improvement of the type I error rate and power over the Wald test with the CMLE standard error. In this paper, we further compare the proposed MWCMLE method with the exact-test-based (EB) method and the generalized pivotal quantity (GPQ) method under equal or unequal variances and equal or unequal sample sizes of the two product lots. The simulations show that the proposed MWCMLE method outperforms the other two methods in type I error rate control and power improvement.


Subject(s)
Clinical Trials as Topic/methods , Clinical Trials as Topic/statistics & numerical data , Computer Simulation , Models, Statistical , Confidence Intervals , Cross-Over Studies , Endpoint Determination , Humans , Likelihood Functions , Sample Size , Statistical Distributions , Therapeutic Equivalency
8.
J Biopharm Stat ; 29(2): 378-384, 2019.
Article in English | MEDLINE | ID: mdl-30346877

ABSTRACT

According to ICH E14 guidance, a concurrent positive control should be included in a thorough QT/QTc (TQT) clinical trial to validate the study. Since the release of ICH E14 Q&A (R3), some pharmaceutical companies have started using "hybrid TQT" studies to meet ICH E14 regulatory requirements. A "hybrid TQT" study includes the same treatment arms as a traditional TQT study (therapeutic and/or supratherapeutic dose of the investigational drug, placebo, and positive control) with a smaller sample size, but uses concentration-QTc (C-QTc) analysis as the primary analysis and for the assay sensitivity analysis. To better understand the statistical characteristics of assay sensitivity with a commonly used positive control, moxifloxacin, in "hybrid TQT" studies, we examined the original and subsampled moxifloxacin and placebo data from more than a hundred TQT studies submitted to the FDA. The assay sensitivity results are quite consistent between the classical E14 analysis and the C-QTc analysis using the original datasets. The performance of assay sensitivity in "hybrid TQT" studies using subsampled data depends on the number of moxifloxacin subjects, the study design (crossover or parallel), and the C-QTc model. The results presented here can aid the design of future "hybrid TQT" studies.


Subject(s)
Drugs, Investigational/adverse effects , Linear Models , Long QT Syndrome/chemically induced , Moxifloxacin/adverse effects , Randomized Controlled Trials as Topic/methods , Biological Assay , Control Groups , Cross-Over Studies , Dose-Response Relationship, Drug , Drugs, Investigational/administration & dosage , Drugs, Investigational/pharmacokinetics , Electrocardiography , Heart Rate/drug effects , Humans , Long QT Syndrome/diagnosis , Long QT Syndrome/metabolism , Moxifloxacin/administration & dosage , Moxifloxacin/pharmacokinetics , Randomized Controlled Trials as Topic/statistics & numerical data , Research Design , Sensitivity and Specificity
9.
Pharm Stat ; 17(5): 607-614, 2018 09.
Article in English | MEDLINE | ID: mdl-29956449

ABSTRACT

The revised ICH E14 Questions and Answers (R3) document issued in December 2015 enables pharmaceutical companies to use concentration-QTc (C-QTc) modeling as the primary analysis for assessing the QTc prolongation risk of new drugs. A new approach that includes a time effect in the current C-QTc model is introduced. Through a simulation study, we evaluated the performance of C-QTc models with different dependent variables, covariates, and covariance structures. The simulation shows that C-QTc models with ΔQTc as the dependent variable and without a time effect inflate the false negative rate, and that fitting C-QTc models with different dependent variables, covariates, and covariance structures affects the control of the false negative and false positive rates. C-QTc modeling strategies with good control of the false negative and false positive rates are recommended.


Subject(s)
Computer Simulation , Drug Development/methods , Long QT Syndrome/chemically induced , Models, Cardiovascular , Drug Industry/methods , Effect Modifier, Epidemiologic , Electrocardiography , False Negative Reactions , False Positive Reactions , Humans , Risk Assessment/methods , Time Factors
10.
J Biopharm Stat ; 27(2): 317-330, 2017.
Article in English | MEDLINE | ID: mdl-28055327

ABSTRACT

The equivalence test has a wide range of applications in pharmaceutical statistics whenever we need to test for similarity between two groups. In recent years, it has been used to assess the analytical similarity between a proposed biosimilar product and a reference product. More specifically, the mean values of the two products for a given quality attribute are compared against an equivalence margin of the form ±f × σR, a function of the reference variability. In practice, this margin is unknown and is estimated from the sample as ±f × SR. If we use this estimated margin with the classical t-test statistic for the equivalence test on the means, both the type I and type II error rates may be inflated. To resolve this issue, we develop an exact-based test method and compare it with other proposed methods, such as the Wald test, the constrained Wald test, and the generalized pivotal quantity (GPQ), in terms of type I error rate and power. An application of these methods to data analysis is also provided. This work focuses on the development and discussion of the general statistical methodology and is not limited to the application of analytical similarity.
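
A minimal sketch of the naive procedure the article critiques: compare the 90% confidence interval of the mean difference against the estimated margin ±f·S_R (f = 1.5 is a commonly cited value, assumed here for illustration); the exact-based correction developed in the article is not reproduced.

```python
import numpy as np
from scipy.stats import t

def naive_equivalence_estimated_margin(test, ref, f=1.5, alpha=0.05):
    """Naive equivalence test with an estimated margin +/- f*S_R.

    Compares the two-sided 90% CI of (mean_test - mean_ref) with +/- f*S_R.
    This is the naive procedure whose type I / II error inflation the article
    discusses; the exact-based test is not reproduced here.
    """
    test, ref = np.asarray(test, float), np.asarray(ref, float)
    n_t, n_r = len(test), len(ref)
    v_t, v_r = test.var(ddof=1), ref.var(ddof=1)
    diff = test.mean() - ref.mean()
    se = np.sqrt(v_t / n_t + v_r / n_r)                       # Welch-type standard error
    df = (v_t / n_t + v_r / n_r) ** 2 / ((v_t / n_t) ** 2 / (n_t - 1) + (v_r / n_r) ** 2 / (n_r - 1))
    ci = (diff - t.ppf(1 - alpha, df) * se, diff + t.ppf(1 - alpha, df) * se)
    margin = f * ref.std(ddof=1)
    return ci, (-margin, margin), (ci[0] > -margin) and (ci[1] < margin)

# Hypothetical quality-attribute measurements for 8 test and 10 reference lots
rng = np.random.default_rng(0)
print(naive_equivalence_estimated_margin(rng.normal(100, 3, 8), rng.normal(101, 3, 10)))
```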


Subject(s)
Pharmaceutical Preparations/standards , Research Design , Statistics as Topic , Humans
11.
J Biopharm Stat ; 27(2): 220-232, 2017.
Article in English | MEDLINE | ID: mdl-28060570

ABSTRACT

Large sample size imbalance is not uncommon in biosimilar development. At the beginning of product development, the numbers of available biosimilar and reference product lots may be limited, so a formal sample size calculation may not be feasible. During the development stage, more reference product batches may be added at a later stage to obtain a more reliable estimate of the reference variability; on the other hand, a sufficient number of biosimilar batches is also needed to understand the product well. These challenges can lead to substantial sample size imbalance. In this paper, we show that large sample size imbalance may increase the power of the equivalence test in an unfavorable way, giving higher power for less similar products when the sample size of the biosimilar is much smaller than that of the reference product. Thus, it is necessary to make sample size imbalance adjustments to motivate a sufficient sample size for the biosimilar as well. This paper discusses two adjustment methods for the equivalence test in analytical biosimilarity studies. Sufficient sample sizes for both the biosimilar and reference products (if feasible) remain desirable during the planning stage.
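
The behavior described above can be examined with a small simulation harness like the hedged sketch below: it estimates the passing rate of a naive ±f·S_R equivalence test (f = 1.5 and the lot counts are assumptions, not values from the article) when the true mean difference sits exactly at the margin boundary, for different degrees of test/reference imbalance. The article's two adjustment methods are not reproduced.

```python
import numpy as np
from scipy.stats import t

def pass_rate_at_boundary(n_test, n_ref, f=1.5, alpha=0.05, n_sim=5000, seed=1):
    """Monte Carlo passing rate of a naive +/- f*S_R equivalence test when the
    true mean difference equals f standard deviations (a non-similar product)."""
    rng = np.random.default_rng(seed)
    passes = 0
    for _ in range(n_sim):
        ref = rng.standard_normal(n_ref)
        test = rng.standard_normal(n_test) + f          # true difference at the margin boundary
        se = np.sqrt(test.var(ddof=1) / n_test + ref.var(ddof=1) / n_ref)
        df = n_test + n_ref - 2                          # simple pooled df for the sketch
        diff = test.mean() - ref.mean()
        lo, hi = diff - t.ppf(1 - alpha, df) * se, diff + t.ppf(1 - alpha, df) * se
        m = f * ref.std(ddof=1)
        passes += (lo > -m) and (hi < m)
    return passes / n_sim

# Examine how the boundary passing rate varies with the test/reference imbalance.
for n_test in (4, 6, 10, 20):
    print(n_test, "test lots vs 20 reference lots:", pass_rate_at_boundary(n_test, 20))
```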


Subject(s)
Biosimilar Pharmaceuticals/standards , Data Interpretation, Statistical , Research Design , Sample Size , Humans
12.
J Biopharm Stat ; 27(2): 197-205, 2017.
Article in English | MEDLINE | ID: mdl-27977326

ABSTRACT

To evaluate the analytical similarity between a proposed biosimilar product and the US-licensed reference product, a working group at the Food and Drug Administration (FDA) developed a tiered approach. The proposed tiered approach starts with a criticality determination of quality attributes (QAs) based on a risk ranking of their potential impact on product quality and clinical outcomes. These QAs characterize biological products in terms of structural, physicochemical, and functional properties. Correspondingly, we propose three tiers of statistical approaches based on the level of stringency required. The three tiers of statistical approaches are applied to QAs based on the criticality ranking and other factors. In this article, we discuss the statistical methods applicable to the three tiers of QAs, provide further details on the proposed equivalence test as the Tier 1 approach, and discuss the statistical challenges of the proposed equivalence test in the context of analytical similarity assessment.


Subject(s)
Biosimilar Pharmaceuticals/standards , Research Design , Humans , Quality Control , United States , United States Food and Drug Administration
13.
J Biopharm Stat ; 27(2): 239-256, 2017.
Article in English | MEDLINE | ID: mdl-27936355

ABSTRACT

Assessing equivalence or similarity has drawn much attention recently, as many drug products, especially certain best-selling biologics, have lost or will lose their patents in the next few years. To claim equivalence between a test treatment and a reference treatment when assay sensitivity is well established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. Thus, practitioners urgently need a practical way to calculate the sample size for a three-arm equivalence trial. The primary endpoints of a clinical trial are not always continuous; they may be discrete. In this paper, the authors derive the power function and discuss the sample size requirement for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints. In addition, the authors examine the effect of the dispersion parameter on the power and the sample size by varying its value from small to large. Extensive numerical studies demonstrate that the required sample size depends heavily on the dispersion parameter; misusing a Poisson model for negative binomial data can therefore easily lose up to 20% of power, depending on the value of the dispersion parameter.
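
The role of the dispersion parameter can be illustrated with a simplified two-arm sample size approximation in which the negative binomial variance μ + μ²/k replaces the Poisson variance μ; this is an illustrative sketch under assumed rates and dispersions, not the three-arm power function derived in the article.

```python
import numpy as np
from scipy.stats import norm

def n_per_arm_rate_diff(mu1, mu2, dispersion=None, alpha=0.05, power=0.9):
    """Approximate per-arm sample size to detect a rate difference mu1 - mu2
    with a two-sided Wald test on raw counts.

    dispersion=None uses the Poisson variance (mu); otherwise the negative
    binomial variance mu + mu^2 / dispersion is used. A simplified two-arm
    illustration, not the three-arm equivalence design in the article.
    """
    var = lambda mu: mu if dispersion is None else mu + mu ** 2 / dispersion
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return int(np.ceil(z ** 2 * (var(mu1) + var(mu2)) / (mu1 - mu2) ** 2))

print("Poisson:", n_per_arm_rate_diff(2.0, 1.5))
print("Negative binomial, k=1.0:", n_per_arm_rate_diff(2.0, 1.5, dispersion=1.0))
print("Negative binomial, k=0.5:", n_per_arm_rate_diff(2.0, 1.5, dispersion=0.5))
```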


Subject(s)
Equivalence Trials as Topic , Models, Statistical , Sample Size , Humans
14.
J Biopharm Stat ; 27(2): 338-355, 2017.
Article in English | MEDLINE | ID: mdl-27922340

ABSTRACT

The ratio of means (ROM) and the difference of means (DOM) are often used in superiority, non-inferiority (NI), and average bioequivalence (ABE) tests to evaluate whether the test mean is superior, non-inferior, or equivalent to the reference (placebo or active control) mean. The literature provides recommendations on how to choose between ROM and DOM, mainly for superiority testing. In this article, we evaluated the two measures from other perspectives and caution about the potential impact of different scoring systems or transformations of the same outcome (which is not rare in practice) on the power of a ROM or DOM test for superiority, NI, or ABE. (1) For superiority, with the same margin, the power remains unchanged under a location, scale, or combined shift of the scoring system (but no other transformations) for both measures; however, for NI and ABE, different shifts can change the power of the test substantially. (2) The direction of scores (larger or smaller values indicating desirable effects) does not change the power of a DOM-based superiority, NI, or ABE test, but it can change the power dramatically for a ROM-based superiority, NI, or ABE test. Caution should be taken when defining scoring systems. Data transformation is not encouraged in general and, if needed, should be statistically justified.
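
A small numeric illustration of the point about scoring systems: adding a constant to every score (a location shift) leaves the difference of means untouched but changes the ratio of means, which can alter the conclusion of a ROM-based NI or ABE comparison; the scores and shift are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
ref = rng.normal(50, 8, 40)     # hypothetical scores, reference arm
test = rng.normal(47, 8, 40)    # hypothetical scores, test arm

for shift in (0, 50):           # re-anchoring the scoring scale by a constant
    r, t = ref + shift, test + shift
    dom = t.mean() - r.mean()   # unchanged by the shift
    rom = t.mean() / r.mean()   # moves toward 1 as the shift grows
    print(f"shift={shift:>2}: DOM = {dom:6.2f}, ROM = {rom:.3f}")
```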


Subject(s)
Pharmaceutical Preparations/standards , Research Design , Therapeutic Equivalency , Humans
15.
J Biopharm Stat ; 27(2): 213-219, 2017.
Article in English | MEDLINE | ID: mdl-27906604

ABSTRACT

For the evaluation of analytical similarity data, an equivalence testing approach was proposed for the most critical quantitative quality attributes, which are assigned to Tier 1 in the proposed three-tier approach. The Food and Drug Administration (FDA) has recommended the proposed equivalence testing approach to sponsors through meeting comments for pre-Investigational New Drug applications (PINDs) and Investigational New Drug applications (INDs) since 2014. The FDA has received feedback on the statistical issue of potentially correlated reference lot values subjected to equivalence testing, since the test assumes independent and identically distributed observations (lot values) from the proposed biosimilar product and the reference product. In this article, we describe one method that corrects the estimation bias of the reference variability so as to widen the equivalence margin, along with modified versions that widen the equivalence margin and also correct the standard errors in the confidence intervals, assuming that the lot values are correlated under a few known correlation matrices. Our comparisons of these correction methods with no correction for bias in the reference variability, under several assumed correlation structures, indicate that all correction methods would increase the type I error rate dramatically while improving the power only slightly for most of the simulated scenarios. For some simulated cases, the type I error rate can be extremely large (e.g., 59%) if the guessed correlation is larger than the assumed correlation. Since the source of a reference drug product lot is unknown in nature, correlation between lots is a design issue; obtaining independent reference lot values by purchasing the reference lots over a wide time window is therefore often a design remedy for correlated reference lot values.


Subject(s)
Biosimilar Pharmaceuticals/standards , Data Interpretation, Statistical , Research Design , Humans , United States , United States Food and Drug Administration
16.
J Biopharm Stat ; 27(2): 308-316, 2017.
Article in English | MEDLINE | ID: mdl-27906607

ABSTRACT

Equivalence may be tested by comparing the mean difference against a margin adjusted for variance. The justification for using a variance-adjusted non-inferiority or equivalence margin is that a larger margin should be allowed when measurement variability is large. However, under the null hypothesis, the test statistic does not follow a t-distribution or any other well-known distribution, even when the measurements are normally distributed. In this study, we investigate asymptotic tests of the equivalence hypothesis. We apply the Wald test statistic and construct three Wald tests that differ in their variance estimates: the maximum likelihood estimate (MLE), the uniformly minimum variance unbiased estimate (UMVUE), and the constrained maximum likelihood estimate (CMLE). We evaluate the performance of these three tests in terms of type I error rate control and power using simulations under a variety of settings. Our empirical results show that the asymptotic normalized tests are conservative in most settings, while the Wald tests based on the MLE and UMVUE can produce inflated significance levels when group sizes are unequal. The Wald test based on the CMLE, however, provides an improvement in power over the other two Wald tests for small and medium sample sizes.


Subject(s)
Models, Statistical , Research Design , Humans , Likelihood Functions , Sample Size
19.
Ther Innov Regul Sci ; 49(3): 392-397, 2015 May.
Article in English | MEDLINE | ID: mdl-30222400

ABSTRACT

BACKGROUND: We studied the coherence between the relationship of QTc and drug plasma concentration (measured through the slope) and ICH E14 findings, based on hundreds of QT study reports. RESULTS: Based on ICH E14 analysis, our findings indicate that when the slope was not positive, most (86%) of the corresponding QT studies were negative, and when the slope was positive, 92% of the corresponding QT studies were also positive. CONCLUSIONS: In exploring whether a thorough QT (TQT) study may be needed, we recommend that the analysis of the relationship between QTc and drug plasma concentration be performed when proper single ascending dose (SAD) and multiple ascending dose (MAD) studies are available. If the relationship cannot be detected and the upper bound of the 90% confidence interval at a fixed concentration level (50th or 75th percentile, or mean peak plasma concentration [Cmax]) is below a certain threshold (e.g., 10 milliseconds), then a TQT study might be unnecessary. If the relationship can be established and the lower bound of the 90% confidence interval at a fixed concentration level (e.g., mean Cmax) is greater than 10 milliseconds, further investigation is needed. If the signal is real, one might choose intensive safety monitoring during later drug development instead of a TQT study for a good compound. However, there are still some gray areas in which this analysis alone cannot determine the potential QT liability of the drug, and a TQT-type study might be worth considering.
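
A hedged sketch of the kind of exposure-response screen described above: fit a linear QTc-concentration model to pooled SAD/MAD-type data and read off the 90% confidence bounds of the predicted ΔQTc at a fixed concentration. The simulated data, the simple OLS fit (no subject-level random effects), and the Cmax value are assumptions for illustration, not the analysis used in the report reviews.

```python
import numpy as np
import statsmodels.api as sm

# Simulated pooled SAD/MAD-type observations (hypothetical): concentration and dQTc.
rng = np.random.default_rng(0)
conc = rng.gamma(2.0, 400.0, 300)
dqtc = 1.0 + 0.003 * conc + 7 * rng.standard_normal(300)

X = sm.add_constant(conc)
fit = sm.OLS(dqtc, X).fit()

# Two-sided 90% CI of the predicted dQTc at a hypothetical mean Cmax; its bounds
# are compared with the 10-millisecond threshold discussed in the conclusions.
cmax = 1200.0
pred = fit.get_prediction(np.array([[1.0, cmax]]))
lower, upper = pred.conf_int(alpha=0.10)[0]
print(f"Slope = {fit.params[1]:.4f} ms per unit concentration")
print(f"Predicted dQTc at Cmax: {pred.predicted_mean[0]:.2f} ms, 90% CI = ({lower:.2f}, {upper:.2f})")
```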

20.
J Biopharm Stat ; 25(2): 317-27, 2015.
Article in English | MEDLINE | ID: mdl-25356617

ABSTRACT

In quality control of drug products, tolerance intervals are commonly used to assure that a certain proportion of the product is covered within a pre-specified acceptance interval. Depending on the nature of the quality attribute, the corresponding acceptance interval may be one-sided or two-sided, and so the tolerance intervals can also be one-sided or two-sided. To better utilize tolerance intervals for quality assurance, we review the computation methods and study their statistical properties in terms of batch acceptance probability. We also illustrate the application of one-sided and two-sided tolerance intervals, as well as two one-sided tests, through examples of the dose content uniformity test, the delivered dose uniformity test, and the dissolution test.
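
A hedged sketch of normal-theory tolerance limits of the kind discussed above: an exact one-sided upper tolerance bound via the noncentral t distribution and a two-sided interval via the Howe approximation; the coverage, confidence level, and simulated dose-content data are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm, nct, chi2

def upper_tolerance_bound(x, coverage=0.95, conf=0.95):
    """Exact one-sided upper tolerance bound for a normal sample:
    covers at least `coverage` of the population with confidence `conf`."""
    x = np.asarray(x, float)
    n = len(x)
    k = nct.ppf(conf, df=n - 1, nc=norm.ppf(coverage) * np.sqrt(n)) / np.sqrt(n)
    return x.mean() + k * x.std(ddof=1)

def two_sided_tolerance_interval(x, coverage=0.95, conf=0.95):
    """Two-sided normal tolerance interval using the Howe approximation."""
    x = np.asarray(x, float)
    n = len(x)
    k = norm.ppf((1 + coverage) / 2) * np.sqrt(
        (n - 1) * (1 + 1.0 / n) / chi2.ppf(1 - conf, df=n - 1)
    )
    return x.mean() - k * x.std(ddof=1), x.mean() + k * x.std(ddof=1)

# Hypothetical dose-content measurements (% of label claim) for 30 units
rng = np.random.default_rng(0)
content = rng.normal(100.0, 2.5, 30)
print("Upper tolerance bound:", round(upper_tolerance_bound(content), 2))
print("Two-sided tolerance interval:", two_sided_tolerance_interval(content))
```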


Subject(s)
Biopharmaceutics/statistics & numerical data , Models, Statistical , Pharmaceutical Preparations/standards , Technology, Pharmaceutical/statistics & numerical data , Biopharmaceutics/standards , Chemistry, Pharmaceutical , Computer Simulation , Confidence Intervals , Data Interpretation, Statistical , Guidelines as Topic , Pharmaceutical Preparations/chemistry , Quality Control , Solubility , Technology, Pharmaceutical/methods , Technology, Pharmaceutical/standards