Results 1 - 20 of 51
1.
Palliat Med ; 34(9): 1235-1240, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32588748

ABSTRACT

BACKGROUND: Hospital clinicians have had to rapidly develop expertise in managing the clinical manifestations of COVID-19, including symptoms common at the end of life, such as breathlessness and agitation. There is limited evidence on whether end-of-life symptom control in this group requires new or adapted guidance. AIM: To review whether prescribing for symptom control in patients dying with COVID-19 adhered to existing local guidance or whether there was deviation that may indicate a need for revised guidance or specialist support in particular patient groups. DESIGN/SETTING: A retrospective review of the electronic patient records of 61 hospital inpatients referred to the specialist palliative care team with swab-confirmed COVID-19 who subsequently died over a 1-month period. Intubated patients were excluded. RESULTS: In all, 83% (40/48) of patients were prescribed opioids at a starting dose consistent with existing local guidelines. In seven of the eight patients prescribed higher doses, this was on specialist palliative care team advice. The mean total opioid dose required in the last 24 h of life was 14 mg morphine subcutaneous equivalent, and the mean total midazolam dose was 9.5 mg. For three patients in whom non-invasive ventilation was in place, higher doses were used. CONCLUSION: Prescription of end-of-life symptom control drugs for COVID-19 fell within the existing guidance when supported by specialist palliative care advice. While some patients may require increased doses, routine prescription of higher starting opioid and benzodiazepine doses beyond existing local guidance was not observed.


Subject(s)
Biopharmaceutics/statistics & numerical data , Coronavirus Infections/drug therapy , Delirium/drug therapy , Dyspnea/drug therapy , Pneumonia, Viral/drug therapy , Practice Guidelines as Topic , Terminal Care/methods , Terminal Care/standards , Adult , Aged , Aged, 80 and over , Analgesics, Opioid/therapeutic use , Betacoronavirus , COVID-19 , Female , Humans , Hypnotics and Sedatives/therapeutic use , Male , Midazolam/therapeutic use , Middle Aged , Morphine/therapeutic use , Pandemics , Retrospective Studies , SARS-CoV-2
2.
J Biopharm Stat ; 29(6): 1003-1010, 2019.
Article in English | MEDLINE | ID: mdl-31023141

ABSTRACT

The Biologics Price Competition and Innovation Act (BPCI Act) of 2009 established a pathway for the approval of biosimilars and interchangeable biosimilars in the United States. The Food and Drug Administration (FDA) has issued several guidances on the development and assessment of biosimilars that implement the BPCI Act. In particular, a recent draft guidance on the interchangeability of biological products presents an overview of scientific considerations for demonstrating interchangeability with a reference product. The present communication provides a general summary of the draft guidance and briefly discusses a few current issues concerning interchangeability.


Subject(s)
Biopharmaceutics/legislation & jurisprudence , Biosimilar Pharmaceuticals/therapeutic use , Drug Substitution/statistics & numerical data , Guidelines as Topic , Biopharmaceutics/economics , Biopharmaceutics/statistics & numerical data , Biosimilar Pharmaceuticals/economics , Drug Approval , Drug Substitution/economics , Endpoint Determination , Humans , Therapeutic Equivalency , United States , United States Food and Drug Administration
3.
J Biopharm Stat ; 29(6): 1011-1023, 2019.
Article in English | MEDLINE | ID: mdl-30712462

ABSTRACT

Parallelism in bioassay is a synonym for similarity between two concentration-response curves. Before the relative potency is determined in a bioassay, it is necessary to test for and establish parallelism between the concentration-response curves of the reference standard and the test sample. Methods for parallelism testing include p-value-based significance tests and interval-based equivalence tests. Most of the latter approaches make statistical inferences about the equivalence of the parameters of the concentration-response curve models. An apparent drawback of such methods is that equivalence in model parameters does not guarantee similarity between the reference and the test sample. In contrast, a Bayesian method was recently proposed that directly tests the parallelism hypothesis that the concentration-response curve of the test sample is a horizontal shift of that of the reference; in other words, that the test sample behaves as a dilution or concentration of the reference standard. The Bayesian approach is shown to protect against type I error and to provide sufficient statistical power for parallelism testing. In practice, however, the method is challenging to implement, as it requires both specialized Bayesian software and a relatively long run time. In this paper, we propose a frequentist version of the test with split-second run time. The empirical properties of the frequentist parallelism test are evaluated and compared with those of the original Bayesian method. It is demonstrated that the frequentist method is both fast and reliable for parallelism testing across a variety of concentration-response models.
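
As a rough, self-contained illustration of the horizontal-shift (parallelism) hypothesis discussed above, the Python sketch below fits four-parameter logistic curves to simulated reference and test data, constrains them to share all parameters except log-EC50, and compares the constrained and unconstrained fits with an extra-sum-of-squares F test. This is a generic significance-test illustration on invented data, not the equivalence-based frequentist procedure proposed in the article.

```python
# Sketch only: extra-sum-of-squares F test of the horizontal-shift (parallelism)
# hypothesis for two 4-parameter logistic curves, on simulated data.
import numpy as np
from scipy.optimize import least_squares
from scipy.stats import f

def fourpl(logc, lower, upper, logec50, slope):
    return lower + (upper - lower) / (1.0 + np.exp(-slope * (logc - logec50)))

rng = np.random.default_rng(1)
logc = np.log10(np.tile([0.1, 0.3, 1, 3, 10, 30, 100], 3))      # 7 levels x 3 reps
ref = fourpl(logc, 0.1, 2.0, np.log10(3.0), 2.5) + rng.normal(0, 0.05, logc.size)
tst = fourpl(logc, 0.1, 2.0, np.log10(6.0), 2.5) + rng.normal(0, 0.05, logc.size)

def resid_full(p):           # all eight parameters free
    return np.concatenate([ref - fourpl(logc, *p[:4]), tst - fourpl(logc, *p[4:])])

def resid_parallel(p):       # shared lower/upper/slope, separate log-EC50
    low, up, slope, e1, e2 = p
    return np.concatenate([ref - fourpl(logc, low, up, e1, slope),
                           tst - fourpl(logc, low, up, e2, slope)])

full = least_squares(resid_full, x0=[0, 2, 0.5, 2, 0, 2, 0.8, 2])
par = least_squares(resid_parallel, x0=[0, 2, 2, 0.5, 0.8])

rss_full, rss_par = np.sum(full.fun ** 2), np.sum(par.fun ** 2)
n_obs, p_full, p_par = 2 * logc.size, 8, 5
F = ((rss_par - rss_full) / (p_full - p_par)) / (rss_full / (n_obs - p_full))
p_value = f.sf(F, p_full - p_par, n_obs - p_full)
print(f"F = {F:.2f}, p = {p_value:.3f} (a large p gives no evidence of non-parallelism)")
```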


Subject(s)
Biological Assay , Biopharmaceutics , Models, Statistical , Bayes Theorem , Biological Assay/methods , Biological Assay/statistics & numerical data , Biopharmaceutics/methods , Biopharmaceutics/statistics & numerical data , Computer Simulation , Dose-Response Relationship, Drug , Monte Carlo Method , Nonlinear Dynamics
6.
J Biopharm Stat ; 25(2): 247-59, 2015.
Article in English | MEDLINE | ID: mdl-25360720

ABSTRACT

The concept of quality by design (QbD) as published in ICH Q8 is currently one of the most recurrent topics in the pharmaceutical literature. This guideline recommends using information and prior knowledge gathered during pharmaceutical development studies to provide a scientific rationale for the manufacturing process of a product and to give assurance of its future quality. This poses several challenges from a statistical standpoint and requires a paradigm shift away from traditional statistical practice. First, providing "assurance of quality" for future lots implies the need to make predictions about quality given past evidence and data. Second, the quality attributes described in the Q8 guideline are not always a set of unique, independent measurements. In many cases, they are complex longitudinal data with successive acceptance criteria over a defined period of time. A common example is a dissolution profile for a modified- or extended-release solid dosage form that must fall within acceptance limits at several time points. A Bayesian approach for longitudinal data obtained under the various conditions of a design of experiments is provided to address the ICH Q8 recommendation to provide assurance of quality and to derive a scientifically sound design space.
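
To make the "assurance of quality" idea concrete, here is a deliberately simplified posterior-predictive calculation: for each dissolution time point, a normal model with a noninformative prior yields a Student-t posterior predictive distribution, from which the probability that a future batch falls within hypothetical acceptance limits is computed. The batches, limits, and independence assumption across time points are all invented; the article's Bayesian longitudinal design-of-experiments model is far richer and is not reproduced here.

```python
# Toy posterior-predictive "assurance" calculation under an independent
# normal model with a noninformative prior at each dissolution time point.
import numpy as np
from scipy.stats import t

lower = np.array([20.0, 45.0, 80.0])       # hypothetical acceptance limits (% dissolved)
upper = np.array([45.0, 75.0, 110.0])
data = np.array([[28, 55, 92],             # % dissolved for 6 pilot batches at 3 time points
                 [31, 58, 95],
                 [26, 52, 90],
                 [30, 57, 94],
                 [29, 54, 91],
                 [27, 56, 93]], dtype=float)

n = data.shape[0]
ybar, s = data.mean(axis=0), data.std(axis=0, ddof=1)
scale = s * np.sqrt(1 + 1 / n)             # posterior-predictive scale, Student-t with n-1 df

p_within = t.cdf((upper - ybar) / scale, df=n - 1) - t.cdf((lower - ybar) / scale, df=n - 1)
print("P(next batch within limits) per time point:", np.round(p_within, 3))
print("Crude joint assurance (independence assumed):", round(float(np.prod(p_within)), 3))
```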


Subject(s)
Biopharmaceutics/statistics & numerical data , Models, Statistical , Technology, Pharmaceutical/statistics & numerical data , Bayes Theorem , Biopharmaceutics/standards , Chemistry, Pharmaceutical , Data Interpretation, Statistical , Delayed-Action Preparations , Guidelines as Topic , Kinetics , Quality Control , Solubility , Tablets , Technology, Pharmaceutical/methods , Technology, Pharmaceutical/standards , Time Factors
7.
J Biopharm Stat ; 25(2): 295-306, 2015.
Article in English | MEDLINE | ID: mdl-25356500

ABSTRACT

Administration of biological therapeutics can generate undesirable immune responses that may induce anti-drug antibodies (ADAs). Immunogenicity can negatively affect patients, with consequences ranging from mild reactive effects to hypersensitivity reactions or even serious autoimmune disease. Assessment of immunogenicity is critical, as ADAs can adversely impact the efficacy and safety of drug products. Well-developed and validated immunogenicity assays are required by the regulatory agencies as tools for immunogenicity assessment. Key to the development and validation of an immunogenicity assay is the determination of a cut point, which serves as the threshold for classifying patients as ADA positive (reactive) or negative. In practice, the cut point is determined as a quantile of either a parametric distribution or a nonparametric empirical distribution. The parametric method, which is often based on a normality assumption, may lead to biased cut point estimates when the normality assumption is violated. The nonparametric method, which yields unbiased estimates of the cut point, may have low efficiency when the sample size is small. As the distributions of immune responses are often skewed and sometimes heavy-tailed, we propose two non-normal random effects models for cut point determination. The random effects, following a skew-t or log-gamma distribution, can accommodate skewed and heavy-tailed responses as well as the correlation among repeated measurements. A simulation study is conducted to compare the proposed models with the current normal and nonparametric alternatives. The proposed models are also applied to a real dataset generated from assay validation studies.
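
The sketch below illustrates the two conventional cut-point estimators contrasted above, a normal-quantile (parametric) estimate and an empirical-quantile (nonparametric) estimate, applied to simulated skewed screening responses; the skew-t and log-gamma random-effects models proposed in the article are not reproduced.

```python
# Two conventional cut-point estimators on skewed, simulated negative-control responses.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
signal = rng.lognormal(mean=0.0, sigma=0.6, size=60)   # skewed screening responses

parametric = signal.mean() + norm.ppf(0.95) * signal.std(ddof=1)   # normal-based
nonparametric = np.quantile(signal, 0.95)                          # empirical
true_95th = float(np.exp(0.6 * norm.ppf(0.95)))                    # known for lognormal(0, 0.6)
print(f"normal-based cut point: {parametric:.2f}")
print(f"empirical cut point:    {nonparametric:.2f}")
print(f"true 95th percentile:   {true_95th:.2f}")
```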


Subject(s)
Biological Products/immunology , Biopharmaceutics/statistics & numerical data , Models, Statistical , Technology, Pharmaceutical/statistics & numerical data , Animals , Bayes Theorem , Biological Products/adverse effects , Biopharmaceutics/standards , Chemistry, Pharmaceutical , Computer Simulation , Data Interpretation, Statistical , Guidelines as Topic , Humans , Numerical Analysis, Computer-Assisted , Quality Control , Reproducibility of Results , Risk Assessment , Sample Size , Statistics, Nonparametric , Technology, Pharmaceutical/methods , Technology, Pharmaceutical/standards
8.
J Biopharm Stat ; 25(2): 317-27, 2015.
Article in English | MEDLINE | ID: mdl-25356617

ABSTRACT

In quality control of drug products, tolerance intervals are commonly used to assure that a specified proportion of the product falls within a pre-specified acceptance interval. Depending on the nature of the quality attribute, the corresponding acceptance interval may be one-sided or two-sided; accordingly, the tolerance interval can also be one-sided or two-sided. To better utilize tolerance intervals for quality assurance, we review their computation in this article and study their statistical properties in terms of batch acceptance probability. We also illustrate the application of one-sided and two-sided tolerance intervals, as well as two one-sided tests, through examples of the dose content uniformity test, the delivered dose uniformity test, and the dissolution test.
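
A minimal numerical sketch of the intervals discussed above, assuming normally distributed data: an exact one-sided tolerance factor obtained from the noncentral t distribution and Howe's approximation for the two-sided factor. The content-uniformity data, coverage, and confidence level are illustrative only.

```python
# Normal-theory tolerance factors: exact one-sided (noncentral t) and Howe's
# approximate two-sided factor.
import numpy as np
from scipy.stats import chi2, nct, norm

content = np.array([99.2, 100.4, 98.7, 101.1, 99.8,
                    100.9, 99.5, 100.2, 98.9, 100.6])   # % label claim, 10 units
n, xbar, s = content.size, content.mean(), content.std(ddof=1)
coverage, alpha = 0.90, 0.05                            # cover 90% of units, 95% confidence

k1 = nct.ppf(1 - alpha, df=n - 1, nc=np.sqrt(n) * norm.ppf(coverage)) / np.sqrt(n)
k2 = np.sqrt((n - 1) * (1 + 1 / n) * norm.ppf((1 + coverage) / 2) ** 2
             / chi2.ppf(alpha, n - 1))

print(f"one-sided upper tolerance limit: {xbar + k1 * s:.2f}% of label claim")
print(f"two-sided tolerance interval:    ({xbar - k2 * s:.2f}, {xbar + k2 * s:.2f})")
```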


Subject(s)
Biopharmaceutics/statistics & numerical data , Models, Statistical , Pharmaceutical Preparations/standards , Technology, Pharmaceutical/statistics & numerical data , Biopharmaceutics/standards , Chemistry, Pharmaceutical , Computer Simulation , Confidence Intervals , Data Interpretation, Statistical , Guidelines as Topic , Pharmaceutical Preparations/chemistry , Quality Control , Solubility , Technology, Pharmaceutical/methods , Technology, Pharmaceutical/standards
9.
J Biopharm Stat ; 25(2): 339-50, 2015.
Article in English | MEDLINE | ID: mdl-25356663

ABSTRACT

Validation of linearity is a regulatory requirement. Although many methods have been proposed, they suffer from several deficiencies, including difficulty in setting fit-for-purpose acceptance limits, dependency on the concentration levels used in the linearity experiment, and challenges in implementation for statistically lay users. In this article, a statistical procedure for testing linearity is proposed. The method uses two one-sided tests (TOST) of equivalence to evaluate the bias that can result from approximating a higher-order polynomial response with a linear function. By using orthogonal polynomials and generalized pivotal quantity analysis, the method provides a closed-form solution, making linearity testing easy to implement.
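
The following sketch shows the general TOST idea applied to the curvature term of an orthonormal-polynomial fit: linearity (within a margin) is concluded when both one-sided tests reject. The data and the margin `theta` are hypothetical, and the generalized pivotal quantity refinement described in the abstract is not reproduced.

```python
# TOST-style equivalence test on the curvature coefficient of an orthonormal
# polynomial fit; data and margin are hypothetical.
import numpy as np
from scipy.stats import t

conc = np.repeat([25.0, 50.0, 75.0, 100.0, 125.0], 3)           # % of nominal, 3 reps
resp = 0.98 * conc + np.random.default_rng(7).normal(0, 1.0, conc.size)

x = (conc - conc.mean()) / conc.std()
Q, _ = np.linalg.qr(np.vander(x, 3, increasing=True))           # orthonormal 1, x, x^2 columns
beta, rss, *_ = np.linalg.lstsq(Q, resp, rcond=None)
dof = conc.size - 3
se = np.sqrt(rss[0] / dof)            # with orthonormal columns, SE(beta_j) = sigma_hat
quad, theta = beta[2], 3.0            # curvature estimate and equivalence margin

p_lower = t.sf((quad + theta) / se, dof)      # H0: curvature <= -theta
p_upper = t.cdf((quad - theta) / se, dof)     # H0: curvature >= +theta
p_tost = max(p_lower, p_upper)
verdict = "linear within the margin" if p_tost < 0.05 else "linearity not demonstrated"
print(f"curvature = {quad:.3f}, TOST p = {p_tost:.4f} -> {verdict}")
```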


Subject(s)
Biopharmaceutics/statistics & numerical data , Models, Statistical , Technology, Pharmaceutical/statistics & numerical data , Bias , Biopharmaceutics/standards , Chemistry, Pharmaceutical , Confidence Intervals , Data Interpretation, Statistical , Guidelines as Topic , Linear Models , Quality Control , Reference Values , Reproducibility of Results , Technology, Pharmaceutical/methods , Technology, Pharmaceutical/standards
10.
J Biopharm Stat ; 25(2): 269-79, 2015.
Article in English | MEDLINE | ID: mdl-25356783

ABSTRACT

The cut point of an immunogenicity screening assay is the level of response at or above which a sample is defined to be positive and below which it is defined to be negative. The Food and Drug Administration Guidance for Industry on Assay Development for Immunogenicity Testing of Therapeutic Proteins recommends that the cut point be the upper 95th percentile of the responses of negative control patients. In this article, we assume that the assay data are a random sample from a normal distribution. The sample normal percentile is a point estimate whose variability decreases as the sample size increases. Therefore, the sample percentile does not assure at least a 5% false-positive rate (FPR) with a high confidence level (e.g., 90%) when the sample size is not sufficiently large. With this concern, we propose to use a lower confidence limit for the percentile as the cut point instead. We have conducted an extensive literature review on the estimation of statistical cut points and compare several selected methods for immunogenicity screening assay cut-point determination in terms of bias, coverage probability, and FPR. The methods evaluated are the sample normal percentile, the exact lower confidence limit of a normal percentile (Chakraborti and Li, 2007), and the approximate lower confidence limit of a normal percentile. It is shown that the actual coverage probability of the lower confidence limit of a normal percentile computed with the approximate normal method is much larger than the required confidence level for the small numbers of assays conducted in practice. We recommend using the exact lower confidence limit of a normal percentile for cut-point determination.
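
For illustration, the sketch below computes both the naive plug-in 95th percentile and an exact lower confidence limit of a normal percentile via the noncentral t distribution, which is assumed here to correspond to the exact construction cited in the abstract; the screening-assay responses are simulated.

```python
# Exact lower confidence limit of a normal percentile via the noncentral t
# distribution, compared with the naive plug-in percentile.
import numpy as np
from scipy.stats import nct, norm

rng = np.random.default_rng(3)
log_response = rng.normal(loc=2.0, scale=0.25, size=30)   # 30 negative-control samples

n = log_response.size
xbar, s = log_response.mean(), log_response.std(ddof=1)
p, conf = 0.95, 0.90                                      # 95th percentile, 90% confidence
zp = norm.ppf(p)

plug_in = xbar + zp * s                                   # sample normal percentile
k = nct.ppf(1 - conf, df=n - 1, nc=np.sqrt(n) * zp) / np.sqrt(n)
lcl = xbar + k * s                                        # exact lower confidence limit

print(f"plug-in 95th-percentile cut point: {plug_in:.3f}")
print(f"90% lower confidence limit:        {lcl:.3f}  (assures >= 5% FPR with 90% confidence)")
```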


Subject(s)
Biopharmaceutics/statistics & numerical data , Models, Statistical , Proteins/immunology , Technology, Pharmaceutical/statistics & numerical data , Bias , Biopharmaceutics/standards , Chemistry, Pharmaceutical , Computer Simulation , Confidence Intervals , Data Interpretation, Statistical , Guidelines as Topic , Humans , Monte Carlo Method , Normal Distribution , Patient Safety , Proteins/adverse effects , Proteins/standards , Quality Control , Risk Assessment , Sample Size , Technology, Pharmaceutical/methods , Technology, Pharmaceutical/standards
11.
J Biopharm Stat ; 25(2): 260-8, 2015.
Article in English | MEDLINE | ID: mdl-25357001

ABSTRACT

Since the adoption of the ICH Q8 document concerning the development of pharmaceutical processes following a quality by design (QbD) approach, there has been much discussion of the opportunity for analytical procedure development to follow a similar approach. While the development and optimization of analytical procedures following QbD principles have been widely discussed and described, the place of analytical procedure validation in this framework has not been clarified. This article aims to show that analytical procedure validation is fully integrated into the QbD paradigm and is an essential step in developing analytical procedures that are effectively fit for purpose. Adequate statistical methodologies also have a role to play, such as design of experiments, statistical modeling, and probabilistic statements. The outcome of analytical procedure validation is itself an analytical procedure design space, and from it a control strategy can be set.


Subject(s)
Biopharmaceutics/statistics & numerical data , Models, Statistical , Technology, Pharmaceutical/statistics & numerical data , Bayes Theorem , Biopharmaceutics/standards , Chemistry, Pharmaceutical , Data Interpretation, Statistical , Guidelines as Topic , Probability , Quality Control , Reproducibility of Results , Technology, Pharmaceutical/methods , Technology, Pharmaceutical/standards
12.
J Biopharm Stat ; 25(2): 328-38, 2015.
Article in English | MEDLINE | ID: mdl-25357132

ABSTRACT

Delivered dose uniformity is one of the most critical requirements for dry powder inhaler (DPI) and metered dose inhaler (MDI) products. In 1999, the Food and Drug Administration (FDA) issued a Draft Guidance entitled Nasal Spray and Inhalation Solution, Suspension, and Spray Drug Products - Chemistry, Manufacturing and Controls Documentation, which recommended a two-tier acceptance sampling plan that is a modification of the United States Pharmacopeia (USP) sampling plan for dose content uniformity (USP34 <601>). This acceptance sampling plan is also applied to MDI and DPI drug products in general. The FDA Draft Guidance method is shown to have a near-zero probability of acceptance at the second tier. In 2000, at the request of the International Pharmaceutical Aerosol Consortium, the FDA developed a two-tier acceptance sampling plan based on two one-sided tolerance intervals (TOSTIs) for a small sample. The procedure was presented at the 2005 Advisory Committee for Pharmaceutical Science meeting and later published in the Journal of Biopharmaceutical Statistics (Tsong et al., 2008). This proposed procedure controls the probability of the product delivering below a pre-specified effective dose and the probability of the product delivering above a pre-specified safety dose. In this article, we further propose an extension of the TOSTI procedure to a single-tier procedure with any number of canisters.
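
A hedged sketch of accept/reject logic in the spirit of two one-sided tolerance intervals: a batch passes if a lower one-sided tolerance limit stays above a minimum effective dose and an upper one-sided limit stays below a safety dose. The bounds, coverage, confidence level, and data are hypothetical and do not reproduce the tiered plan in the guidance or in Tsong et al. (2008).

```python
# Illustrative single-tier check using two one-sided normal tolerance limits.
import numpy as np
from scipy.stats import nct, norm

dose = np.array([96.5, 101.2, 99.4, 103.1, 98.2,
                 100.8, 97.9, 102.3, 99.9, 101.6])   # delivered dose, % of label claim
n, xbar, s = dose.size, dose.mean(), dose.std(ddof=1)
coverage, conf = 0.875, 0.95                         # illustrative choices only
k = nct.ppf(conf, df=n - 1, nc=np.sqrt(n) * norm.ppf(coverage)) / np.sqrt(n)

lower_limit, upper_limit = xbar - k * s, xbar + k * s
effective_bound, safety_bound = 80.0, 120.0          # hypothetical dose bounds
passes = lower_limit >= effective_bound and upper_limit <= safety_bound
print(f"tolerance limits ({lower_limit:.1f}, {upper_limit:.1f}) -> "
      f"{'accept' if passes else 'do not accept at this tier'}")
```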


Subject(s)
Biopharmaceutics/statistics & numerical data , Dry Powder Inhalers/standards , Models, Statistical , Pharmaceutical Preparations/standards , Quality Assurance, Health Care/standards , Technology, Pharmaceutical/statistics & numerical data , Administration, Inhalation , Aerosols , Biopharmaceutics/standards , Chemistry, Pharmaceutical , Confidence Intervals , Data Interpretation, Statistical , Equipment Design , Guidelines as Topic , Humans , Pharmaceutical Preparations/administration & dosage , Pharmaceutical Preparations/chemistry , Powders , Probability , Quality Control , Sample Size , Technology, Pharmaceutical/methods , Technology, Pharmaceutical/standards
13.
J Biopharm Stat ; 25(2): 351-71, 2015.
Article in English | MEDLINE | ID: mdl-25357203

ABSTRACT

Dissolution (or in vitro release) studies constitute an important aspect of pharmaceutical drug development. One important use of such studies is to justify a biowaiver for post-approval changes, which requires establishing equivalence between the new and old products. We propose a statistically rigorous modeling approach for this purpose based on the estimation of what we refer to as the F2 parameter, an extension of the commonly used f2 statistic. A Bayesian test procedure is proposed for a set of composite hypotheses that capture the similarity requirement on the absolute mean differences between the test and reference dissolution profiles. Several examples are provided to illustrate the application. Results of our simulation study comparing the performance of f2 and the proposed method show that the Bayesian approach is comparable to, and in many cases superior to, the f2 statistic as a decision rule. Further useful extensions of the method, such as the use of continuous-time dissolution modeling, are considered.
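
For reference, the conventional f2 similarity factor that the proposed F2 parameter extends can be computed in a few lines; the percent-dissolved values below are invented, and the Bayesian estimation of F2 is not reproduced.

```python
# Conventional f2 similarity factor for two dissolution profiles measured
# at the same time points.
import numpy as np

reference = np.array([18.0, 39.0, 58.0, 74.0, 85.0, 92.0])   # % dissolved
test      = np.array([15.0, 35.0, 55.0, 72.0, 84.0, 91.0])

msd = np.mean((reference - test) ** 2)        # mean squared difference
f2 = 50 * np.log10(100 / np.sqrt(1 + msd))
print(f"f2 = {f2:.1f} ({'similar' if f2 >= 50 else 'not similar'} by the usual f2 >= 50 rule)")
```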


Subject(s)
Biopharmaceutics/statistics & numerical data , Models, Statistical , Pharmaceutical Preparations/chemistry , Technology, Pharmaceutical/statistics & numerical data , Bayes Theorem , Biopharmaceutics/standards , Chemistry, Pharmaceutical , Computer Simulation , Data Interpretation, Statistical , Guidelines as Topic , Kinetics , Monte Carlo Method , Multivariate Analysis , Pharmaceutical Preparations/standards , Quality Control , Solubility , Technology, Pharmaceutical/methods , Technology, Pharmaceutical/standards
14.
J Biopharm Stat ; 25(2): 234-46, 2015.
Article in English | MEDLINE | ID: mdl-25358029

ABSTRACT

We propose a method for determining the criticality of residual host cell DNA, which is characterized through two attributes, namely the size and amount of residual DNA in a biopharmaceutical product. By applying a mechanistic modeling approach to the problem, we establish the link between residual DNA and product safety, measured in terms of immunogenicity, oncogenicity, and infectivity. Such a link makes it possible to establish acceptable ranges for residual DNA size and amount. Application of the method is illustrated through two real-life examples, one related to a vaccine manufactured in a Madin Darby Canine Kidney cell line and the other to a monoclonal antibody produced using a Chinese hamster ovary (CHO) cell line as the host cells.


Subject(s)
Biopharmaceutics/statistics & numerical data , DNA/analysis , Drug Contamination/statistics & numerical data , Models, Statistical , Technology, Pharmaceutical/statistics & numerical data , Animals , Antibodies, Monoclonal/biosynthesis , Antibodies, Monoclonal/genetics , Biopharmaceutics/standards , CHO Cells , Chemistry, Pharmaceutical , Consumer Product Safety , Cricetulus , Data Interpretation, Statistical , Dogs , Guidelines as Topic , Humans , Influenza Vaccines/biosynthesis , Influenza Vaccines/genetics , Influenza Vaccines/standards , Madin Darby Canine Kidney Cells , Quality Control , Risk Assessment , Technology, Pharmaceutical/methods , Technology, Pharmaceutical/standards
15.
J Biopharm Stat ; 25(2): 307-16, 2015.
Article in English | MEDLINE | ID: mdl-25358076

ABSTRACT

One of the most challenging aspects of pharmaceutical development is the demonstration and estimation of chemical stability. It is imperative that pharmaceutical products be stable for two or more years. Long-term stability studies are required to support such a shelf life claim at registration. However, during drug development, to facilitate formulation and dosage form selection, an accelerated stability study with stressed storage conditions is preferred because it quickly provides a prediction of shelf life under ambient storage conditions. Such a prediction typically uses the Arrhenius equation, which describes the relationship between degradation rate and temperature (and humidity). Existing methods usually rely on the assumption of normally distributed errors. In addition, shelf life projection is usually based on the confidence band of a regression line. However, the coverage probability of a method is often overlooked or under-reported. In this paper, we introduce two nonparametric bootstrap procedures for shelf life estimation based on accelerated stability testing and compare them with a one-stage nonlinear Arrhenius prediction model. Our simulation results demonstrate that the one-stage nonlinear Arrhenius method has significantly lower coverage than the nominal level. The bootstrap methods gave better coverage and led to shelf life predictions closer to those based on long-term stability data.
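
The sketch below gives a rough feel for the setting: a one-stage nonlinear Arrhenius fit of simulated accelerated degradation data, with a simple case-resampling nonparametric bootstrap of the shelf life projected to 25°C. The zero-order degradation model, specification limit, and data are hypothetical and do not reproduce the article's specific procedures.

```python
# One-stage nonlinear Arrhenius fit with a case-resampling bootstrap of the
# shelf life projected to 25 C; model, spec limit, and data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

R = 8.314                                       # J/(mol*K)
temps_c = np.repeat([40.0, 50.0, 60.0], 4)      # accelerated conditions
months = np.tile([0.0, 1.0, 2.0, 3.0], 3)
temps_k = temps_c + 273.15

def degradant(X, lnA, Ea):                      # % degradant formed by time t at temperature T
    T, t = X
    return np.exp(lnA - Ea / (R * T)) * t

rng = np.random.default_rng(11)
y = degradant((temps_k, months), 28.4, 80e3) + rng.normal(0, 0.02, months.size)

def shelf_life(T_k, t, obs, spec=0.5):          # months until degradant reaches spec at 25 C
    (lnA, Ea), _ = curve_fit(degradant, (T_k, t), obs, p0=(28.0, 75e3))
    return spec / np.exp(lnA - Ea / (R * 298.15))

point = shelf_life(temps_k, months, y)
boot, idx = [], np.arange(y.size)
for _ in range(500):
    b = rng.choice(idx, size=idx.size, replace=True)
    try:
        boot.append(shelf_life(temps_k[b], months[b], y[b]))
    except RuntimeError:                        # skip the occasional non-converged refit
        continue
print(f"point estimate: {point:.1f} months; bootstrap 5th percentile: {np.percentile(boot, 5):.1f} months")
```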


Subject(s)
Biopharmaceutics/statistics & numerical data , Models, Statistical , Pharmaceutical Preparations/chemistry , Technology, Pharmaceutical/statistics & numerical data , Biopharmaceutics/standards , Chemistry, Pharmaceutical , Computer Simulation , Data Interpretation, Statistical , Drug Stability , Drug Storage , Guidelines as Topic , Humidity , Nonlinear Dynamics , Pharmaceutical Preparations/standards , Quality Control , Reproducibility of Results , Technology, Pharmaceutical/methods , Technology, Pharmaceutical/standards , Temperature , Time Factors
16.
J Biopharm Stat ; 25(2): 280-94, 2015.
Article in English | MEDLINE | ID: mdl-25358110

ABSTRACT

According to ICH Q6A (1999), a specification is defined as a list of tests, references to analytical procedures, and appropriate acceptance criteria, which are numerical limits, ranges, or other criteria for the tests described. For drug products, specifications usually consist of test methods and acceptance criteria for assay, impurities, pH, dissolution, moisture, and microbial limits, depending on the dosage form. They are usually proposed by the manufacturer and subject to regulatory approval for use. When the acceptance criteria in product specifications cannot be pre-defined from prior knowledge, the conventional approach is to use data from a limited number of clinical batches collected during the clinical development phases. Often, such an acceptance criterion is set as an interval bounded by the sample mean plus and minus two to four standard deviations. This interval may be revised with the accumulated data collected from released batches after drug approval. In this article, we describe and discuss the statistical issues of approaches commonly used in setting or revising specifications (usually tightening the limits), including the reference interval, the (Min, Max) method, the tolerance interval, and confidence limits of percentiles. We also compare their performance in terms of interval width and the intended coverage. Based on our study results and review experience, we make recommendations on how to select appropriate statistical methods for setting product specifications to better ensure product quality.


Subject(s)
Biopharmaceutics/statistics & numerical data , Models, Statistical , Pharmaceutical Preparations/standards , Technology, Pharmaceutical/statistics & numerical data , Biopharmaceutics/standards , Chemistry, Pharmaceutical , Computer Simulation , Confidence Intervals , Consumer Product Safety , Data Interpretation, Statistical , Guidelines as Topic , Humans , Monte Carlo Method , Pharmaceutical Preparations/chemistry , Quality Control , Reference Values , Risk Assessment , Sample Size , Technology, Pharmaceutical/methods , Technology, Pharmaceutical/standards
20.
J Biopharm Stat ; 23(4): 730-43, 2013.
Article in English | MEDLINE | ID: mdl-23799811

ABSTRACT

In this article, the use of statistical equivalence testing for providing evidence of process comparability in an accelerated stability study is advocated over the use of a test of differences. The objective of such a study is to demonstrate comparability by showing that the stability profiles under nonrecommended storage conditions of two processes are equivalent. Because it is difficult at accelerated conditions to find a direct link to product specifications, and hence product safety and efficacy, an equivalence acceptance criterion is proposed that is based on the statistical concept of effect size. As with all statistical tests of equivalence, it is important to collect input from appropriate subject-matter experts when defining the acceptance criterion.
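
As a hedged illustration of an effect-size-based equivalence criterion, the sketch below compares the degradation slopes of two hypothetical processes and declares comparability when the 90% confidence interval for the slope difference lies within a margin set to one pooled residual standard deviation; all numbers and the choice of margin are invented and do not reflect the article's recommended criterion.

```python
# Equivalence-style comparability of two degradation slopes, with the margin
# expressed as one pooled residual standard deviation (an effect size).
import numpy as np
from scipy import stats

months = np.array([0.0, 1.0, 2.0, 3.0, 6.0])
proc_a = np.array([100.0, 99.1, 98.3, 97.2, 94.9])    # % potency, process A
proc_b = np.array([100.2, 99.3, 98.1, 97.4, 95.2])    # % potency, process B

def slope_fit(t, y):
    slope, intercept, *_ = stats.linregress(t, y)
    resid_var = np.sum((y - intercept - slope * t) ** 2) / (t.size - 2)
    return slope, resid_var / np.sum((t - t.mean()) ** 2), resid_var

sa, va, s2a = slope_fit(months, proc_a)
sb, vb, s2b = slope_fit(months, proc_b)
diff, se_diff = sa - sb, np.sqrt(va + vb)
dof = 2 * (months.size - 2)
margin = np.sqrt((s2a + s2b) / 2)                     # one pooled residual SD
ci = diff + np.array([-1.0, 1.0]) * stats.t.ppf(0.95, dof) * se_diff
verdict = "comparable" if -margin < ci[0] and ci[1] < margin else "comparability not demonstrated"
print(f"slope difference {diff:.3f}, 90% CI ({ci[0]:.3f}, {ci[1]:.3f}), margin +/-{margin:.3f} -> {verdict}")
```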


Subject(s)
Biopharmaceutics/statistics & numerical data , Biopharmaceutics/standards , Drug Stability , Drug Storage , Models, Statistical , Therapeutic Equivalency , Computer Simulation , Drug Storage/standards , Drug Storage/statistics & numerical data , Research Design/statistics & numerical data , Time Factors