1.
Ann Intern Med ; 173(5): 368-374, 2020 09 01.
Article in English | MEDLINE | ID: mdl-32628533

ABSTRACT

In comparative studies, treatment effect is often assessed using a binary outcome that indicates response to therapy. Commonly used summary measures for response include the cumulative and current response rates at a specific time point. The current response rate is sometimes called the probability of being in response (PBIR), which regards a patient as a responder only if they have achieved and remain in response at present. The methods used in practice for estimating these rates, however, may not be appropriate. Moreover, whereas an effective treatment is expected to achieve a rapid and sustained response, the response rate at a fixed time point provides no information about the duration of response (DOR). As an alternative, a curve constructed from the current response rates over the entire study period may be considered; it shows how rapidly patients responded to therapy and how long responses were sustained. The area under the PBIR curve is the mean DOR. This connection between response and DOR makes the curve attractive for assessing the treatment effect. In contrast to the conventional method for analyzing DOR data, which uses only responders, this procedure includes all patients in the study. Although discussed extensively in the statistical literature, estimation of the current response rate curve has garnered little attention in the medical literature. This article illustrates how to construct and analyze such a curve using data from a recent study of treatment for renal cell carcinoma. Clinical trialists are encouraged to consider this robust and clinically interpretable procedure as an additional tool for evaluating treatment effects in clinical studies.


Subject(s)
Comparative Effectiveness Research , Data Interpretation, Statistical , Equivalence Trials as Topic , Antineoplastic Agents/therapeutic use , Carcinoma, Renal Cell/drug therapy , Humans , Kidney Neoplasms/drug therapy , Probability , Randomized Controlled Trials as Topic , Statistics as Topic/methods , Time Factors , Treatment Outcome
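
A minimal numerical sketch of the PBIR construction described in this abstract, using synthetic data and the Kaplan-Meier estimator from the lifelines package rather than the authors' own software: time to response onset (T1) and time to end of response (T2) are defined per patient (with T1 = T2 for non-responders), the PBIR curve is taken as the difference of their estimated cumulative distribution functions, and its area up to a follow-up horizon approximates the restricted mean duration of response.

```python
# A hedged sketch of the PBIR curve and mean DOR (synthetic data, not the
# renal cell carcinoma trial analyzed in the article).  Requires numpy, lifelines.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)
n = 200

# Hypothetical event times (months): T1 = time to response onset,
# T2 = time to end of response (progression or death).  Non-responders get
# T1 = T2 so they are never counted as "in response".
responded = rng.random(n) < 0.6
t_resp = rng.exponential(3.0, n)        # time to response, responders only
dor = rng.exponential(8.0, n)           # duration of response, responders only
t_prog = rng.exponential(6.0, n)        # progression/death, non-responders
T1 = np.where(responded, t_resp, t_prog)
T2 = np.where(responded, t_resp + dor, t_prog)

# Independent administrative censoring (end of follow-up).
C = rng.uniform(6.0, 24.0, n)
obs1, d1 = np.minimum(T1, C), (T1 <= C).astype(int)
obs2, d2 = np.minimum(T2, C), (T2 <= C).astype(int)

# PBIR(t) = P(T1 <= t) - P(T2 <= t); each CDF is one minus a Kaplan-Meier curve.
grid = np.linspace(0.0, 18.0, 181)
km1, km2 = KaplanMeierFitter(), KaplanMeierFitter()
km1.fit(obs1, event_observed=d1)
km2.fit(obs2, event_observed=d2)
F1 = 1.0 - km1.survival_function_at_times(grid).to_numpy()
F2 = 1.0 - km2.survival_function_at_times(grid).to_numpy()
pbir = F1 - F2

# Area under the PBIR curve up to the horizon approximates the restricted mean DOR.
mean_dor = np.trapz(pbir, grid)
print(f"Restricted mean duration of response over 0-18 months: {mean_dor:.2f}")
```

Plotting `pbir` against `grid` gives the curve the article recommends for visualizing how rapidly responses occur and how long they last.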
2.
J Biopharm Stat ; 29(1): 189-202, 2019.
Article in English | MEDLINE | ID: mdl-29969380

ABSTRACT

One of the most critical decision points in clinical development is Go/No-Go decision-making after a proof-of-concept study. Traditional decision-making relies on formal hypothesis testing with control of type I and type II error rates, which is limited to assessing the strength of efficacy evidence within a single small, isolated trial. In this article, we propose a quantitative Bayesian/frequentist decision framework for Go/No-Go criteria and sample size evaluation in Phase II randomized studies with a time-to-event endpoint. By taking the uncertainty of the treatment effect into consideration, we propose an integrated quantitative approach for a program in which the Phase II and Phase III trials share a common endpoint, while allowing a discount of the observed Phase II data. Our results confirm the argument that an increase in the sample size of a Phase II trial yields a greater increase in the probability of success of the subsequent Phase III trial than increasing the Phase III sample size by an equal amount. We illustrate the steps in quantitative decision-making with a real example of a randomized Phase II study in metastatic pancreatic cancer.


Subject(s)
Biostatistics/methods , Clinical Trials, Phase II as Topic/statistics & numerical data , Clinical Trials, Phase III as Topic/statistics & numerical data , Decision Making , Endpoint Determination/statistics & numerical data , Randomized Controlled Trials as Topic/statistics & numerical data , Research Design/statistics & numerical data , Carcinoma, Pancreatic Ductal/drug therapy , Carcinoma, Pancreatic Ductal/mortality , Carcinoma, Pancreatic Ductal/secondary , Data Interpretation, Statistical , Humans , Pancreatic Neoplasms/drug therapy , Pancreatic Neoplasms/mortality , Pancreatic Neoplasms/pathology , Time Factors , Treatment Outcome
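
The kind of probability-of-success calculation described in this abstract can be sketched under a normal approximation for the log hazard ratio. The discount factor, hazard ratio, and event counts below are hypothetical, and the formula is a generic assurance calculation rather than the authors' exact framework.

```python
# A hedged sketch of probability of success (assurance) for a Phase III trial
# based on discounted Phase II data.  All inputs are illustrative assumptions.
import numpy as np
from scipy.stats import norm

def pos_phase3(hr_phase2, events_phase2, events_phase3,
               discount=0.8, alpha_one_sided=0.025):
    """Probability that the Phase III test is significant, averaging over the
    (discounted) Phase II uncertainty about the true log hazard ratio."""
    b2 = np.log(hr_phase2)
    se2 = np.sqrt(4.0 / events_phase2)   # SE of log HR, 1:1 randomization
    prior_mean = discount * b2           # discount the observed Phase II effect
    prior_sd = se2                       # carry Phase II uncertainty forward

    se3 = np.sqrt(4.0 / events_phase3)   # SE of the Phase III log HR estimate
    z = norm.ppf(1.0 - alpha_one_sided)
    # Significance requires the Phase III estimate to fall below -z * se3;
    # integrate over the normal prior on the true log HR.
    return norm.cdf((-z * se3 - prior_mean) / np.sqrt(se3**2 + prior_sd**2))

# Enlarging the Phase II trial tightens the prior and changes the probability
# of success of the planned Phase III trial.
for d2 in (40, 80, 120):
    print(d2, round(pos_phase3(hr_phase2=0.70, events_phase2=d2,
                               events_phase3=300), 3))
```

With these illustrative numbers, the probability of success rises as the Phase II event count grows, consistent with the direction of the abstract's argument about where additional sample size is most valuable.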
4.
J Biopharm Stat ; 27(1): 44-55, 2017.
Article in English | MEDLINE | ID: mdl-26882496

ABSTRACT

One of the main objectives in phase I oncology trials is to evaluate the safety and tolerability of an experimental treatment by estimating the maximum tolerated dose (MTD) based on the rate of dose-limiting toxicities (DLTs). To meet emerging challenges in dose-finding studies, statistical and medical researchers have over the past two decades conducted extensive research to create innovative dose-finding designs that perform better than the standard 3 + 3 design, which often exhibits undesirable statistical and operational properties. However, clinical implementation and practical usage of these new designs have been limited. This article begins with a review of the most recent literature and then offers a pharmaceutical industry perspective on implementing novel adaptive dose-finding designs in oncology phase I trials. Statistical planning and logistical considerations for effectively executing such designs in multicenter clinical trials are discussed using two recent case studies.


Subject(s)
Clinical Trials, Phase I as Topic , Medical Oncology , Research Design , Dose-Response Relationship, Drug , Humans , Maximum Tolerated Dose
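
As a point of reference for the model-based designs reviewed in this article, the sketch below shows a basic continual reassessment method (CRM) dose-assignment step, one canonical alternative to the 3 + 3 rule. The skeleton, target DLT rate, prior, and interim data are hypothetical and are not taken from the article's case studies.

```python
# A hedged sketch of a basic CRM dose-assignment step (illustrative only).
import numpy as np

def crm_next_dose(skeleton, n_patients, n_dlt, target=0.25, prior_sd=1.34):
    """Return the index of the dose whose posterior mean DLT rate is closest
    to the target, under the power model p_i(a) = skeleton_i ** exp(a)."""
    skeleton = np.asarray(skeleton, dtype=float)
    a = np.linspace(-5.0, 5.0, 2001)                 # grid for the model parameter
    prior = np.exp(-0.5 * (a / prior_sd) ** 2)       # N(0, prior_sd^2), unnormalized
    p = skeleton[:, None] ** np.exp(a)[None, :]      # DLT rate per dose, per grid point
    loglik = (np.array(n_dlt)[:, None] * np.log(p)
              + (np.array(n_patients) - np.array(n_dlt))[:, None]
              * np.log1p(-p)).sum(axis=0)
    post = prior * np.exp(loglik - loglik.max())
    post /= np.trapz(post, a)                        # normalized posterior density
    post_tox = np.trapz(p * post[None, :], a, axis=1)  # posterior mean DLT rate per dose
    return int(np.argmin(np.abs(post_tox - target))), post_tox

# Example: 4 dose levels, hypothetical interim data.
next_dose, est = crm_next_dose(skeleton=[0.05, 0.12, 0.25, 0.40],
                               n_patients=[3, 3, 3, 0], n_dlt=[0, 0, 1, 0])
print("estimated DLT rates:", np.round(est, 3), "-> next dose index:", next_dose)
```

In practice, assignment rules like this are combined with operational safeguards such as not skipping untried dose levels and formal stopping criteria, which is part of the implementation burden the article discusses.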
5.
J Biopharm Stat ; 15(6): 993-1007, 2005.
Article in English | MEDLINE | ID: mdl-16279357

ABSTRACT

Longitudinal binary data from clinical trials with missing observations are frequently analyzed by using the Last Observation Carried Forward (LOCF) method for imputing missing values at a visit (e.g., the prospectively defined primary visit time point for the analysis at the end of the treatment period). Usually, to understand the time trend in treatment response, analyses are also performed separately on data at intermediate time points. The objective of such analyses is to estimate the proportion of "response" at a time point and then to compare two treatment groups (e.g., drug vs. placebo) by testing for the difference between the two proportions of response. The commonly used methods are Fisher's exact test, the chi-squared test, the Cochran-Mantel-Haenszel test, and logistic regression. Analyses based on the Observed Cases (OC) data are usually also performed and compared with those obtained by LOCF. Another approach that is gaining popularity (after the introduction of PROC GENMOD by the SAS Institute) is the method of Generalized Estimating Equations (GEE), with the aim of including all repeated observations in the analysis in a more comprehensive manner. It is now well recognized, however, that results obtained by these methods are susceptible to bias, depending on the missing-data mechanism. Of particular concern is the bias introduced by dropouts that are not missing at random (NMAR). Because no single method handles dropouts satisfactorily, consensus is gathering toward performing analyses by several methods (including methods that handle NMAR dropouts) to evaluate the sensitivity of results to model assumptions. In this article, we demonstrate the application of the following methods for handling dropouts in longitudinal binary data: Generalized Linear Mixture Models (GLMM) for NMAR dropouts, weighted GEE for missing-at-random (MAR) dropouts, and ordinary GEE for missing-completely-at-random (MCAR) dropouts. The results are also compared with those obtained by (univariate) logistic regression on both the LOCF and OC data.


Subject(s)
Clinical Trials as Topic/statistics & numerical data , Data Interpretation, Statistical , Longitudinal Studies , Patient Dropouts/statistics & numerical data , Linear Models , Logistic Models
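
Two of the analyses contrasted in this abstract can be sketched with simulated data: an LOCF endpoint comparison of response proportions and an ordinary GEE analysis of all observed visits, using pandas and statsmodels. The data-generating assumptions below are illustrative and not the authors' analysis; weighted GEE and the mixture-model approach for NMAR dropouts are omitted for brevity.

```python
# A hedged sketch contrasting an LOCF endpoint analysis with a GEE analysis of
# all observed repeated binary measurements.  The simulated data set, dropout
# mechanism (MCAR here), and model specification are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
n_patients, n_visits = 100, 4
rows = []
for i in range(n_patients):
    treated = int(i < n_patients // 2)
    last_visit = rng.choice([2, 3, n_visits], p=[0.15, 0.15, 0.70])  # MCAR dropout
    for t in range(1, last_visit + 1):
        p_resp = 0.25 + (0.10 if treated else 0.04) * t   # response improves over time
        rows.append({"id": i, "visit": t, "treat": treated,
                     "response": int(rng.random() < p_resp)})
df = pd.DataFrame(rows)

# LOCF: carry each patient's last observed response forward to the final visit,
# then compare the two proportions of response with a chi-squared test.
locf = df.sort_values("visit").groupby("id").last().reset_index()
chi2, p_locf, _, _ = chi2_contingency(pd.crosstab(locf["treat"], locf["response"]))
print(f"LOCF endpoint analysis: chi-squared p-value = {p_locf:.3f}")

# Ordinary GEE on all observed repeated measurements (valid under MCAR dropout).
gee = smf.gee("response ~ treat + visit", groups="id", data=df,
              family=sm.families.Binomial(),
              cov_struct=sm.cov_struct.Exchangeable()).fit()
print(gee.summary())
```

Running both analyses on the same data set, as the article recommends doing across several methods, makes it easy to see how sensitive the treatment comparison is to the handling of dropouts.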