Results 1 - 7 of 7
1.
J Biopharm Stat ; 18(4): 773-96, 2008.
Article in English | MEDLINE | ID: mdl-18607807

ABSTRACT

The problem of drug-induced QT-interval prolongation has become a major economic and health issue in the pharmaceutical industry. Little has been done to analytically link QT-interval prolongation to the risk of torsades de pointes. This paper introduces a method for analyzing the dynamic characteristics of a 24-hour electrocardiogram and offers an algorithm that captures the long-term memory of the RR-interval history in a single statistic. The memory statistic appears to improve the discrimination between healthy normal subjects and arrhythmia cases using only beat-to-beat information from the QT and RR intervals, producing an impulse response function that is completely independent of heart rate.
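The abstract does not specify how the memory statistic is computed. As a rough, hedged illustration of condensing long-range RR-interval dependence into a single number, the sketch below estimates a Hurst exponent by rescaled-range (R/S) analysis; the function name and the synthetic data are illustrative, not taken from the paper.

```python
# A hedged stand-in, NOT the paper's algorithm: one common way to summarize
# long-term memory of an RR-interval series in one number is the Hurst
# exponent, estimated here by rescaled-range (R/S) analysis.
import numpy as np

def hurst_rescaled_range(rr, min_window=16):
    """Estimate the Hurst exponent of an RR-interval series.

    H near 0.5 suggests no long-term memory; H well above 0.5 suggests
    persistent (long-memory) behaviour.
    """
    rr = np.asarray(rr, dtype=float)
    n = len(rr)
    window_sizes, rs_values = [], []
    size = min_window
    while size <= n // 2:
        rs_per_window = []
        for start in range(0, n - size + 1, size):
            window = rr[start:start + size]
            s = window.std(ddof=1)
            if s > 0:
                deviations = np.cumsum(window - window.mean())
                rs_per_window.append((deviations.max() - deviations.min()) / s)
        if rs_per_window:
            window_sizes.append(size)
            rs_values.append(np.mean(rs_per_window))
        size *= 2
    # Slope of log(R/S) versus log(window size) estimates the Hurst exponent.
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs_values), 1)
    return slope

# Illustrative only: a white-noise surrogate for 24-hour RR intervals (ms),
# which has no long-term memory, so the estimate should be near 0.5.
rng = np.random.default_rng(0)
print(f"H = {hurst_rescaled_range(800 + rng.normal(0, 40, 20_000)):.2f}")
```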


Subject(s)
Electrocardiography/methods , Long QT Syndrome/physiopathology , Torsades de Pointes/physiopathology , Biomarkers , Clinical Trials, Phase I as Topic/methods , Clinical Trials, Phase I as Topic/statistics & numerical data , Cross-Over Studies , Electrocardiography/statistics & numerical data , Heart Rate/physiology , Humans , Long QT Syndrome/diagnosis , Male , Middle Aged , Multivariate Analysis , Risk Factors , Torsades de Pointes/diagnosis
2.
Stat Med ; 27(12): 2248-66, 2008 May 30.
Article in English | MEDLINE | ID: mdl-17929332

ABSTRACT

Pharmaceutical safety has received substantial attention in the recent past; however, the longitudinal clinical laboratory data routinely collected during clinical trials to derive safety profiles are often used ineffectively. For example, these data are frequently summarized by comparing the proportions (between treatment arms) of participants who cross pre-specified threshold values at some time during follow-up. This research is intended, in part, to encourage more effective utilization of these data by avoiding unnecessary dichotomization of continuous data, acknowledging and making use of the longitudinal follow-up, and combining data from multiple clinical trials. However, appropriate analyses require careful consideration of a number of challenges (e.g., selection and comparability of study populations). We discuss estimation strategies based on estimating equations and maximum likelihood for analyses in the presence of three response-history-dependent selection mechanisms: dropout, follow-up frequency, and treatment discontinuation. In addition, because clinical trial participants usually represent non-random samples from target populations, we describe two sensitivity analysis approaches. All discussions are motivated by an analysis that aims to characterize the dynamic relationship between concentrations of a liver enzyme (alanine aminotransferase) and three distinct doses (no drug, low dose, and high dose) of an NK-1 antagonist across four Phase II clinical trials.
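The abstract does not spell out the estimating equations themselves. As one common, heavily hedged way to handle response-history-dependent dropout, the sketch below uses inverse-probability weighting; all column names (observed, alt, dose, visit, prev_alt) are hypothetical and the approach may differ from the authors' actual machinery.

```python
# A hedged sketch, not the authors' estimating equations: one standard way to
# handle response-history-dependent dropout is inverse-probability weighting.
# All column names (observed, alt, dose, visit, prev_alt) are hypothetical.
import statsmodels.api as sm

def ipw_dose_response(df):
    """df: one row per scheduled visit (a pandas DataFrame) with columns
       'observed' (1 if ALT was measured), 'alt' (log ALT when observed),
       'dose' (0 = no drug, 1 = low, 2 = high), 'visit' (visit number), and
       'prev_alt' (last observed log ALT, a response-history predictor)."""
    # Step 1: model the probability that a visit's ALT is observed, letting
    # it depend on the response history via prev_alt.
    design = sm.add_constant(df[['dose', 'visit', 'prev_alt']])
    p_obs = sm.Logit(df['observed'], design).fit(disp=0).predict(design)

    # Step 2: fit the outcome model on the observed visits only, weighting
    # each visit by 1 / P(observed) to correct for dropout.
    seen = df['observed'] == 1
    outcome_design = sm.add_constant(df.loc[seen, ['dose', 'visit']])
    fit = sm.WLS(df.loc[seen, 'alt'], outcome_design,
                 weights=1.0 / p_obs[seen]).fit()
    return fit.params
```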


Subject(s)
Biometry , Drug-Related Side Effects and Adverse Reactions , Laboratories/statistics & numerical data , Longitudinal Studies , Marketing , Selection Bias , Clinical Trials, Phase II as Topic , Humans , Models, Statistical , Placebos , Randomized Controlled Trials as Topic
3.
Toxicol Rev ; 25(1): 37-54, 2006.
Article in English | MEDLINE | ID: mdl-16856768

ABSTRACT

Clinical signal detection of drug-induced hepatic effects is a very inexact science. Ordinary clinical laboratory tests are the primary biomarkers for liver changes. Heuristic rules have been developed by clinicians for diagnosing liver disease and monitoring these changes. These are based on laboratory reference limits, which are also largely heuristic. This article reviews some of the statistical characteristics of univariate reference limits and shows how they can and should be extended to multivariate reference regions. For instance, in the univariate approach, the probability of a false positive cannot be specified and grows with increasing numbers of analytes evaluated. However, accurate reference regions require very large samples from reference populations. Although the uniformly minimum variance unbiased estimator can greatly improve the mean-squared-error efficiency relative to a maximum likelihood estimator, it still requires tens of thousands of reference samples to estimate the 95% reference region for 20 analytes to within 95 +/- 1%, for example. Methods for constructing the elliptical reference region estimators and for sample size determination are provided. It is not feasible for small laboratories to make these calculations unless more rigorous methods of standardisation can be imposed and data merged across institutions. Large healthcare systems with electronic medical records and large pharmaceutical companies, singly or in collaboration, could generate sufficient sample sizes for accurate reference regions if techniques to make inter-laboratory results comparable are implemented. Exiting a reference region, whether population-based or individualised, can only indicate that the patient has changed from steady state. The region into which the patient's results enter and the dynamics of this change are likely to contain considerable biological information. An example of this is Hy's rule. As the number of new, expensive biomarkers grows, it may be more cost-effective to find better ways to use the data we already collect, using the new biomarkers for validation. Mathematics and computers can help do this.
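As a hedged illustration of the elliptical reference regions discussed above, the minimal sketch below assumes approximately multivariate-normal analytes and uses plain sample estimates rather than the UMVU refinements and sample-size formulas described in the article.

```python
# A minimal sketch of an elliptical reference region for p analytes, assuming
# approximate multivariate normality; plain sample estimates only.
import numpy as np
from scipy.stats import chi2

def fit_reference_region(reference_data, coverage=0.95):
    """reference_data: (n_subjects, n_analytes) array from healthy subjects."""
    mu = reference_data.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(reference_data, rowvar=False))
    threshold = chi2.ppf(coverage, df=reference_data.shape[1])
    return mu, cov_inv, threshold

def inside_region(x, mu, cov_inv, threshold):
    """True if the analyte vector x lies inside the elliptical region."""
    d2 = (x - mu) @ cov_inv @ (x - mu)   # squared Mahalanobis distance
    return d2 <= threshold

# Illustrative only: 20 analytes, as in the sample-size example above.
rng = np.random.default_rng(1)
healthy = rng.normal(size=(5000, 20))
mu, cov_inv, thr = fit_reference_region(healthy)
print(inside_region(healthy[0], mu, cov_inv, thr))
```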


Subject(s)
Chemical and Drug Induced Liver Injury/diagnosis , Diagnostic Errors , Data Interpretation, Statistical , Diagnostic Errors/standards , Diagnostic Errors/statistics & numerical data , Humans , Multivariate Analysis , Reference Standards
4.
IEEE Trans Inf Technol Biomed ; 10(2): 254-63, 2006 Apr.
Article in English | MEDLINE | ID: mdl-16617614

ABSTRACT

An effective analysis of clinical trials data involves analyzing different types of data, such as heterogeneous and high-dimensional time series. Current time series analysis methods generally assume that the series at hand are long enough for statistical techniques to be applied. Other idealized assumptions are that data are collected at equally spaced intervals and that time series being compared have equal lengths. However, these assumptions do not hold for many real data sets, especially clinical trials data sets. In addition, the data sources differ from each other, the data are heterogeneous, and the sensitivity of the experiments varies by source. Approaches for mining time series data need to be revisited with this wide range of requirements in mind. In this paper, we propose a novel approach for information mining that involves two major steps: applying a data mining algorithm over homogeneous subsets of the data, and identifying common or distinct patterns over the information gathered in the first step. Our approach is implemented specifically for heterogeneous and high-dimensional time series clinical trials data. Using this framework, we propose a new way of utilizing frequent itemset mining, as well as clustering and declustering techniques with novel distance metrics for measuring similarity between time series. By clustering the data, we find groups of analytes (substances in blood) that are most strongly correlated. Most of these relationships are already known and are verified by the clinical panels; in addition, we identify novel groups that need further biomedical analysis. A slight modification to our algorithm results in an effective declustering of high-dimensional time series data, which is then used for "feature selection." Using industry-sponsored clinical trials data sets, we are able to identify a small set of analytes that effectively models the state of normal health.
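The paper's own distance metrics and frequent-itemset step are not given in the abstract. As a hedged sketch of the clustering idea only (grouping analytes whose laboratory time series are most strongly correlated), the following uses an ordinary correlation-based distance with hierarchical clustering.

```python
# A hedged sketch of the clustering step only: group analytes whose lab time
# series are most strongly correlated, using a plain correlation distance;
# the paper's distance metrics and frequent-itemset mining are not shown.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_analytes(series_matrix, analyte_names, n_clusters=5):
    """series_matrix: (n_analytes, n_timepoints) array of lab time series,
       one row per analyte, aligned to a common visit schedule."""
    corr = np.corrcoef(series_matrix)
    dist = 1.0 - np.abs(corr)            # strong correlation -> small distance
    np.fill_diagonal(dist, 0.0)
    tree = linkage(squareform(dist, checks=False), method='average')
    labels = fcluster(tree, t=n_clusters, criterion='maxclust')
    return dict(zip(analyte_names, labels))
```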


Subject(s)
Algorithms , Clinical Trials as Topic/methods , Database Management Systems , Databases, Factual , Information Storage and Retrieval/methods , Medical Records Systems, Computerized , Research Design , Time Factors
6.
Am J Clin Pathol ; 117(6): 851-6, 2002 Jun.
Article in English | MEDLINE | ID: mdl-12047135

ABSTRACT

Reference ranges (RRs) are frequently used for interpreting laboratory values in clinical trials, assessing abnormality of laboratory results, and combining results from different laboratories. When a clinical laboratory measure must be derived from other tests, eg, the WBC differential percentage from the WBC count and the WBC differential absolute count, a derivation of the RR may also be required. A naive method for determining RRs calculates the upper and lower limits of the derived test from the upper and lower limits of the measured values using the same algebraic formula used for the derived measure. This naive method, and any other that does not use probability-based transformations, does not maintain the distributional characteristics of the RRs. RRs derived in such a manner are deemed uninterpretable because they do not contain a specified proportion of the distribution. We propose a probability-based approach for the interconversion of RRs for ratios of 2 log-gaussian analytes. The proposed method gives a simple algebraic formula for calculating the RRs of the derived measures while preserving the probability relationships. A nonparametric method and a parametric method (which log-transforms the data, estimates an RR, and then exponentiates) are provided as comparators. An example comparing the commonly used naive method with the proposed method is provided using automated leukocyte count data. This provides evidence that the proposed method maintains the distributional characteristics of the transformed RR measures, whereas the naive method does not.
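The published algebraic formula is not reproduced in the abstract. As a hedged reconstruction of the probability-based idea, note that for two log-gaussian analytes the log of their ratio is gaussian, with mean equal to the difference of the log-scale means and variance determined by the log-scale variances and covariance; a minimal sketch follows.

```python
# A hedged reconstruction, not necessarily the published formula: for two
# log-gaussian analytes, the log of their ratio is gaussian, so the ratio's
# reference range follows from the log-scale means, variances, and covariance.
import numpy as np
from scipy.stats import norm

def ratio_reference_range(x, y, coverage=0.95):
    """x, y: paired measurements of two positive, roughly log-gaussian analytes
       (e.g., a differential absolute count and the total WBC count).
       Returns (lower, upper) reference limits for the ratio x / y."""
    lx, ly = np.log(x), np.log(y)
    mu_d = lx.mean() - ly.mean()
    # Variance of the difference of the logs, accounting for their covariance.
    var_d = lx.var(ddof=1) + ly.var(ddof=1) - 2.0 * np.cov(lx, ly)[0, 1]
    z = norm.ppf(0.5 + coverage / 2.0)
    sd = np.sqrt(var_d)
    return np.exp(mu_d - z * sd), np.exp(mu_d + z * sd)
```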


Subject(s)
Leukocyte Count/standards , Leukocytes/cytology , Adolescent , Adult , Aged , Aged, 80 and over , Automation , Child , Female , Humans , Leukocyte Count/instrumentation , Leukocyte Count/methods , Logistic Models , Male , Middle Aged , Normal Distribution , Predictive Value of Tests , Probability , Reference Values , Statistics, Nonparametric