Results 1 - 20 of 36
1.
Am J Ther ; 23(3): e825-36, 2016.
Article in English | MEDLINE | ID: mdl-23591025

ABSTRACT

Canonical analysis assesses the combined effects of a set of predictor variables on a set of outcome variables, but it is little used in clinical trials despite the omnipresence of multiple variables. The aim of this study was to assess the performance of canonical analysis as compared with traditional multivariate methods using multivariate analysis of covariance (MANCOVA). As an example, a simulated data file with 12 gene expression levels and 4 drug efficacy scores was used. The correlation coefficient between the 12 predictor and 4 outcome variables was 0.87 (P = 0.0001), meaning that 76% of the variability in the outcome variables was explained by the 12 covariates. Repeated testing after the removal of 5 unimportant predictor variables and 1 outcome variable produced virtually the same overall result. The MANCOVA identified identical unimportant variables, but it was unable to provide overall statistics. (1) Canonical analysis is remarkable, because it can handle many more variables than traditional multivariate methods such as MANCOVA can. (2) At the same time, it accounts for the relative importance of the separate variables, their interactions, and differences in units. (3) Canonical analysis provides overall statistics of the effects of sets of variables, whereas traditional multivariate methods only provide the statistics of the separate variables. (4) Unlike other methods for combining the effects of multiple variables, such as factor analysis/partial least squares, canonical analysis is scientifically entirely rigorous. (5) Limitations include that it is less flexible than factor analysis/partial least squares, because only 2 sets of variables can be used and because multiple solutions are offered instead of one. We hope that this article will stimulate clinical investigators to start using this remarkable method.
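A canonical correlation of this kind is easy to reproduce. Below is a minimal sketch in Python with scikit-learn on simulated data loosely mirroring the 12 predictors and 4 outcomes; all variable counts, names, and values are illustrative assumptions, not the study's data file.

```python
# Minimal canonical correlation analysis sketch on simulated data.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
n = 250
latent = rng.normal(size=(n, 2))                    # shared signal (assumed)
X = latent @ rng.normal(size=(2, 12)) + rng.normal(scale=0.5, size=(n, 12))
Y = latent @ rng.normal(size=(2, 4)) + rng.normal(scale=0.5, size=(n, 4))

cca = CCA(n_components=2).fit(X, Y)
Xc, Yc = cca.transform(X, Y)
for k in range(2):
    # Canonical correlation = correlation of the k-th pair of variates.
    r = np.corrcoef(Xc[:, k], Yc[:, k])[0, 1]
    print(f"canonical correlation {k + 1}: r = {r:.2f}, r^2 = {r**2:.2f}")
```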


Subject(s)
Clinical Trials as Topic/methods , Data Interpretation, Statistical , Analysis of Variance , Factor Analysis, Statistical , Humans , Least-Squares Analysis , Models, Statistical , Multivariate Analysis , Randomized Controlled Trials as Topic/methods , Treatment Outcome
2.
Am J Ther ; 23(3): e844-9, 2016.
Article in English | MEDLINE | ID: mdl-23689089

ABSTRACT

Traditionally, nonlinear relationships like the smooth shapes of airplanes, boats, and motor cars were constructed from scale models using stretched thin wooden strips, otherwise called splines. In the past decades, mechanical spline methods have been replaced with their mathematical counterparts. The objective of the study was to assess whether spline modeling can adequately describe the relationships between exposure and outcome variables in a clinical trial, and whether it can detect patterns in a trial that are relevant but go unobserved with simpler regression models. A clinical trial assessing the effect of quantity of care on quality of care was used as an example. Spline curves consisting of 4 or 5 cubic functions were applied. SPSS statistical software was used for analysis. The spline curves of our data outperformed the traditional curves because (1) unlike the traditional curves, they did not miss the top quality of care given in either subgroup, (2) unlike the traditional curves, they, rightly, did not produce sinusoidal patterns, and (3) unlike the traditional curves, they provided a virtually 100% match of the original values. We conclude that (1) spline modeling can adequately assess the relationships between exposure and outcome variables in a clinical trial; (2) spline modeling can detect patterns in a trial that are relevant but may go unobserved with simpler regression models; (3) in clinical research, spline modeling has great potential, given the presence of many nonlinear effects in this field of research and given its sophisticated mathematical refinement to fit any nonlinear effect in the most accurate way; and (4) spline modeling should make it possible to improve predictions from clinical research for the benefit of health decisions and health care. We hope that this brief introduction to spline modeling will stimulate clinical investigators to start using this wonderful method.
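As a minimal sketch of the idea (not the paper's SPSS analysis), the following Python snippet fits a piecewise-cubic smoothing spline and a straight line to the same synthetic exposure-outcome data and compares the residuals; the data and smoothing settings are assumptions.

```python
# Cubic smoothing spline versus a straight line on nonlinear data.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 60)
y = np.sin(x) + 0.1 * x + rng.normal(scale=0.15, size=x.size)  # nonlinear truth

spline = UnivariateSpline(x, y, k=3, s=1.0)   # piecewise cubic, smoothing s
line = np.polyfit(x, y, 1)                    # simple linear fit for contrast

print("spline residual SD :", (y - spline(x)).std().round(3))
print("linear residual SD :", (y - np.polyval(line, x)).std().round(3))
```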


Subject(s)
Clinical Trials as Topic , Data Interpretation, Statistical , Models, Statistical , Nonlinear Dynamics , Clinical Trials as Topic/methods , Humans , Linear Models
3.
Am J Ther ; 23(3): e837-43, 2016.
Article in English | MEDLINE | ID: mdl-23429167

ABSTRACT

With large data files, outlier recognition requires a more sophisticated approach than the traditional data plots and regression lines. In addition, the number of outliers tends to rise linearly with the data's sample size. The objective of this study was to examine whether balanced iterative reducing and clustering using hierarchies (BIRCH) clustering is able to detect previously unrecognized outlier data. A simulated and a real data file were used as examples. SPSS statistical software was used for data analysis. In 50 mentally depressed persons, a regression analysis failed to detect any outliers. BIRCH analysis of these data showed, in addition to 2 clusters, a relevant outlier cluster consisting of 7 patients (14%) not fitting in the formed clusters. In 576 iatrogenic admissions, the number of comedications was not a significant loglinear predictor of iatrogenic admission. In contrast, BIRCH analysis revealed an outlier cluster consisting of 174 patients (30%) with an extremely large number of comedications. The conclusions were as follows: (1) A systematic assessment for outliers is important in therapeutic research with large data, because the lack of it can lead to catastrophic consequences. (2) Traditional data analysis, such as regression analysis, was unable to demonstrate outliers in our examples. (3) BIRCH cluster analysis of the examples produced relevant outlier clusters of patients not fitting in the data otherwise. (4) On theoretical grounds, BIRCH cluster analysis is particularly suitable for large datasets.
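A minimal sketch of BIRCH-based outlier screening with scikit-learn is shown below; requesting 3 final clusters loosely mirrors the 2 clusters plus 1 outlier cluster of the depression example, but the synthetic points and all thresholds are assumptions.

```python
# BIRCH clustering as an outlier screen: flag very small clusters.
import numpy as np
from sklearn.cluster import Birch

rng = np.random.default_rng(3)
bulk = rng.normal(loc=[0, 0], scale=1.0, size=(200, 2))
outliers = rng.normal(loc=[8, 8], scale=0.5, size=(8, 2))   # far-off points
X = np.vstack([bulk, outliers])

birch = Birch(threshold=1.0, n_clusters=3).fit(X)
labels = birch.labels_

# Flag clusters holding fewer than 5% of all points as outlier clusters.
sizes = np.bincount(labels)
flagged = np.where(sizes < 0.05 * len(X))[0]
print("cluster sizes:", sizes)
print("outlier clusters:", flagged, "->", int(sizes[flagged].sum()), "points")
```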


Subject(s)
Biomedical Research , Data Interpretation, Statistical , Statistics as Topic , Biomedical Research/methods , Humans , Regression Analysis
4.
Am J Ther ; 22(1): e1-5, 2015.
Article in English | MEDLINE | ID: mdl-23896742

ABSTRACT

Robust tests can handle the inclusion of some outliers in a data file without large changes in the overall test results. Despite the risk of non-Gaussian data in clinical trials, robust tests are virtually never performed. The objective of this study was to review important robust tests and to assess whether they provide better sensitivity of testing than standard tests do. In a 33-patient study of frailty scores, no significant t value was obtained (P = 0.067). The following 4 robust tests were performed: (1) z test for medians and median absolute deviations, (2) z test for Winsorized variances, (3) Mood test, and (4) z test for M-estimators with bootstrap standard errors. They produced P values of, respectively, <0.0001, 0.043, <0.0001, and 0.005. Robust tests are wonderful for imperfect clinical data because they often produce statistically significant results whereas standard tests do not.
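Two of the four tests can be sketched in a few lines of Python; the median/MAD and Winsorized z formulations below are schematic versions under simple normal-approximation assumptions, and the 33 simulated frailty scores are not the study's data.

```python
# Robust one-sample tests versus the standard t test (schematic forms).
import numpy as np
from scipy import stats
from scipy.stats.mstats import winsorize

rng = np.random.default_rng(4)
scores = np.concatenate([rng.normal(5.6, 1.0, 30), [11.0, 12.5, 14.0]])
mu0 = 5.0                                            # hypothesized mean

t, p = stats.ttest_1samp(scores, mu0)
print(f"standard t test   : t = {t:.2f}, P = {p:.3f}")

# z test on the median with a MAD-based standard error (schematic):
mad = stats.median_abs_deviation(scores, scale="normal")
z_med = (np.median(scores) - mu0) / (1.2533 * mad / np.sqrt(scores.size))
print(f"median/MAD z test : z = {z_med:.2f}, P = {2 * stats.norm.sf(abs(z_med)):.4f}")

# z test on Winsorized data (extreme 10% of each tail pulled in):
w = winsorize(scores, limits=0.1)
z_win = (w.mean() - mu0) / (w.std(ddof=1) / np.sqrt(w.size))
print(f"Winsorized z test : z = {z_win:.2f}, P = {2 * stats.norm.sf(abs(z_win)):.4f}")
```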


Subject(s)
Clinical Trials as Topic/methods , Data Interpretation, Statistical , Statistics as Topic/methods , Humans , Normal Distribution , Statistics, Nonparametric
5.
Am J Ther ; 21(1): 20-5, 2014.
Article in English | MEDLINE | ID: mdl-21317764

ABSTRACT

Fuzzy logic can handle questions to which the answers may be "yes" at one time and "no" at another, or may be partially true and untrue. Pharmacodynamic data deal with questions such as "Does a patient respond to a particular drug dose or not?" or "Does a drug cause the same effects at the same time in the same subject or not?" Such questions are typically of a fuzzy nature and might, therefore, benefit from an analysis based on fuzzy logic. The objective was to assess whether fuzzy logic can improve the precision of predictive models for pharmacodynamic data. The methods and results were as follows: (1) The quantal pharmacodynamic effects of different induction dosages of thiopental on numbers of responding subjects were used as the first example. Regression analysis of the fuzzy-modeled outcome data on the input data provided a much better fit than did the unmodeled output values, with r-square values of 0.852 (F-value = 40.34) and 0.555 (F-value = 8.74), respectively. (2) The time-response effect of propranolol on peripheral arterial flow was used as a second example. Regression analysis of the fuzzy-modeled outcome data on the input data provided a better fit than did the unmodeled output values, with r-square values of 0.990 (F-value = 416) and 0.977 (F-value = 168), respectively. Fuzzy modeling may fit and predict pharmacodynamic data, such as quantal dose-response and time-response data, better than conventional statistical methods do. This may be relevant to future pharmacodynamic research.
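A minimal sketch of the general fuzzy-modeling idea (not the paper's exact procedure): fuzzify an input dose with triangular membership functions, attach an output level to each fuzzy set, and defuzzify by a weighted average. All membership parameters and dose-response numbers are illustrative assumptions.

```python
# Fuzzify -> weight outputs -> defuzzify: the basic fuzzy-model loop.
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership: 0 at a and c, 1 at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

doses = np.array([1.0, 2.0, 3.0, 4.0, 5.0])           # mg/kg, assumed
sets = [(0, 1, 3), (1, 3, 5), (3, 5, 7)]              # low / medium / high
set_outputs = np.array([10.0, 45.0, 80.0])            # % responders per set

for d in doses:
    weights = np.array([triangular(d, *abc) for abc in sets])
    predicted = (weights * set_outputs).sum() / weights.sum()
    print(f"dose {d:.0f}: memberships {np.round(weights, 2)} -> {predicted:.1f}% respond")
```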


Subject(s)
Fuzzy Logic , Models, Statistical , Pharmacokinetics , Adrenergic beta-Antagonists/pharmacology , Algorithms , Dose-Response Relationship, Drug , Forearm/blood supply , Humans , Nonlinear Dynamics , Predictive Value of Tests , Propranolol/pharmacology , Regional Blood Flow/drug effects , Regression Analysis
6.
Am J Ther ; 21(6): e175-80, 2014.
Article in English | MEDLINE | ID: mdl-23797755

ABSTRACT

Multistage regression is rarely used in therapeutic research, despite the multistage pattern of many medical conditions. Using an example of an efficacy study of a new laxative, path analysis and the 2-stage least squares method were compared with standard linear regression. Standard linear regression showed a significant effect of the predictor "noncompliance" on drug efficacy at P=0.005. However, after adjustment for the covariate "counseling," the magnitude of the regression coefficient fell from 0.70 to 0.29, and the P value rose to 0.10. Path analysis was valid given the significant correlation between the two predictors (P=0.024), and it produced an increase of the regression coefficient between "noncompliance" and "drug efficacy" by 60.0%. The 2-stage least squares method, using counseling as an instrumental variable, similarly produced an increase of the overall correlation by 66.7%. A bivariate path analysis with "quality of life" as the second outcome variable increased the magnitude of the path statistic by a further 47.1%, and thus made still better use of the predicting variables. We conclude that (1) the multistage regression methods used in the present article produced much better predictions of drug efficacy than did standard linear regression; (2) the inclusion of additional outcome variables enables still better use of the predicting variables; (3) multistage regression must always be preceded by usual linear regression to exclude weak predictors. We recommend that researchers analyzing efficacy data of new treatments more often apply multistage regression.
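The 2-stage least squares idea can be sketched directly with two ordinary regressions; the snippet below uses statsmodels on simulated data, with variable names echoing the example (noncompliance, counseling, efficacy) purely as assumptions.

```python
# Two-stage least squares with one instrumental variable.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 200
counseling = rng.normal(size=n)                      # instrument
noncompliance = 0.6 * counseling + rng.normal(size=n)
efficacy = 0.7 * noncompliance + rng.normal(size=n)

# Stage 1: regress the endogenous predictor on the instrument.
stage1 = sm.OLS(noncompliance, sm.add_constant(counseling)).fit()
fitted = stage1.fittedvalues

# Stage 2: regress the outcome on the stage-1 fitted values.
stage2 = sm.OLS(efficacy, sm.add_constant(fitted)).fit()
print(stage2.params)   # the 2SLS estimate of the noncompliance effect
```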


Subject(s)
Data Interpretation, Statistical , Regression Analysis , Research Design , Humans , Laxatives/therapeutic use , Least-Squares Analysis , Linear Models , Quality of Life
7.
Am J Ther ; 20(5): 514-9, 2013.
Article in English | MEDLINE | ID: mdl-21866042

ABSTRACT

In clinical research, missing data are common. Imputed data are not real data but constructed values that should increase the sensitivity of testing. Regression substitution for the purpose of data imputation often did not provide better sensitivity than did other methods. The objective of this study was to compare different methods of missing data imputation with regression substitution, taking into account particular quality measures. A real data example with a 105-value file was used. After randomly removing 5 values from the file, mean imputation and hot deck imputation were compared with regression substitution, taking account of the following requirements: (1) at least 2 independent variables be present in the equation, (2) no more than 1 datum per patient be missing, (3) no more than 5% of the data be missing, (4) more than 5% of the data be missing after randomly choosing 5% for regression-substitution deletion of the remainder, (5) only statistically significant variables be present in the regression model, and (6) no random errors be added to the imputed data. The test statistics after regression substitution were much better than those after the other 2 methods, with F-values of 44.1 vs 29.4 and 30.1, and t-values of 7.6 vs 5.6 and 5.7, and 3.0 vs 1.7 and 1.8. We conclude that regression substitution is a very sensitive method for imputing missing data, provided particular quality measures are taken into account.
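A minimal sketch of regression substitution under requirement (1), with at least 2 independent variables: fit a regression on the complete cases and replace each missing value with its fitted value. Column names and data are assumptions.

```python
# Regression substitution: impute missing outcomes from a fitted model.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
df = pd.DataFrame({"age": rng.normal(60, 8, 105), "dose": rng.normal(10, 2, 105)})
df["response"] = 0.4 * df["age"] + 1.5 * df["dose"] + rng.normal(0, 3, 105)
df.loc[rng.choice(105, 5, replace=False), "response"] = np.nan   # 5 missing values

# Fit on complete cases, then substitute fitted values for the gaps.
complete = df.dropna()
model = LinearRegression().fit(complete[["age", "dose"]], complete["response"])
missing = df["response"].isna()
df.loc[missing, "response"] = model.predict(df.loc[missing, ["age", "dose"]])
```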


Subject(s)
Clinical Trials as Topic/methods , Clinical Trials as Topic/statistics & numerical data , Research Design , Humans , Reproducibility of Results
8.
Clin Chem Lab Med ; 50(12): 2163-9, 2012 Dec.
Article in English | MEDLINE | ID: mdl-23093263

ABSTRACT

BACKGROUND: Seasonal patterns are assumed in many fields of medicine. However, biological processes are full of variations, and the possibility of chance findings can often not be ruled out. METHODS: Using simulated data, we assessed whether autocorrelation is helpful to minimize chance findings and to support the presence of seasonality. RESULTS: Autocorrelation required cutting the time curves into pieces. These pieces were then compared with one another using linear regression analysis. Four examples with imperfect data are given. In spite of substantial differences in the data between the first and second year of observation, and in spite of otherwise inconsistent patterns, significant positive autocorrelations were consistently demonstrated, with correlation coefficients around 0.40 (SE 0.14). CONCLUSIONS: Our data suggest that autocorrelation is helpful to support the presence of seasonality of disease, and that it does so even with imperfect data.
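The cut-and-compare step can be sketched as follows: split two years of monthly counts into year-long pieces and regress one piece on the other; a significant positive correlation supports seasonality. The monthly counts below are simulated assumptions.

```python
# Compare year-long pieces of a time curve by linear regression.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
season = 10 + 4 * np.cos(2 * np.pi * np.arange(12) / 12)   # winter peak
year1 = season + rng.normal(0, 1.5, 12)
year2 = season + rng.normal(0, 1.5, 12)

res = stats.linregress(year1, year2)
print(f"r = {res.rvalue:.2f}, P = {res.pvalue:.4f}")   # positive r supports seasonality
```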


Subject(s)
Biomedical Research , Seasons , Models, Theoretical
9.
Int J Clin Pharmacol Ther ; 50(2): 129-35, 2012 Feb.
Article in English | MEDLINE | ID: mdl-22257578

ABSTRACT

BACKGROUND: Bhattacharya modeling is a Gaussian method recommended in the Food and Agriculture Organization of the United Nations guidelines for analyzing ecosystems. It is rarely used in clinical research. OBJECTIVE: To investigate the performance of Bhattacharya modeling for clinical data analysis. METHODS: Using simulated vascular lab scores as examples, we assessed the performance of the Bhattacharya method. SPSS statistical software was used. RESULTS: (1) The Bhattacharya method fitted the data from a single sample better than did the usual Gaussian curve derived from the mean and standard deviation, with 15 vs. 9 cuts. (2) Bhattacharya models demonstrated a significant difference at p < 0.0001 between the data from two parallel groups, while the usual t-test and Mann-Whitney test were not significant at p = 0.051 and 0.085. (3) Bhattacharya modeling of a histogram suggestive of certain subsets identified three Gaussian curves. CONCLUSIONS: We recommend that Bhattacharya modeling be more often considered in clinical research for the purpose of (1) unmasking normal values of diagnostic tests, (2) improving the p-values of data testing, and (3) objectively searching for subsets in the data.
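The core of the Bhattacharya method can be sketched in a few lines: for a Gaussian, log f(x+h) - log f(x) is linear in x, so straight descending segments of this plot mark Gaussian components, and their slope and intercept recover sigma and mu. The single-Gaussian data and bin settings below are assumptions.

```python
# Bhattacharya plot: delta-log-frequency is linear in x for a Gaussian.
import numpy as np

rng = np.random.default_rng(9)
data = rng.normal(50, 5, 2000)                       # one Gaussian subset

counts, edges = np.histogram(data, bins=30)
h = edges[1] - edges[0]
mids = (edges[:-1] + edges[1:]) / 2

keep = (counts[:-1] > 0) & (counts[1:] > 0)          # avoid log(0)
y = np.log(counts[1:][keep]) - np.log(counts[:-1][keep])
x = mids[:-1][keep]

slope, intercept = np.polyfit(x, y, 1)
sigma = np.sqrt(-h / slope)                          # slope = -h / sigma^2
mu = -intercept / slope + h / 2                      # from the line's zero crossing
print(f"recovered mu = {mu:.1f}, sigma = {sigma:.2f}")
```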


Subject(s)
Data Interpretation, Statistical , Diagnostic Techniques and Procedures/statistics & numerical data , Models, Statistical , Biomedical Research/methods , Computer Simulation , Humans , Software , Statistics, Nonparametric
10.
Am J Ther ; 19(2): 101-7, 2012 Mar.
Article in English | MEDLINE | ID: mdl-20975529

ABSTRACT

Noninferiority trials have been criticized for their wide margins of noninferiority, making it virtually impossible to reject noninferiority. Recommendations have been given to replace the practice of arbitrarily set margins. The objective of this study was to review various alternative methods of assessment based on statistical reasoning. Four examples are given. (1) In a 300-patient parallel-group study of 2 inhalers for asthma, noninferiority was demonstrated at P = 0.0001. This result was supported by both the lack of a significant difference between the standard and new inhalers and the presence of a significant difference between the new inhaler and a placebo at P = 0.0001. (2) In a 236-patient parallel-group sleeping pill study, noninferiority was demonstrated at P = 0.04. The presence of noninferiority was supported by a significant superiority of the new compound against a placebo at P = 0.021. However, the significantly worse performance against the standard treatment undermined these findings. (3) In a 200-patient hypertension study of 2 treatment groups, noninferiority was demonstrated at P = 0.028. The presence of noninferiority was supported by the lack of a significant difference between the new and the standard treatment. However, these findings were undermined by the lack of superiority of the new compound against a placebo. (4) In a 160-patient parallel-group cholesterol study, noninferiority was demonstrated at P = 0.01. The presence of noninferiority was undermined by both the significant difference between the new and the standard treatment and the lack of efficacy of the new treatment against a placebo. We conclude that expert investigators traditionally set an arbitrary margin of noninferiority based on clinical arguments and that they benefit from wide margins. As an alternative and more meaningful approach to noninferiority testing, we propose to use (1) margins based on counted rather than arbitrary criteria, (2) null hypothesis tests between the new and standard treatments, and (3) null hypothesis tests between the new treatment and a placebo.
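For the margin-based step, a minimal sketch of a noninferiority z test for two response proportions is given below; the margin, sample sizes, and responder counts are all illustrative assumptions.

```python
# Noninferiority z test for two proportions against a fixed margin.
import numpy as np
from scipy import stats

margin = 0.10                       # largest acceptable loss in response rate
n_new, r_new = 150, 114             # new treatment: 76% responders (assumed)
n_std, r_std = 150, 117             # standard     : 78% responders (assumed)

p_new, p_std = r_new / n_new, r_std / n_std
se = np.sqrt(p_new * (1 - p_new) / n_new + p_std * (1 - p_std) / n_std)

# H0: p_std - p_new >= margin (new is inferior); reject for large z.
z = (p_new - p_std + margin) / se
print(f"z = {z:.2f}, one-sided P = {stats.norm.sf(z):.4f}")
```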


Subject(s)
Clinical Trials as Topic , Research Design , Statistics as Topic/methods , Asthma/drug therapy , Humans , Hypercholesterolemia/drug therapy , Hypertension/drug therapy , Sleep Initiation and Maintenance Disorders/drug therapy
11.
Am J Ther ; 19(1): e1-7, 2012 Jan.
Article in English | MEDLINE | ID: mdl-21048432

ABSTRACT

In current clinical research, repeated measures in a single subject are common. The problem with repeated measures is that they are closer to one another than unrepeated measures. If this is not taken into account, then data analysis will lose power. In the past decade, user-friendly statistical software programs such as SAS and SPSS have enabled the application of mixed models as an alternative to the classical general linear model for repeated measures, sometimes with better sensitivity. The objective was to assess whether, in studies with repeated measures designed to test between-subject differences, the mixed model performs better than the general linear model does. In a parallel-group study of cholesterol-reducing treatments with 5 evaluations per patient, the mixed model performed much better than did the general linear model, with P values of 0.0001 and 0.048, respectively. In a crossover study of 3 treatments for sleeplessness, the mixed model and general linear model performed similarly well, with P values of 0.005 and 0.010. Mixed models do indeed seem to produce better sensitivity of testing when there are small within-subject differences and large between-subject differences, and when the main objective of the research is to demonstrate between- rather than within-subject differences. The mixed model may be more complex, yet with modern user-friendly statistical software its use is straightforward, and its software commands are no more complex than they are with standard methods. We hope that this article will encourage clinical researchers to make use of its benefits more often.
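A minimal sketch of such a mixed model, with a random intercept per subject, is shown below using statsmodels rather than SAS or SPSS; the long-format layout and all simulated values are assumptions.

```python
# Linear mixed model with a random intercept for each subject.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n_subj, n_visits = 40, 5
subj = np.repeat(np.arange(n_subj), n_visits)
group = np.repeat(rng.integers(0, 2, n_subj), n_visits)     # treatment arm
subj_eff = np.repeat(rng.normal(0, 1.0, n_subj), n_visits)  # between-subject
chol = 6.0 - 0.5 * group + subj_eff + rng.normal(0, 0.3, subj.size)

df = pd.DataFrame({"subject": subj, "group": group, "chol": chol})
model = smf.mixedlm("chol ~ group", df, groups=df["subject"]).fit()
print(model.summary())
```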


Subject(s)
Biomedical Research/methods , Linear Models , Models, Statistical , Adult , Aged , Analysis of Variance , Cross-Over Studies , Double-Blind Method , Female , Humans , Male , Middle Aged , Software
12.
Am J Ther ; 19(4): 287-93, 2012 Jul.
Article in English | MEDLINE | ID: mdl-20634677

ABSTRACT

A major objective of clinical research is to study outcome effects in subgroups. Such effects generally have stepping functions that are not strictly linear. Analyzing stepping functions in linear models thus raises the risk of underestimating the effects. In the past few years, recoding subgroup properties from continuous variables into categorical ones has been recommended as a solution to the problem. The objectives of this study were to demonstrate from examples how recoding works and to show that stepping functions, if used as continuous variables, do not produce significant effects, whereas they produce very significant effects after recoding. In the first example, the effects on physical strength were assessed in 60 subjects of different races. A linear regression in SPSS with race as the independent and physical strength score as the dependent variable showed that race was not a significant predictor of physical strength. Recoding the variable race into categorical dummy variables showed that, compared with Hispanic race, the black and white races were significant positive predictors (P = 0.0001 and 0.004, respectively) and the Asian race was a significant negative predictor (P = 0.050). In the second example, the effects of numbers of comedications on admissions to a hospital resulting from adverse drug effects were assessed. A logistic regression in SPSS with the number of comedications as the independent variable showed that comedication was not a significant predictor of iatrogenic admission. After again recoding into categorical dummy variables, comedication was a very significant predictor of iatrogenic admission, with P = 0.004. Categorical variables are currently rarely analyzed in a proper way; mostly, they are analyzed in the form of continuous variables. This approach does not always fit the data patterns, causing negative results, as demonstrated in the examples of this article. We recommend that such variables be recoded into categorical dummy variables.
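A minimal sketch of the recoding step in Python: the categorical predictor is expanded into dummy variables inside the regression formula, with Hispanic race as the reference category, echoing the first example; the simulated data and effect sizes are assumptions.

```python
# Recoding a nominal predictor into dummy variables before regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(12)
race = rng.choice(["hispanic", "black", "white", "asian"], size=60)
effect = {"hispanic": 0.0, "black": 1.2, "white": 0.8, "asian": -0.6}
strength = np.array([effect[r] for r in race]) + rng.normal(0, 0.8, 60)

df = pd.DataFrame({"race": race, "strength": strength})
# C(race, ...) expands the factor into dummies with Hispanic as reference.
fit = smf.ols("strength ~ C(race, Treatment(reference='hispanic'))", df).fit()
print(fit.params.round(2))
```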


Subject(s)
Biomedical Research/methods , Linear Models , Research Design , Drug-Related Side Effects and Adverse Reactions , Female , Hospitalization/statistics & numerical data , Humans , Male , Outcome Assessment, Health Care/methods , Racial Groups/statistics & numerical data
13.
Clin Chem Lab Med ; 50(1): 73-6, 2011 Sep 26.
Article in English | MEDLINE | ID: mdl-21942849

ABSTRACT

BACKGROUND: Qualitative diagnostic tests commonly produce false positive and false negative results. Smooth receiver operating characteristic (ROC) curves are used for assessing the performance of a new test against a standard test. This method, called the c-statistic (concordance statistic), has limitations. The aim of this study was to assess whether logistic regression, with the odds of disease as the outcome and the test scores as a covariate, can be used as an alternative approach, and to compare the performance of the two methods. METHODS: Using simulated vascular laboratory scores as examples, we assessed the performance of logistic regression as compared with the c-statistic. RESULTS: The c-statistics produced areas under the curve (AUCs) of, respectively, 0.954 and 0.969 (standard errors 0.007 and 0.005), a mean difference of 0.015 with a pooled standard error of 0.0086. This meant that the new test was not significantly different from the standard test, at p = 0.08. Logistic regression of these data, with presence of disease as a dependent and vascular laboratory scores as an independent variable, produced regression coefficients of 0.45 and 0.58 with standard errors of, respectively, 0.04 and 0.05. This meant that the new test was a significantly better predictor of disease than the standard test, at p = 0.04. CONCLUSIONS: Logistic regression with presence of disease as a dependent and test scores as an independent variable was better than the c-statistic for assessing qualitative diagnostic tests. This may be relevant to future diagnostic research.
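Both approaches can be sketched side by side on simulated scores: an ROC AUC (c-statistic) per test, and a logistic regression of disease status on each test's score. All data are assumptions.

```python
# c-statistic (ROC AUC) versus logistic regression for two tests.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(13)
n = 300
disease = rng.integers(0, 2, n)
score_std = disease * 1.5 + rng.normal(0, 1, n)     # standard test
score_new = disease * 1.9 + rng.normal(0, 1, n)     # new test

for name, score in [("standard", score_std), ("new", score_new)]:
    auc = roc_auc_score(disease, score)
    logit = sm.Logit(disease, sm.add_constant(score)).fit(disp=0)
    b, se = logit.params[1], logit.bse[1]
    print(f"{name}: AUC = {auc:.3f}, logit b = {b:.2f} (SE {se:.2f})")
```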


Subject(s)
Data Interpretation, Statistical , Diagnostic Tests, Routine/methods , Humans , Logistic Models , ROC Curve
14.
Am J Ther ; 18(6): 458-62, 2011 Nov.
Article in English | MEDLINE | ID: mdl-20535012

ABSTRACT

A nationwide survey in the Netherlands among 600 randomly sampled practitioners revealed that the advice to (1) quit smoking, (2) reduce alcohol, (3) eat a healthy diet, and (4) engage in physical activities was given by only 76%, 26%, 44%, and 61% of the practitioners, respectively. The objective was to confirm these data and to study the effects of the practitioners' personal characteristics and of their participation in a survey. All general practitioners in the area of Dordrecht in the Netherlands, with 350,000 inhabitants, were invited to participate. Self-administered questionnaires included questions about non-pharmaceutical treatment recommendations given, about blood pressure increasing factors including blood pressure increasing medicines, and about healthy lifestyle. After 1 year, the survey was repeated among the practitioners who completed the first one. The current survey produced results largely similar to those of the nationwide survey. The combined results were as follows: among 281 practitioners, a quit smoking advice was given by 82%, a reduce alcohol advice by 47%, a healthy diet advice by 51%, and a physical activities advice by 73% of the practitioners, with 95% confidence intervals of, respectively, 75%-84%, 38%-49%, 41%-53%, and 64%-75%. Country physicians and older physicians were more active in giving nondrug treatments, with P-values of <0.02 to <0.05. Increased blood pressure as a side effect of concomitant medications was virtually never addressed. After the survey, 26 practitioners (24.8%, P < 0.001) had started lifestyle recommendations.


Subject(s)
Diet Therapy , Exercise Therapy , Hypertension/therapy , Patient Education as Topic , Practice Patterns, Physicians'/statistics & numerical data , Smoking Cessation , Age Factors , Female , General Practice/statistics & numerical data , Humans , Male , Netherlands , Risk Reduction Behavior , Rural Population , Surveys and Questionnaires , Urban Population
15.
Eur J Clin Invest ; 40(10): 911-7, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20678119

ABSTRACT

BACKGROUND: Item response models using exponential modelling are more sensitive than classical linear methods for making predictions from psychological questionnaires. OBJECTIVE: To assess whether they can also be used for making predictions from quality of life questionnaires and from clinical and laboratory diagnostic tests. METHODS: In 1000 anginal patients assessed for quality of life and 1350 patients assessed for peripheral vascular disease with diagnostic laboratory tests, item response modelling was applied using the Latent Trait Analysis-2 program of Uebersax. RESULTS: The 32 different response patterns obtained from test batteries of five items produced 32 different quality of life scores ranging from 3.4% to 74.5% and 32 different levels of peripheral vascular disease ranging from 9.9% to 83.5%, with overall mean scores, by definition, of 50%, whereas the classical method of analysis produced only the discrete scores 0-5. The item response models produced an adequate fit for the data, as demonstrated by chi-square goodness of fit values/degrees of freedom of 0.86 and 0.64. CONCLUSIONS: (1) Quality of life assessments and diagnostic tests can be analysed through item response modelling, which provides more sensitivity than classical linear models do. (2) Item response modelling can change largely qualitative data into fairly accurate quantitative data and can, even with limited sets of items, produce fairly accurate frequency distribution patterns of quality of life, severity of disease, and other latent traits.
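The exponential model behind such analyses can be illustrated with the two-parameter logistic item response function; the sketch below scores one response pattern on a five-item battery over a grid of latent trait levels, with all item parameters as illustrative assumptions (this is not Uebersax's LTA-2 program).

```python
# Two-parameter logistic (2PL) item response model, scored on a grid.
import numpy as np

def p_endorse(theta, a, b):
    """2PL item characteristic curve: P(endorse | trait theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])     # item discriminations (assumed)
b = np.array([-1.0, -0.3, 0.0, 0.6, 1.4])   # item difficulties (assumed)

# Likelihood of one observed response pattern over a grid of trait levels.
pattern = np.array([1, 1, 0, 1, 0])
thetas = np.linspace(-3, 3, 121)
P = p_endorse(thetas[:, None], a, b)
lik = np.prod(np.where(pattern, P, 1 - P), axis=1)
print("most likely theta:", thetas[lik.argmax()].round(2))
```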


Subject(s)
Quality of Life , Research Design , Female , Humans , Male , Psychometrics , Reproducibility of Results , Surveys and Questionnaires
16.
Am J Ther ; 17(6): e202-7, 2010.
Article in English | MEDLINE | ID: mdl-20393346

ABSTRACT

Individual patients' predictors of survival may change across time, because people may change their lifestyles. Standard statistical methods do not allow adjustments for time-dependent predictors. In the past decade, time-dependent factor analysis has been introduced as a novel approach adequate for the purpose. Using examples from survival studies, we assess the performance of the novel method. SPSS statistical software is used (SPSS Inc., Chicago, IL). Cox regression is a major simplification of real life; it assumes that the ratio of the risks of dying in parallel groups is constant over time. It is, therefore, inadequate to analyze, for example, the effect of elevated low-density lipoprotein cholesterol on survival, because the relative hazard of dying is different in the first, second, and third decades. The time-dependent Cox regression model allowing for nonproportional hazards was applied and provided better precision than the usual Cox regression (P = 0.0001 versus 0.117). Elevated blood pressure produces the highest risk at the time it is highest. An overall analysis of the effect of blood pressure on survival was not significant, but after adjustment for the periods with the highest blood pressures using the segmented time-dependent Cox regression method, blood pressure was a significant predictor of survival (P = 0.04). In a long-term therapeutic study, treatment modality was a significant predictor of survival, but after the inclusion of the time-dependent low-density lipoprotein cholesterol variable, the precision of the estimate improved from a P value of 0.02 to 0.0001. Predictors of survival may change across time, e.g., the effects of smoking, cholesterol, and increased blood pressure in cardiovascular research and of patients' frailty in oncology research. Analytical models for survival analysis adjusting for such changes are welcome. The time-dependent and segmented time-dependent predictors are adequate for the purpose. The usual multiple Cox regression model can include both time-dependent and time-independent predictors.
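A minimal sketch of a time-dependent Cox model, using the lifelines library in Python rather than SPSS: the data are laid out in long format with one row per interval and the covariate re-measured per interval. The tiny data frame and all values are assumptions.

```python
# Time-dependent Cox regression via lifelines' CoxTimeVaryingFitter.
import pandas as pd
from lifelines import CoxTimeVaryingFitter

df = pd.DataFrame({
    "id":    [1, 1, 2, 3, 4],
    "start": [0, 5, 0, 0, 0],
    "stop":  [5, 9, 8, 7, 9],
    "ldl":   [4.1, 5.0, 5.8, 5.6, 4.0],   # time-varying covariate (assumed)
    "event": [0, 1, 0, 1, 0],             # death at the end of the interval
})

ctv = CoxTimeVaryingFitter()
ctv.fit(df, id_col="id", event_col="event", start_col="start", stop_col="stop")
ctv.print_summary()
```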


Subject(s)
Biomedical Research/statistics & numerical data , Software , Survival Analysis , Data Interpretation, Statistical , Humans , Proportional Hazards Models , Time Factors
17.
Heart Int ; 5(1): e9, 2010 Jun 23.
Article in English | MEDLINE | ID: mdl-21977294

ABSTRACT

Biological processes are full of variations, and so are responses to therapy as measured in clinical research. Estimators of clinical efficacy are, therefore, usually reported with a measure of uncertainty, otherwise called dispersion. This study aimed to review both the flaws of data reports without a measure of dispersion and those with over-dispersion. Examples of estimators commonly reported without a measure of dispersion include: number needed to treat; reproducibility of quantitative diagnostic tests; sensitivity/specificity; Markov predictors; and risk profiles predicted from multiple logistic models. Data with large differences between response magnitudes can be assessed for over-dispersion by goodness of fit tests. The χ² goodness of fit test allows adjustment for over-dispersion. For most clinical estimators, the calculation of standard errors or confidence intervals is possible. Sometimes, the choice is deliberately made not to use the data fully, but to skip the standard errors and to use the summary measures only. The problem with this approach is that it may suggest inflated results. We recommend that analytical methods in clinical research should always attempt to include a measure of dispersion in the data. When large differences exist in the data, the presence of over-dispersion should be assessed and appropriate adjustments made.
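A minimal sketch of such an over-dispersion check for count data: the Pearson chi-square divided by its degrees of freedom should be near 1 for a well-fitting Poisson model. The monthly counts are assumptions.

```python
# Over-dispersion check: Pearson chi-square / degrees of freedom.
import numpy as np
from scipy import stats

counts = np.array([2, 9, 3, 14, 1, 11, 4, 16, 2, 12])   # monthly event counts
expected = np.full(counts.size, counts.mean())           # plain Poisson fit

pearson_chi2 = ((counts - expected) ** 2 / expected).sum()
dof = counts.size - 1
print(f"dispersion = {pearson_chi2 / dof:.2f}")          # >> 1 => over-dispersed
print(f"goodness-of-fit P = {stats.chi2.sf(pearson_chi2, dof):.4f}")
```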

18.
Clin Chem Lab Med ; 48(2): 159-65, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20001439

ABSTRACT

BACKGROUND: Back propagation (BP) artificial neural networks are a distribution-free method for data analysis based on layers of artificial neurons that transduce input information. They have been recognized as having a number of advantages compared with traditional methods, including the ability to process imperfect data and complex nonlinear data. The objective of this study was to review the principles, procedures, and limitations of BP artificial neural networks for a non-mathematical readership. METHODS: A real data sample of weight, height, and measured body surface area from 90 individuals was used as an example. SPSS 17.0 with the neural network add-on was used for the analysis. The body surface area predicted by a two-hidden-layer BP neural network was compared with the body surface area calculated by the Haycock equation. RESULTS: Both the predicted values from the neural network and those from the Haycock equation were close to the measured values. A linear regression analysis with the neural network as predictor produced an r²-value of 0.983, while the Haycock equation produced an r²-value of 0.995 (r² > 0.95 is a criterion for accurate diagnostic testing). CONCLUSIONS: BP neural networks may sometimes predict clinical diagnoses with accuracies similar to those of other methods. However, traditional statistical procedures, such as regression analyses, need to be added to test their accuracy against alternative methods. Nonetheless, BP neural networks have great potential through their ability to learn by example instead of learning by theory.
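A minimal sketch of the comparison with scikit-learn instead of SPSS: a small two-hidden-layer back-propagation network is trained on weight and height and compared with the Haycock equation; the 90 simulated individuals are generated around that equation and are not the study's measurements.

```python
# Two-hidden-layer BP network versus the Haycock body-surface equation.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def haycock(weight_kg, height_cm):
    return 0.024265 * weight_kg**0.5378 * height_cm**0.3964

rng = np.random.default_rng(17)
weight = rng.uniform(40, 110, 90)
height = rng.uniform(150, 195, 90)
bsa = haycock(weight, height) * rng.normal(1.0, 0.02, 90)   # "measured" BSA

X = np.column_stack([weight, height])
net = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(8, 4), max_iter=5000,
                                 random_state=0)).fit(X, bsa)

r2_net = np.corrcoef(net.predict(X), bsa)[0, 1] ** 2
r2_hay = np.corrcoef(haycock(weight, height), bsa)[0, 1] ** 2
print(f"network r^2 = {r2_net:.3f}   (Haycock r^2 = {r2_hay:.3f})")
```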


Subject(s)
Artificial Intelligence , Diagnostic Techniques and Procedures , Neural Networks, Computer , Predictive Value of Tests , Prognosis , Regression Analysis
19.
Clin Chem Lab Med ; 47(11): 1351-4, 2009.
Article in English | MEDLINE | ID: mdl-19817647

ABSTRACT

BACKGROUND: Diagnostic reviews often include the sensitivity/specificity results of individual studies. A problem occurs when these data are pooled, because the correlation between sensitivity and specificity is generally strongly negative, causing overestimation of the pooled results. The diagnostic odds ratio (DOR), defined as the odds of true positives vs. that of false positives, may avoid this problem. The aim of the study was to review the advantages and limitations of DORs. METHODS: A systematic review of 44 previously published diagnostic studies was used as an example. RESULTS: DORs can be readily implemented in diagnostic research. Advantages include: (1) they adjust for the negative and curvilinear correlations between sensitivities and specificities, (2) they take account of the heterogeneity between studies with respect to the different thresholds chosen by the investigators in the original studies, and (3) it is easy to extend the model with covariates representing between-study differences in design. Limitations include: (1) the outcome parameter is a summary estimate of both sensitivity and specificity, and (2) the sample sizes of the included studies are not taken into account. CONCLUSIONS: Reported sensitivities and specificities of different studies assessing similar diagnostic tests are not only negatively correlated, but negatively correlated in a curvilinear manner. It is appropriate to take this negative curvilinear correlation into account in the data pooling of such meta-analyses. DORs can be applied for that purpose.
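A minimal sketch of DORs with an inverse-variance pooled estimate on the log scale; the 2x2 counts of the three hypothetical studies are assumptions, not the 44 reviewed studies.

```python
# Diagnostic odds ratios pooled by inverse-variance weighting of log DORs.
import numpy as np

# (TP, FP, FN, TN) per study (assumed counts)
studies = np.array([[45, 10, 5, 40], [80, 25, 20, 75], [30, 6, 9, 55]])

tp, fp, fn, tn = studies.T.astype(float)
log_dor = np.log((tp * tn) / (fp * fn))
var = 1 / tp + 1 / fp + 1 / fn + 1 / tn          # Woolf variance of log DOR

w = 1 / var
pooled = (w * log_dor).sum() / w.sum()
se = np.sqrt(1 / w.sum())
ci = np.exp(pooled + np.array([-1.96, 1.96]) * se)
print(f"pooled DOR = {np.exp(pooled):.1f} (95% CI {ci[0]:.1f}-{ci[1]:.1f})")
```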


Subject(s)
Diagnostic Techniques and Procedures , Humans , Odds Ratio , ROC Curve , Sensitivity and Specificity
20.
J Card Fail ; 15(4): 305-9, 2009 May.
Article in English | MEDLINE | ID: mdl-19398078

ABSTRACT

BACKGROUND: Despite recent successes in improving mortality from congestive heart failure (CHF) with drugs and devices, several reports suggest increased mortality among CHF subjects with diabetes. Our objective was to conduct a meta-analysis to determine the aggregate risk of mortality and hospitalization in CHF from systolic dysfunction and diabetes. METHODS AND RESULTS: Observational and randomized trials reporting on CHF and mortality in diabetes since 2001 were identified through MEDLINE and Cochrane database searches and hand searching of cross-references. The minimum follow-up of the study cohort had to be at least 6 months. Studies with very small sample sizes (n < 200) were excluded. The major outcome measure, mortality, and the secondary outcome measure, CHF hospitalization, were extracted from published results. Analysis was done for composite mortality and hospitalization risk, heterogeneity, robustness, and publication bias. A total of 17 trials (n = 39,505 subjects) were eligible. There were a total of 10,068 deaths, with 3615 among diabetics, from the available data. The relative risk was significantly higher for diabetics, by 28% (95% CI 1.22-1.34, P < .0001). Similarly, the pooled relative risk for hospitalization was significantly higher for diabetics, by 36% (95% CI 1.26-1.48, P < .0001). Heterogeneity was present (P < .01) and was accounted for by the observational studies. There was no significant publication bias, and lack of robustness was not obvious. CONCLUSIONS: The aggregate mortality and recurrent hospitalization risks for diabetic subjects with CHF are 28% and 36% higher than for nondiabetic subjects. Future trials should specifically focus on improving survival in these subjects.
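The pooling step can be sketched with fixed-effect inverse-variance weighting of log relative risks; the three study rows below are illustrative assumptions, not the 17 eligible trials.

```python
# Fixed-effect inverse-variance pooling of log relative risks.
import numpy as np

# events/total in diabetic and nondiabetic CHF patients, per study (assumed)
diab = np.array([[120, 400], [310, 900], [95, 350]])
nond = np.array([[280, 1300], [700, 2900], [240, 1200]])

rr = (diab[:, 0] / diab[:, 1]) / (nond[:, 0] / nond[:, 1])
log_rr = np.log(rr)
var = (1 / diab[:, 0] - 1 / diab[:, 1]) + (1 / nond[:, 0] - 1 / nond[:, 1])

w = 1 / var
pooled = np.exp((w * log_rr).sum() / w.sum())
se = np.sqrt(1 / w.sum())
ci = np.exp(np.log(pooled) + np.array([-1.96, 1.96]) * se)
print(f"pooled RR = {pooled:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```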


Subject(s)
Diabetes Mellitus, Type 2/mortality , Heart Failure, Systolic/mortality , Diabetes Mellitus, Type 2/therapy , Heart Failure/etiology , Heart Failure/mortality , Heart Failure/therapy , Heart Failure, Systolic/complications , Heart Failure, Systolic/therapy , Hospitalization/trends , Humans , Prospective Studies , Randomized Controlled Trials as Topic/methods