Results 1 - 14 of 14
1.
Sichuan Mental Health ; (6): 418-423, 2022.
Article in Chinese | WPRIM | ID: wpr-987373

ABSTRACT

The purpose of this paper was to introduce how to set variable-level options and multimodal covariates, and to demonstrate, through examples, causal mediation effect analysis with the odds ratio (OR) and the excess relative risk (ERR) as evaluation indicators. Variable-level options for treatment variables, mediator variables and covariates can be set through the EVALUATE statement. Categorical variables and their interaction terms can be treated as multimodal covariates, and variable levels can also be set for them with the EVALUATE statement. Working through an example, the paper used SAS to carry out causal mediation effect analysis and the decomposition of effect components with OR and ERR as the evaluation indicators.
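
The paper works in SAS; purely as an illustration of the counterfactual decomposition it describes, the sketch below estimates natural direct, indirect and total effects on the OR scale (and a total-effect ERR) by Monte Carlo standardization in Python. The simulated data, variable names and model forms are hypothetical, not taken from the paper.

```python
# Illustrative sketch (not the SAS PROC CAUSALMED workflow the paper describes):
# Monte Carlo counterfactual decomposition of a mediation effect on the odds
# ratio scale, with binary treatment A, binary mediator M, binary outcome Y
# and one covariate C. All names and the simulated data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
c = rng.normal(size=n)
a = rng.binomial(1, 0.5, size=n)
m = rng.binomial(1, 1 / (1 + np.exp(-(-0.5 + 0.8 * a + 0.3 * c))))
y = rng.binomial(1, 1 / (1 + np.exp(-(-2.0 + 0.6 * a + 0.9 * m + 0.3 * c))))
df = pd.DataFrame({"A": a, "M": m, "Y": y, "C": c})

# Parametric working models for the mediator and the outcome.
m_model = smf.logit("M ~ A + C", data=df).fit(disp=0)
y_model = smf.logit("Y ~ A + M + C", data=df).fit(disp=0)

def mean_counterfactual(a_out, a_med, draws=100):
    """Average P(Y=1) with treatment set to a_out in the outcome model and the
    mediator drawn from its distribution under treatment a_med."""
    probs = []
    for _ in range(draws):
        d = df.copy()
        d["A"] = a_med
        m_draw = rng.binomial(1, m_model.predict(d))
        d["A"], d["M"] = a_out, m_draw
        probs.append(y_model.predict(d).mean())
    return np.mean(probs)

p11 = mean_counterfactual(1, 1)   # treated, mediator under treatment
p10 = mean_counterfactual(1, 0)   # treated, mediator under control
p00 = mean_counterfactual(0, 0)   # control, mediator under control

odds = lambda p: p / (1 - p)
print("OR (natural direct effect):  ", odds(p10) / odds(p00))
print("OR (natural indirect effect):", odds(p11) / odds(p10))
print("OR (total effect):           ", odds(p11) / odds(p00))
print("Excess relative risk (total):", p11 / p00 - 1)
```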

2.
Sichuan Mental Health ; (6): 412-417, 2022.
Article in Chinese | WPRIM | ID: wpr-987372

ABSTRACT

The purpose of this paper was to introduce how to set the three types of variable levels in causal mediation effect analysis and how to carry out the calculation under stratification using SAS. The three types of variable levels are the levels of the treatment variable, the mediator variable and the covariates; a specific level combination can also be set for two variables jointly. With the help of the EVALUATE statement in the PROC CAUSALMED procedure, this paper worked through an example of causal mediation effect analysis under different variable stratifications and presented the output results with explanations.
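
To show what setting variable levels buys you, the following minimal sketch evaluates mediation odds ratios with the covariate fixed at chosen levels (i.e., stratum-specific effects) instead of averaging over it. The logistic coefficients are hypothetical stand-ins for fitted model output, not values from the paper.

```python
# Minimal sketch of evaluating effects at user-chosen variable levels (the idea
# behind the EVALUATE statement described above), with purely hypothetical
# logistic-model coefficients rather than SAS output.
import numpy as np

def expit(x):
    return 1 / (1 + np.exp(-x))

# Hypothetical outcome model: logit P(Y=1) = b0 + b1*A + b2*M + b3*C
b0, b1, b2, b3 = -2.0, 0.6, 0.9, 0.3
# Hypothetical mediator model: logit P(M=1) = g0 + g1*A + g2*C
g0, g1, g2 = -0.5, 0.8, 0.3

def p_outcome(a_out, a_med, c):
    """P(Y=1) with treatment fixed at a_out, the mediator drawn under a_med,
    and the covariate fixed at the chosen level c (a stratum-specific value)."""
    p_m = expit(g0 + g1 * a_med + g2 * c)
    return (p_m * expit(b0 + b1 * a_out + b2 * 1 + b3 * c)
            + (1 - p_m) * expit(b0 + b1 * a_out + b2 * 0 + b3 * c))

odds = lambda p: p / (1 - p)
for c_level in (-1.0, 0.0, 1.0):          # covariate levels (strata) to evaluate at
    or_nde = odds(p_outcome(1, 0, c_level)) / odds(p_outcome(0, 0, c_level))
    or_nie = odds(p_outcome(1, 1, c_level)) / odds(p_outcome(1, 0, c_level))
    print(f"C = {c_level:+.1f}:  OR_NDE = {or_nde:.3f}  OR_NIE = {or_nie:.3f}")
```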

3.
Sichuan Mental Health ; (6): 404-410, 2021.
Article in Chinese | WPRIM | ID: wpr-987479

ABSTRACT

The purpose of the paper was to introduce three special tests for survival data and their SAS implementation: multiple comparisons, the trend test and the covariate test. Multiple comparisons involve two situations, pairwise comparisons and comparisons with a control group. The trend test involves two algorithms, the log-rank test and the Wilcoxon test. The covariate test involves a single-covariate test and a multi-covariate test in which covariates are added one at a time. With the help of SAS and based on an example, this article implemented the three special tests, explained the output, and drew statistical and subject-matter conclusions.
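
The article carries out these tests in SAS; as a rough counterpart, the sketch below runs pairwise log-rank comparisons and a likelihood-ratio test for adding a single covariate to a Cox model using the Python lifelines package. The simulated data and column names are hypothetical, and the trend test is omitted.

```python
# Rough Python analogue of two of the tests the paper runs in SAS: pairwise
# log-rank comparisons between groups, and a likelihood-ratio test for adding
# one covariate to a Cox model. Data and column names are hypothetical.
import numpy as np
import pandas as pd
from scipy import stats
from lifelines import CoxPHFitter
from lifelines.statistics import multivariate_logrank_test, pairwise_logrank_test

rng = np.random.default_rng(7)
n = 300
group = rng.choice(["A", "B", "C"], size=n)
age = rng.normal(60, 10, size=n)
scale = {"A": 10.0, "B": 8.0, "C": 6.0}
time = rng.exponential([scale[g] for g in group]) * np.exp(-0.01 * (age - 60))
event = rng.binomial(1, 0.8, size=n)       # 1 = event observed, 0 = censored
df = pd.DataFrame({"time": time, "event": event, "group": group, "age": age})

# Overall and pairwise log-rank comparisons among the three groups.
print(multivariate_logrank_test(df["time"], df["group"], df["event"]).summary)
print(pairwise_logrank_test(df["time"], df["group"], df["event"]).summary)

# Covariate test: likelihood-ratio test for adding 'age' to a Cox model that
# already codes the group effect with dummy variables.
X = pd.get_dummies(df[["time", "event", "group", "age"]],
                   columns=["group"], drop_first=True, dtype=float)
reduced = CoxPHFitter().fit(X.drop(columns="age"), duration_col="time", event_col="event")
full = CoxPHFitter().fit(X, duration_col="time", event_col="event")
lr = 2 * (full.log_likelihood_ - reduced.log_likelihood_)
print("LR statistic:", lr, " p =", stats.chi2.sf(lr, df=1))
```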

4.
Chinese Critical Care Medicine ; (12): 1237-1242, 2021.
Article in Chinese | WPRIM | ID: wpr-931755

ABSTRACT

Objective: To study the influence of the time-dependent acute physiology and chronic health evaluation Ⅱ (APACHE Ⅱ) score on the 14-day death risk of patients with severe stroke, and to provide a reference for clinical diagnosis and treatment. Methods: Data of 3 229 patients with severe stroke were enrolled from the Medical Information Mart for Intensive Care-Ⅲ (MIMIC-Ⅲ). According to the main type of stroke, the patients were divided into subarachnoid hemorrhage (SAH), intracerebral hemorrhage (ICH), ischemic stroke (IS) and other groups. According to age, patients were divided into > 60 years old and ≤ 60 years old subgroups, and according to the baseline sequential organ failure assessment (SOFA) score into > 3 and ≤ 3 subgroups. The daily APACHE Ⅱ score of each patient was recorded, and all-cause death within 14 days after admission to the intensive care unit (ICU) was used as the outcome to obtain each patient's survival status and survival time. Joint models for longitudinal and time-to-event data were established to evaluate the effect of the APACHE Ⅱ score measured at multiple time points on the death risk, and subgroup analyses were performed. Results: Among the joint models, the one that included the APACHE Ⅱ score and the interaction term between the APACHE Ⅱ score and age showed the better fit. Further analysis showed that the APACHE Ⅱ score was affected by age, gender, hospital admission, baseline SOFA score and smoking history. After controlling for these confounding factors, the APACHE Ⅱ score was significantly associated with 14-day all-cause death in patients with severe stroke [hazard ratio (HR) = 1.48, 95% confidence interval (95%CI) 1.31-1.66, P < 0.001], indicating that the risk of death increased by 48% (95%CI 31%-66%) for each 1-point increase in the APACHE Ⅱ score. Subgroup analysis showed that, across stroke types, the APACHE Ⅱ score had a greater impact on the 14-day death risk in SAH patients (HR = 1.43, 95%CI 1.10-1.85) and a smaller impact in the ICH and IS groups [HR (95%CI) 1.37 (1.15-1.64) and 1.35 (1.06-1.71), respectively]. There was no significant difference in the effect of the APACHE Ⅱ score on the 14-day death risk between patients aged > 60 years and those aged ≤ 60 years [HR (95%CI): 1.37 (1.08-1.72) vs. 1.35 (1.07-1.70)]. Compared with patients with a SOFA score > 3, the APACHE Ⅱ score had a greater impact on the 14-day death risk in patients with a SOFA score ≤ 3 [HR (95%CI): 1.40 (1.16-1.70) vs. 1.34 (1.16-1.55)]. Conclusion: The time-dependent APACHE Ⅱ score is an important indicator for evaluating the risk of death in patients with severe stroke.
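
A full joint model of the longitudinal score and survival is usually fitted with specialized software (the study's approach); a simpler, commonly used approximation enters the daily score as a time-varying covariate in a Cox model. The sketch below, with hypothetical simulated data, shows that layout using lifelines.

```python
# Not the joint longitudinal/survival model the study fits; a simpler
# approximation with the repeatedly measured score as a time-varying covariate
# in a Cox model. Long-format layout, column names and values are hypothetical.
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(3)
rows = []
for pid in range(200):
    score = rng.normal(20, 4)                        # baseline APACHE II-like score
    for day in range(14):
        score += rng.normal(0, 1.5)                  # daily re-measured score
        hazard = 0.02 * np.exp(0.08 * (score - 20))  # higher score -> higher daily risk
        died = rng.random() < hazard
        rows.append({"id": pid, "start": day, "stop": day + 1,
                     "score": score, "event": int(died)})
        if died:
            break
long_df = pd.DataFrame(rows)

# One row per patient per day at risk; 'score' is the value carried over that interval.
ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", start_col="start", stop_col="stop", event_col="event")
ctv.print_summary()   # coefficient of 'score': log hazard ratio per 1-point increase
```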

5.
Chinese Journal of Epidemiology ; (12): 111-114, 2020.
Article in Chinese | WPRIM | ID: wpr-787699

ABSTRACT

In prospective cohort studies, subjects are often followed up repeatedly, the repeated observations are correlated with each other, and time-dependent confounding often arises. In this case, the data generally do not meet the assumptions of traditional multivariable regression analysis. The sequential conditional mean model (SCMM) is a new approach that can deal with time-dependent confounding. This paper mainly summarizes the basic theory, steps and characteristics of SCMM.
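
As a rough illustration of the SCMM idea, and assuming the hypothetical variable names and lag-1 adjustment set below, the sketch regresses the current outcome on the current exposure while conditioning on recent exposure and covariate history, estimated by GEE with an independence working correlation and robust standard errors.

```python
# Minimal SCMM-style sketch: outcome at each visit regressed on current exposure,
# conditioning on lag-1 exposure/covariate history; GEE with independence
# working correlation. Long-format data and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
rows = []
for pid in range(300):
    l, x = rng.normal(), 0.0
    for t in range(4):
        l_prev, x_prev = l, x
        l = 0.5 * l_prev + 0.3 * x_prev + rng.normal()   # time-dependent confounder
        x = 0.4 * l + 0.3 * x_prev + rng.normal()        # current exposure
        y = 1.0 * x + 0.8 * l + rng.normal()             # outcome at this visit
        rows.append({"id": pid, "visit": t, "y": y, "x": x, "l": l,
                     "x_lag1": x_prev, "l_lag1": l_prev})
df = pd.DataFrame(rows)

gee = smf.gee("y ~ x + x_lag1 + l + l_lag1", groups="id", data=df,
              cov_struct=sm.cov_struct.Independence())
print(gee.fit().summary())   # coefficient of x: effect of current exposure
```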

6.
Chinese Journal of Clinical Pharmacology and Therapeutics ; (12): 546-549, 2020.
Article in Chinese | WPRIM | ID: wpr-855854

ABSTRACT

Population pharmacokinetics (PopPK) is an analytical approach that quantifies the variability of drug concentrations among individuals. It is widely used at all stages of new drug research, from nonclinical to clinical. With the rapid development of PopPK, more and more sponsors are keen to comprehensively analyze the in vivo processes of new drugs and their influencing factors using modeling and simulation methods. Several guidelines recommending the use of PopPK have been issued in China; however, no explicit requirements for PopPK study reports have been issued for regulatory applications. This article offers a preliminary discussion of new drug PopPK studies and their reporting format and content, with reference to the requirements in relevant guidelines and previous review experience, for discussion by and reference of industry and researchers.

7.
Chinese Journal of Epidemiology ; (12): 111-114, 2020.
Article in Chinese | WPRIM | ID: wpr-798891

ABSTRACT

In prospective cohort studies, subjects are often followed up repeatedly, the repeated observations are correlated with each other, and time-dependent confounding often arises. In this case, the data generally do not meet the assumptions of traditional multivariable regression analysis. The sequential conditional mean model (SCMM) is a new approach that can deal with time-dependent confounding. This paper mainly summarizes the basic theory, steps and characteristics of SCMM.

8.
Translational and Clinical Pharmacology ; : 141-148, 2019.
Article in English | WPRIM | ID: wpr-786680

ABSTRACT

The accuracy and predictability of mixture models in NONMEM® may change depending on the relative size of inter-individual variability and of the differences in parameters between subpopulations. This study explored the accuracy of mixture models when a categorical covariate is missing, under various situations that may occur in practice. We generated simulation data under scenarios in which genotypes representing extensive metabolizers (EM) and poor metabolizers (PM) of drug-metabolizing enzymes affect the clearance of a drug to different degrees, with different levels of inter-individual variability in clearance for each scenario. From each simulated dataset, a specified proportion of the covariate (genotype information) was randomly removed. Based on these simulation data, the proportion of each subpopulation and the clearance were estimated using a mixture model. Overall, the clearance estimate was more accurate when the difference in clearance between subpopulations was large and the inter-individual variability was small. In some scenarios with higher ETA or epsilon shrinkage, the clearance estimates were significantly biased. The mixture model made better predictions for individuals in the EM subpopulation than for individuals in the PM subpopulation. However, the estimates were not greatly affected by the tested ratio, provided the sample size was sufficiently large. This simulation study suggests that when the coefficient of variation of the inter-individual variability in clearance exceeds 40%, the mixture model should be used carefully, and it should be borne in mind that shrinkage can bias the results.
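
The study itself uses NONMEM's mixture-model feature; the sketch below is only a rough Python analogue of the underlying idea, recovering EM/PM subpopulations from log-clearance with a two-component Gaussian mixture when the genotype covariate is missing. All simulated values and parameters are hypothetical.

```python
# Not NONMEM code: a rough illustration of the mixture-model idea. When genotype
# is missing, log-clearance is modelled as a two-component mixture and each
# subject gets a posterior probability of belonging to the EM or PM subpopulation.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
n = 500
is_pm = rng.random(n) < 0.2                   # 20% poor metabolizers (hypothetical)
cl_em, cl_pm, cv = 10.0, 4.0, 0.25            # typical CL per group, ~25% IIV
cl = np.where(is_pm, cl_pm, cl_em) * np.exp(rng.normal(0, cv, n))

log_cl = np.log(cl).reshape(-1, 1)
gm = GaussianMixture(n_components=2, random_state=0).fit(log_cl)
order = np.argsort(gm.means_.ravel())         # lower-CL component = PM-like
print("estimated typical CL per subpopulation:", np.exp(gm.means_.ravel()[order]))
print("estimated mixing proportions:          ", gm.weights_[order])
post_pm = gm.predict_proba(log_cl)[:, order[0]]   # P(PM-like) per subject
print("mean posterior P(PM-like):", post_pm.mean())
```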


Subject(s)
Bias , Genotype , Sample Size
9.
Chinese Journal of Epidemiology ; (12): 86-89, 2018.
Article in Chinese | WPRIM | ID: wpr-737922

ABSTRACT

In modern epidemiological studies, short-term exposure alone cannot fully explain the mechanisms underlying the development of diseases or health-related events. Attention has therefore turned to life course epidemiology, which studies exposures in early life and their effects on the development of chronic diseases. When exploring the mechanism leading from an exposure to an outcome, and its effects operating through other factors, conventional statistical methods cannot meet the needs of etiological analysis in life course epidemiology because of time-varying effects. This paper summarizes the dynamic path analysis model, including the model structure and significance, and its application in life course epidemiology, and introduces the procedures for data processing and etiological analysis. In conclusion, dynamic path analysis is a useful tool for elucidating the mechanisms that underlie the etiology of chronic diseases.
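
Dynamic path analysis couples additive hazard regression with a path model so that direct and mediated effects can vary over follow-up time; a faithful implementation is involved. The sketch below is only a simplified per-time-point path decomposition with linear models (product-of-coefficients indirect effect) on hypothetical simulated data, to show the general shape of the analysis.

```python
# Simplified, hypothetical illustration of time-varying direct and mediated
# effects: at each follow-up time, fit exposure -> mediator and
# (exposure, mediator) -> outcome regressions and decompose the effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(13)
n, times = 1000, [1, 2, 3]
x0 = rng.binomial(1, 0.5, n)                         # early-life exposure
records = []
for t in times:
    m_t = 0.5 * t * x0 + rng.normal(size=n)          # mediator, effect grows with time
    y_t = 0.3 * x0 + 0.6 * m_t + rng.normal(size=n)  # outcome measure at time t
    records.append(pd.DataFrame({"t": t, "x0": x0, "m": m_t, "y": y_t}))
df = pd.concat(records)

for t, d in df.groupby("t"):
    a = smf.ols("m ~ x0", data=d).fit().params["x0"]     # exposure -> mediator path
    fit_y = smf.ols("y ~ x0 + m", data=d).fit()
    direct = fit_y.params["x0"]                          # direct path at time t
    indirect = a * fit_y.params["m"]                     # mediated path at time t
    print(f"t={t}: direct={direct:.2f}  indirect={indirect:.2f}")
```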

10.
Chinese Journal of Epidemiology ; (12): 86-89, 2018.
Article in Chinese | WPRIM | ID: wpr-736454

ABSTRACT

In modern epidemiological studies, short-term exposure alone cannot fully explain the mechanisms underlying the development of diseases or health-related events. Attention has therefore turned to life course epidemiology, which studies exposures in early life and their effects on the development of chronic diseases. When exploring the mechanism leading from an exposure to an outcome, and its effects operating through other factors, conventional statistical methods cannot meet the needs of etiological analysis in life course epidemiology because of time-varying effects. This paper summarizes the dynamic path analysis model, including the model structure and significance, and its application in life course epidemiology, and introduces the procedures for data processing and etiological analysis. In conclusion, dynamic path analysis is a useful tool for elucidating the mechanisms that underlie the etiology of chronic diseases.

11.
Asian Pacific Journal of Tropical Biomedicine ; (12): 354-359, 2015.
Article in Chinese | WPRIM | ID: wpr-951008

ABSTRACT

Randomised controlled trials (RCTs) are the gold standard for evaluating treatment efficacy in medical investigations, but only if well designed and implemented. To date, distorted views and misapplications of the statistical procedures involved in RCTs remain common in practice. Hence, clarification of concepts and acceptable practices related to certain statistical issues in the design, conduct and reporting of randomised controlled trials is needed. This narrative synthesis aims to provide succinct but clear information on the concepts and practices surrounding selected statistical issues in RCTs, to inform their correct application. The use of significance tests is no longer acceptable as a means of comparing baseline similarity between treatment groups or of deciding which covariate(s) should be included in the model for adjustment; simply presenting the distribution of baseline attributes in tabular form is preferred. Regarding covariate selection, an approach that uses information on the degree of correlation between the covariate(s) and the outcome variable is more in keeping with statistical principles than one based on significance tests. Stratification and minimisation are not alternatives to covariate-adjusted analysis; in fact, they establish the need for one. Intention-to-treat is the preferred approach for the evaluation of primary outcome measures, and researchers have a responsibility to report whether or not the procedure was followed. A major use of results from subgroup analyses is to generate hypotheses for future clinical trials. Since RCTs are the gold standard for comparing medical interventions, researchers cannot afford distorted allocation or statistical procedures in this all-important experimental design.
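
On the point that minimisation balances arms on chosen baseline factors (and, like stratification, calls for covariate-adjusted analysis), the following is a minimal Pocock-Simon-style minimisation sketch with hypothetical factors; real trials typically assign to the arm minimising imbalance with high probability rather than deterministically.

```python
# Minimal Pocock-Simon-style minimisation sketch: assign each new patient to the
# arm that minimises total marginal imbalance across the chosen factors.
# Factors, levels and the deterministic rule (random tie-break only) are
# illustrative simplifications.
import random

ARMS = ("treatment", "control")
FACTORS = ("sex", "age_group", "site")
counts = {arm: {f: {} for f in FACTORS} for arm in ARMS}   # running marginal totals

def imbalance_if_assigned(arm, patient):
    """Total marginal imbalance across factors if this patient joins `arm`."""
    total = 0
    for f in FACTORS:
        level = patient[f]
        per_arm = [counts[a][f].get(level, 0) + (1 if a == arm else 0) for a in ARMS]
        total += max(per_arm) - min(per_arm)
    return total

def assign(patient):
    scores = {arm: imbalance_if_assigned(arm, patient) for arm in ARMS}
    best = min(scores.values())
    arm = random.choice([a for a, s in scores.items() if s == best])  # tie-break
    for f in FACTORS:                                   # update marginal totals
        counts[arm][f][patient[f]] = counts[arm][f].get(patient[f], 0) + 1
    return arm

print(assign({"sex": "F", "age_group": "<60", "site": "A"}))
print(assign({"sex": "F", "age_group": ">=60", "site": "A"}))
print(assign({"sex": "M", "age_group": "<60", "site": "B"}))
```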

12.
Translational and Clinical Pharmacology ; : 31-34, 2015.
Article in English | WPRIM | ID: wpr-28184

ABSTRACT

One of the important purposes of population pharmacokinetic studies is to investigate the relationships between parameters and covariates in order to describe parameter variability. The purpose of this study was to evaluate the model's ability to correctly detect a parameter-covariate relationship that can be observed in phase I clinical trials. Data were simulated from a two-compartment model with zero-order absorption and first-order elimination, built from valsartan concentration data collected in a previously conducted study. With creatinine clearance (CLCR) used as the covariate to be tested, 3 different significance levels of 0.001

Subject(s)
Absorption , Clinical Trials, Phase I as Topic , Creatinine , Dataset , Healthy Volunteers , Hope
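
In population PK covariate model building, a covariate such as CLCR is conventionally retained when the drop in the objective function value (approximately -2 log-likelihood) exceeds a chi-square cutoff at the chosen significance level; the abstract mentions 0.001 among the levels tested. A small sketch of that decision rule follows; the example delta-OFV value is hypothetical.

```python
# Likelihood-ratio decision rule for including one covariate (1 extra parameter):
# keep the covariate if the objective function drops by more than the chi-square
# cutoff at the chosen significance level. The delta-OFV shown is hypothetical.
from scipy.stats import chi2

for alpha in (0.05, 0.01, 0.001):
    cutoff = chi2.ppf(1 - alpha, df=1)
    print(f"alpha = {alpha}: keep covariate if dOFV > {cutoff:.2f}")
# e.g. alpha = 0.001 -> dOFV > 10.83

delta_ofv = 12.4                                # hypothetical OFV drop after adding CLCR
print("retain CLCR on clearance:", delta_ofv > chi2.ppf(0.999, df=1))
```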
13.
Journal of the Korean Society of Magnetic Resonance in Medicine ; : 294-307, 2013.
Article in Korean | WPRIM | ID: wpr-98237

ABSTRACT

PURPOSE: To investigate the correlations between Seoul Neuropsychological Screening Battery (SNSB) scores and gray matter volumes (GMV) in patients with Alzheimer's disease (AD) and mild cognitive impairment (MCI) and in cognitively normal (CN) elderly subjects, with correction for genotype. MATERIALS AND METHODS: A total of 75 subjects were enrolled, 25 in each group. The apolipoprotein E (APOE) epsilon genotypes, SNSB scores, and 3D T1-weighted images were obtained from all subjects. Correlations between SNSB scores and GMV were investigated with multiple regression for each subject group, using both voxel-based and region-of-interest-based analyses with age, gender, and genotype as covariates. RESULTS: In the AD group, Rey Complex Figure Test (RCFT) delayed recall scores were positively correlated with GMV. In the MCI group, Seoul Verbal Learning Test (SVLT) scores were positively correlated with GMV. In the CN group, GMV was negatively correlated with Boston Naming Test (K-BNT) scores and Mini-Mental State Examination (K-MMSE) scores, but positively correlated with RCFT scores. CONCLUSION: With age, gender, and genotype as covariates, statistically significant correlations were found between some SNSB scores and GMV in some brain regions. A longitudinal study may be needed to further clarify these correlations.
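
The region-of-interest analysis described above amounts to a multiple regression of regional gray matter volume on a neuropsychological score with age, gender and genotype as covariates. The sketch below shows that model in statsmodels; the data frame, column names and values are hypothetical, not the study's data.

```python
# Hypothetical ROI-level sketch: regress regional gray matter volume on one SNSB
# subtest score with age, sex and APOE e4 carrier status as covariates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(21)
n = 75
df = pd.DataFrame({
    "gmv": rng.normal(5.0, 0.5, n),            # ROI volume (hypothetical units)
    "rcft_delayed": rng.normal(15, 5, n),      # SNSB subtest score
    "age": rng.integers(60, 85, n),
    "sex": rng.choice(["F", "M"], n),
    "apoe_e4": rng.binomial(1, 0.3, n),        # e4 carrier indicator
})

fit = smf.ols("gmv ~ rcft_delayed + age + C(sex) + apoe_e4", data=df).fit()
print(fit.summary())   # rcft_delayed coefficient: covariate-adjusted association
```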


Subject(s)
Aged , Humans , Alzheimer Disease , Apolipoproteins , Brain , Genotype , Mass Screening , Methods , Cognitive Dysfunction , Seoul , Verbal Learning
14.
Chinese Journal of Clinical Pharmacology and Therapeutics ; (12)2000.
Article in Chinese | WPRIM | ID: wpr-677727

ABSTRACT

Non-treatment variables that affect the outcome of a disease are often called covariates. These covariates should be considered in the design and analysis of clinical trials to obtain unbiased conclusions. To ensure that any observed treatment effect is not influenced by imbalances in baseline characteristics, pre-adjustment and post-adjustment are applied at the design stage and the analysis stage of a trial, respectively; they can improve the credibility of trial results and increase statistical efficiency. Based on published papers on adjustment for covariates and documents of the International Conference on Harmonization (ICH), we review the concepts, methods and procedures for adjusting treatment effects for the influence of covariates. Statistical issues in the application of such adjustment are discussed in particular depth.
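
Post-hoc adjustment for a prespecified baseline covariate is commonly an analysis of covariance: the outcome is regressed on treatment plus the baseline covariate, which typically sharpens the treatment-effect estimate. A sketch with hypothetical simulated trial data:

```python
# ANCOVA-style covariate adjustment at the analysis stage, on hypothetical
# simulated trial data: compare the unadjusted and baseline-adjusted
# treatment-effect estimates and their standard errors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 200
baseline = rng.normal(50, 10, n)                 # prognostic baseline measurement
treat = rng.binomial(1, 0.5, n)                  # randomised 1:1
outcome = 0.8 * baseline + 5.0 * treat + rng.normal(0, 8, n)
df = pd.DataFrame({"y": outcome, "treat": treat, "baseline": baseline})

unadjusted = smf.ols("y ~ treat", data=df).fit()
adjusted = smf.ols("y ~ treat + baseline", data=df).fit()
print("unadjusted effect:", unadjusted.params["treat"], "SE", unadjusted.bse["treat"])
print("adjusted effect:  ", adjusted.params["treat"], "SE", adjusted.bse["treat"])
```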
