1.
Biom J ; 65(5): e2200194, 2023 06.
Article in English | MEDLINE | ID: mdl-36960489

ABSTRACT

The power prior has been widely used to discount the amount of information borrowed from historical data in the design and analysis of clinical trials. It is realized by raising the likelihood function of the historical data to a power parameter δ ∈ [0, 1], which quantifies the heterogeneity between the historical and the new study. In a fully Bayesian approach, a natural extension is to assign a hyperprior to δ such that the posterior of δ reflects the degree of similarity between the historical and current data. To comply with the likelihood principle, an extra normalizing factor must be calculated, and such a prior is known as the normalized power prior. However, the normalizing factor involves an integral of the prior multiplied by a fractional likelihood and must be computed repeatedly for different values of δ during posterior sampling, which makes its use prohibitive in practice for most elaborate models. This work provides an efficient framework for implementing the normalized power prior in clinical studies. It bypasses the aforementioned computation by sampling from the power prior with δ = 0 and δ = 1 only. Such a posterior sampling procedure can facilitate the use of a random δ with adaptive borrowing capability in general models. The numerical efficiency of the proposed method is illustrated via extensive simulation studies, a toxicological study, and an oncology study.
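As an illustrative sketch (not the paper's algorithm), the power prior's discounting effect, and the normalizing factor c(δ) of the normalized power prior, can be written in closed form for a conjugate beta-binomial model; the counts below are hypothetical:

```python
from math import lgamma

def lbeta(a, b):
    """log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def power_prior_posterior(y, n, y0, n0, delta, a=1.0, b=1.0):
    """Beta posterior parameters for a binomial success probability under a
    power prior: the historical counts (y0 successes out of n0) are simply
    discounted by delta before being added to the current data (y out of n)."""
    return a + y + delta * y0, b + (n - y) + delta * (n0 - y0)

def log_normalizing_factor(y0, n0, delta, a=1.0, b=1.0):
    """log c(delta) = log of the integral of L(theta|D0)^delta * pi(theta),
    needed by the normalized power prior; closed-form here because the beta
    prior is conjugate (the binomial coefficient, constant in theta, is dropped)."""
    return lbeta(a + delta * y0, b + delta * (n0 - y0)) - lbeta(a, b)
```

At δ = 0 the historical data are ignored and c(0) = 1; at δ = 1 they are pooled at full weight, which are exactly the two endpoint cases the proposed sampler draws from.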


Subject(s)
Models, Statistical , Research Design , Bayes Theorem , Computer Simulation , Sample Size , Likelihood Functions
2.
J Mech Behav Biomed Mater ; 131: 105254, 2022 07.
Article in English | MEDLINE | ID: mdl-35537361

ABSTRACT

Trabecular bone is a random cellular solid with an interconnected network of plate-like and rod-like components. However, this structural randomness and complexity have hindered rigorous mathematical modeling of trabecular bone microarchitecture. Recent advances in image processing techniques make it possible to define the size, orientation, and spatial location of individual trabecular plates and rods in trabecular bone. Building on this information, this study proposed a probability-based approach to define the size, orientation, and spatial distributions of trabecular plates and rods for trabecular bone cubes (N = 547) acquired from six human cadaver proximal femurs. Using two groups of probability-based parameters, we attempted to capture microarchitectural details that are not captured by the existing histomorphometric parameters but are crucial to the elastic properties of trabecular bone. The elastic properties of the trabecular bone cubes along the three principal axes were estimated using microCT-based finite element (FE) simulations. Based on the results of multivariate multiple regression modeling, the efficacy of the two groups of probability-based parameters in predicting the elastic properties was verified against that of the existing histomorphometric parameters (BV/TV, Tb.Th, Tb.Sp, DA, EF.Med, and Conn.D). The results indicated that the regression models trained on the probability-based parameters had comparable or even better accuracy (rMSE = 0.621 and 0.548) than those trained on the histomorphometric parameters (rMSE = 0.647). More importantly, the probability-based parameters provide insight into previously unexplored microarchitectural features, such as individual trabecular size, orientation, and spatial distributions, which are also critical to the elastic properties of trabecular bone.
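As a generic illustration of the kind of regression-accuracy comparison reported above (plain normal-equations least squares, not the authors' modeling code; the arrays are hypothetical):

```python
def ols_fit(X, y):
    """Ordinary least squares via the normal equations (X^T X) b = X^T y,
    solved by Gaussian elimination with partial pivoting. Each row of X is
    a predictor vector with a leading 1 for the intercept."""
    p = len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(len(X))) for k in range(p)]
         for j in range(p)]
    b = [sum(X[i][j] * y[i] for i in range(len(X))) for j in range(p)]
    for col in range(p):                       # forward elimination
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * p                           # back substitution
    for r in range(p - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, p))) / A[r][r]
    return coef

def rmse(y_true, y_pred):
    """Root-mean-square error, the accuracy measure quoted in the abstract."""
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)) ** 0.5
```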


Subject(s)
Cancellous Bone , Image Processing, Computer-Assisted , Cancellous Bone/diagnostic imaging , Femur/diagnostic imaging , Humans , Probability , X-Ray Microtomography
3.
J Am Heart Assoc ; 3(2): e000759, 2014 Apr 14.
Article in English | MEDLINE | ID: mdl-24732920

ABSTRACT

BACKGROUND: Identifying the best markers to judge the adequacy of lipid-lowering treatment is increasingly important for coronary heart disease (CHD) prevention, given that several novel, potent lipid-lowering therapies are in development. Reductions in LDL-C, non-HDL-C, or apoB can all be used, but which most closely relates to benefit, as defined by the reduction in events on statin treatment, is not established. METHODS AND RESULTS: We performed random-effects frequentist and Bayesian meta-analyses of 7 placebo-controlled statin trials in which LDL-C, non-HDL-C, and apoB values were available at baseline and at 1-year follow-up. Summary-level data for change in LDL-C, non-HDL-C, and apoB were related to the relative risk reduction from statin therapy in each trial. In frequentist meta-analyses, the mean CHD risk reduction (95% CI) per standard deviation decrease in each marker across these 7 trials was 20.1% (15.6%, 24.3%) for LDL-C; 20.0% (15.2%, 24.7%) for non-HDL-C; and 24.4% (19.2%, 29.2%) for apoB. Compared within each trial, the risk reduction per change in apoB averaged 21.6% (12.0%, 31.2%) greater than that per change in LDL-C (P<0.001) and 24.3% (22.4%, 26.2%) greater than that per change in non-HDL-C (P<0.001). Similarly, in Bayesian meta-analyses using various prior distributions, Bayes factors (BFs) favored reduction in apoB as more closely related to the risk reduction from statins than either LDL-C or non-HDL-C (BFs ranging from 484 to 2380). CONCLUSIONS: Using both frequentist and Bayesian approaches, relative risk reduction across 7 major placebo-controlled statin trials was more closely related to reductions in apoB than to reductions in either non-HDL-C or LDL-C.
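Random-effects pooling of per-trial effects can be sketched with the standard DerSimonian-Laird moment estimator; the abstract does not name its exact estimator, so treat this as an assumed, illustrative choice:

```python
def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects meta-analysis.
    effects: per-trial effect sizes (e.g. log relative risks);
    variances: their within-trial variances.
    Returns (pooled effect, its standard error, between-trial variance tau^2)."""
    k = len(effects)
    w = [1.0 / v for v in variances]           # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    # Cochran's Q and the method-of-moments estimate of tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)
    # re-weight including tau^2, then pool
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = (1.0 / sum(w_re)) ** 0.5
    return pooled, se, tau2
```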


Subject(s)
Apolipoproteins B/blood , Cardiovascular Diseases/prevention & control , Cholesterol, HDL/blood , Cholesterol/blood , Dyslipidemias/drug therapy , Hydroxymethylglutaryl-CoA Reductase Inhibitors/therapeutic use , Bayes Theorem , Biomarkers/blood , Cardiovascular Diseases/blood , Cardiovascular Diseases/diagnosis , Cardiovascular Diseases/etiology , Down-Regulation , Dyslipidemias/blood , Dyslipidemias/complications , Dyslipidemias/diagnosis , Endpoint Determination , Humans , Randomized Controlled Trials as Topic , Risk Factors , Time Factors , Treatment Outcome
4.
Stat Appl Genet Mol Biol ; 8: Article23, 2009.
Article in English | MEDLINE | ID: mdl-19409067

ABSTRACT

Multiple hypothesis testing is commonly used in genome research such as genome-wide studies and gene expression data analysis (Lin, 2005). The widely used Bonferroni procedure controls the family-wise error rate (FWER) for multiple hypothesis testing, but has limited statistical power as the number of hypotheses tested increases. The power of multiple testing procedures can be increased by using weighted p-values (Genovese et al., 2006). The weights for the p-values can be estimated by using certain prior information. Wasserman and Roeder (2006) described a weighted Bonferroni procedure, which incorporates weighted p-values into the Bonferroni procedure, and Rubin et al. (2006) and Wasserman and Roeder (2006) estimated the optimal weights that maximize the power of the weighted Bonferroni procedure under the assumption that the means of the test statistics in the multiple testing are known (these weights are called optimal Bonferroni weights). This weighted Bonferroni procedure controls FWER and can have higher power than the Bonferroni procedure, especially when the optimal Bonferroni weights are used. To further improve the power of the weighted Bonferroni procedure, first we propose a weighted Sidák procedure that incorporates weighted p-values into the Sidák procedure, and then we estimate the optimal weights that maximize the average power of the weighted Sidák procedure under the assumption that the means of the test statistics in the multiple testing are known (these weights are called optimal Sidák weights). This weighted Sidák procedure can have higher power than the weighted Bonferroni procedure. Second, we develop a generalized sequential (GS) Sidák procedure that incorporates weighted p-values into the sequential Sidák procedure (Scherrer, 1984). This GS Sidák procedure is an extension of and has higher power than the GS Bonferroni procedure of Holm (1979).
Finally, under the assumption that the means of the test statistics in the multiple testing are known, we incorporate the optimal Sidák weights and the optimal Bonferroni weights into the GS Sidák procedure and the GS Bonferroni procedure, respectively. Theoretical proof and/or simulation studies show that the GS Sidák procedure can have higher power than the GS Bonferroni procedure when their corresponding optimal weights are used, and that both of these GS procedures can have much higher power than the weighted Sidák and the weighted Bonferroni procedures. All proposed procedures control the FWER well and are useful when prior information is available to estimate the weights.
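The two single-step weighted rules can be stated compactly. A minimal sketch, assuming (as in Genovese et al., 2006) weights that sum to the number of hypotheses m, i.e. mean weight 1:

```python
def weighted_bonferroni(pvals, weights, alpha=0.05):
    """Weighted Bonferroni: reject H_i when p_i <= w_i * alpha / m.
    With mean weight 1 this controls the FWER at level alpha."""
    m = len(pvals)
    return [p <= w * alpha / m for p, w in zip(pvals, weights)]

def weighted_sidak(pvals, weights, alpha=0.05):
    """Weighted Sidak analogue: reject H_i when p_i <= 1 - (1 - alpha)^(w_i/m).
    Its cutoff is never smaller than the weighted Bonferroni cutoff, so it is
    at least as powerful for the same weights."""
    m = len(pvals)
    return [p <= 1.0 - (1.0 - alpha) ** (w / m) for p, w in zip(pvals, weights)]
```

With all weights equal to 1, both rules reduce to the ordinary Bonferroni and Sidák procedures.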


Subject(s)
Data Interpretation, Statistical , Gene Expression Profiling/methods , Genome-Wide Association Study , Models, Theoretical , Computer Simulation , False Positive Reactions , Humans
5.
Biometrics ; 63(4): 1031-7, 2007 Dec.
Article in English | MEDLINE | ID: mdl-17425640

ABSTRACT

Estimating the number of clusters in a data set is a crucial step in cluster analysis. In this article, motivated by the gap method (Tibshirani, Walther, and Hastie, 2001, Journal of the Royal Statistical Society, Series B 63, 411-423), we propose the weighted gap and the difference of difference-weighted (DD-weighted) gap methods for estimating the number of clusters in data using the weighted within-clusters sum of errors, a measure of within-clusters homogeneity. In addition, we propose a "multilayer" clustering approach, which is shown to be more accurate than the original gap method, particularly in detecting the nested cluster structure of the data. The methods are applicable when the input data contain continuous measurements and can be used with any clustering method. Simulation studies and real-data examples are used to compare the proposed methods with one another and with the original gap method.
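For context, the original gap method compares the within-clusters dispersion W_k against its expectation under a null reference and picks the smallest k whose gap beats the next one within a standard error. A minimal sketch of those two ingredients (illustrative only, not the weighted variants proposed here):

```python
def w_k(points, labels):
    """Within-clusters sum of squared distances to cluster centroids
    (for squared Euclidean distance, the pairwise D_r/(2 n_r) form in
    Tibshirani et al. reduces to this)."""
    clusters = {}
    for x, l in zip(points, labels):
        clusters.setdefault(l, []).append(x)
    total = 0.0
    for pts in clusters.values():
        centroid = [sum(col) / len(pts) for col in zip(*pts)]
        total += sum(sum((xi - ci) ** 2 for xi, ci in zip(x, centroid))
                     for x in pts)
    return total

def choose_k(gaps, sds):
    """Gap selection rule: the smallest k with Gap(k) >= Gap(k+1) - s_{k+1}.
    gaps[i] and sds[i] correspond to k = i + 1 clusters."""
    for i in range(len(gaps) - 1):
        if gaps[i] >= gaps[i + 1] - sds[i + 1]:
            return i + 1
    return len(gaps)
```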


Subject(s)
Algorithms , Biometry/methods , Cluster Analysis , Data Interpretation, Statistical , Models, Biological , Models, Statistical , Pattern Recognition, Automated/methods , Computer Simulation
6.
Anat Rec A Discov Mol Cell Evol Biol ; 288(12): 1303-9, 2006 Dec.
Article in English | MEDLINE | ID: mdl-17075842

ABSTRACT

Valproic acid, a drug commonly used to treat seizures and other psychiatric disorders, causes neural tube defects (NTDs) in exposed fetuses at a rate 20 times higher than in the general population. Failure of the neural tube to close during development results in exencephaly or anencephaly, as well as spina bifida. In mice, nonspecific activation of the maternal immune system can reduce fetal abnormalities caused by diverse etiologies, including diabetes-induced NTDs. We hypothesized that nonspecific activation of the maternal immune system with interferon-gamma (IFN-gamma) or granulocyte-macrophage colony-stimulating factor (GM-CSF) could reduce valproic acid (VA)-induced defects as well. Female CD-1 mice were given an immune stimulant, either IFN-gamma or GM-CSF, before breeding. Approximately half of the control and immune-stimulated pregnant females were then exposed to 500 mg/kg VA on the morning of gestational day 8. The incidence of developmental defects was determined on gestational day 17 from at least eight litters in each of the following treatment groups: control, VA only, IFN-gamma only, IFN-gamma+VA, GM-CSF only, and GM-CSF+VA. The incidence of NTDs was 18% in fetuses exposed to VA alone, compared to 3.7% and 2.9% in fetuses exposed to IFN-gamma+VA or GM-CSF+VA, respectively. Ocular defects were also significantly reduced, from 28.0% in the VA-only group to 9.8% in the IFN-gamma+VA group and 12.5% in the GM-CSF+VA group. The mechanisms by which maternal immune stimulation prevents birth defects remain unclear, but may involve maternal or fetal production of cytokines or growth factors that protect the fetus from the dysregulatory effects of teratogens.


Subject(s)
Abnormalities, Drug-Induced/prevention & control , Adjuvants, Immunologic/pharmacology , Anticonvulsants/toxicity , Granulocyte-Macrophage Colony-Stimulating Factor/pharmacology , Interferon-gamma/pharmacology , Neural Tube Defects/prevention & control , Valproic Acid/toxicity , Adjuvants, Immunologic/therapeutic use , Animals , Dose-Response Relationship, Drug , Eyelid Diseases/chemically induced , Eyelid Diseases/prevention & control , Eyelids/abnormalities , Eyelids/drug effects , Female , Fetal Death , Fetal Resorption , Fetal Weight/drug effects , Gestational Age , Granulocyte-Macrophage Colony-Stimulating Factor/therapeutic use , Interferon-gamma/therapeutic use , Maternal Exposure , Maternal-Fetal Exchange , Mice , Neural Tube Defects/chemically induced , Placenta/drug effects , Pregnancy , Time Factors
7.
Vet Immunol Immunopathol ; 105(3-4): 187-96, 2005 May 15.
Article in English | MEDLINE | ID: mdl-15808300

ABSTRACT

Common goals of microarray experiments are the detection of genes that are differentially expressed between several biological types and the construction of classifiers that predict biological type of samples. Here we consider a situation where there is no training data. There is considerable interest in comparing expression profiles associated with successful pregnancies (SP) and unsuccessful pregnancies (UP) in model and farm animals. Successful pregnancy rate is known to be much higher in embryos generated by in vitro fertilization (IVF) than in nuclear transfer (NT) embryos, and higher under induced ovulation for large follicles (LF) than for small follicles (SF). The tasks of identifying genes differentially expressed between SP and UP, and predicting SP for future samples are not well accomplished by comparing IVF and NT, or LF and SF. A suitable method is finite mixture model analysis (FMMA), which models each observed class (IVF and NT, or LF and SF) as a mixture of two distributions, one for SP and one for UP, with different known or unknown proportions (here known to be 0.50 SP for IVF and 0.02 SP for NT). The means of the two distributions differ for the differentially expressed genes, which we identify via a likelihood ratio test. We confirm by simulation that FMMA strongly outperforms hierarchical clustering and linear discriminant analysis using the known class labels (NT, IVF). We apply FMMA to a real data set on IVF and NT embryos, and compute their posterior probabilities of SP, which confirm our prior knowledge of the SP proportions for IVF and NT.
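The core of FMMA, fitting a two-component mixture whose mixing proportion is fixed at the known SP rate of the class, can be sketched with a small EM routine for a normal mixture with a common variance (an illustrative simplification; the paper's likelihood ratio test and model details are not reproduced here):

```python
from math import log, exp, pi

def normal_logpdf(x, mu, var):
    """Log density of a normal distribution with mean mu and variance var."""
    return -0.5 * log(2 * pi * var) - (x - mu) ** 2 / (2 * var)

def em_fixed_pi(xs, pi1, n_iter=200):
    """EM for a two-component normal mixture whose mixing proportion pi1 is
    KNOWN and held fixed (e.g. 0.50 for IVF, 0.02 for NT); only the two means
    and a common variance are estimated. Returns (mu1, mu2, var, loglik)."""
    mu1, mu2 = min(xs), max(xs)
    m = sum(xs) / len(xs)
    var = max(1e-6, sum((x - m) ** 2 for x in xs) / len(xs))
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each observation
        r = []
        for x in xs:
            a = pi1 * exp(normal_logpdf(x, mu1, var))
            b = (1 - pi1) * exp(normal_logpdf(x, mu2, var))
            r.append(a / (a + b))
        # M-step: responsibility-weighted means and pooled variance
        s1 = sum(r)
        s2 = len(xs) - s1
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / s1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / s2
        var = max(1e-6, sum(ri * (x - mu1) ** 2 + (1 - ri) * (x - mu2) ** 2
                            for ri, x in zip(r, xs)) / len(xs))
    ll = sum(log(pi1 * exp(normal_logpdf(x, mu1, var))
                 + (1 - pi1) * exp(normal_logpdf(x, mu2, var))) for x in xs)
    return mu1, mu2, var, ll
```

A gene would be flagged as differentially expressed when the maximized mixture log-likelihood sufficiently exceeds that of a single-normal fit, as in a likelihood ratio test.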


Subject(s)
Gene Expression Profiling/methods , Oligonucleotide Array Sequence Analysis/methods , Animals , Cloning, Organism , Computer Simulation , Data Interpretation, Statistical , Embryo, Mammalian , Embryo, Nonmammalian , Embryonic Development/genetics , Female , Fertilization in Vitro/veterinary , Gene Expression Regulation, Developmental , Models, Statistical , Pregnancy , Reproducibility of Results
8.
Mol Cell Probes ; 18(3): 207-9, 2004 Jun.
Article in English | MEDLINE | ID: mdl-15135457

ABSTRACT

Several biotechnology companies have recently introduced novel quencher fluors for use with dual-labeled fluorogenic hydrolysis probes. The Epoch Dark Quencher™ fluorochrome consists of a non-fluorescent moiety capable of absorption at longer wavelengths (400-650 nm). The aims of this study were to: (1) evaluate the feasibility of using Epoch Dark Quencher fluorochromes in real-time PCR pathogen detection assays that were previously optimized with TaqMan (TAMRA) quenching fluors, and (2) compare the sensitivity, based on cycle threshold (CT), between probes containing either TaqMan or Epoch Dark Quencher fluors. Our data indicate that Epoch Dark Quencher probes can be used in place of TaqMan probes, although their performance was no better than that of traditional TaqMan (TAMRA) quenchers. Marginal differences observed between quenching fluorochromes may arise from concentration differences during probe synthesis.


Subject(s)
Reverse Transcriptase Polymerase Chain Reaction/methods , Taq Polymerase/metabolism , Fluorescent Dyes/metabolism , Influenza A virus/genetics , RNA, Viral/genetics , Reverse Transcriptase Polymerase Chain Reaction/instrumentation , Sensitivity and Specificity
9.
Bioinformatics ; 19 Suppl 2: ii122-9, 2003 Oct.
Article in English | MEDLINE | ID: mdl-14534181

ABSTRACT

MOTIVATION: Large-scale gene expression profiling generates data sets that are rich in observed features but poor in numbers of observations. The analysis of such data sets is a challenge that has been the object of vigorous research. The algorithms in use for this purpose have been poorly documented and rarely compared objectively, posing a problem of uncertainty about the outcomes of the analyses. One way to test such analysis algorithms objectively is to apply them to computational gene network models for which the mechanisms are completely known. RESULTS: We present a system that generates random artificial gene networks according to well-defined topological and kinetic properties. These are used to run in silico experiments simulating real laboratory microarray experiments. Noise with controlled properties is added to the simulation results several times, emulating measurement replicates, before expression ratios are calculated. AVAILABILITY: The data sets and kinetic models described here are available from http://www.vbi.vt.edu/~mendes/AGN/ as biochemical dynamic models in SBML and Gepasi formats.
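The replicate-and-ratio step can be sketched as follows; the abstract only says the noise has "controlled properties", so the multiplicative lognormal model with a chosen coefficient of variation is an assumption for illustration:

```python
import math
import random

def noisy_replicates(levels, n_rep, cv, seed=0):
    """Emulate n_rep measurement replicates of each true expression level by
    multiplying it by lognormal noise with mean 1 and coefficient of
    variation cv (assumed noise model, not the paper's exact generator)."""
    rng = random.Random(seed)
    sigma = math.sqrt(math.log(1.0 + cv ** 2))
    mu = -0.5 * sigma ** 2  # makes the noise factor have mean exactly 1
    return [[x * rng.lognormvariate(mu, sigma) for x in levels]
            for _ in range(n_rep)]

def log2_ratios(sample, reference):
    """Expression log2-ratios of a perturbed sample against a reference,
    the usual quantity reported from two-channel microarray experiments."""
    return [math.log2(s / r) for s, r in zip(sample, reference)]
```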


Subject(s)
Algorithms , Gene Expression Profiling/methods , Gene Expression/physiology , Models, Biological , Proteome/metabolism , Signal Transduction/physiology , Software Validation , Computer Simulation , Sensitivity and Specificity , Software