Results 1 - 20 of 41
1.
Biostatistics ; 24(1): 108-123, 2022 12 12.
Article in English | MEDLINE | ID: mdl-34752610

ABSTRACT

Multimorbidity poses a serious challenge to healthcare systems worldwide because of its association with poorer health-related outcomes, more complex clinical management, increased health service utilization and costs, and decreased productivity. However, to date, most evidence on multimorbidity is derived from cross-sectional studies, which have limited capacity to illuminate the pathways of multimorbid conditions. In this article, we present an innovative perspective on analyzing longitudinal data within a statistical framework of survival analysis for recurrent time-to-event data. The proposed methodology is based on a joint frailty modeling approach with multivariate random effects that accounts for the heterogeneous risk of failure and for informative censoring due to a terminal event. We develop a generalized linear mixed model method for efficient estimation of the parameters. We demonstrate the capacity of our approach using a real cancer registry data set on the multimorbidity of melanoma patients, and we document the performance of the proposed joint frailty model relative to its natural competitor, a standard frailty model, via extensive simulation studies. Our new approach is timely for advancing evidence-based knowledge to address increasingly complex needs related to multimorbidity and for developing interventions that are most effective and viable for helping the large number of individuals with multiple conditions.


Subject(s)
Frailty , Humans , Cross-Sectional Studies , Survival Analysis , Computer Simulation , Linear Models
2.
J Appl Stat ; 47(5): 804-826, 2020.
Article in English | MEDLINE | ID: mdl-35707324

ABSTRACT

This paper proposes a new regression model for the analysis of spatial panel data in the presence of spatial heterogeneity and non-normality. In empirical economic research, normality of the error components is a routine assumption in models with continuous responses. However, such an assumption may not be appropriate in many applications. This work relaxes the normality assumption by using a multivariate skew-normal distribution, which includes the normal distribution as a special case. A simple Bayesian framework implementing a Markov chain Monte Carlo algorithm is derived for parameter estimation and inference. The methodology is illustrated through a simulation study and applications to insurance and gasoline demand data sets.

3.
Stat Methods Med Res ; 29(5): 1368-1385, 2020 05.
Article in English | MEDLINE | ID: mdl-31293217

ABSTRACT

Many medical studies yield data on recurrent clinical events from populations comprising a proportion of cured patients alongside those who experience the event repeatedly (the uncured). A frailty mixture cure model has recently been postulated for such data, under the assumption that the random subject effect (frailty) of each uncured patient is constant across successive gap times between recurrent events. We propose two new models in a more general setting, assuming a multivariate time-varying frailty with an AR(1) correlation structure for each uncured patient and addressing multilevel recurrent event data originating from multi-institutional (multi-centre) clinical trials, with extra random effect terms to adjust for institution effect and treatment-by-institution interaction. To overcome the difficulties in parameter estimation due to these highly complex correlation structures, we develop an efficient estimation procedure via an EM-type algorithm based on residual maximum likelihood (REML) through the generalised linear mixed model (GLMM) methodology. Simulation studies are presented to assess the performance of the models. Data sets from a colorectal cancer study and the rhDNase multi-institutional clinical trial are analyzed to exemplify the proposed models. The results demonstrate a large positive AR(1) correlation among frailties across successive gap times, indicating that a constant frailty may not be realistic in some situations. Comparisons of the findings with those of existing frailty models are discussed.
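As an illustration of the time-varying frailty idea, the sketch below builds the AR(1) correlation matrix (entries rho^|i-j| across successive gap times) and draws correlated frailties from it. This is a minimal Python sketch with made-up parameter values, not the authors' REML/EM estimation code:

```python
import numpy as np

def ar1_corr(n, rho):
    """Correlation matrix with entries rho**|i-j| (AR(1) structure)."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

R = ar1_corr(5, 0.6)
# For |rho| < 1 all eigenvalues are positive, so R is a valid correlation matrix.
assert np.all(np.linalg.eigvalsh(R) > 0)

# Draw correlated frailties over 5 gap times for one hypothetical uncured
# patient (frailty variance 0.5 is also a made-up value):
rng = np.random.default_rng(0)
frailties = rng.multivariate_normal(np.zeros(5), 0.5 * R)
```

Setting rho = 1 recovers the constant-frailty special case, which is what the paper's data analysis calls into question.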


Subject(s)
Frailty , Models, Statistical , Humans , Survival Analysis , Computer Simulation , Linear Models
4.
Biometrics ; 76(3): 753-766, 2020 09.
Article in English | MEDLINE | ID: mdl-31863594

ABSTRACT

In the study of multiple failure time data with recurrent clinical endpoints, the classical independent censoring assumption in survival analysis can be violated when the evolution of the recurrent events is correlated with a censoring mechanism such as death. Moreover, in some situations, a cure fraction appears in the data because a tangible proportion of the study population benefits from treatment and becomes recurrence free and insusceptible to death related to the disease. A bivariate joint frailty mixture cure model is proposed to allow for dependent censoring and cure fraction in recurrent event data. The latency part of the model consists of two intensity functions for the hazard rates of recurrent events and death, wherein a bivariate frailty is introduced by means of the generalized linear mixed model methodology to adjust for dependent censoring. The model allows covariates and frailties in both the incidence and the latency parts, and it further accounts for the possibility of cure after each recurrence. It includes the joint frailty model and other related models as special cases. An expectation-maximization (EM)-type algorithm is developed to provide residual maximum likelihood estimation of model parameters. Through simulation studies, the performance of the model is investigated under different magnitudes of dependent censoring and cure rate. The model is applied to data sets from two colorectal cancer studies to illustrate its practical value.


Subject(s)
Frailty , Computer Simulation , Humans , Models, Statistical , Recurrence , Survival Analysis
5.
Stat Med ; 38(6): 1036-1055, 2019 03 15.
Article in English | MEDLINE | ID: mdl-30474216

ABSTRACT

We present a multilevel frailty model for handling serial dependence and simultaneous heterogeneity in survival data with a multilevel structure attributed to clustering of subjects and the presence of multiple failure outcomes. One commonly observes such data, for example, in multi-institutional, randomized placebo-controlled trials in which patients suffer repeated episodes (eg, recurrent migraines) of the disease outcome being measured. The model extends the proportional hazards model by incorporating a random covariate and an unobservable random institution effect to account, respectively, for treatment-by-institution interaction and institutional variation in the baseline risk. Moreover, a random effect term with a correlation structure driven by a first-order autoregressive process is attached to the model to facilitate estimation of between-patient heterogeneity and serial dependence. By means of the generalized linear mixed model methodology, the random effects distribution is assumed normal, and the residual maximum likelihood and maximum likelihood methods are extended for estimation of the model parameters. Simulation studies are carried out to evaluate the performance of the residual maximum likelihood and maximum likelihood estimators and to assess the impact of misspecifying the random effects distribution on the proposed inference. We demonstrate the practical feasibility of the modeling methodology by analyzing real data from a double-blind, randomized, multi-institutional clinical trial designed to examine the effect of rhDNase on the occurrence of respiratory exacerbations among patients with cystic fibrosis.


Subject(s)
Cluster Analysis , Models, Statistical , Survival Analysis , Cystic Fibrosis/complications , Cystic Fibrosis/drug therapy , Data Interpretation, Statistical , Deoxyribonuclease I/therapeutic use , Humans , Proportional Hazards Models , Randomized Controlled Trials as Topic/methods , Recombinant Proteins/therapeutic use , Respiratory Tract Diseases/etiology , Respiratory Tract Diseases/prevention & control , Treatment Failure
6.
IEEE Trans Neural Netw Learn Syst ; 29(11): 5581-5591, 2018 11.
Article in English | MEDLINE | ID: mdl-29993871

ABSTRACT

Finite mixtures of skew distributions provide a flexible tool for modeling heterogeneous data with asymmetric distributional features. However, parameter estimation via the expectation-maximization (EM) algorithm can become very time consuming due to the complicated expressions involved in the E-step, which are numerically expensive to evaluate. While parallelizing the EM algorithm can offer considerable speedup in time performance, current implementations focus almost exclusively on distributed platforms. In this paper, we consider instead the most typical operating environment for users of mixture models: a standalone multicore machine and the R programming environment. We develop a block implementation of the EM algorithm that allows the calculations in the E- and M-steps to be spread across a number of threads. We focus on the fitting of finite mixtures of multivariate skew normal and skew t-distributions, and show that both the E- and M-steps of the EM algorithm can be modified to allow the data to be split into blocks. Our approach is easy to implement and provides immediate benefits to users of multicore machines. Experiments were conducted on two real data sets to demonstrate the effectiveness of the proposed approach.
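The block idea can be illustrated on a plain univariate normal mixture (the paper's skew mixtures have more involved E-steps, but the splitting principle is the same): per-block sufficient statistics add up to exactly what a single-pass E-step computes. Everything below, including the data and parameter values, is a hypothetical sketch, not the paper's implementation:

```python
import numpy as np

def normal_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def e_step_stats(x, w, mu, var):
    """E-step sufficient statistics for a univariate normal mixture:
    per-component sums of tau, tau*x, and tau*x**2."""
    dens = np.array([wk * normal_pdf(x, mk, vk)
                     for wk, mk, vk in zip(w, mu, var)])
    tau = dens / dens.sum(axis=0)            # responsibilities, shape (K, n)
    return tau.sum(1), (tau * x).sum(1), (tau * x ** 2).sum(1)

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
w, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

full = e_step_stats(x, w, mu, var)
# Split the data into blocks (as if handled by separate threads) and add the
# per-block statistics: the result matches the single-pass E-step.
blocks = [e_step_stats(b, w, mu, var) for b in np.array_split(x, 4)]
blocked = [sum(parts) for parts in zip(*blocks)]
```

Because each block's statistics are independent sums, the M-step update from the pooled statistics is identical to the serial update, which is what makes the multithreaded version exact rather than approximate.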

7.
Stat Anal Data Min ; 11(1): 5-16, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29725490

ABSTRACT

Calcium is a ubiquitous messenger in neural signaling events. An increasing number of techniques enable the visualization of neurological activity in animal models via luminescent proteins that bind to calcium ions. These techniques generate large volumes of spatially correlated time series. A model-based functional data analysis methodology via Gaussian mixtures is proposed for the clustering of data from such visualizations. The methodology is theoretically justified, and a computationally efficient approach to estimation is suggested. An example analysis of a zebrafish imaging experiment is presented.

8.
Neural Comput ; 29(4): 990-1020, 2017 04.
Article in English | MEDLINE | ID: mdl-28095191

ABSTRACT

Mixture of autoregressions (MoAR) models provide a model-based approach to the clustering of time series data. The maximum likelihood (ML) estimation of MoAR models requires evaluating products of large numbers of densities of normal random variables. In practical scenarios, these products converge to zero as the length of the time series increases, and thus the ML estimation of MoAR models becomes infeasible without the use of numerical tricks. We propose a maximum pseudolikelihood (MPL) estimation approach as an alternative to such numerical tricks. The MPL estimator is proved to be consistent and can be computed with an EM (expectation-maximization) algorithm. Simulations are used to assess the performance of the MPL estimator against that of the ML estimator in cases where the latter could be calculated. An application to the clustering of time series data arising from a resting-state fMRI experiment is presented as a demonstration of the methodology.
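The underflow problem that motivates MPL estimation is easy to reproduce. A minimal sketch, assuming a hypothetical series of length 10,000 whose points all sit at the mode of a standard normal density: the naive product of densities underflows to exactly zero in double precision, while the log-likelihood (the usual "numerical trick") remains finite.

```python
import numpy as np

# Standard normal density at 0 is about 0.399; the product of 10,000 such
# values is roughly 0.4**10000, far below the smallest double (~5e-324).
T = 10_000
dens = np.full(T, 0.3989422804014327)

naive = np.prod(dens)           # underflows to exactly 0.0
stable = np.sum(np.log(dens))   # log-likelihood: finite, about -9189.39
```

With the product identically zero for every candidate parameter value, the ML objective carries no information, which is the infeasibility the abstract refers to.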

9.
Methods Mol Biol ; 1549: 109-117, 2017.
Article in English | MEDLINE | ID: mdl-27975287

ABSTRACT

Comparative profiling proteomics experiments are important tools in biological research. In such experiments, tens to hundreds of thousands of peptides are measured simultaneously, with the goal of inferring protein abundance levels. Statistical evaluation of these datasets is required to determine the proteins that are differentially abundant between the test samples. Previously, we reported the non-normal distribution of SILAC datasets and demonstrated the permutation test to be a superior method for the statistical evaluation of non-normal peptide ratios. This chapter outlines the steps and the R scripts that can be used to perform permutation analysis with false discovery rate control via the Benjamini-Yekutieli method.
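The chapter's workflow uses R scripts; as a language-neutral illustration of the two ingredients, here is a hedged Python sketch of an exact two-sample permutation p-value and Benjamini-Yekutieli adjusted p-values. The function names and toy data are ours, not the chapter's:

```python
import itertools
import numpy as np

def perm_pvalue(a, b):
    """Exact two-sample permutation p-value for |difference in means|."""
    pooled, na = np.concatenate([a, b]), len(a)
    obs = abs(a.mean() - b.mean())
    stats = [abs(pooled[list(idx)].mean()
                 - np.delete(pooled, list(idx)).mean())
             for idx in itertools.combinations(range(len(pooled)), na)]
    return np.mean([s >= obs - 1e-12 for s in stats])

def by_adjust(p):
    """Benjamini-Yekutieli adjusted p-values (valid under dependence)."""
    p = np.asarray(p, dtype=float)
    m = len(p)
    c = np.sum(1.0 / np.arange(1, m + 1))     # the BY correction factor
    order = np.argsort(p)
    adj = p[order] * m * c / np.arange(1, m + 1)
    adj = np.minimum.accumulate(adj[::-1])[::-1]   # enforce monotonicity
    out = np.empty(m)
    out[order] = np.minimum(adj, 1.0)
    return out
```

The BY method differs from Benjamini-Hochberg only by the factor c = sum(1/k), which buys validity under arbitrary dependence among the peptide-level tests at the cost of some power.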


Subject(s)
Computational Biology/methods , Data Interpretation, Statistical , Proteins , Proteome , Proteomics/methods , Amino Acids , Isotope Labeling , Mutation , Proteins/genetics , Proteins/metabolism , Software , Tandem Mass Spectrometry , Web Browser
10.
Neural Comput ; 28(12): 2585-2593, 2016 12.
Article in English | MEDLINE | ID: mdl-27626962

ABSTRACT

The mixture-of-experts (MoE) model is a popular neural network architecture for nonlinear regression and classification. The class of MoE mean functions is known to be able to approximate any unknown target function uniformly, assuming that the target function is from a sufficiently differentiable Sobolev space and that the domain of estimation is a compact unit hypercube. We provide an alternative result, which shows that the class of MoE mean functions is dense in the class of all continuous functions over arbitrary compact domains of estimation. Our result can be viewed as a universal approximation theorem for MoE models. The theorem we present allows MoE users to be confident in applying such models for estimation when data arise from nonlinear and nondifferentiable generative processes.
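To make the MoE mean function concrete, the sketch below evaluates a two-expert model with softmax gating and linear experts, with weights chosen by hand (not fitted) to approximate the continuous but nondifferentiable target |x|, the kind of function covered by the denseness result:

```python
import numpy as np

def moe_mean(x, gate_w, gate_b, exp_w, exp_b):
    """MoE mean function: sum_k softmax(v_k*x + c_k) * (w_k*x + b_k)."""
    logits = np.outer(x, gate_w) + gate_b          # (n, K)
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    gates = np.exp(logits)
    gates /= gates.sum(axis=1, keepdims=True)      # rows sum to 1
    experts = np.outer(x, exp_w) + exp_b           # (n, K)
    return (gates * experts).sum(axis=1)

x = np.linspace(-2, 2, 9)
# Steep gating hands negative x to the expert -x and positive x to the
# expert +x, so the mean function closely tracks |x|:
y = moe_mean(x, gate_w=np.array([-20.0, 20.0]), gate_b=np.zeros(2),
             exp_w=np.array([-1.0, 1.0]), exp_b=np.zeros(2))
```

The kink at x = 0 is exactly the sort of nondifferentiable feature ruled out by the earlier Sobolev-space assumption but permitted under the continuous-function result.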

11.
Biometrics ; 72(4): 1255-1265, 2016 12.
Article in English | MEDLINE | ID: mdl-27123964

ABSTRACT

Understanding how aquatic species grow is fundamental in fisheries because stock assessment often relies on growth dependent statistical models. Length-frequency-based methods become important when more applicable data for growth model estimation are either not available or very expensive. In this article, we develop a new framework for growth estimation from length-frequency data using a generalized von Bertalanffy growth model (VBGM) framework that allows for time-dependent covariates to be incorporated. A finite mixture of normal distributions is used to model the length-frequency cohorts of each month with the means constrained to follow a VBGM. The variances of the finite mixture components are constrained to be a function of mean length, reducing the number of parameters and allowing for an estimate of the variance at any length. To optimize the likelihood, we use a minorization-maximization (MM) algorithm with a Nelder-Mead sub-step. This work was motivated by the decline in catches of the blue swimmer crab (BSC) (Portunus armatus) off the east coast of Queensland, Australia. We test the method with a simulation study and then apply it to the BSC fishery data.
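The VBGM mean constraint, together with a variance modeled as a function of mean length, can be sketched as follows. The parameter values are invented for illustration and are not estimates from the BSC fishery data; the constant-coefficient-of-variation form for the variance is one plausible choice of the mean-dependent variance described above:

```python
import numpy as np

def vbgm_mean(t, l_inf, k, t0):
    """von Bertalanffy growth curve: mean length at age t."""
    return l_inf * (1.0 - np.exp(-k * (t - t0)))

ages = np.arange(0, 37)                       # months (illustrative)
mu = vbgm_mean(ages, l_inf=180.0, k=0.12, t0=0.0)
# Mixture-component standard deviations constrained to the mean, so the
# variance is available at any length with a single extra parameter:
sd = 0.1 * mu
```

Constraining the monthly mixture means to lie on this single curve, and the variances to this one-parameter function of the mean, is what collapses the per-cohort parameter count in the length-frequency likelihood.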


Subject(s)
Brachyura/growth & development , Fisheries/statistics & numerical data , Models, Biological , Models, Statistical , Algorithms , Animals , Normal Distribution , Time Factors
12.
Cytometry A ; 89(1): 30-43, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26492316

ABSTRACT

We present an algorithm for modeling flow cytometry data in the presence of large inter-sample variation. Large-scale cytometry datasets often exhibit some within-class variation due to technical effects such as instrumental differences and variations in data acquisition, as well as subtle biological heterogeneity within the class of samples. Failure to account for such variations in the model may lead to inaccurate matching of populations across a batch of samples and poor performance in the classification of unlabeled samples. In this paper, we describe the Joint Clustering and Matching (JCM) procedure for simultaneous segmentation and alignment of cell populations across multiple samples. Under the JCM framework, a multivariate mixture distribution is used to model the distribution of the expressions of a fixed set of markers for each cell in a sample, such that the components in the mixture model may correspond to the various populations of cells with similar marker expressions (that is, clusters) in the composition of the sample. For each class of samples, an overall class template is formed by adopting random-effects terms to model the inter-sample variation within a class. The construction of a parametric template for each class allows for direct quantification of the differences between the template and each sample, and also between each pair of samples, both within and between classes. A new unclassified sample is then assigned to the class whose template density is closest to the sample's fitted mixture density. For illustration, we use a symmetric form of the Kullback-Leibler divergence as the distance measure between two densities, but other distance measures can also be applied. We demonstrate on four real datasets how the JCM procedure can be used to carry out the tasks of automated clustering and alignment of cell populations, and supervised classification of samples.
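For two single multivariate normal densities the symmetrized Kullback-Leibler distance has a closed form; for full mixture densities, as compared by JCM, the divergence is typically approximated numerically. A minimal sketch of the closed-form building block (our own illustration, not the JCM implementation):

```python
import numpy as np

def kl_mvn(mu0, S0, mu1, S1):
    """KL divergence KL(N0 || N1) between two multivariate normals."""
    d = len(mu0)
    S1inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1inv @ S0) + diff @ S1inv @ diff - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def symmetric_kl(mu0, S0, mu1, S1):
    """Symmetrized form usable as a distance between two fitted densities
    (KL itself is not symmetric in its arguments)."""
    return kl_mvn(mu0, S0, mu1, S1) + kl_mvn(mu1, S1, mu0, S0)
```

The symmetrized form is zero exactly when the two densities coincide, which is what makes it serve as the distance in the classification rule described above.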


Subject(s)
Biomarkers/blood , Computational Biology/methods , Electronic Data Processing/methods , Flow Cytometry/methods , Membrane Proteins/analysis , Pattern Recognition, Automated/methods , Algorithms , Cluster Analysis , Data Interpretation, Statistical , Humans , Leukemia, Myeloid, Acute/diagnosis , Lymphoma, Follicular/diagnosis , Models, Theoretical , West Nile Fever/diagnosis
13.
Comput Stat Data Anal ; 104: 79-90, 2016 Dec.
Article in English | MEDLINE | ID: mdl-28496285

ABSTRACT

The statistical matching problem involves the integration of multiple datasets where some variables are not observed jointly. This missing data pattern leaves most statistical models unidentifiable. Statistical inference is still possible when operating under the framework of partially identified models, where the goal is to bound the parameters rather than to estimate them precisely. In many matching problems, developing feasible bounds on the parameters is equivalent to finding the set of positive-definite completions of a partially specified covariance matrix. Existing methods for characterising the set of possible completions do not extend to high-dimensional problems. A Gibbs sampler to draw from the set of possible completions is proposed. The variation in the observed samples gives an estimate of the feasible region of the parameters. The Gibbs sampler extends easily to high-dimensional statistical matching problems.
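In the smallest nontrivial case, a 3x3 correlation matrix with one never-jointly-observed entry, the feasible completions form an interval with closed-form endpoints. The sketch below checks that a simple rejection sampler (a crude stand-in for the paper's Gibbs sampler, which is what actually scales to high dimensions) recovers the same set. All numeric values are invented for illustration:

```python
import numpy as np

# r12 and r13 observed (say, from two datasets sharing variable 1);
# r23 is never observed jointly.
r12, r13 = 0.8, 0.5

# The completion r23 keeps the matrix positive definite exactly when
# det = 1 + 2*r12*r13*r23 - r12**2 - r13**2 - r23**2 > 0, i.e. r23 lies in
# (r12*r13 - w, r12*r13 + w) with:
half_width = np.sqrt((1 - r12 ** 2) * (1 - r13 ** 2))
lo, hi = r12 * r13 - half_width, r12 * r13 + half_width

def is_pd(r23):
    R = np.array([[1, r12, r13], [r12, 1, r23], [r13, r23, 1]])
    return np.all(np.linalg.eigvalsh(R) > 0)

# Rejection sampling over candidate completions:
rng = np.random.default_rng(0)
draws = rng.uniform(-1, 1, 20_000)
feasible = np.array([r for r in draws if is_pd(r)])
```

The spread of the retained draws estimates the identified region for the unobserved parameter, which is the "bound rather than estimate" goal stated in the abstract.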

14.
PLoS One ; 10(12): e0144370, 2015.
Article in English | MEDLINE | ID: mdl-26689369

ABSTRACT

It is common in plant breeding programs to observe missing values in three-way three-mode multi-environment trial (MET) data. We propose modifications of models for estimating the missing observations in these data arrays and develop a novel approach based on hierarchical clustering. Multiple imputation (MI) was used in four ways: multiple agglomerative hierarchical clustering, a normal distribution model, a normal regression model, and predictive mean matching. The latter three models used both Bayesian and non-Bayesian analysis, while the first approach used a clustering procedure with randomly selected attributes, assigning real values from the nearest neighbour to the entries with missing observations. Different proportions of the data entries in six complete datasets were randomly selected to be missing, and the MI methods were compared on the efficiency and accuracy of estimating those values. The results indicated that the models using Bayesian analysis had slightly higher estimation accuracy than those using non-Bayesian analysis but were more time-consuming. Overall, however, the novel approach of multiple agglomerative hierarchical clustering demonstrated the best performance.
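The nearest-neighbour assignment underlying the clustering-based imputation can be sketched in miniature. This is our own toy hot-deck example, not the paper's MET data or its full multiple-imputation procedure:

```python
import numpy as np

def nn_impute(X):
    """Fill each missing entry with the value from the nearest complete row
    (Euclidean distance over the row's observed attributes)."""
    X = X.astype(float).copy()
    complete = ~np.isnan(X).any(axis=1)
    donors = X[complete]
    for i in np.where(~complete)[0]:
        obs = ~np.isnan(X[i])
        d = np.linalg.norm(donors[:, obs] - X[i, obs], axis=1)
        donor = donors[np.argmin(d)]       # closest fully observed row
        X[i, ~obs] = donor[~obs]           # copy its real values across
    return X

trial = np.array([[5.0, 3.0, 1.0],
                  [5.1, 3.1, np.nan],   # nearest complete row is row 0
                  [9.0, 8.0, 7.0]])
filled = nn_impute(trial)
```

Because donated values are real observed values rather than model predictions, the imputations stay within the empirical range of the data, one of the appeals of the hot-deck idea for trial arrays.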


Subject(s)
Models, Genetic , Plant Breeding , Plants/genetics
15.
Biostatistics ; 16(1): 98-112, 2015 Jan.
Article in English | MEDLINE | ID: mdl-24963011

ABSTRACT

The detection of differentially expressed (DE) genes, that is, genes whose expression levels vary between two or more classes representing different experimental conditions (say, diseases), is one of the most commonly studied problems in bioinformatics. For example, the identification of DE genes between distinct disease phenotypes is an important first step in understanding and developing treatment drugs for the disease. We present a novel approach to the problem of detecting DE genes that is based on a test statistic formed as a weighted (normalized) cluster-specific contrast in the mixed effects of the mixture model used in the first instance to cluster the gene profiles into a manageable number of clusters. The key factor in the formation of our test statistic is the use of gene-specific mixed effects in the cluster-specific contrast. This means that the (soft) assignment of a given gene to a cluster is not crucial, because in addition to class differences between the (estimated) fixed effects terms for a cluster, gene-specific class differences also contribute to the cluster-specific contributions to the final form of the test statistic. The proposed test statistic can be used where the primary aim is to rank the genes in order of evidence against the null hypothesis of no DE. We also show how a P-value can be calculated for each gene for use in multiple hypothesis testing where the intent is to control the false discovery rate (FDR) at some desired level. With the use of publicly available and simulated datasets, we show that the proposed contrast-based approach outperforms other methods commonly used for the detection of DE genes, both in a ranking context, with a lower proportion of false discoveries, and in a multiple hypothesis testing context, with higher power for a specified level of the FDR.


Subject(s)
Cluster Analysis , Data Interpretation, Statistical , Gene Expression Profiling/statistics & numerical data , Gene Expression/genetics , Models, Genetic , Breast Neoplasms/genetics , Female , Humans
16.
PLoS One ; 9(7): e100334, 2014.
Article in English | MEDLINE | ID: mdl-24983991

ABSTRACT

In biomedical applications, an experimenter encounters different potential sources of variation in data, such as individual samples, multiple experimental conditions, and multivariate responses of a panel of markers such as from a signaling network. In multiparametric cytometry, which is often used for analyzing patient samples, such issues are critical. While computational methods can identify cell populations in individual samples, without the ability to automatically match them across samples it is difficult to compare and characterize the populations in typical experiments, such as those responding to various stimulations or distinctive of particular patients or time-points, especially when there are many samples. Joint Clustering and Matching (JCM) is a multi-level framework for simultaneous modeling and registration of populations across a cohort. JCM models every population with a robust multivariate probability distribution. Simultaneously, JCM fits a random-effects model to construct an overall batch template, which is used for registering populations across samples and for classifying new samples. By tackling systems-level variation, JCM supports practical biomedical applications involving large cohorts. Software for fitting the JCM models has been implemented in the R package EMMIX-JCM, available from http://www.maths.uq.edu.au/~gjm/mix_soft/EMMIX-JCM/.


Subject(s)
Computational Biology/methods , Flow Cytometry , Software , Algorithms , Cluster Analysis , Computer Simulation , Humans
17.
IEEE Trans Med Imaging ; 33(8): 1735-48, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24816549

ABSTRACT

Magnetic resonance imaging (MRI) is widely used to study population effects of factors on brain morphometry. Inference from such studies often requires the simultaneous testing of millions of statistical hypotheses. Testing at this scale is known to yield large numbers of false positive results, and control of the false discovery rate (FDR) is commonly employed to mitigate such outcomes. However, current methodologies for FDR control account only for the marginal significance of hypotheses and cannot explicitly account for spatial relationships, such as those between MRI voxels. In this article, we present novel methods that incorporate spatial dependencies into the process of controlling the FDR through the use of Markov random fields. Our method automatically estimates the relationships between spatially dependent hypotheses by means of maximum pseudo-likelihood estimation and the pseudo-likelihood information criterion. We show that our methods have desirable statistical properties with regard to FDR control and outperform noncontextual methods in simulations of dependent-hypothesis scenarios. Our method is applied to investigate the effects of aging on brain morphometry using data from the PATH study, where we find evidence of whole-brain and component-level effects that corresponds to similar findings in the literature.


Subject(s)
Diagnostic Errors/prevention & control , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neuroimaging/methods , Adult , Aged , Aged, 80 and over , Algorithms , Brain/anatomy & histology , Computer Simulation , Humans , Markov Chains , Middle Aged , Young Adult
18.
Brief Bioinform ; 14(4): 402-10, 2013 Jul.
Article in English | MEDLINE | ID: mdl-22988257

ABSTRACT

We consider the classification of microarray gene-expression data. First, attention is given to the supervised case, where the tissue samples are classified with respect to a number of predefined classes and the intent is to assign a new unclassified tissue to one of these classes. The problems of forming a classifier and estimating its error rate are addressed in the context of there being a relatively small number of observations (tissue samples) compared to the number of variables (that is, the genes, which can number in the tens of thousands). We then proceed to the unsupervised case and consider the clustering of the tissue samples and also the clustering of the gene profiles. Both problems can be viewed as being non-standard ones in statistics and we address some of the key issues involved. The focus is on the use of mixture models to effect the clustering for both problems.


Subject(s)
Gene Expression , Genomics/methods , Oligonucleotide Array Sequence Analysis/methods , Child , Cluster Analysis , Databases, Genetic , Humans , Organ Specificity , Precursor Cell Lymphoblastic Leukemia-Lymphoma/metabolism , Transcriptome
19.
BMC Bioinformatics ; 13: 300, 2012 Nov 14.
Article in English | MEDLINE | ID: mdl-23151154

ABSTRACT

BACKGROUND: Time-course gene expression data, such as yeast cell cycle data, may be periodically expressed. For clustering such data, the Fourier series approximations of periodic gene expression currently in use have been found inadequate for modeling the complexity of time-course data, partly because they ignore the dependence between expression measurements over time and the correlation among gene expression profiles. We further investigate the advantages and limitations of the models available in the literature and propose a new mixture model with first-order autoregressive random effects for the clustering of time-course gene-expression profiles. Some simulations and real examples are given to demonstrate the usefulness of the proposed models. RESULTS: We illustrate the applicability of our new model using synthetic and real time-course datasets. We show that our model outperforms existing models, providing more reliable and robust clustering of time-course data. Our model gives superior results when the genetic profiles are correlated, and comparable results when the correlation between gene profiles is weak. In the applications to real time-course data, relevant clusters of coregulated genes are obtained, which are supported by gene-function annotation databases. CONCLUSIONS: Our new model, under our extension of the EMMIX-WIRE procedure, is more reliable and robust for clustering time-course data because it adopts a random effects model that allows for correlation among observations at different time points. It postulates gene-specific random effects with an autocorrelation variance structure that models coregulation within the clusters. The accompanying R package allows flexible specification of the random effects through user-input parameters, enabling improved modelling and consequent clustering of time-course data.


Subject(s)
Gene Expression Profiling/statistics & numerical data , Oligonucleotide Array Sequence Analysis/statistics & numerical data , Software , Transcriptome , Algorithms , Cell Cycle/genetics , Cluster Analysis , Databases, Factual , Gene Expression , Models, Genetic , Saccharomyces cerevisiae/genetics
20.
Proc Natl Acad Sci U S A ; 109(16): E944-53, 2012 Apr 17.
Article in English | MEDLINE | ID: mdl-22451944

ABSTRACT

Evolutionary change in gene expression is generally considered to be a major driver of phenotypic differences between species. We investigated innate immune diversification by analyzing interspecies differences in the transcriptional responses of primary human and mouse macrophages to the Toll-like receptor (TLR)-4 agonist lipopolysaccharide (LPS). By using a custom platform permitting cross-species interrogation coupled with deep sequencing of mRNA 5' ends, we identified extensive divergence in LPS-regulated orthologous gene expression between humans and mice (24% of orthologues were identified as "divergently regulated"). We further demonstrate concordant regulation of human-specific LPS target genes in primary pig macrophages. Divergently regulated orthologues were enriched for genes encoding cellular "inputs" such as cell surface receptors (e.g., TLR6, IL-7Rα) and functional "outputs" such as inflammatory cytokines/chemokines (e.g., CCL20, CXCL13). Conversely, intracellular signaling components linking inputs to outputs were typically concordantly regulated. Functional consequences of divergent gene regulation were confirmed by showing LPS pretreatment boosts subsequent TLR6 responses in mouse but not human macrophages, in keeping with mouse-specific TLR6 induction. Divergently regulated genes were associated with a large dynamic range of gene expression, and specific promoter architectural features (TATA box enrichment, CpG island depletion). Surprisingly, regulatory divergence was also associated with enhanced interspecies promoter conservation. Thus, the genes controlled by complex, highly conserved promoters that facilitate dynamic regulation are also the most susceptible to evolutionary change.


Subject(s)
Gene Expression Profiling , Genetic Variation , Macrophages/metabolism , Toll-Like Receptor 4/genetics , Animals , Cell Line , Cells, Cultured , Chemokine CCL20/genetics , Chemokine CXCL13/genetics , Evolution, Molecular , Female , Gene Expression Regulation/drug effects , Host-Pathogen Interactions , Humans , Lipopolysaccharides/pharmacology , Macrophages/drug effects , Macrophages/microbiology , Male , Mice , Mice, Inbred BALB C , Mice, Inbred C57BL , Mice, Knockout , Oligonucleotide Array Sequence Analysis , Reverse Transcriptase Polymerase Chain Reaction , Salmonella typhimurium/physiology , Species Specificity , Swine , Toll-Like Receptor 4/agonists