Results 1 - 9 of 9
1.
Phytopathology; 94(9): 1027-30, 2004 Sep.
Article in English | MEDLINE | ID: mdl-18943083

ABSTRACT

Bayesian methods are currently much discussed and applied in several disciplines, from molecular biology to engineering. Bayesian inference is the process of fitting a probability model to a set of data and summarizing the results via probability distributions on the parameters of the model and on unobserved quantities such as predictions for new observations. In this paper, after a short introduction to Bayesian inference, we present the basic features of Bayesian methodology using examples from sequencing genomic fragments, analyzing microarray gene-expression levels, reconstructing disease maps, and designing experiments.
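
To make the workflow described in this abstract concrete, here is a minimal sketch of Bayesian inference for a single proportion: a probability model is fit to data, and the results are summarized through the posterior distribution of the parameter and a prediction for a new observation. The Beta-Binomial model, prior, and data values are illustrative assumptions, not an example from the paper.

```python
from scipy import stats

# Hypothetical data: 12 diseased plants in a random sample of 40.
n_plants, n_diseased = 40, 12

# Prior on the unknown prevalence theta: Beta(1, 1), i.e. uniform on [0, 1].
a_prior, b_prior = 1.0, 1.0

# Conjugate update: the posterior is Beta(a + diseased, b + healthy).
posterior = stats.beta(a_prior + n_diseased, b_prior + (n_plants - n_diseased))

# Summaries of the posterior distribution of theta.
print("posterior mean:", round(posterior.mean(), 3))
print("95% credible interval:", tuple(round(q, 3) for q in posterior.interval(0.95)))

# Under this model, the posterior predictive probability that one new plant
# is diseased equals the posterior mean of theta.
print("P(next plant diseased | data):", round(posterior.mean(), 3))
```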

2.
Phytopathology; 94(1): 102-10, 2004 Jan.
Article in English | MEDLINE | ID: mdl-18943826

ABSTRACT

Regional prevalence of soybean Sclerotinia stem rot (SSR), caused by Sclerotinia sclerotiorum, was modeled using tillage practices, soil texture, and weather variables (monthly air temperature and monthly precipitation from April to August) as inputs. Logistic regression was used to estimate the probability of stem rot prevalence with historical disease data from four states of the north-central region of the United States. Potential differences in disease prevalence between states in the region were addressed using regional indicator variables. Two models were developed: model I used spring (April) weather conditions and model II used summer (July and August) weather conditions as input variables. Both models had high explanatory power (78.5 and 77.8% for models I and II, respectively). To assess the explanatory power of the models, each of the four states was divided into small geographic areas, and disease prevalence in each area was estimated using both models. The R(2) values of the regression between observed and estimated SSR prevalence were 0.65 and 0.71 for models I and II, respectively. The same input variables were tested for their significance in explaining within-field SSR incidence using Poisson regression analysis. Although all input variables were significant, they explained only a small amount of the variation in SSR incidence; the R(2) of the regression between observed and estimated SSR incidence was 0.065. Incorporating available site-specific information (i.e., fungicide seed treatment, weed cultivation, and manure and fertilizer applications in a field) slightly improved the amount of SSR incidence explained (R(2) = 0.076). Both models generally overestimated field incidence compared with the observed values. Our results suggest that preseason prediction of regional prevalence would be feasible. However, prediction of field incidence would not be, and a different, site-specific approach should be followed.
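
As a rough illustration of the modeling approach described in this abstract, the sketch below fits a logistic regression for area-level prevalence using simulated tillage and April weather inputs. The variable names, simulated data, and coefficients are hypothetical assumptions, not the paper's data or fitted model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

# Hypothetical inputs: tillage indicator, April temperature (C), April precipitation (mm).
tillage = rng.integers(0, 2, n)
april_temp = rng.normal(10, 3, n)
april_precip = rng.normal(80, 25, n)

# Simulate prevalence (1 = SSR observed in the area) from an assumed linear predictor.
lin_pred = -1.0 - 0.8 * tillage - 0.15 * (april_temp - 10) + 0.02 * (april_precip - 80)
prevalence = rng.binomial(1, 1 / (1 + np.exp(-lin_pred)))

X = sm.add_constant(np.column_stack([tillage, april_temp, april_precip]))
model = sm.Logit(prevalence, X).fit(disp=0)
print(model.summary(xname=["const", "tillage", "april_temp", "april_precip"]))

# Estimated probability of prevalence for a new area (no-till, 12 C, 100 mm in April).
new_area = [[1.0, 0.0, 12.0, 100.0]]
print("estimated P(prevalence):", model.predict(new_area)[0])
```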

3.
Phytopathology; 93(6): 758-64, 2003 Jun.
Article in English | MEDLINE | ID: mdl-18943065

ABSTRACT

Bayesian ideas have recently gained considerable ground in several scientific fields, mainly due to the rapid progress in computing resources. Nevertheless, in plant epidemiology, Bayesian methodology is not yet commonly discussed or applied. Results of a logistic regression analysis of a 4-year data set collected between 1995 and 1998 on soybean Sclerotinia stem rot (SSR) prevalence in the north-central region of the United States were reexamined with Bayesian methodology. The objective of this study was to use Bayesian methodology to explore the level of uncertainty associated with the parameter estimates derived from the logistic regression analysis of SSR prevalence. Our results suggest that the 4-year data set used in the logistic regression analysis of SSR prevalence in the north-central region of the United States may not be informative enough to produce reliable estimates of the effect of some explanatory variables on SSR prevalence. Such confident estimates are necessary for deriving robust conclusions and high-quality predictions.
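
The sketch below illustrates, under simplified assumptions, how a Bayesian analysis can expose the uncertainty in logistic-regression coefficients: a random-walk Metropolis sampler draws from the posterior, and the width of the credible intervals shows how strongly (or weakly) the data constrain each parameter. The simulated data, priors, and tuning constants are illustrative, not the authors' analysis.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 120
x = rng.normal(0, 1, (n, 1))                      # one standardized explanatory variable
X = np.column_stack([np.ones(n), x])
y = rng.binomial(1, 1 / (1 + np.exp(-(0.3 + 0.9 * x[:, 0]))))

def log_post(beta):
    """Log posterior: Bernoulli (logistic) likelihood plus weak Normal(0, 10^2) priors."""
    eta = X @ beta
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))
    logprior = np.sum(stats.norm.logpdf(beta, 0, 10))
    return loglik + logprior

beta = np.zeros(2)
draws = []
for _ in range(20000):
    prop = beta + rng.normal(0, 0.2, 2)           # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(beta):
        beta = prop
    draws.append(beta)
draws = np.array(draws)[5000:]                    # discard burn-in

# Posterior summaries: wide intervals indicate weakly informative data for that coefficient.
for name, col in zip(["intercept", "slope"], draws.T):
    lo, hi = np.percentile(col, [2.5, 97.5])
    print(f"{name}: mean={col.mean():.2f}, 95% interval=({lo:.2f}, {hi:.2f})")
```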

4.
Plant Dis; 87(9): 1048-1058, 2003 Sep.
Article in English | MEDLINE | ID: mdl-30812817

ABSTRACT

Regional prevalence of soybean Sclerotinia stem rot (SSR), caused by Sclerotinia sclerotiorum, was modeled using management practices (tillage, herbicide, manure and fertilizer application, and seed treatment with fungicide) and summer weather variables (mean monthly air temperature and precipitation for the months of June, July, August, and September) as inputs. Logistic regression analysis was used to estimate the probability of stem rot prevalence with disease data from four states in the north-central region of the United States (Illinois, Iowa, Minnesota, and Ohio). Goodness-of-fit criteria indicated that the resulting model explained the observed frequency of occurrence well. The relationship of management practices and weather variables with soybean yield was examined using multiple linear regression (R(2) = 0.27). Variables significant to SSR prevalence, including average air temperature during July and August, precipitation during July, tillage, seed treatment, liquid manure, fertilizer, and herbicide applications, were also associated with high attainable yield. The results suggested that SSR occurrence in the north-central region of the United States was associated with environments of high potential yield. Farmers' decisions about SSR management were then examined, taking into account the effect of management practices on both disease prevalence and expected attainable yield. Bayesian decision procedures were used to combine information from our model (prediction) with farmers' subjective estimation of SSR incidence (personal estimate, based on farmers' previous experience with SSR incidence). MAXIMIN and MAXIMAX criteria were used to incorporate farmers' site-specific past experience with SSR incidence, and optimum actions were derived using the criterion of profit maximization. Our results suggest that management practices should be applied to increase attainable yield despite their association with high disease risk.
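
The toy sketch below illustrates the MAXIMIN and MAXIMAX criteria and profit maximization mentioned in this abstract, using an invented payoff table of net returns for applying or skipping a management practice under low and high SSR incidence. All numbers, including the farmer's subjective probability, are hypothetical.

```python
import numpy as np

actions = ["apply practice", "skip practice"]
states = ["low SSR incidence", "high SSR incidence"]

# payoff[i, j] = net return ($/ha) of action i if state j occurs (hypothetical values).
payoff = np.array([
    [540.0, 500.0],   # apply: small cost if disease stays low, protects yield when high
    [580.0, 410.0],   # skip: best if disease stays low, worst if disease is high
])

# MAXIMIN: choose the action whose worst-case payoff is largest (pessimistic farmer).
maximin_choice = actions[int(np.argmax(payoff.min(axis=1)))]

# MAXIMAX: choose the action whose best-case payoff is largest (optimistic farmer).
maximax_choice = actions[int(np.argmax(payoff.max(axis=1)))]

# Expected-profit maximization given a subjective probability of high incidence.
p_high = 0.3
expected = payoff @ np.array([1 - p_high, p_high])
best_expected = actions[int(np.argmax(expected))]

print("MAXIMIN choice:            ", maximin_choice)
print("MAXIMAX choice:            ", maximax_choice)
print("Max expected profit choice:", best_expected, expected.round(1))
```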

5.
Genet Sel Evol; 33(4): 337-67, 2001.
Article in English | MEDLINE | ID: mdl-11559482

ABSTRACT

Markov chain Monte Carlo (MCMC) methods have been proposed to overcome computational problems in linkage and segregation analyses. This approach involves sampling genotypes at the marker and trait loci. Scalar-Gibbs is easy to implement, and it is widely used in genetics. However, the Markov chain that corresponds to scalar-Gibbs may not be irreducible when the marker locus has more than two alleles, and even when the chain is irreducible, mixing has been observed to be slow. These problems do not arise if the genotypes are sampled jointly from the entire pedigree. This paper proposes a method to jointly sample genotypes. The method combines the Elston-Stewart algorithm and iterative peeling, and is called the ESIP sampler. For a hypothetical pedigree, genotype probabilities are estimated from samples obtained using ESIP and also scalar-Gibbs. Approximate probabilities were also obtained by iterative peeling. Comparisons of these with exact genotypic probabilities obtained by the Elston-Stewart algorithm showed that ESIP and iterative peeling yielded genotypic probabilities that were very close to the exact values. Nevertheless, estimated probabilities from scalar-Gibbs with a chain of length 235 000, including a burn-in of 200 000 steps, were less accurate than probabilities estimated using ESIP with a chain of length 10 000, with a burn-in of 5 000 steps. The effective chain size (ECS) was estimated from the last 25 000 elements of the chain of length 125 000. For one of the ESIP samplers, the ECS ranged from 21 579 to 22 741, while for the scalar-Gibbs sampler, the ECS ranged from 64 to 671. Genotype probabilities were also estimated for a large real pedigree consisting of 3 223 individuals. For this pedigree, it is not feasible to obtain exact genotype probabilities by the Elston-Stewart algorithm. ESIP and iterative peeling yielded very similar results. However, results from scalar-Gibbs were less accurate.
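
As a side note on the effective chain size (ECS) comparisons reported above, the sketch below shows one common way to estimate an effective sample size from MCMC output, using the lag autocorrelations of the chain. The simulated AR(1) chains and the truncation rule are illustrative assumptions; this is not the ESIP or scalar-Gibbs sampler itself.

```python
import numpy as np

def effective_chain_size(chain):
    """ESS = N / (1 + 2 * sum of lag autocorrelations), truncated at the first non-positive lag."""
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    n = len(x)
    acf_sum = 0.0
    for lag in range(1, n // 2):
        rho = np.dot(x[:-lag], x[lag:]) / np.dot(x, x)
        if rho <= 0.0:
            break
        acf_sum += rho
    return n / (1.0 + 2.0 * acf_sum)

rng = np.random.default_rng(2)
n = 25000

# A nearly independent chain versus a slowly mixing AR(1) chain (phi = 0.99),
# mimicking the contrast between a joint sampler and a poorly mixing single-site sampler.
fast_chain = rng.normal(size=n)
slow_chain = np.empty(n)
slow_chain[0] = 0.0
for t in range(1, n):
    slow_chain[t] = 0.99 * slow_chain[t - 1] + rng.normal()

print("ESS, fast-mixing chain:", round(effective_chain_size(fast_chain)))
print("ESS, slow-mixing chain:", round(effective_chain_size(slow_chain)))
```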


Subject(s)
Genotype , Pedigree , Sampling Studies , Algorithms , Animals , Dogs , Gene Order , Genetic Linkage , Genetic Markers/genetics , Humans , Markov Chains , Monte Carlo Method , Nuclear Family , Reproducibility of Results
6.
Public Health Nutr; 2(1): 23-33, 1999 Mar.
Article in English | MEDLINE | ID: mdl-10452728

ABSTRACT

OBJECTIVE: To describe an approach for assessing the prevalence of nutrient inadequacy in a group, using daily intake data and the new Estimated Average Requirement (EAR).

DESIGN: Observing the proportion of individuals in a group whose usual intake of a nutrient is below their requirement for the nutrient is not possible in general. We argue that this proportion can be well approximated in many cases by counting, instead, the number of individuals in the group whose intakes are below the EAR for the nutrient.

SETTING: This is a methodological paper, and thus the emphasis is not on analysing specific data sets. To illustrate one of the statistical methods presented herein, we have used the 1989-91 Continuing Survey of Food Intakes by Individuals.

RESULTS: We show that the EAR and a reliable estimate of the usual intake distribution in the group of interest can be used to assess the proportion of individuals in the group whose usual intakes are not meeting their requirements. This approach, while simple, does not perform well in every case. For example, it cannot be used for energy, since intakes and requirements for energy are highly correlated. Similarly, iron in menstruating women presents difficulties because the distribution of iron requirements in this group is known to be skewed.

CONCLUSIONS: The apparently intractable problem of assessing the proportion of individuals in a group whose usual intakes of a nutrient are not meeting their requirements can be solved by comparing usual intakes with the EAR for the nutrient, as long as some conditions are met. These are: (1) intakes and requirements for the nutrient must be independent, (2) the distribution of requirements must be approximately symmetric around its mean, the EAR, and (3) the variance of the distribution of requirements should be smaller than the variance of the usual intake distribution.
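
A minimal sketch of the EAR cut-point calculation described here: estimate the prevalence of inadequacy as the proportion of the usual-intake distribution falling below the EAR. The lognormal intake distribution, the EAR value, and the nutrient are hypothetical and used only to illustrate the arithmetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical usual intakes (mg/day) for a group, e.g. drawn from a fitted usual-intake model.
usual_intakes = rng.lognormal(mean=np.log(90.0), sigma=0.35, size=5000)

ear = 75.0  # hypothetical Estimated Average Requirement (mg/day)

# Cut-point estimate: proportion of the group with usual intake below the EAR.
prevalence_inadequate = np.mean(usual_intakes < ear)
print(f"Estimated prevalence of inadequacy: {prevalence_inadequate:.1%}")

# The approximation relies on the conditions listed in the abstract: intakes independent
# of requirements, a requirement distribution symmetric about the EAR, and requirement
# variance smaller than the usual-intake variance.
```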


Subject(s)
Models, Statistical , Nutrition Disorders/epidemiology , Population Surveillance/methods , Confidence Intervals , Energy Intake , Humans , Linear Models , Monte Carlo Method , Nutrition Disorders/prevention & control , Nutritional Requirements , Prevalence , Risk
7.
J Nutr; 127(6): 1106-12, 1997 Jun.
Article in English | MEDLINE | ID: mdl-9187624

ABSTRACT

Assessment of the dietary intake of a population must consider the large within-person variation in daily intakes. A 1986 report by the National Academy of Sciences (NAS), commissioned by the U.S. Department of Agriculture (USDA), marked an important milestone in the history of this issue. Since that time, USDA has been working cooperatively with statisticians at Iowa State University (ISU), who have further developed the measurement error model approach proposed by NAS. The method developed by the ISU statisticians can be used to estimate usual dietary intake distributions for a population but not for specific individuals. It is based on the assumption that an individual can more accurately recall and describe the foods eaten yesterday than foods eaten at an earlier time. The method requires as few as two independent days of nutrient intake information, or three consecutive days for at least a subsample of the individuals. It removes biases of subsequent reporting days compared with the first day, and temporal effects such as day-of-the-week and seasonal effects can be easily removed. The method developed at ISU is described conceptually and applied to data collected in the 1989-91 USDA Continuing Survey of Food Intakes by Individuals to estimate the proportion of men and women age 20 y and older having "usual" (long-run average) intakes below 30% of energy from fat, below the 1989 Recommended Dietary Allowances for vitamin A and folate, and above 1000 micrograms for folate. These results were compared with the results from the distributions of 1-d intakes and of 3-d mean intakes to demonstrate the effect of within-person variation and asymmetry on usual nutrient intakes in a population.
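
The sketch below conveys, in a deliberately simplified form, the core idea of adjusting for within-person variation: shrink each person's short-term mean toward the group mean so that the adjusted distribution carries roughly the between-person variance only. This is not the full ISU measurement-error method (which also handles skewness, reporting-day biases, and day-of-week or seasonal effects); the simulated two-day design and all values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_people, n_days = 400, 2

# Simulate daily intakes: person-level usual intake plus large day-to-day noise.
usual = rng.normal(2000, 300, n_people)                     # true usual intakes (kcal/day)
daily = usual[:, None] + rng.normal(0, 600, (n_people, n_days))

person_mean = daily.mean(axis=1)
grand_mean = person_mean.mean()

# Variance components from a one-way layout: within-person and between-person.
within_var = daily.var(axis=1, ddof=1).mean()
between_var = max(person_mean.var(ddof=1) - within_var / n_days, 0.0)

# Shrink person means toward the grand mean to remove the within-person noise
# that remains in a 2-day mean.
shrink = np.sqrt(between_var / (between_var + within_var / n_days))
adjusted = grand_mean + shrink * (person_mean - grand_mean)

cutoff = 1600.0  # hypothetical cutoff of interest (kcal/day)
print("Proportion below cutoff, 2-day means:", np.mean(person_mean < cutoff).round(3))
print("Proportion below cutoff, adjusted   :", np.mean(adjusted < cutoff).round(3))
print("Proportion below cutoff, true usual :", np.mean(usual < cutoff).round(3))
```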


Subject(s)
Diet Surveys , Adult , Dietary Fats/administration & dosage , Energy Intake , Female , Folic Acid/administration & dosage , Humans , Male , Mental Recall , Nutrition Assessment , Nutritional Requirements , Population Surveillance/methods , United States , Vitamin A/administration & dosage
8.
Bull Math Biol; 53(4): 579-89, 1991.
Article in English | MEDLINE | ID: mdl-1933030

ABSTRACT

A mathematical model (Kliemann, W. 1987. Bull. Math. Biol. 49, 135-152) that predicts the quantitative branching pattern of dendritic trees was evaluated using the apical and basal dendrites of rat hippocampal neurons. A Wald chi-square test statistic was developed for the branching pattern of dendritic trees and for the distribution of the maximal order of the tree. Using this statistic, we obtained a reasonable, but not excellent, fit of the mathematical model to the dendritic data. The model's predictability of the branching pattern was greatly enhanced by replacing one of the assumptions of the original model, "splitting of branches at all dendritic orders is stochastically independent", with a new assumption, "branches are more likely to split in areas where there is already a high density of branches". The modified model delivered an excellent fit for basal dendrites and for the apical dendrites of hippocampal neurons from young rats (30-34 days postpartum). This indicates that for these cells the development of dendritic patterns is the result of both a purely random component and a systematic component, where the latter depends on the density of dendritic branches in the brain area considered. For apical dendrites there is a trend towards decreasing pattern predictability with increasing age. This appears to reflect the late arrival of afferents and subsequent synaptogenesis proximally on the apical dendritic tree of hippocampal neurons.
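
The toy simulation below contrasts the two splitting assumptions discussed in this abstract: independent splitting with a fixed probability versus splitting that becomes more likely where branch density is already high. The probabilities and the density rule are invented for illustration and are not the fitted model from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def grow_tree(max_order, split_prob):
    """Grow one tree; split_prob(order, n_branches) is the per-branch probability of splitting."""
    counts = [1]                                             # a single branch at order 1
    for order in range(1, max_order):
        n = counts[-1]
        splits = rng.binomial(n, split_prob(order, n))       # each branch splits in two or stops
        counts.append(2 * splits)
        if counts[-1] == 0:
            break
    return counts

def independent_rule(order, n):
    # Assumption (a): constant split probability, independent of branch density.
    return 0.5

def density_rule(order, n):
    # Assumption (b): splitting is more likely where many branches already exist (capped at 0.9).
    return min(0.3 + 0.05 * n, 0.9)

for name, rule in [("independent", independent_rule), ("density-dependent", density_rule)]:
    max_orders = [sum(c > 0 for c in grow_tree(8, rule)) for _ in range(2000)]
    print(f"{name:18s} mean maximal order: {np.mean(max_orders):.2f}")
```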


Subject(s)
Dendrites/ultrastructure , Animals , Evaluation Studies as Topic , Models, Neurological , Models, Theoretical , Rats , Rats, Inbred Strains , Stochastic Processes
9.
Biometrics; 43(4): 929-39, 1987 Dec.
Article in English | MEDLINE | ID: mdl-3509964

ABSTRACT

A mixed-model procedure for analysis of censored data assuming a multivariate normal distribution is described. A Bayesian framework is adopted which allows for estimation of fixed effects and variance components and prediction of random effects when records are left-censored. The procedure can be extended to right- and two-tailed censoring. The model employed is a generalized linear model, and the estimation equations resemble those arising in analysis of multivariate normal or categorical data with threshold models. Estimates of variance components are obtained using expressions similar to those employed in the EM algorithm for restricted maximum likelihood (REML) estimation under normality.
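
As a much-reduced illustration of the EM-style reasoning behind the estimation equations described here, the sketch below runs EM for a single left-censored normal sample: censored records are replaced by their conditional first and second moments given that they lie below the limit, and the mean and variance are re-estimated. It omits fixed effects, random effects, and REML entirely; the data and detection limit are simulated assumptions, not the paper's mixed-model procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
true_mu, true_sigma, limit = 10.0, 2.0, 9.0

y_full = rng.normal(true_mu, true_sigma, 500)
cens = y_full < limit                         # left-censored records: only "below limit" is known
y = np.where(cens, limit, y_full)             # censored values are recorded at the limit

mu, var = y.mean(), y.var()                   # crude starting values
for _ in range(200):
    # E-step: conditional moments of a normal variable truncated above at the limit.
    sigma = np.sqrt(var)
    a = (limit - mu) / sigma
    lam = stats.norm.pdf(a) / stats.norm.cdf(a)           # Mills-type ratio for Y < limit
    e_y = mu - sigma * lam                                # E[Y | Y < limit]
    e_y2 = var * (1 - a * lam) + 2 * mu * e_y - mu**2     # E[Y^2 | Y < limit]
    # M-step: complete-data estimates with censored records replaced by conditional moments.
    y_hat = np.where(cens, e_y, y)
    y2_hat = np.where(cens, e_y2, y**2)
    mu = y_hat.mean()
    var = y2_hat.mean() - mu**2

print(f"naive (treat limit as the value): mean={y.mean():.2f}, sd={y.std():.2f}")
print(f"EM for left-censored data       : mean={mu:.2f}, sd={np.sqrt(var):.2f}")
print(f"true values                     : mean={true_mu:.2f}, sd={true_sigma:.2f}")
```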


Subject(s)
Animal Husbandry , Models, Biological , Reproduction , Analysis of Variance , Animals , Biometry , Female , Male , Mice