Results 1 - 20 of 40
1.
Aust N Z J Stat ; 58(1): 99-119, 2016 Mar.
Article in English | MEDLINE | ID: mdl-27478405

ABSTRACT

Quadratic forms capture multivariate information in a single number, making them useful, for example, in hypothesis testing. When a quadratic form is large and hence interesting, it might be informative to partition the quadratic form into contributions of individual variables. In this paper it is argued that meaningful partitions can be formed, although the precise partition obtained depends on the criterion used to select it. An intuitively reasonable criterion is proposed and the partition to which it leads is determined. The partition is based on a transformation that maximises the sum of the correlations between individual variables and the variables to which they transform under a constraint. Properties of the partition, including optimality properties, are examined. The contributions of individual variables to a quadratic form are less clear-cut when variables are collinear, and forming new variables through rotation can lead to greater transparency. The transformation is adapted so that it has an invariance property under such rotation, whereby the assessed contributions are unchanged for variables that the rotation does not affect directly. Application of the partition to Hotelling's one- and two-sample test statistics, Mahalanobis distance and discriminant analysis is described and illustrated through examples. It is shown that bootstrap confidence intervals for the contributions of individual variables to a partition are readily obtained.
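
The partition in this paper is defined by its own corr-max criterion; purely as an illustration of the general idea (an exact additive split of a quadratic form into per-variable terms), the sketch below uses the symmetric inverse square root of the covariance matrix. The function name and data are hypothetical, and this is not necessarily the transformation derived in the paper.

```python
import numpy as np

def partition_quadratic_form(x, mean, cov):
    """Split Q = (x-mean)' cov^{-1} (x-mean) into additive per-variable terms
    via the symmetric inverse square root of cov (illustrative only)."""
    d = np.asarray(x, float) - np.asarray(mean, float)
    eigval, eigvec = np.linalg.eigh(cov)                       # cov^{-1/2}
    inv_sqrt = eigvec @ np.diag(1.0 / np.sqrt(eigval)) @ eigvec.T
    y = inv_sqrt @ d
    contributions = y ** 2          # one non-negative term per variable
    return contributions, contributions.sum()

# Hypothetical 3-variable example
cov = np.array([[1.0, 0.6, 0.3],
                [0.6, 1.0, 0.4],
                [0.3, 0.4, 1.0]])
contrib, Q = partition_quadratic_form([2.1, 0.4, -1.2], [0.0, 0.0, 0.0], cov)
print(contrib, Q)   # the terms sum exactly to the quadratic form
```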

2.
PLoS One ; 11(8): e0160759, 2016.
Article in English | MEDLINE | ID: mdl-27513749

ABSTRACT

A large-scale multiple surveillance system for infectious disease outbreaks has been in operation in England and Wales since the early 1990s. Changes to the statistical algorithm at the heart of the system were proposed, and the purpose of this paper is to compare two new algorithms with the original algorithm. Test data to evaluate performance are created from weekly counts of the number of cases of each of more than 2000 diseases over a twenty-year period. The time series of each disease is separated into one series giving the baseline (background) disease incidence and a second series giving disease outbreaks. One series is shifted forward by twelve months and the two are then recombined, giving a realistic series in which it is known where outbreaks have been added. The metrics used to evaluate performance include a scoring rule that appropriately balances sensitivity against specificity and is sensitive to variation in probabilities near 1. In the context of disease surveillance, a scoring rule can be adapted to reflect the size of outbreaks, and this adaptation was applied here. Results indicate that the two new algorithms are comparable to each other and better than the algorithm they were designed to replace.
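
The paper's scoring rule is tailored to its surveillance setting; the sketch below only illustrates the general mechanics of scoring weekly outbreak probabilities against known outbreak labels with a logarithmic score (which is sensitive to variation in probabilities near 1), weighting weeks by outbreak size. The algorithm outputs, labels, and sizes are hypothetical.

```python
import numpy as np

def weighted_log_score(probs, outbreaks, sizes):
    """Mean negative log score, with each outbreak week weighted by its size
    (weight 1 for non-outbreak weeks). Lower is better."""
    probs = np.clip(np.asarray(probs, float), 1e-12, 1 - 1e-12)
    outbreaks = np.asarray(outbreaks, bool)
    weights = np.where(outbreaks, np.asarray(sizes, float), 1.0)
    loss = np.where(outbreaks, -np.log(probs), -np.log(1.0 - probs))
    return np.average(loss, weights=weights)

# Hypothetical weekly outbreak probabilities from two algorithms
truth = [0, 0, 1, 0, 1, 0]         # weeks with an added outbreak
sizes = [0, 0, 12, 0, 3, 0]        # number of excess cases added
alg_a = [0.05, 0.10, 0.90, 0.20, 0.60, 0.02]
alg_b = [0.02, 0.30, 0.70, 0.10, 0.40, 0.05]
print(weighted_log_score(alg_a, truth, sizes))
print(weighted_log_score(alg_b, truth, sizes))
```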


Subject(s)
Algorithms , Disease Outbreaks/statistics & numerical data , Models, Statistical , Public Health Surveillance/methods , England , False Positive Reactions , Humans
3.
Comput Stat Data Anal ; 99: 115-130, 2016 Jul.
Article in English | MEDLINE | ID: mdl-27375307

ABSTRACT

Mahalanobis distance may be used as a measure of the disparity between an individual's profile of scores and the average profile of a population of controls. The degree to which the individual's profile is unusual can then be equated to the proportion of the population who would have a larger Mahalanobis distance than the individual. Several estimators of this proportion are examined. These include plug-in maximum likelihood estimators, medians, the posterior mean from a Bayesian probability matching prior, an estimator derived from a Taylor expansion, and two forms of polynomial approximation, one based on Bernstein polynomials and one on a quadrature method. Simulations show that some estimators, including the commonly used plug-in maximum likelihood estimators, can have substantial bias for small or moderate sample sizes. The polynomial approximations yield estimators that have low bias, with the quadrature method marginally preferred over Bernstein polynomials. However, the polynomial estimators sometimes yield infeasible estimates that are outside the 0-1 range. While none of the estimators is perfectly unbiased, the median estimators live up to their name: in simulations their estimates of the proportion have a median error close to zero. The standard median estimator can give unrealistically small estimates (including 0) and an adjustment is proposed that ensures estimates are always credible. This latter estimator has much to recommend it when unbiasedness is not of paramount importance, while the quadrature method is recommended when bias is the dominant issue.
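
For context, the commonly used plug-in maximum likelihood estimator referred to above can be written down directly: under multivariate normality the squared Mahalanobis distance of a random member of the population follows a chi-square distribution with k degrees of freedom, so the proportion with a larger distance is a chi-square tail area evaluated at the sample mean and covariance. The sketch below gives only this plug-in version (which, as the abstract notes, can be badly biased for small samples); the data are hypothetical.

```python
import numpy as np
from scipy import stats

def plugin_proportion_larger(case, controls):
    """Plug-in estimate of the proportion of the control population whose
    squared Mahalanobis distance exceeds the case's (chi-square tail area)."""
    controls = np.asarray(controls, float)
    k = controls.shape[1]
    mean = controls.mean(axis=0)
    cov = np.cov(controls, rowvar=False)        # sample covariance matrix
    d = np.asarray(case, float) - mean
    d2 = d @ np.linalg.solve(cov, d)            # squared Mahalanobis distance
    return stats.chi2.sf(d2, df=k)

rng = np.random.default_rng(1)
controls = rng.multivariate_normal([0, 0, 0], np.eye(3), size=40)
print(plugin_proportion_larger([1.5, -0.5, 2.0], controls))
```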

6.
Emerg Infect Dis ; 19(1): 35-42, 2013 Jan.
Article in English | MEDLINE | ID: mdl-23260848

ABSTRACT

Outbreak detection systems for use with very large multiple surveillance databases must be suited both to the data available and to the requirements of full automation. To inform the development of more effective outbreak detection algorithms, we analyzed 20 years of data (1991-2011) from a large laboratory surveillance database used for outbreak detection in England and Wales. The data relate to 3,303 distinct types of infectious pathogens, with a frequency range spanning 6 orders of magnitude. Several hundred organism types were reported each week. We describe the diversity of seasonal patterns, trends, artifacts, and extra-Poisson variability to which an effective multiple laboratory-based outbreak detection system must adjust. We provide empirical information to guide the selection of simple statistical models for automated surveillance of multiple organisms, in the light of the key requirements of such outbreak detection systems, namely, robustness, flexibility, and sensitivity.
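
One simple way to see the extra-Poisson variability described above is to compare the variance of weekly counts with their mean; for Poisson counts the ratio is about 1. The sketch below is a generic check on simulated weekly counts, not the paper's analysis.

```python
import numpy as np

def dispersion_ratio(weekly_counts):
    """Variance-to-mean ratio of weekly counts; values well above 1 indicate
    extra-Poisson (over-dispersed) variability."""
    counts = np.asarray(weekly_counts, float)
    return counts.var(ddof=1) / counts.mean()

# Hypothetical weekly counts for one organism type (~10 years of weeks)
rng = np.random.default_rng(0)
poisson_like = rng.poisson(5, size=520)
overdispersed = rng.negative_binomial(2, 2 / 7, size=520)   # same mean, larger variance
print(dispersion_ratio(poisson_like))    # close to 1
print(dispersion_ratio(overdispersed))   # substantially above 1
```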


Subject(s)
Bacterial Infections/epidemiology , Biosurveillance/methods , Disease Outbreaks , Mycoses/epidemiology , Public Health Informatics/statistics & numerical data , Virus Diseases/epidemiology , Algorithms , Automation , Bacteria/growth & development , Bacterial Load , Colony Count, Microbial , England/epidemiology , Fungi/growth & development , Humans , Incidence , Models, Statistical , Reproducibility of Results , Sensitivity and Specificity , Viruses/growth & development , Wales/epidemiology
7.
Clin Neuropsychol ; 26(7): 1154-65, 2012.
Article in English | MEDLINE | ID: mdl-22985303

ABSTRACT

Point and interval estimates of percentile ranks are useful tools in assisting with the interpretation of neurocognitive test results. We provide percentile ranks for raw subscale scores on the Texas Functional Living Scale (TFLS; Cullum, Weiner, & Saine, 2009) using the TFLS standardization sample data (N = 800). Percentile ranks with interval estimates are also provided for the overall TFLS T score. Conversion tables are provided along with the option of obtaining the point and interval estimates using a computer program written to accompany this paper (TFLS_PRs.exe). The percentile ranks for the subscales offer an alternative to using the cumulative percentage tables in the test manual and provide a useful and quick way for neuropsychologists to assimilate information on the case's profile of scores on the TFLS subscales. The provision of interval estimates for the percentile ranks is in keeping with the contemporary emphasis on the use of confidence intervals in psychological statistics.
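
As an illustration of the point estimate only (the interval estimates require the methods in the paper and its program), a percentile rank for a raw score in a normative sample can be computed with the mid-probability convention sketched below; the normative scores shown are hypothetical and this is not the TFLS_PRs.exe implementation.

```python
import numpy as np

def percentile_rank(raw_score, normative_scores):
    """Mid-probability percentile rank: percentage of the normative sample
    below the score plus half the percentage exactly at the score."""
    scores = np.asarray(normative_scores, float)
    below = np.mean(scores < raw_score)
    at = np.mean(scores == raw_score)
    return 100.0 * (below + 0.5 * at)

# Hypothetical normative subscale scores
norms = [14, 15, 15, 16, 17, 17, 17, 18, 19, 20]
print(percentile_rank(17, norms))   # 55.0 for this sample
```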


Subject(s)
Activities of Daily Living/psychology , Data Interpretation, Statistical , Neuropsychological Tests , Psychometrics , Reference Standards , Humans , Models, Statistical , Neuropsychological Tests/standards , Neuropsychological Tests/statistics & numerical data , Psychometrics/instrumentation , Psychometrics/standards , Psychometrics/statistics & numerical data
8.
Psychol Assess ; 24(4): 801-14, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22449035

ABSTRACT

Regression equations have many useful roles in psychological assessment. Moreover, there is a large reservoir of published data that could be used to build regression equations; these equations could then be employed to test a wide variety of hypotheses concerning the functioning of individual cases. This resource is currently underused because (a) not all psychologists are aware that regression equations can be built not only from raw data but also using only basic summary data for a sample, and (b) the computations involved are tedious and prone to error. In an attempt to overcome these barriers, Crawford and Garthwaite (2007) provided methods to build and apply simple linear regression models using summary statistics as data. In the present study, we extend this work to set out the steps required to build multiple regression models from sample summary statistics and the further steps required to compute the associated statistics for drawing inferences concerning an individual case. We also develop, describe, and make available a computer program that implements these methods. Although there are caveats associated with the use of the methods, these need to be balanced against pragmatic considerations and against the alternative of either entirely ignoring a pertinent data set or using it informally to provide a clinical "guesstimate." Upgraded versions of earlier programs for regression in the single case are also provided; these add the point and interval estimates of effect size developed in the present article.
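
The key algebraic fact behind building regression equations from summary data is that, given the predictor correlation matrix, the predictor-criterion correlations, and the means and standard deviations, the standardized coefficients are obtained by solving R_xx b = r_xy and can then be rescaled to raw-score form. The sketch below shows that step under hypothetical summary statistics; the inferential machinery for individual cases described in the paper is not included.

```python
import numpy as np

def regression_from_summary(R_xx, r_xy, sd_x, sd_y, mean_x, mean_y):
    """Raw-score multiple regression coefficients built from summary data only."""
    R_xx, r_xy = np.asarray(R_xx, float), np.asarray(r_xy, float)
    beta_std = np.linalg.solve(R_xx, r_xy)          # standardized weights
    b = beta_std * sd_y / np.asarray(sd_x, float)   # raw-score slopes
    intercept = mean_y - b @ np.asarray(mean_x, float)
    return b, intercept

# Hypothetical summary statistics for two predictors (e.g. age, education)
R_xx = [[1.0, 0.3], [0.3, 1.0]]
r_xy = [0.40, 0.25]
b, a = regression_from_summary(R_xx, r_xy, sd_x=[12.0, 2.5], sd_y=10.0,
                               mean_x=[45.0, 13.0], mean_y=50.0)
print(b, a)
```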


Subject(s)
Data Interpretation, Statistical , Psychology/methods , Regression Analysis , Humans
9.
J Neuropsychol ; 6(2): 192-211, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22257377

ABSTRACT

OBJECTIVES: To develop supplementary methods for the analysis of the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) in neuropsychological assessment. DESIGN AND METHODS: Psychometric. RESULTS: The following methods are made available: (a) provision of traditional confidence intervals (CIs) on index scores; (b) expression of the endpoints of CIs as percentile ranks; (c) quantification of the number of abnormally low index scores exhibited by a case and accompanying estimate of the percentage of the normative population expected to exhibit at least this number of low scores; (d) quantification of the reliability and abnormality of index score deviations from an individual's index score mean (thereby offering an alternative to the pairwise approach to index score comparisons available in the WAIS-IV manual); (e) provision of CIs on an individual's deviation scores or pairwise difference scores; (f) estimation of the percentage of the normative population expected to exhibit at least as many abnormal deviations or abnormal pairwise differences as a case; and (g) calculation of a case's Mahalanobis distance index (MDI), thereby providing a multivariate estimate of the overall abnormality of an index score profile. With the exception of the MDI, all the methods can be applied using tables provided in this paper. However, for ease and speed of application, and to reduce the possibility of clerical error, all the methods have also been implemented in a computer program. CONCLUSIONS: The methods are useful for neuropsychological interpretation of the WAIS-IV.
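
For item (g), the sketch below shows the kind of computation that underlies a Mahalanobis distance index: the case's index scores are standardized, combined with the inter-index correlation matrix, and the overall abnormality is expressed as a chi-square tail area (the estimated percentage of the normative population with a more unusual profile). The correlation values are placeholders, not the published WAIS-IV values, and the published program may differ in detail.

```python
import numpy as np
from scipy import stats

def mahalanobis_distance_index(scores, means, sds, corr):
    """Squared Mahalanobis distance of a profile of index scores and the
    estimated percentage of the normative population with a more unusual
    profile (multivariate-normal assumption)."""
    z = (np.asarray(scores, float) - np.asarray(means, float)) / np.asarray(sds, float)
    d2 = z @ np.linalg.solve(np.asarray(corr, float), z)
    pct_more_unusual = 100.0 * stats.chi2.sf(d2, df=len(z))
    return d2, pct_more_unusual

# Four index scores (mean 100, SD 15); the correlation matrix is a placeholder
corr = np.array([[1.0, 0.6, 0.5, 0.5],
                 [0.6, 1.0, 0.5, 0.5],
                 [0.5, 0.5, 1.0, 0.5],
                 [0.5, 0.5, 0.5, 1.0]])
print(mahalanobis_distance_index([78, 112, 95, 84], [100] * 4, [15] * 4, corr))
```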


Subject(s)
Cognition Disorders/diagnosis , Neuropsychological Tests , Wechsler Scales , Confidence Intervals , Humans , Intelligence Tests , Psychometrics , Reproducibility of Results , Software
10.
Psychol Assess ; 24(2): 365-74, 2012 Jun.
Article in English | MEDLINE | ID: mdl-21942233

ABSTRACT

Supplementary methods for the analysis of the Repeatable Battery for the Assessment of Neuropsychological Status are made available, including (a) quantifying the number of abnormally low Index scores and abnormally large differences exhibited by a case and accompanying this with estimates of the percentages of the normative population expected to exhibit at least this number of low scores and large differences, (b) estimating the overall abnormality of a case's Index score profile using the Mahalanobis Distance Index (MDI), (c) reporting confidence limits on differences between a case's Index scores, and (d) offering the option of applying a sequential Bonferroni correction when testing for reliable differences. With the exception of the MDI, all the methods can be obtained using the formulas and tables provided in this article. However, for the convenience of clinicians, and to reduce the possibility of clerical error, the methods have also been implemented in a computer program. More importantly, the program allows the methods to be applied when only a subset of the Indexes is available. The program can be downloaded from www.abdn.ac.uk/~psy086/dept/RBANS_Supplementary_Analysis.htm
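
Because index scores are correlated, the percentage of the normative population expected to show at least k abnormally low scores cannot be read from a binomial table; one standard way to estimate it is by Monte Carlo simulation from a multivariate normal model of the index scores, as sketched below. The correlation matrix here is a placeholder and this is not the published program.

```python
import numpy as np

def pct_with_k_or_more_low(corr, cutoff_z=-1.645, k=2, n_sims=200_000, seed=0):
    """Monte Carlo estimate of the percentage of the normative population
    expected to show at least k index scores below the cutoff, given the
    inter-index correlation matrix (multivariate-normal assumption)."""
    rng = np.random.default_rng(seed)
    m = len(corr)
    z = rng.multivariate_normal(np.zeros(m), np.asarray(corr, float), size=n_sims)
    n_low = (z < cutoff_z).sum(axis=1)
    return 100.0 * np.mean(n_low >= k)

corr = 0.5 * np.ones((5, 5)) + 0.5 * np.eye(5)   # placeholder correlations
print(pct_with_k_or_more_low(corr, cutoff_z=-1.645, k=2))
```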


Subject(s)
Cognition Disorders/diagnosis , Data Interpretation, Statistical , Neuropsychological Tests/statistics & numerical data , Software , Adult , Aged , Aged, 80 and over , Health/statistics & numerical data , Humans , Middle Aged , Monte Carlo Method , Psychometrics , Reproducibility of Results , Severity of Illness Index , Young Adult
11.
Cortex ; 48(8): 1009-16, 2012 Sep.
Article in English | MEDLINE | ID: mdl-21843884

ABSTRACT

Five inferential methods employed in single-case studies to compare a case to controls are examined; all of these make use of a t-distribution. It is shown that three of these ostensibly different methods are in fact strictly equivalent and are not fit for purpose; they are associated with grossly inflated Type I errors (these exceed even the error rate obtained when a case's score is converted to a z score and the latter used as a test statistic). When used as significance tests, the two remaining methods (Crawford and Howell's method and a prediction interval method first used by Barton and colleagues) are also equivalent and achieve control of the Type I error rate (the two methods do differ however in other important aspects). A number of broader issues also arise from the present findings, namely: (a) they underline the value of accompanying significance test results with the effect size for the difference between a case and controls, (b) they suggest that less care is often taken over statistical methods than over other aspects of single-case studies, and (c) they indicate that some neuropsychologists have a distorted conception of the nature of hypothesis testing in single-case research (it is argued that this may stem from a failure to distinguish between group studies and single-case studies).
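
Crawford and Howell's method, referred to above, compares the case with the control mean using a t statistic on n - 1 degrees of freedom whose standard error allows for uncertainty in the control statistics; this is why it controls the Type I error rate where the z-score approach does not. A minimal sketch (the formula is standard; the surrounding details and data are mine):

```python
import numpy as np
from scipy import stats

def crawford_howell(case_score, control_scores, alternative="less"):
    """Crawford & Howell (1998) test comparing a single case with a control
    sample: t = (case - mean) / (sd * sqrt(1 + 1/n)), df = n - 1."""
    controls = np.asarray(control_scores, float)
    n = controls.size
    t = (case_score - controls.mean()) / (controls.std(ddof=1) * np.sqrt(1 + 1.0 / n))
    if alternative == "less":          # one-tailed test for a deficit
        p = stats.t.cdf(t, df=n - 1)
    else:                              # two-sided alternative
        p = 2 * stats.t.sf(abs(t), df=n - 1)
    return t, p

controls = [103, 98, 110, 95, 101, 99, 104, 97, 102, 100]   # hypothetical
print(crawford_howell(78, controls))
```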


Subject(s)
Neuropsychology/methods , Analysis of Variance , Models, Neurological , Monte Carlo Method , Research Design , Statistics as Topic
12.
Psychol Assess ; 23(4): 888-98, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21574720

ABSTRACT

Supplementary methods for the analysis of the Delis-Kaplan Executive Function System (Delis, Kaplan, & Kramer, 2001) are made available, including (a) quantifying the number of abnormally low achievement scores exhibited by an individual and accompanying this with an estimate of the percentage of the normative population expected to exhibit at least this number of low scores; (b) estimating the overall abnormality of an individual's achievement score profile with the Mahalanobis distance index; (c) calculating a composite executive function index score for an individual and providing accompanying confidence limits; and (d) providing the percentile ranks for an individual's achievement scores and executive index score (in the latter case, confidence limits on scores are also expressed as percentile ranks). With the exception of the Mahalanobis distance index, all the methods can be obtained with the equations and tables provided in this article. However, for the convenience of clinicians and to reduce the possibility of clerical error, the methods have also been implemented in a computer program. More important, the program allows the methods to be applied when only a subset of scores is available. The program can be downloaded (as a zip file) from this article's supplemental materials or from www.abdn.ac.uk/~psy086/dept/DKEFS_Supplementary_Analysis.htm.


Subject(s)
Executive Function/physiology , Neuropsychological Tests/statistics & numerical data , Psychometrics , Software , Statistics as Topic , Achievement , Adolescent , Adult , Aged , Aged, 80 and over , Child , Cognition Disorders/diagnosis , Diagnosis, Computer-Assisted , Humans , Mental Disorders/diagnosis , Mental Disorders/epidemiology , Middle Aged , Reference Values , Young Adult
13.
Cortex ; 47(10): 1166-78, 2011.
Article in English | MEDLINE | ID: mdl-21458788

ABSTRACT

Existing inferential methods of testing for a deficit or dissociation in the single case are extended to allow researchers to control for the effects of covariates. The new (Bayesian) methods provide a significance test, point and interval estimates of the effect size for the difference between the case and controls, and point and interval estimates of the abnormality of a case's score, or standardized score difference. The methods have a wide range of potential applications, e.g., they can provide a means of increasing the statistical power to detect deficits or dissociations, or can be used to test whether differences between a case and controls survive partialling out the effects of potential confounding variables. The methods are implemented in a series of computer programs for PCs (these can be downloaded from www.abdn.ac.uk/~psy086/dept/Single_Case_Covariates.htm). Illustrative examples of the methods are provided.
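
The methods in this paper are Bayesian. Purely to illustrate what controlling for covariates involves in this setting, the sketch below is a frequentist analogue: regress the controls' scores on a covariate and ask whether the case's score falls unusually far below the value predicted for its covariate value, using the standard error for a new observation. The data and names are hypothetical, and this is not the method implemented in the programs described above.

```python
import numpy as np
from scipy import stats

def deficit_test_with_covariate(case_score, case_cov, control_scores, control_covs):
    """Regress control scores on covariate(s), then test whether the case's
    score is unusually low given its covariate value(s), using the standard
    error for predicting a new observation."""
    y = np.asarray(control_scores, float)
    Z = np.asarray(control_covs, float).reshape(len(y), -1)
    X = np.column_stack([np.ones(len(y)), Z])
    beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    s2 = resid @ resid / dof
    x0 = np.concatenate([[1.0], np.atleast_1d(np.asarray(case_cov, float))])
    se = np.sqrt(s2 * (1.0 + x0 @ np.linalg.solve(X.T @ X, x0)))
    t = (case_score - x0 @ beta) / se
    return t, stats.t.cdf(t, df=dof)      # one-tailed p for a deficit

# Hypothetical controls: a memory score with years of education as covariate
rng = np.random.default_rng(2)
educ = rng.normal(13, 2.5, size=20)
score = 60 + 2.0 * educ + rng.normal(0, 5, size=20)
print(deficit_test_with_covariate(68, 16, score, educ))
```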


Subject(s)
Bayes Theorem , Case-Control Studies , Dissociative Disorders/diagnosis , Nervous System Diseases/diagnosis , Neuropsychology/methods , Algorithms , Data Interpretation, Statistical , Dissociative Disorders/psychology , Humans , Models, Statistical , Multivariate Analysis , Nervous System Diseases/psychology , Software
14.
Cogn Neuropsychol ; 27(3): 245-60, 2010 May.
Article in English | MEDLINE | ID: mdl-20936548

ABSTRACT

It is increasingly common for group studies in neuropsychology to report effect sizes. In contrast this is rarely done in single-case studies (at least in those studies that employ a case-controls design). The present paper sets out the advantages of reporting effect sizes, derives suitable effect size indexes for use in single-case studies, and develops methods of supplementing point estimates of effect sizes with interval estimates. Computer programs that implement existing classical and Bayesian inferential methods for the single case (as developed by Crawford, Garthwaite, Howell, and colleagues) are upgraded to provide these point and interval estimates. The upgraded programs can be downloaded from www.abdn.ac.uk/~psy086/dept/Single_Case_Effect_Sizes.htm.
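
The point estimate of the effect size for a case-controls comparison is simply the case's score expressed in control standard-deviation units (often denoted z-CC); the interval estimates require the methods developed in the paper and its programs. A minimal sketch of the point estimate, with hypothetical data:

```python
import numpy as np

def case_controls_effect_size(case_score, control_scores):
    """Point estimate of the case-controls effect size: the case's score
    expressed in control standard-deviation units."""
    controls = np.asarray(control_scores, float)
    return (case_score - controls.mean()) / controls.std(ddof=1)

controls = [103, 98, 110, 95, 101, 99, 104, 97, 102, 100]   # hypothetical
print(case_controls_effect_size(78, controls))
```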


Subject(s)
Case-Control Studies , Neuropsychology/methods , Neuropsychology/statistics & numerical data , Research Design/standards , Bayes Theorem , Humans , Monte Carlo Method , Software
15.
Cogn Neuropsychol ; 27(5): 377-400, 2010 Jul.
Article in English | MEDLINE | ID: mdl-21718213

ABSTRACT

In neuropsychological single-case studies, it is not uncommon for researchers to compare the scores of two single cases. Classical (and Bayesian) statistical methods are developed for such problems, which, unlike existing methods, refer the scores of the two single cases to a control sample. These methods allow researchers to compare two cases' scores, with or without allowing for the effects of covariates. The methods provide a hypothesis test (one- or two-tailed), point and interval estimates of the effect size of the difference, and point and interval estimates of the percentage of pairs of controls that will exhibit larger differences than the cases. Monte Carlo simulations demonstrate that the statistical theory underlying the methods is sound and that the methods are robust in the face of departures from normality. The methods have been implemented in computer programs, and these are described and made available (to download, go to http://www.abdn.ac.uk/~psy086/dept/Compare_Two_Cases.htm).
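
As a rough frequentist sketch of the basic idea (referring the difference between two cases to a control sample), note that if both cases are drawn from the control population their difference has variance twice the control variance, which yields a simple t statistic on n - 1 degrees of freedom. The paper's methods add effect sizes, covariate adjustment, and Bayesian counterparts, and may differ in detail; the data below are hypothetical.

```python
import numpy as np
from scipy import stats

def compare_two_cases(score1, score2, control_scores):
    """Refer the difference between two cases' scores to a control sample:
    under the null that both cases come from the control population,
    (x1 - x2) / (s * sqrt(2)) follows a t distribution on n - 1 df."""
    controls = np.asarray(control_scores, float)
    n = controls.size
    t = (score1 - score2) / (controls.std(ddof=1) * np.sqrt(2.0))
    p_two_tailed = 2 * stats.t.sf(abs(t), df=n - 1)
    return t, p_two_tailed

controls = [12, 15, 14, 13, 16, 15, 14, 13, 15, 14]   # hypothetical
print(compare_two_cases(9, 16, controls))
```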


Subject(s)
Case-Control Studies , Models, Statistical , Neuropsychological Tests/statistics & numerical data , Psychometrics/statistics & numerical data , Bayes Theorem , Computer Simulation/statistics & numerical data , Humans , Monte Carlo Method
16.
Neuropsychologia ; 47(13): 2690-5, 2009 Nov.
Article in English | MEDLINE | ID: mdl-19383506

ABSTRACT

Corballis [Corballis, M. C. (2009). Comparing a single case with a control sample: Refinements and extensions. Neuropsychologia] offers an interesting position paper on statistical inference in single-case studies. The following points arise: (1) Testing whether we can reject the null hypothesis that a patient's score is an observation from the population of control scores can be a legitimate aim for single-case researchers, not just clinicians. (2) Counter to the claim made by Corballis (2009), Crawford and Howell's [Crawford, J. R., & Howell, D. C. (1998). Comparing an individual's test score against norms derived from small samples. The Clinical Neuropsychologist, 12, 482-486] method does test whether we can reject the above null hypothesis. (3) In all but the most unusual of circumstances Crawford and Howell's method can also safely be used to test whether the mean of a notional patient population is lower than that of a control population, should neuropsychologists wish to construe the test in this way. (4) In contrast, the method proposed by Corballis is not legitimate for either of these purposes because it fails to allow for uncertainty over the control mean (as a result Type I errors will not be under control). (5) The use of a mixed ANOVA design to compare a case to controls (with or without the adjustment proposed by Corballis) is beset with problems, but these can be overcome using alternative methods.
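
The general point that ignoring the uncertainty in control statistics inflates Type I error can be checked by simulation: draw a "case" from the same population as the controls and count how often a z-based test and Crawford and Howell's t-based test reject at the 5% level. The sketch below is a generic illustration of that argument, not the simulations reported in the paper.

```python
import numpy as np
from scipy import stats

def type1_error_rates(n_controls=10, n_sims=20_000, alpha=0.05, seed=0):
    """Empirical Type I error of a z-score test versus Crawford and Howell's
    t-based test when the case is in fact drawn from the control population."""
    rng = np.random.default_rng(seed)
    z_rej = t_rej = 0
    crit_z = stats.norm.ppf(alpha)
    crit_t = stats.t.ppf(alpha, df=n_controls - 1)
    for _ in range(n_sims):
        controls = rng.normal(size=n_controls)
        case = rng.normal()
        m, s = controls.mean(), controls.std(ddof=1)
        z_rej += (case - m) / s < crit_z
        t_rej += (case - m) / (s * np.sqrt(1 + 1.0 / n_controls)) < crit_t
    return z_rej / n_sims, t_rej / n_sims

print(type1_error_rates())   # z-based rate exceeds 0.05; t-based rate is close to 0.05
```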


Subject(s)
Case-Control Studies , Models, Statistical , Neuropsychology/methods , Statistics as Topic , Analysis of Variance , Humans , Monte Carlo Method
17.
Clin Neuropsychol ; 23(7): 1173-95, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19322734

ABSTRACT

Normative data for neuropsychological tests are often presented in the form of percentiles. One problem when using percentile norms stems from uncertainty over the definitional formula for a percentile. (There are three co-existing definitions and these can produce substantially different results.) A second uncertainty stems from the use of a normative sample to estimate the standing of a raw score in the normative population. This uncertainty is unavoidable but its extent can be captured using methods developed in the present paper. A set of reporting standards for the presentation of percentile norms in neuropsychology is proposed. An accompanying computer program (available to download) implements these standards and generates tables of point and interval estimates of percentile ranks for new or existing normative data.
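
The three co-existing definitional formulas differ in how scores exactly at the observed value are counted: the percentage strictly below, the percentage at or below, or the percentage below plus half the percentage at the score. The sketch below makes the difference concrete for a hypothetical normative sample; the reporting standards and interval estimates themselves are set out in the paper and its program.

```python
import numpy as np

def percentile_ranks(raw_score, normative_scores):
    """Three common definitions of the percentile rank of a raw score."""
    scores = np.asarray(normative_scores, float)
    below = 100.0 * np.mean(scores < raw_score)
    at_or_below = 100.0 * np.mean(scores <= raw_score)
    mid = 100.0 * (np.mean(scores < raw_score) + 0.5 * np.mean(scores == raw_score))
    return {"below": below, "at_or_below": at_or_below, "mid_probability": mid}

norms = [8, 9, 10, 10, 10, 11, 12, 12, 13, 15]   # hypothetical normative data
print(percentile_ranks(10, norms))   # the three definitions can differ substantially
```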


Subject(s)
Neuropsychological Tests/statistics & numerical data , Neuropsychological Tests/standards , Neuropsychology/standards , Research Design/standards , Humans , Models, Statistical , Neuropsychology/statistics & numerical data , Psychometrics/standards , Psychometrics/statistics & numerical data , Research Design/statistics & numerical data , Uncertainty
18.
Clin Neuropsychol ; 23(4): 624-44, 2009 May.
Article in English | MEDLINE | ID: mdl-19235634

ABSTRACT

Most neuropsychologists are aware that, given the specificity and sensitivity of a test and an estimate of the base rate of a disorder, Bayes' theorem can be used to provide a post-test probability for the presence of the disorder given a positive test result (and a post-test probability for the absence of a disorder given a negative result). However, in the standard application of Bayes' theorem the three quantities (sensitivity, specificity, and the base rate) are all treated as fixed, known quantities. This is very unrealistic as there may be considerable uncertainty over these quantities and therefore even greater uncertainty over the post-test probability. Methods of obtaining interval estimates on the specificity and sensitivity of a test are set out. In addition, drawing on and extending work by Mossman and Berger (2001), a Monte Carlo method is used to obtain interval estimates for post-test probabilities. All the methods have been implemented in a computer program, which is described and made available (www.abdn.ac.uk/~psy086/dept/BayesPTP.htm). When objective data on the base rate are lacking (or have limited relevance to the case at hand) the program elicits opinion for the pre-test probability.
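
Both steps can be sketched briefly: the point estimate is Bayes' theorem applied to sensitivity, specificity, and base rate, and an interval that acknowledges uncertainty in sensitivity and specificity can be obtained by sampling them from Beta distributions based on validation-study counts, broadly in the spirit of Mossman and Berger (2001). The counts, priors, and base rate below are hypothetical, and the paper's program handles elicitation and other details not shown here.

```python
import numpy as np

def post_test_probability(sens, spec, base_rate):
    """Bayes' theorem: probability of the disorder given a positive result."""
    return sens * base_rate / (sens * base_rate + (1 - spec) * (1 - base_rate))

def post_test_interval(tp, fn, tn, fp, base_rate, n_sims=100_000, seed=0):
    """Monte Carlo interval for the post-test probability, sampling sensitivity
    and specificity from Beta posteriors based on validation-study counts."""
    rng = np.random.default_rng(seed)
    sens = rng.beta(tp + 1, fn + 1, n_sims)     # uniform priors assumed
    spec = rng.beta(tn + 1, fp + 1, n_sims)
    ptp = post_test_probability(sens, spec, base_rate)
    return np.percentile(ptp, [2.5, 97.5])

print(post_test_probability(0.85, 0.90, 0.20))                    # point estimate
print(post_test_interval(tp=34, fn=6, tn=90, fp=10, base_rate=0.20))
```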


Subject(s)
Bayes Theorem , Diagnosis, Computer-Assisted , Neuropsychology , False Negative Reactions , False Positive Reactions , Humans , Monte Carlo Method , Predictive Value of Tests , Probability , Psychometrics , Sensitivity and Specificity
19.
Clin Neuropsychol ; 23(2): 193-204, 2009 Feb.
Article in English | MEDLINE | ID: mdl-18609335

ABSTRACT

Many commentators on neuropsychological assessment stress the disadvantages of expressing test scores in the form of percentile ranks. As a result, there is a danger of losing sight of the fundamentals: percentile ranks express scores in a form that is of greater relevance to the neuropsychologist than any alternative metric because they tell us directly how common or uncommon such scores are in the normative population. We advocate that, in addition to expressing scores on a standard metric, neuropsychologists should also routinely record the percentile rank of all test scores so that the latter are available when attempting to reach a formulation. In addition, it is argued that the current practice of expressing confidence limits on test scores on a standard score metric should be supplemented with confidence limits expressed as percentile ranks, because the latter provide a more direct and tangible indication of the uncertainty surrounding an observed score. Computer programs accompany this paper and can be used to obtain percentile rank confidence limits for Index scores (and FSIQs) on the WAIS-III or WISC-IV (these can be downloaded from the following web page: http://www.abdn.ac.uk/~psy086/dept/PRCLME.htm).
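
Re-expressing confidence limits on a standard score metric as percentile ranks is straightforward when the normative distribution is treated as normal: each limit is converted to a z score and passed through the normal cumulative distribution function. The sketch below applies this to an Index score metric (mean 100, SD 15); the example limits are hypothetical and the accompanying programs implement the full method.

```python
from scipy import stats

def limits_as_percentile_ranks(lower, upper, mean=100.0, sd=15.0):
    """Convert confidence limits on a standard score metric (e.g. Index
    scores, mean 100, SD 15) into percentile ranks."""
    to_pr = lambda s: 100.0 * stats.norm.cdf((s - mean) / sd)
    return to_pr(lower), to_pr(upper)

# Hypothetical 95% limits of 81 to 93 on an Index score
print(limits_as_percentile_ranks(81, 93))   # roughly the 10th to 32nd percentile
```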


Subject(s)
Models, Statistical , Neuropsychological Tests/statistics & numerical data , Psychometrics/methods , Humans , Neuropsychological Tests/standards , Reference Values
20.
Br J Clin Psychol ; 48(Pt 2): 163-80, 2009 Jun.
Article in English | MEDLINE | ID: mdl-19054433

ABSTRACT

BACKGROUND: A series of recent papers have reported normative data from the general adult population for commonly used self-report mood scales. AIMS: To bring together and supplement these data in order to provide a convenient means of obtaining percentile norms for the mood scales. METHOD: A computer program was developed that provides point and interval estimates of the percentile rank corresponding to raw scores on the various self-report scales. RESULTS: The program can be used to obtain point and interval estimates of the percentile rank of an individual's raw scores on the DASS, DASS-21, HADS, PANAS, and sAD mood scales, based on normative sample sizes ranging from 758 to 3822. The interval estimates can be obtained using either classical or Bayesian methods as preferred. CONCLUSION: The computer program (which can be downloaded at www.abdn.ac.uk/~psy086/dept/MoodScore.htm) provides a convenient and reliable means of supplementing existing cut-off scores for self-report mood scales.


Subject(s)
Mood Disorders/diagnosis , Personality Inventory/statistics & numerical data , Software , Statistics as Topic/methods , Stress, Psychological/diagnosis , Adult , Affect , Anxiety Disorders/diagnosis , Anxiety Disorders/psychology , Bayes Theorem , Confidence Intervals , Depressive Disorder/diagnosis , Depressive Disorder/psychology , Female , Humans , Male , Mood Disorders/psychology , Psychometrics , Reference Values , Stress, Psychological/psychology