Results 1 - 8 of 8
1.
Br J Math Stat Psychol; 74 Suppl 1: 1-23, 2021 Jul.
Article in English | MEDLINE | ID: mdl-32729636

ABSTRACT

Estimates of subgroup differences are routinely used as part of a comprehensive validation system, and these estimates serve a critical role, including evaluating adverse impact. Unfortunately, under direct range restriction, a selected mean (µ̂t′) is a biased estimator of the population mean µx as well as of the selected true score mean µt′. This is due partly to measurement bias. This bias, as we show, is a function of the selection ratio, the reliability of the measure, and the variance of the distribution. This measurement bias renders a subgroup comparison questionable when the subgroups have different selection ratios. The selected subgroup comparison is further complicated by the fact that the subgroup variances will be unequal in most situations where the selection ratios are not equal. We address these problems and present a corrected estimate of the mean difference, as well as an estimate of Cohen's d* that estimates the true score difference between two selected populations, (µt′A − µt′B)/σt. In addition, we show that the measurement bias is not present under indirect range restriction. Thus, the observed selected mean µ̂t′ is an unbiased estimator of the selected true score mean µty′. However, it is not an unbiased estimator of the population mean µy. These results have important implications for selection research, particularly when validating instruments.
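The measurement bias described above is easy to reproduce by simulation. The sketch below uses hypothetical parameter values (reliability .70, selection ratio .30) and is not the article's corrected estimator; it selects the top of a distribution on an observed score X = T + E and compares the selected observed mean with the selected true-score mean:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
rel = 0.70   # reliability of the measure (assumed)
sr = 0.30    # selection ratio (assumed)

t = rng.normal(0.0, 1.0, n)                       # true scores, var = 1
e = rng.normal(0.0, np.sqrt((1 - rel) / rel), n)  # error scaled so rxx = .70
x = t + e                                         # observed scores

cut = np.quantile(x, 1 - sr)      # direct selection: top 30% on x
sel = x > cut

mean_x_sel = x[sel].mean()        # selected observed mean
mean_t_sel = t[sel].mean()        # selected true-score mean
bias = mean_x_sel - mean_t_sel    # measurement bias under direct selection
```

Because selection on X capitalizes on positive measurement error, the selected observed mean exceeds the selected true-score mean by (1 − rxx) times the selected observed mean, so the gap grows as the selection ratio shrinks or reliability drops, which is why subgroups with different selection ratios cannot be compared on selected means directly.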


Subject(s)
Reproducibility of Results; Bias
2.
Sci Rep; 10(1): 7543, 2020 May 5.
Article in English | MEDLINE | ID: mdl-32372001

ABSTRACT

The detection and analysis of circulating tumor cells (CTCs) may enable a broad range of cancer-related applications, including the identification of acquired drug resistance during treatments. However, the non-scalable fabrication, prolonged sample processing times, and lack of automation associated with most of the technologies developed to isolate these rare cells have impeded their transition into clinical practice. This work describes a novel membrane-based microfiltration device comprising a fully automated sample processing unit and a machine-vision-enabled imaging system that allows the efficient isolation and rapid analysis of CTCs from blood. The device performance was characterized using four prostate cancer cell lines (PC-3, VCaP, DU-145, and LNCaP), obtaining high assay reproducibility and capture efficiencies greater than 93% after processing 7.5 mL blood samples spiked with 100 cancer cells. Cancer cells remained viable after filtration due to the minimal shear stress exerted over cells during the procedure, while the identification of cancer cells by immunostaining was not affected by the number of non-specific events captured on the membrane. We were also able to identify the androgen receptor (AR) point mutation T878A from 7.5 mL blood samples spiked with 50 LNCaP cells using RT-PCR and Sanger sequencing. Finally, CTCs were detected in 8 out of 8 samples from patients diagnosed with metastatic prostate cancer (mean ± SEM = 21 ± 2.957 CTCs/mL, median = 21 CTCs/mL), demonstrating the potential clinical utility of this device.
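The summary statistics reported for the patient samples (mean, median, SEM) can be reproduced from per-patient counts. The counts below are hypothetical stand-ins chosen only to have the reported mean and median; they are not the study's raw data:

```python
import numpy as np

# Hypothetical per-patient CTC counts (cells/mL) for 8 patients; the abstract
# reports only the summaries mean ± SEM = 21 ± 2.957 and median = 21 CTCs/mL.
counts = np.array([13, 17, 19, 21, 21, 23, 26, 28], dtype=float)

mean = counts.mean()
median = np.median(counts)
sem = counts.std(ddof=1) / np.sqrt(len(counts))  # SEM = sample sd / sqrt(n)
```

The SEM uses the sample standard deviation (ddof=1), the usual convention when the 8 patients are treated as a sample from a larger population.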


Subject(s)
Cell Separation/instrumentation; Filtration/instrumentation; Neoplastic Cells, Circulating; Prostatic Neoplasms/blood; Adult; Aged; Aged, 80 and over; Biomarkers, Tumor/metabolism; Biomedical Engineering; Cell Line, Tumor; Cell Separation/methods; Filtration/methods; Humans; Male; Middle Aged; Mutation; Neoplasm Metastasis; Pattern Recognition, Automated; Polymethyl Methacrylate/chemistry; Prostatic Neoplasms/genetics; Receptors, Androgen/genetics; Reproducibility of Results
3.
Multivariate Behav Res; 52(2): 164-177, 2017.
Article in English | MEDLINE | ID: mdl-27997223

ABSTRACT

A common form of missing data is caused by selection on an observed variable (e.g., Z). If the selection variable was measured and is available, the data are regarded as missing at random (MAR). Selection biases correlation, reliability, and effect size estimates when these estimates are computed on listwise deleted (LD) data sets. On the other hand, maximum likelihood (ML) estimates are generally unbiased and outperform LD in most situations, at least when the data are MAR. The exception is when we estimate the partial correlation. In this situation, LD estimates are unbiased when the cause of missingness is partialled out. In other words, there is no advantage of ML estimates over LD estimates in this situation. We demonstrate that under a MAR condition, even ML estimates may become biased, depending on how partial correlations are computed. Finally, we conclude with recommendations about how future researchers might estimate partial correlations even when the cause of missingness is unknown and, perhaps, unknowable.
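The contrast between listwise deletion and partialling can be illustrated with a small simulation (the variable names and effect sizes below are ours, not the article's): selection on Z attenuates the listwise-deleted correlation between X and Y, while the partial correlation with Z partialled out remains at its population value.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000

z = rng.normal(size=n)            # selection variable (observed)
u1 = rng.normal(size=n)
u2 = rng.normal(size=n)
x = 0.6 * z + u1
y = 0.6 * z + 0.3 * u1 + np.sqrt(1 - 0.3**2) * u2   # partial corr(x, y | z) = .30

def pearson(a, b):
    return np.corrcoef(a, b)[0, 1]

def partial_corr(a, b, c):
    # Residualize a and b on c, then correlate the residuals.
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return pearson(ra, rb)

keep = z > 0                      # MAR: "missingness" depends only on z
r_full = pearson(x, y)            # population value is about .49
r_ld = pearson(x[keep], y[keep])  # attenuated under listwise deletion
pr_ld = partial_corr(x[keep], y[keep], z[keep])  # still about .30
```

Selecting on Z shrinks the variance of Z in the retained cases, which attenuates the zero-order correlation; the residuals of X and Y on Z are independent of Z, so the partial correlation is untouched by the selection.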


Subject(s)
Data Interpretation, Statistical; Likelihood Functions; Multivariate Analysis; Algorithms; Computer Simulation; Educational Status; Humans; Monte Carlo Method; Reproducibility of Results; Socioeconomic Factors; Students; Universities
4.
Multivariate Behav Res; 49(6): 597-613, 2014.
Article in English | MEDLINE | ID: mdl-26735360

ABSTRACT

Much research has been directed at the validity of fit indices in Path Analysis and Structural Equation Modeling (e.g., Browne, MacCallum, Kim, Andersen, & Glaser, 2002; Heene, Hilbert, Draxler, Ziegler, & Bühner, 2011; Hu & Bentler, 1999; Marsh, Hau, & Wen, 2004). Recent developments (e.g., Preacher, 2006; Roberts & Pashler, 2000, 2002) have encouraged researchers to investigate other criteria for comparing models, including model complexity. What has not been investigated is the inherent ability of a particular data set to be fitted with a constrained set of randomly generated linear models, which we call Model Conditioned Data Elasticity (DE). In this article we show how DE can be compared with the problem of equivalent models and a more general problem of the "confoundability" of data/model combinations (see MacCallum, Wegener, Uchino, & Fabrigar, 1993). Using the DE package in R, we show how DE can be assessed through automated computer searches. Finally, we discuss how DE fits within the controversy surrounding the use of fit statistics.
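A crude illustration of the idea follows. This is not the article's DE package (which is written in R and works with path/SEM models); the random-subset OLS scheme and the R² threshold are our own simplifications of "randomly generated constrained linear models":

```python
import numpy as np

rng = np.random.default_rng(2)

# A toy data set: 4 variables sharing a common factor (hypothetical).
n, p = 300, 4
common = rng.normal(size=(n, 1))
data = common + rng.normal(size=(n, p))

def random_model_fit(data, rng):
    """Fit one randomly constrained linear model (a crude stand-in for a
    randomly generated path model) and return its R^2."""
    k = data.shape[1]
    y_idx = int(rng.integers(k))
    preds = [j for j in range(k) if j != y_idx and rng.random() < 0.5]
    y = data[:, y_idx]
    if not preds:                                  # intercept-only model
        return 0.0
    X = np.column_stack([np.ones(n), data[:, preds]])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

fits = np.array([random_model_fit(data, rng) for _ in range(1000)])
elasticity = float(np.mean(fits > 0.10))  # share of random models fitting "well"
```

A data set where most randomly constrained models fit well (high elasticity) offers weak evidence for any one theory-driven model that also fits well, which is the connection to the equivalent-models and confoundability problems.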

5.
Psychon Bull Rev; 21(3): 620-8, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24307249

ABSTRACT

The question of whether computerized cognitive training leads to generalized improvements of intellectual abilities has been a popular, yet contentious, topic within both the psychological and neurocognitive literatures. Evidence for the effective transfer of cognitive training to nontrained measures of cognitive abilities is mixed, with some studies showing apparent successful transfer, while others have failed to obtain this effect. At the same time, several authors have made claims about both successful and unsuccessful transfer effects on the basis of a form of responder analysis, an analysis technique that shows that those who gain the most on training show the greatest gains on transfer tasks. Through a series of Monte Carlo experiments and mathematical analyses, we demonstrate that the apparent transfer effects observed through responder analysis are illusory and are independent of the effectiveness of cognitive training. We argue that responder analysis can be used neither to support nor to refute hypotheses related to whether cognitive training is a useful intervention to obtain generalized cognitive benefits. We end by discussing several proposed alternative analysis techniques that incorporate training gain scores and argue that none of these methods are appropriate for testing hypotheses regarding the effectiveness of cognitive training.
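One way the illusion can arise is through variance shared across same-session gain scores. The sketch below is our own minimal illustration, not the article's analysis: it assumes a common "session/practice" component on the post-tests and zero true training or transfer effect, yet a median split on training gain still makes "responders" appear to show larger transfer gains.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

ability = rng.normal(size=n)
session = rng.normal(scale=0.5, size=n)  # shared post-test "good day" effect (assumption)

# Zero true training effect and zero true transfer effect in this simulation.
train_pre  = ability + rng.normal(scale=0.7, size=n)
train_post = ability + session + rng.normal(scale=0.7, size=n)
trans_pre  = ability + rng.normal(scale=0.7, size=n)
trans_post = ability + session + rng.normal(scale=0.7, size=n)

train_gain = train_post - train_pre
trans_gain = trans_post - trans_pre

responders = train_gain > np.median(train_gain)   # the responder analysis split
gap = trans_gain[responders].mean() - trans_gain[~responders].mean()
# gap > 0: "responders" show larger transfer gains despite no training effect
```

Because the shared session component enters both gain scores, selecting on training gain also selects on transfer gain, producing an apparent transfer effect that is independent of whether the training itself did anything.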


Subject(s)
Data Interpretation, Statistical; Intelligence/physiology; Memory, Short-Term/physiology; Transfer, Psychology/physiology; Humans
6.
Br J Math Stat Psychol; 66(3): 521-42, 2013 Nov.
Article in English | MEDLINE | ID: mdl-23046339

ABSTRACT

In 2004, Hunter and Schmidt proposed a correction (called Case IV) that seeks to estimate disattenuated correlations when selection is made on an unmeasured variable. Although Case IV is an important theoretical development in the range restriction literature, it makes an untestable assumption, namely that the partial correlation between the unobserved selection variable and the performance measure is zero. We show in this paper why this assumption may be difficult to meet and why previous simulations have failed to detect the full extent of bias. We use the meta-analytic literature to investigate the plausible range of this bias. We also show how Case IV performs in terms of standard errors. Finally, we give practical recommendations about how the contributions of Hunter and Schmidt (2004) can be extended without making such stringent assumptions.
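For context, the Case IV procedure being critiqued can be sketched as follows. The three steps reflect our reading of the Hunter-Schmidt-Le formulation; the numeric inputs are hypothetical, and the formulas should be verified against the original sources before any applied use.

```python
import numpy as np

def case_iv(r_xyi, u_x, rxx_a, rxx_i, ryy_i):
    """Sketch of the Hunter-Schmidt Case IV correction for indirect range
    restriction on an unmeasured variable (verify against the source).
      r_xyi : observed correlation in the restricted (incumbent) group
      u_x   : restricted/unrestricted SD ratio of the predictor X
      rxx_a : unrestricted (applicant) reliability of X
      rxx_i, ryy_i : restricted reliabilities of X and Y
    """
    # Step 1: restriction ratio on the true scores of X
    u_t = np.sqrt((u_x**2 - (1 - rxx_a)) / rxx_a)
    # Step 2: correct the restricted r for unreliability in X and Y
    r_tpi = r_xyi / np.sqrt(rxx_i * ryy_i)
    # Step 3: Case II-style range restriction correction on the true-score metric
    return r_tpi / np.sqrt(u_t**2 + r_tpi**2 * (1 - u_t**2))

rho = case_iv(r_xyi=0.30, u_x=0.8, rxx_a=0.85, rxx_i=0.8, ryy_i=0.7)
```

The untestable assumption criticized in the article enters at Step 1: treating selection as operating only through the true score of X requires the unmeasured selection variable to have zero partial correlation with the criterion.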


Subject(s)
Bias; Models, Psychological; Psychometrics/statistics & numerical data; Humans
8.
Psychol Methods; 12(4): 414-433, 2007 Dec.
Article in English | MEDLINE | ID: mdl-18179352

ABSTRACT

This article proposes 2 new approaches to test a nonzero population correlation (ρ): the hypothesis-imposed univariate sampling bootstrap (HI) and the observed-imposed univariate sampling bootstrap (OI). The authors simulated correlated populations with various combinations of normal and skewed variates. With α = .05, N ≥ 10, and ρ ≤ 0.4, empirical Type I error rates of the parametric r and the conventional bivariate sampling bootstrap reached .168 and .081, respectively, whereas the largest error rates of the HI and the OI were .079 and .062. On the basis of these results, the authors suggest that the OI is preferable to parametric approaches for α control if the researcher believes the population is nonnormal and wishes to test for nonzero ρs of moderate size.
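The conventional bivariate-sampling bootstrap that serves as the article's baseline can be sketched as below. The HI and OI variants resample x and y univariately under different constraints and are not reproduced here; the data and ρ0 in the usage line are hypothetical.

```python
import numpy as np

def bivariate_bootstrap_test(x, y, rho0, n_boot=2000, alpha=0.05, seed=0):
    """Conventional bivariate-sampling bootstrap test of H0: rho = rho0.
    Resamples (x, y) PAIRS, builds a percentile CI for r, and rejects H0
    when rho0 falls outside the interval."""
    rng = np.random.default_rng(seed)
    n = len(x)
    idx = rng.integers(0, n, size=(n_boot, n))   # bootstrap indices, pairs kept
    rs = np.empty(n_boot)
    for b in range(n_boot):
        rs[b] = np.corrcoef(x[idx[b]], y[idx[b]])[0, 1]
    lo, hi = np.quantile(rs, [alpha / 2, 1 - alpha / 2])
    return not (lo <= rho0 <= hi), (lo, hi)

# Usage on simulated data with true rho = 0.6, testing H0: rho = 0.
rng = np.random.default_rng(42)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=200)
reject, ci = bivariate_bootstrap_test(z[:, 0], z[:, 1], rho0=0.0)
```

Resampling pairs preserves the observed dependence between x and y; the univariate-sampling variants studied in the article instead break or constrain that dependence to improve Type I error control in small, nonnormal samples.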


Subject(s)
Models, Psychological; Monte Carlo Method; Psychology/methods; Humans; Sampling Studies