Results 1 - 6 of 6
1.
J Patient Exp ; 7(6): 1341-1348, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33457585

ABSTRACT

Many hospitals face a common challenge: limited space for a high number of patients. This has led to quick patient throughput, which can impact patient perception of discharge readiness. This study examined whether a poster highlighting tasks to complete as part of the discharge process improved caregiver perception of readiness to transition home. Using a sequential, exploratory mixed methods design, focus groups were convened to explore clinical staff perspective on the discharge process on 3 pediatric inpatient units at a large, urban, pediatric academic medical center in the United States. Analysis of this content informed the design of a poster intervention to "nudge" caregivers (eg, parents, legal guardians) toward readiness and self-efficacy that was then tested in a randomized, controlled experiment. The poster focused on practical knowledge for specific areas of transition adjustment, such as medication and care recipient recovery behaviors, barriers, and enablers. Caregivers (n = 135) completed surveys at discharge indicating their perceived readiness to transition home with their child. Analysis of covariance was used to test the effect of the poster condition (poster vs no poster) on caregiver readiness, preparedness, and confidence for discharge while controlling for previous admission history. Significant effects for poster presence were found on caregivers' perceived readiness for discharge, F(1, 125) = 7.75, P = .006, Cohen's d = 0.44; and caregivers' perceived preparedness for the transition home, F(1, 121) = 7.24, P = .008, Cohen's d = 0.44. Only a marginal effect was found for poster condition on caregivers' confidence ratings, F(1, 125) = 2.93, P = .090, Cohen's d = 0.29. The results suggest that simple nudges in the patient care environment may yield measurable improvements in caregiver outcomes.
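An analysis of covariance of the kind described above can be sketched as a comparison of nested least-squares models: the F statistic for the poster factor comes from the drop in residual error when the factor is added to a model that already contains the covariate. The sketch below uses only the standard library; the group sizes, effect sizes, and scores are simulated placeholders, not the study's data.

```python
import random

def ols_sse(X, y):
    # least-squares residual sum of squares via the normal equations,
    # solved with Gaussian elimination (partial pivoting)
    k = len(X[0])
    A = [[sum(xi[r] * xi[c] for xi in X) for c in range(k)] for r in range(k)]
    b = [sum(xi[r] * yi for xi, yi in zip(X, y)) for r in range(k)]
    for i in range(k):
        piv = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][c] * beta[c] for c in range(i + 1, k))) / A[i][i]
    return sum((yi - sum(bc * xc for bc, xc in zip(beta, xi))) ** 2
               for xi, yi in zip(X, y))

rng = random.Random(3)
n = 128
poster = [i % 2 for i in range(n)]           # randomized condition (0/1)
prior = [rng.random() < 0.4 for _ in range(n)]  # previous admission (covariate)
# simulated readiness scores; the 0.4 "poster effect" is a hypothetical number
y = [2.0 + 0.4 * p + 0.2 * pr + rng.gauss(0, 1) for p, pr in zip(poster, prior)]

full = [[1.0, float(p), float(pr)] for p, pr in zip(poster, prior)]
reduced = [[1.0, float(pr)] for pr in prior]
sse_f, sse_r = ols_sse(full, y), ols_sse(reduced, y)
F = (sse_r - sse_f) / (sse_f / (n - 3))  # F(1, n - 3) for the poster factor
print(F)
```

In practice this is what a statistics package computes internally; the nested-model form makes explicit that the covariate is held fixed while the factor is tested.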

2.
Exp Psychol ; 61(6): 417-38, 2014.
Article in English | MEDLINE | ID: mdl-24962121

ABSTRACT

We developed a novel four-dimensional spatial task called Shapebuilder and used it to predict performance on a wide variety of cognitive tasks. In six experiments, we illustrate that Shapebuilder: (1) Loads on a common factor with complex working memory (WM) span tasks and predicts performance on quantitative reasoning tasks and Raven's Progressive Matrices (Experiment 1), (2) Correlates well with traditional complex WM span tasks (Experiment 2), predicts performance on the conditional go/no-go task (Experiment 3) and the N-back task (Experiment 4), and shows weak or nonsignificant correlations with the Attention Networks Task (Experiment 5) and task switching (Experiment 6). Shapebuilder exhibits minimal skew and kurtosis and shows good reliability. We argue that Shapebuilder has many advantages over existing measures of WM, including the fact that it is largely language independent, is not prone to ceiling effects, and takes less than 6 min to complete on average.


Subject(s)
Cognition; Memory, Short-Term; Task Performance and Analysis; Adolescent; Adult; Attention; Female; Humans; Male; Reproducibility of Results; Stroop Test; Young Adult
3.
Psychon Bull Rev ; 21(2): 309-11, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24614967

ABSTRACT

Established psychological results have been called into question by demonstrations that statistical significance is easy to achieve, even in the absence of an effect. One often-warned-against practice, choosing when to stop the experiment on the basis of the results, is guaranteed to produce significant results. In response to these demonstrations, Bayes factors have been proposed as an antidote to this practice, because they are invariant with respect to how an experiment was stopped. Should researchers only care about the resulting Bayes factor, without concern for how it was produced? Yu, Sprenger, Thomas, and Dougherty (2014) and Sanborn and Hills (2014) demonstrated that Bayes factors are sometimes strongly influenced by the stopping rules used. However, Rouder (2014) has provided a compelling demonstration that despite this influence, the evidence supplied by Bayes factors remains correct. Here we address why the ability to influence Bayes factors should still matter to researchers, despite the correctness of the evidence. We argue that good frequentist properties mean that results will more often agree with researchers' statistical intuitions, and good frequentist properties control the number of studies that will later be refuted. Both help raise confidence in psychological results.
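The opening claim above, that stopping data collection as soon as the result looks significant inflates significance far beyond the nominal rate, is easy to check by simulation. The sketch below is a generic illustration, not any of the cited demonstrations: it draws data from a true null, tests after every new observation, and stops at the first p < .05, then compares the hit rate to a fixed-sample design.

```python
import math
import random

def z_p(xs):
    # two-sided p-value for H0: mean = 0, assuming known sigma = 1
    n = len(xs)
    z = (sum(xs) / n) * math.sqrt(n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def significant(xs, alpha=0.05):
    return z_p(xs) < alpha

def optional_stop(rng, n_min=10, n_max=150):
    # peek after every observation; stop as soon as p < .05
    xs = [rng.gauss(0, 1) for _ in range(n_min)]
    while len(xs) < n_max and not significant(xs):
        xs.append(rng.gauss(0, 1))
    return significant(xs)

rng = random.Random(42)
runs = 2000
stop_rate = sum(optional_stop(rng) for _ in range(runs)) / runs
fixed_rate = sum(
    significant([rng.gauss(0, 1) for _ in range(150)]) for _ in range(runs)
) / runs
print(stop_rate, fixed_rate)  # optional stopping far exceeds the nominal .05
```

With an unbounded sample the stopping rule would reach significance with probability 1; even capped at 150 observations the false-positive rate is several times the nominal level, while the fixed-sample rate stays near .05.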


Subject(s)
Bayes Theorem; Data Interpretation, Statistical; Models, Statistical; Research Design/standards; Humans
4.
Psychon Bull Rev ; 21(2): 268-82, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24002963

ABSTRACT

The ongoing discussion among scientists about null-hypothesis significance testing and Bayesian data analysis has led to speculation about the practices and consequences of "researcher degrees of freedom." This article advances this debate by asking the broader questions that we, as scientists, should be asking: How do scientists make decisions in the course of doing research, and what is the impact of these decisions on scientific conclusions? We asked practicing scientists to collect data in a simulated research environment, and our findings show that some scientists use data collection heuristics that deviate from prescribed methodology. Monte Carlo simulations show that data collection heuristics based on p values lead to biases in estimated effect sizes and Bayes factors and to increases in both false-positive and false-negative rates, depending on the specific heuristic. We also show that using Bayesian data collection methods does not eliminate these biases. Thus, our study highlights the little appreciated fact that the process of doing science is a behavioral endeavor that can bias statistical description and inference in a manner that transcends adherence to any particular statistical framework.
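The effect-size bias produced by p-value-based collection heuristics can be sketched in a few lines. This is a toy simulation under assumed settings (batch size, stopping rule, true effect), not the paper's simulated research environment: data arrive in batches, collection stops at the first p < .05, and the effect-size estimate is recorded at stopping.

```python
import math
import random

def p_value(xs):
    # two-sided p-value for H0: mean = 0, assuming known sigma = 1
    n = len(xs)
    z = (sum(xs) / n) * math.sqrt(n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def collect_until_significant(rng, true_effect, batch=10, n_max=100):
    # heuristic: add a batch, peek at p, stop at p < .05 or at n_max
    xs = []
    while True:
        xs.extend(rng.gauss(true_effect, 1) for _ in range(batch))
        if p_value(xs) < 0.05 or len(xs) >= n_max:
            return sum(xs) / len(xs)  # effect-size estimate at stopping

rng = random.Random(7)
true_effect = 0.2
estimates = [collect_until_significant(rng, true_effect) for _ in range(2000)]
mean_estimate = sum(estimates) / len(estimates)
print(mean_estimate)  # overestimates the true effect of 0.2
```

Runs that stop early do so precisely because their sample means are large, so averaging estimates at the stopping point biases the effect size upward, consistent with the direction of bias the abstract describes.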


Subject(s)
Bayes Theorem; Data Collection/standards; Decision Making; Research Design/standards; Science/standards; Statistics as Topic/standards; Adult; Humans; Science/methods
5.
Front Psychol ; 2: 129, 2011.
Article in English | MEDLINE | ID: mdl-21734897

ABSTRACT

We tested the predictions of HyGene (Thomas et al., 2008) that divided attention both at encoding and at judgment should affect the degree to which participants' probability judgments violate the principle of additivity. In two experiments, we showed that divided attention during judgment leads to an increase in subadditivity, suggesting that the comparison process for probability judgments is capacity limited. Contrary to the predictions of HyGene, a third experiment revealed that divided attention during encoding leads to increased subadditivity in probability judgments made later under full attention. The effect of divided attention during encoding on judgment was completely mediated by the number of hypotheses participants generated, indicating that limitations in both encoding and recall can cascade into biases in judgments.

6.
Psychol Rev ; 115(1): 155-85, 2008 Jan.
Article in English | MEDLINE | ID: mdl-18211189

ABSTRACT

Diagnostic hypothesis-generation processes are ubiquitous in human reasoning. For example, clinicians generate disease hypotheses to explain symptoms and help guide treatment, auditors generate hypotheses for identifying sources of accounting errors, and laypeople generate hypotheses to explain patterns of information (i.e., data) in the environment. The authors introduce a general model of human judgment aimed at describing how people generate hypotheses from memory and how these hypotheses serve as the basis of probability judgment and hypothesis testing. In 3 simulation studies, the authors illustrate the properties of the model, as well as its applicability to explaining several common findings in judgment and decision making, including how errors and biases in hypothesis generation can cascade into errors and biases in judgment.
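The cascade from limited hypothesis generation to biased probability judgment can be illustrated with a toy ratio-of-strengths judge. This is a deliberately simplified sketch with hypothetical hypotheses and hypothetical memory strengths, not the authors' model: the judged probability of a focal hypothesis is its strength relative to whatever alternatives come to mind, and a capacity limit truncates that comparison set.

```python
# Toy illustration (not the actual HyGene model): probability of a focal
# hypothesis is judged as its memory strength relative to the alternatives
# the judge manages to generate.  A limited generation capacity truncates
# the comparison set and produces subadditive judgments.
def judged_probability(strengths, focal, capacity):
    alternatives = sorted(
        (h for h in strengths if h != focal),
        key=strengths.get, reverse=True,
    )[:capacity - 1]  # only the strongest traces come to mind
    denom = strengths[focal] + sum(strengths[h] for h in alternatives)
    return strengths[focal] / denom

# hypothetical disease hypotheses with hypothetical memory strengths
strengths = {"flu": 5.0, "cold": 4.0, "covid": 3.0, "allergy": 2.0}
full = sum(judged_probability(strengths, h, capacity=4) for h in strengths)
limited = sum(judged_probability(strengths, h, capacity=2) for h in strengths)
print(full, limited)  # full capacity sums to ~1.0; limited capacity exceeds 1
```

With all alternatives generated the judged probabilities are additive; once generation is truncated, every focal hypothesis is compared against an incomplete set and the judgments sum to more than 1, the subadditivity pattern discussed above.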


Subject(s)
Judgment; Memory, Short-Term; Models, Psychological; Environment; Humans