1.
Sci Rep; 14(1): 14941, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38942811

ABSTRACT

Metacognitive biases have been repeatedly associated, cross-sectionally, with the transdiagnostic psychiatric dimensions of 'anxious-depression' and 'compulsivity and intrusive thought'. To progress our understanding of the underlying neurocognitive mechanisms, new methods are required to measure metacognition remotely, within individuals over time. We developed a gamified smartphone task designed to measure visuo-perceptual metacognitive (confidence) bias and investigated its psychometric properties across two studies (N = 3410 unpaid citizen scientists, N = 52 paid participants). We assessed convergent validity, split-half and test-retest reliability, and identified the minimum number of trials required to capture its clinical correlates. Convergent validity of metacognitive bias was moderate (r(50) = 0.64, p < 0.001) and it demonstrated excellent split-half reliability (r(50) = 0.91, p < 0.001). Anxious-depression was associated with decreased confidence (β = -0.23, SE = 0.02, p < 0.001), while compulsivity and intrusive thought was associated with greater confidence (β = 0.07, SE = 0.02, p < 0.001). The associations between metacognitive biases and transdiagnostic psychiatric dimensions are evident in as few as 40 trials. Metacognitive biases in decision-making are stable within and across sessions, exhibiting very high test-retest reliability for the 100-trial (ICC = 0.86, N = 110) and 40-trial (ICC = 0.86, N = 120) versions of Meta Mind. Hybrid 'self-report cognition' tasks may be one way to bridge the recently discussed reliability gap in computational psychiatry.


Subject(s)
Metacognition, Humans, Metacognition/physiology, Female, Male, Adult, Psychometrics/methods, Reproducibility of Results, Middle Aged, Young Adult, Depression/diagnosis, Depression/psychology, Bias, Anxiety/psychology, Smartphone, Cross-Sectional Studies
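The split-half reliability reported in the abstract is a standard psychometric computation: correlate subjects' mean confidence on one half of trials with the other half, then correct for the halved test length. As a hypothetical illustration only (not the authors' actual analysis code), an odd-even split with the Spearman-Brown correction might look like:

```python
import numpy as np

def split_half_reliability(ratings):
    """Odd-even split-half reliability with Spearman-Brown correction.

    ratings: array of shape (n_subjects, n_trials) holding per-trial
    confidence ratings; returns the corrected reliability estimate.
    """
    ratings = np.asarray(ratings, dtype=float)
    # Mean confidence on even- and odd-indexed trials for each subject
    half_a = ratings[:, 0::2].mean(axis=1)
    half_b = ratings[:, 1::2].mean(axis=1)
    # Pearson correlation between the two half scores
    r = np.corrcoef(half_a, half_b)[0, 1]
    # Spearman-Brown prophecy formula corrects for halved test length
    return 2 * r / (1 + r)

# Synthetic check: a stable per-subject confidence bias plus trial-level
# noise should yield high reliability over a 40-trial task.
rng = np.random.default_rng(0)
bias = rng.normal(0.0, 1.0, size=200)          # subject-level confidence bias
noise = rng.normal(0.0, 0.5, size=(200, 40))   # trial-level noise
reliability = split_half_reliability(bias[:, None] + noise)
```

A fixed odd-even split is the simplest variant; published work often averages over many random splits instead, which gives a less arbitrary estimate at extra computational cost.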
2.
Int J Eat Disord; 55(2): 278-281, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35005784

ABSTRACT

Online methods have become a powerful research tool, allowing us to conduct well-powered studies, to explore and replicate effects, and to recruit often rare and diverse samples. However, concerns about the validity and reliability of the data collected from some platforms have reached a crescendo. In this issue, Burnette et al. (2021) describe how commonly employed protective measures such as captchas, response consistency requirements, and attention checks may no longer be sufficient to ensure high-quality data in survey-based studies on Amazon's Mechanical Turk. We echo and elaborate on these concerns, but believe that, although imperfect, online research will continue to be incredibly important in driving progress in mental health science. Not all platforms or populations are well suited to every research question, and so we posit that the future of online research will be much more varied, and in no small part supported by citizen scientists and those with lived experience. Whatever the medium, researchers cannot stand still; we must continuously reflect on and adapt to technological advances and to the demographic and motivational shifts of our participants. Online research is difficult but worthwhile.


Subject(s)
Attention, Mental Health, Humans, Reproducibility of Results, Surveys and Questionnaires