Results 1 - 2 of 2
1.
Cogn Sci; 47(7): e13313, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37428881

ABSTRACT

We present three experiments using a novel problem in which participants update their estimates of propensities when faced with an uncertain new instance. We examine this using two different causal structures (common cause/common effect) and two different scenarios (agent-based/mechanical). In the first scenario, participants must update their estimate of the propensity of two warring nations to successfully explode missiles after being told of a new explosion on the border between the nations. In the second, participants must update their estimate of the accuracy of two early-warning tests for cancer when the tests produce conflicting reports about a patient. Across both scenarios, we find two modal responses, each representing around one-third of participants. In the first, "Categorical" response, participants update propensity estimates as if they were certain about the single event, for example, certain that one of the nations was responsible for the latest explosion, or certain about which of the two tests is correct. In the second, "No change" response, participants make no update to their propensity estimates at all. Across the three experiments, we develop and test the theory that these two responses in fact stem from a single representation of the problem: because the actual outcome is binary (only one of the nations could have launched the missile; the patient either has cancer or not), these participants believe it is incorrect to update propensities in a graded manner. They therefore operate on a "certainty threshold" basis: if they are certain enough about the single event, they make the "Categorical" response, and if they are below this threshold, they make the "No change" response. Ramifications are considered for the "Categorical" response in particular, as this approach produces a positive-feedback dynamic similar to that seen in the belief-polarization/confirmation-bias literature.
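The graded update that these modal responses depart from can be made concrete with a small Bayesian sketch. The snippet below is written under illustrative assumptions only (Beta(2, 2)-shaped priors over each nation's propensity, a 50/50 prior over which nation launched the missile, a simple grid approximation; none of these numbers come from the paper). It shows how a normative reasoner would nudge both propensity estimates upward in proportion to the attribution probability, rather than attributing the explosion with certainty ("Categorical") or leaving the estimates untouched ("No change").

```python
import numpy as np

# Discretised Bayesian sketch for the missile scenario: two nations, A and B,
# each with an unknown propensity to produce a successful explosion, and a new
# explosion whose origin is uncertain. All priors and numbers are illustrative.

grid = np.linspace(0.01, 0.99, 99)          # candidate propensity values

# Independent Beta(2, 2)-shaped priors over each nation's propensity,
# evaluated on the grid and normalised.
prior_a = grid * (1 - grid)
prior_a /= prior_a.sum()
prior_b = prior_a.copy()

p_launch_a = 0.5                            # prior that nation A launched the missile

# Joint posterior over (p_A, p_B, launcher) given one observed success:
# the likelihood of the explosion is p_A if A launched it, p_B if B did.
post = np.zeros((len(grid), len(grid), 2))
post[:, :, 0] = np.outer(prior_a * grid, prior_b) * p_launch_a        # launcher = A
post[:, :, 1] = np.outer(prior_a, prior_b * grid) * (1 - p_launch_a)  # launcher = B
post /= post.sum()

# Marginal posterior means: both propensities move up, but only gradedly,
# in contrast to the "Categorical" and "No change" responses in the abstract.
mean_a = (post.sum(axis=(1, 2)) * grid).sum()
mean_b = (post.sum(axis=(0, 2)) * grid).sum()
print(f"prior mean propensity:     {(prior_a * grid).sum():.3f}")
print(f"posterior mean, nation A:  {mean_a:.3f}")
print(f"posterior mean, nation B:  {mean_b:.3f}")
print(f"P(A launched | explosion): {post[:, :, 0].sum():.3f}")
```

With these assumed priors, each nation's mean propensity rises modestly (from 0.50 to about 0.55), less than it would under a certain attribution, which is the graded behaviour the "certainty threshold" account says participants reject.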


Subjects
Neoplasms, Humans, Bayes Theorem, Uncertainty, Bias
2.
Front Psychol; 11: 503233, 2020.
Article in English | MEDLINE | ID: mdl-33192757

ABSTRACT

The study of people's ability to engage in causal probabilistic reasoning has typically used fixed-point estimates for key figures. For example, in the classic taxi-cab problem, where a witness provides evidence on which of two cab companies (the more common 'green'/less common 'blue') was responsible for a hit-and-run incident, solvers are told the witness's ability to judge cab color is 80%. In reality, there is likely to be some uncertainty around this estimate (perhaps we tested the witness and they were correct 4/5 times), known as second-order uncertainty, producing a distribution rather than a fixed probability. As well as more closely matching real-world reasoning, this has a further important ramification: our best estimate of the witness's accuracy can and should change when the witness makes the claim that the cab was blue. We present a Bayesian Network model of this problem and show that, while the witness's report does increase our probability of the cab being blue, it simultaneously decreases our estimate of their future accuracy (because blue cabs are less common). We presented this version of the problem to 131 participants, requiring them to update their estimates of both the probability that the cab involved was blue and the witness's accuracy, after the witness claimed it was blue. We also required participants to explain their reasoning process and asked follow-up questions to probe various aspects of their reasoning. While some participants responded normatively, the majority self-reported 'assuming' that one of the probabilities was a certainty. Around a quarter assumed the cab was green, and thus that the witness was wrong, decreasing their estimate of the witness's accuracy. Another quarter assumed the witness was correct and actually increased their estimate of the witness's accuracy, showing a circular logic similar to that seen in the confirmation-bias/belief-polarization literature. Around half of the participants refused to make any change, with convergent evidence suggesting that these participants do not see the witness's report as relevant to their accuracy until it is known for certain whether they are correct or incorrect.
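The central claim, that a report of 'blue' raises the probability the cab was blue while lowering the expected accuracy of the witness, can be reproduced with a short grid-based calculation. The sketch below is not the authors' Bayesian Network implementation; it assumes the classic 15%/85% blue/green base rates and a Beta(4, 1) prior over accuracy (mean 0.8, loosely matching "correct 4/5 times") purely for illustration.

```python
import numpy as np

# Grid-based sketch of second-order uncertainty in the taxi-cab problem:
# the witness's accuracy is a distribution rather than a fixed 80%.
# Base rates and the accuracy prior are illustrative assumptions.

acc = np.linspace(0.001, 0.999, 999)             # candidate accuracy values
prior_acc = acc ** 3                             # unnormalised Beta(4, 1) density
prior_acc /= prior_acc.sum()

p_blue = 0.15                                    # base rate of blue cabs

# Joint posterior over (cab color, accuracy) given the report "blue":
# P(report = blue | blue, a) = a,  P(report = blue | green, a) = 1 - a.
joint_blue  = p_blue * prior_acc * acc
joint_green = (1 - p_blue) * prior_acc * (1 - acc)
evidence = joint_blue.sum() + joint_green.sum()

p_blue_given_report = joint_blue.sum() / evidence
post_acc = (joint_blue + joint_green) / evidence  # marginal posterior over accuracy
mean_acc_post = (post_acc * acc).sum()

print(f"prior P(blue):               {p_blue:.3f}")
print(f"P(blue | witness says blue): {p_blue_given_report:.3f}")   # rises to ~0.41
print(f"prior mean accuracy:         {(prior_acc * acc).sum():.3f}")
print(f"posterior mean accuracy:     {mean_acc_post:.3f}")         # falls to ~0.74
```

Under these assumed numbers, P(blue) rises from 0.15 to roughly 0.41 while the expected accuracy falls from 0.80 to roughly 0.74, reproducing the qualitative pattern described in the abstract: the report is evidence for a blue cab, but a 'blue' report is itself more likely to be an error because blue cabs are rare.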
