Results 1 - 3 of 3
1.
Proc Natl Acad Sci U S A ; 120(42): e2307584120, 2023 Oct 17.
Article in English | MEDLINE | ID: mdl-37812722

ABSTRACT

As social animals, people are highly sensitive to the attention of others. Seeing someone else gaze at an object automatically draws one's own attention to that object. Monitoring the attention of others aids in reconstructing their emotions, beliefs, and intentions and may play a crucial role in social alignment. Recently, however, it has been suggested that the human brain constructs a predictive model of other people's attention that is far more involved than a moment-by-moment monitoring of gaze direction. The hypothesized model learns the statistical patterns in other people's attention and extrapolates how attention is likely to move. Here, we tested the hypothesis of a predictive model of attention. Subjects saw movies of attention displayed as a bright spot shifting around a scene. Subjects were able to correctly distinguish natural attention sequences (based on eye tracking of prior participants) from altered sequences (e.g., played backward or in a scrambled order). Even when the attention spot moved around a blank background, subjects could distinguish natural from scrambled sequences, suggesting a sensitivity to the spatial-temporal statistics of attention. Subjects also showed an ability to recognize the attention patterns of different individuals. These results suggest that people possess a sophisticated model of the normal statistics of attention and can identify deviations from the model. Monitoring attention is therefore more than simply registering where someone else's eyes are pointing. It involves predictive modeling, which may contribute to our remarkable social ability to predict the mind states and behavior of others.
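The natural-versus-altered discrimination described above depends on building control sequences from the same recorded fixations. As a rough illustration only (not the authors' stimulus code; the fixation format and names are assumed), the reversed and temporally scrambled variants of a gaze sequence could be generated like this:

```python
# Minimal sketch (assumed data format): producing "played backward" and
# "scrambled order" variants of an eye-tracking fixation sequence, analogous
# to the altered attention movies described in the abstract above.
import random

def reversed_sequence(gaze):
    """Play the fixation sequence backward in time."""
    return list(reversed(gaze))

def scrambled_sequence(gaze, seed=None):
    """Shuffle fixation order, destroying the temporal statistics
    while preserving the set of attended locations."""
    rng = random.Random(seed)
    shuffled = gaze[:]
    rng.shuffle(shuffled)
    return shuffled

# Example: each fixation is a hypothetical (x, y, duration_ms) tuple.
natural = [(0.62, 0.40, 310), (0.35, 0.55, 220), (0.48, 0.30, 450)]
backward = reversed_sequence(natural)
scrambled = scrambled_sequence(natural, seed=1)
```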


Subjects
Brain; Cognition; Humans; Vision, Ocular; Eye; Emotions
2.
J Vis ; 22(12): 16, 2022 Nov 01.
Article in English | MEDLINE | ID: mdl-36383365

ABSTRACT

When two pre-existing, separated squares are connected by the sudden onset of a bar between them, viewers do not perceive the bar to appear all at once. Instead, they see an illusory morphing of the original squares over time. The direction of this transformational apparent motion (TAM) can be influenced by endogenous attention deployed before the appearance of the connecting bar. Here, we investigated whether the influence of endogenous attention on TAM results from operations over high-level, feature-independent shape representations, or instead over lower-level shape representations defined by specific visual features. To do so, we tested the influence of endogenous attention on TAM in first- and second-order displays, which shared common shapes but had different shape-defining attributes (luminance and texture contrast, respectively). In terms of both the magnitude of directional bias and timing, we found that endogenous attention exerted a similar influence on both first- and second-order objects. These results imply that endogenous attention biases the perceived direction of TAM by operating on high-level shape representations that are invariant to the low-level visual features that define them. Our results support a four-stage model of TAM, in which a feature-encoding stage passes a feature-specific layout to a parsing stage that forms discrete, high-level, meta-featural shapes, which are then matched and visually interpolated over time.


Subjects
Attentional Bias; Motion Perception; Humans; Attention; Vision, Ocular; Motion (Physics)
3.
Behav Res Methods ; 50(6): 2597-2605, 2018 Dec.
Article in English | MEDLINE | ID: mdl-29687235

ABSTRACT

Verbal responses are a convenient and naturalistic way for participants to provide data in psychological experiments (Salzinger, The Journal of General Psychology, 61(1), 65-94, 1959). However, audio recordings of verbal responses typically require additional processing, such as transcribing the recordings into text, as compared with other behavioral response modalities (e.g., typed responses, button presses, etc.). Further, the transcription process is often tedious and time-intensive, requiring human listeners to manually examine each moment of recorded speech. Here we evaluate the performance of a state-of-the-art speech recognition algorithm (Halpern et al., 2016) in transcribing audio data into text during a list-learning experiment. We compare transcripts made by human annotators to the computer-generated transcripts. Both sets of transcripts matched to a high degree and exhibited similar statistical properties in terms of the participants' recall performance and the recall dynamics they captured. This proof-of-concept study suggests that speech-to-text engines could provide a cheap, reliable, and rapid means of automatically transcribing speech data in psychological experiments. Further, our findings open the door for verbal-response experiments that scale to thousands of participants (e.g., administered online), as well as a new generation of experiments that decode speech on the fly and adapt experimental parameters based on participants' prior responses.
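To make the pipeline above concrete, here is a minimal sketch, in Python, of automatically transcribing one recorded recall response and comparing it against a human-made transcript. This is an assumed setup for illustration only: it uses the third-party SpeechRecognition package with a generic cloud recognizer, not the specific engine evaluated in the study, and the file names are hypothetical.

```python
# Minimal sketch (assumed setup, not the engine evaluated in the study):
# transcribe a recorded verbal recall response, then compare the recalled
# words against a human transcript of the same recording.
import speech_recognition as sr

def transcribe(wav_path):
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)  # read the entire recording
    # Send the audio to a generic cloud speech-to-text service.
    return recognizer.recognize_google(audio)

def recalled_words(transcript):
    """Normalize a transcript into a list of recalled words."""
    return transcript.lower().split()

# Hypothetical file names for one participant's recall period.
machine = recalled_words(transcribe("participant_01_recall.wav"))
human = recalled_words(open("participant_01_recall_human.txt").read())

# Simple agreement measure: proportion of human-transcribed words
# that also appear in the machine transcript.
overlap = len(set(machine) & set(human)) / max(len(set(human)), 1)
print(f"Word overlap with human transcript: {overlap:.2f}")
```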


Subjects
Behavioral Research/methods; Behavioral Research/standards; Mental Recall; Speech Recognition Software/standards; Speech; Adolescent; Female; Humans; Male; Young Adult