Results 1 - 3 of 3
1.
Cognition; 247: 105788, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38579638

ABSTRACT

In real-world vision, people prioritise the most informative scene regions via eye movements. According to the cognitive guidance theory of visual attention, viewers allocate visual attention to those parts of the scene that are expected to be the most informative. The expected information of a scene region is coded in the semantic distribution of that scene. Meaning maps have been proposed to capture the spatial distribution of local scene semantics and thereby to test cognitive guidance theories of attention. Notwithstanding their success, the reason why meaning maps predict visual attention has been contested, leading to at least two possible explanations. On the one hand, meaning maps might measure scene semantics. On the other hand, they might measure scene features that overlap with, but are distinct from, scene semantics. This study aims to disentangle these two sources of information by considering conceptual information and non-semantic scene entropy simultaneously. We found that meaning maps capture both semantic and non-semantic information, but scene entropy accounted for more unique variance in their success than conceptual information did. Additionally, some of the explained variance was unaccounted for by either source. Thus, although meaning maps may index some aspect of semantic information, their success seems better explained by non-semantic information. We conclude that meaning maps may not yet be a good tool for testing cognitive guidance theories of attention in general, since they capture non-semantic aspects of local semantic density and only a small portion of conceptual information. Rather, we suggest that researchers first define the exact aspect of cognitive guidance theory they wish to test and then use the tool that best captures the desired semantic information. As it stands, the semantic information contained in meaning maps is too ambiguous to support strong conclusions about how and when semantic information guides visual attention.
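To make the non-semantic predictor concrete, here is a minimal sketch of a patch-level Shannon-entropy map of the sort that could stand in for scene entropy; the non-overlapping 32-pixel grid and 8-bit intensity histogram are illustrative assumptions, not the study's exact pipeline.

```python
# Minimal sketch: patch-level Shannon entropy as a non-semantic feature map.
# Grid size and 8-bit histogram binning are illustrative assumptions.
import numpy as np

def entropy_map(image: np.ndarray, patch: int = 32) -> np.ndarray:
    """Shannon entropy (bits) of pixel intensities per non-overlapping patch.

    image: 2-D uint8 array (grayscale scene).
    """
    rows, cols = image.shape[0] // patch, image.shape[1] // patch
    out = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            tile = image[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch]
            counts = np.bincount(tile.ravel(), minlength=256)
            p = counts / counts.sum()
            p = p[p > 0]  # drop empty bins before taking logs
            out[r, c] = -(p * np.log2(p)).sum()
    return out

# A uniform-noise "scene" has near-maximal (~8-bit) entropy in every patch.
rng = np.random.default_rng(0)
scene = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
print(entropy_map(scene).round(2))
```

A variance-partitioning analysis of the kind the abstract describes would then regress fixation density on such an entropy map alongside a meaning map and a conceptual-information map, isolating the unique variance each predictor explains.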

2.
Psychol Sci; 35(6): 623-634, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38652604

ABSTRACT

Viewers use contextual information to visually explore complex scenes. Object recognition is facilitated by exploiting object-scene relations (which objects are expected in a given scene) and object-object relations (which objects are expected because of the occurrence of other objects). Semantically inconsistent objects deviate from these expectations and therefore tend to capture viewers' attention (the semantic-inconsistency effect). Some objects fit the identity of a scene better than others, yet semantic inconsistency has hitherto been operationalized as binary (consistent vs. inconsistent). In an eye-tracking experiment (N = 21 adults), we studied the semantic-inconsistency effect in a continuous manner, using the linguistic-semantic similarity of an object to the scene category and to the other objects in the scene. We found that both highly consistent and highly inconsistent objects were viewed more than other objects (a U-shaped relationship), revealing that the (in)consistency effect is more than a simple binary classification.
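A minimal sketch of this continuous operationalization, assuming cosine similarity over word embeddings; the 3-D toy vectors and the object/scene labels below are hypothetical, and a real analysis would use pretrained embeddings (e.g., word2vec or GloVe) for the actual labels.

```python
# Minimal sketch: continuous object-scene and object-object consistency
# via embedding cosine similarity. The toy 3-D vectors are hypothetical.
import numpy as np

EMB = {
    "kitchen":   np.array([0.9, 0.1, 0.0]),
    "toaster":   np.array([0.8, 0.2, 0.1]),
    "hairdryer": np.array([0.1, 0.9, 0.2]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def consistency(obj: str, scene: str, others: list[str]) -> dict:
    """Similarity to the scene label, and mean similarity to co-occurring objects."""
    return {
        "object-scene": cosine(EMB[obj], EMB[scene]),
        "object-object": float(np.mean([cosine(EMB[obj], EMB[o]) for o in others])),
    }

print(consistency("toaster", "kitchen", others=["hairdryer"]))   # high consistency
print(consistency("hairdryer", "kitchen", others=["toaster"]))   # low consistency
```

These graded scores, rather than a consistent/inconsistent label, would then serve as the predictor of viewing time on each object.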


Subjects
Pattern Recognition, Visual; Semantics; Humans; Adult; Female; Male; Young Adult; Pattern Recognition, Visual/physiology; Attention/physiology; Eye-Tracking Technology; Recognition, Psychology; Visual Perception/physiology
3.
Elife; 12, 2023 Dec 11.
Article in English | MEDLINE | ID: mdl-38079481

ABSTRACT

Many species are able to recognize objects, but it has proven difficult to pinpoint and compare how different species solve this task. Recent research has suggested combining computational and animal modelling to obtain a more systematic understanding of task complexity and to compare strategies between species. In this study, we created a large multidimensional stimulus set and designed a visual discrimination task partially based upon modelling with a convolutional deep neural network (CNN). Experiments included rats (N = 11; 1,115 daily sessions in total across all rats) and humans (N = 45). Each species was able to master the task and generalize to a variety of new images. Nevertheless, rats and humans showed very little convergence in which object pairs were associated with high and low performance, suggesting the use of different strategies. There was an interaction between species and whether stimulus pairs favoured early or late processing in a CNN. A direct comparison with CNN representations and visual feature analyses revealed that rat performance was best captured by late convolutional layers and partially by low-level visual features such as brightness and pixel-level similarity, while human performance related more to the later, fully connected layers. These findings highlight the added value of a computational approach to the design of object recognition tasks. Overall, this computationally informed investigation of object recognition behaviour reveals a strong discrepancy in strategies between rodent and human vision.
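A minimal sketch of scoring a stimulus pair in early versus late layers of a pretrained CNN, in the spirit of this computationally informed task design; AlexNet and the chosen layer cut-offs are illustrative assumptions, not the network or layers used in the study.

```python
# Minimal sketch: compare two stimuli in early vs. late CNN feature spaces.
# AlexNet and the layer indices are illustrative assumptions.
import torch
import torchvision.models as models

net = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()

def activations(x: torch.Tensor, upto: int) -> torch.Tensor:
    """Flattened output of net.features[:upto] for a batch of images."""
    with torch.no_grad():
        return net.features[:upto](x).flatten(1)

def pair_similarity(a: torch.Tensor, b: torch.Tensor, upto: int) -> float:
    """Cosine similarity of two stimuli in the chosen layer's feature space."""
    return torch.nn.functional.cosine_similarity(
        activations(a, upto), activations(b, upto)).item()

# Two random "stimuli"; a real analysis would use the task images and relate
# these per-pair similarities to behavioural performance in each species.
a, b = torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224)
print("early (conv1):", pair_similarity(a, b, upto=2))
print("late  (conv5):", pair_similarity(a, b, upto=12))
```

Pairs that are hard to separate in early layers but easy in late layers (or vice versa) are the ones that let such a design dissociate low-level from higher-level discrimination strategies.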


Subjects
Pattern Recognition, Visual; Rodents; Humans; Rats; Animals; Recognition, Psychology; Visual Perception; Neural Networks, Computer