Results 1 - 3 of 3
1.
J Vis; 23(6): 9, 2023 Jun 01.
Article in English | MEDLINE | ID: mdl-37318440

ABSTRACT

What determines how much one encodes into visual working memory? Traditionally, encoding depth is considered to be indexed by spatiotemporal properties of gaze, such as gaze position and dwell time. Although these properties indicate where and for how long one looks, they do not necessarily reveal the current arousal state or how strongly attention is deployed to facilitate encoding. Here, we found that two types of pupillary dynamics predict how much information is encoded during a copy task. The task involved encoding a spatial pattern of multiple items for later reproduction. Results showed that smaller baseline pupil sizes preceding encoding and stronger pupil orienting responses during encoding predicted that more information was encoded into visual working memory. Additionally, we show that pupil size reflects not only how much but also how precisely material is encoded. We argue that a smaller pupil size preceding encoding is related to increased exploitation, whereas larger pupil constrictions signal stronger attentional (re)orienting to the to-be-encoded pattern. Our findings support the notion that the depth of visual working memory encoding is the integrative outcome of distinct aspects of attention: how alert one is, how much attention one deploys, and how long it is deployed. Together, these factors determine how much information is encoded into visual working memory.
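As a concrete illustration, the two pupil metrics described above could be computed per trial roughly as follows. This is a minimal sketch, assuming a pupil-size trace sampled over time with a known encoding onset; the function name and window lengths are illustrative assumptions, not the authors' analysis pipeline.

    import numpy as np

    def pupil_encoding_metrics(pupil, t, enc_onset, baseline_win=0.5, enc_win=1.0):
        # Baseline: mean pupil size in the window just before encoding onset
        # (window lengths in seconds are assumed, not taken from the study).
        base = pupil[(t >= enc_onset - baseline_win) & (t < enc_onset)].mean()
        # Orienting response: peak constriction relative to baseline while
        # the to-be-encoded pattern is inspected.
        enc = pupil[(t >= enc_onset) & (t < enc_onset + enc_win)]
        return base, base - enc.min()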


Subject(s)
Attention; Memory, Short-Term; Humans; Memory, Short-Term/physiology; Attention/physiology; Pupil/physiology
2.
R Soc Open Sci; 7(10): 200595, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33204449

ABSTRACT

Convolutional neural networks (CNNs) achieve state-of-the-art performance on many pattern recognition problems but can be fooled by carefully crafted patterns of noise. We report that CNN face recognition systems also make surprising 'errors'. We tested six commercial face recognition CNNs and found that they outperform typical human participants on standard face-matching tasks. However, they also declare matches that humans would not: pairs in which one image has been transformed to appear to be of a different sex or race. This is not due to poor performance; the best CNNs perform almost perfectly on the human face-matching tasks, yet also declare the most matches for faces of a different apparent race or sex. Although they differ in the salience they assign to sex and race, humans and computer systems are not working in completely different ways. They tend to find the same pairs of images difficult, suggesting some agreement about the underlying similarity space.
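For intuition, embedding-based face matchers of this kind typically reduce each image to a feature vector and declare a match when the similarity between two vectors exceeds a threshold. The sketch below assumes cosine similarity and an arbitrary cutoff; commercial systems do not expose their internals, so this is an assumption-laden illustration, not any vendor's API.

    import numpy as np

    def declare_match(emb_a, emb_b, threshold=0.6):
        # Cosine similarity between two face embeddings; the 0.6 cutoff is an
        # arbitrary placeholder, not a vendor-calibrated operating point.
        sim = float(emb_a @ emb_b / (np.linalg.norm(emb_a) * np.linalg.norm(emb_b)))
        return sim >= threshold, sim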

3.
Cortex; 122: 108-114, 2020 Jan.
Article in English | MEDLINE | ID: mdl-30685062

ABSTRACT

We use visual working memory (VWM) to maintain the visual features of objects in our world. Although the capacity of VWM is limited, this limit rarely poses a problem in daily life, because visual information can be supplemented with input from the external visual world through eye movements. In the current study, we manipulated the trade-off between eye movements and VWM utilization by attaching a cost to saccades. Higher costs were created by adding a delay in stimulus availability to a copying task. We show that an increased saccade cost results in fewer saccades towards the model and an increased dwell time on the model. These results suggest a shift from making eye movements towards taxing internal VWM. Our findings reveal that the trade-off between executing eye movements and building an internal representation of our world rests on an adaptive mechanism, governed by cost-efficiency.
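To make the gaze measures concrete, inspections of the model can be approximated from raw gaze samples as region entries (saccades towards the model) and time spent inside the region (dwell time). This is a minimal sketch assuming screen-coordinate gaze samples and a rectangular model region; the function name and the rectangle representation are illustrative assumptions, not the authors' pipeline.

    import numpy as np

    def model_inspections(gaze_x, gaze_y, t, model_rect):
        # model_rect = (x0, y0, x1, y1): screen region showing the model pattern.
        x0, y0, x1, y1 = model_rect
        on_model = (gaze_x >= x0) & (gaze_x <= x1) & (gaze_y >= y0) & (gaze_y <= y1)
        # Each off-to-on transition approximates one saccade towards the model.
        entries = int(np.sum(np.diff(on_model.astype(int)) == 1))
        # Dwell time: sum of sample durations while gaze is on the model.
        dwell = float(np.sum(np.diff(t, prepend=t[0])[on_model]))
        return entries, dwell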


Subject(s)
Memory, Short-Term; Saccades; Eye Movements; Humans; Visual Perception