1.
Psychol Sci ; 35(6): 623-634, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38652604

ABSTRACT

Viewers use contextual information to visually explore complex scenes. Object recognition is facilitated by exploiting object-scene relations (which objects are expected in a given scene) and object-object relations (which objects are expected because of the occurrence of other objects). Semantically inconsistent objects deviate from these expectations, so they tend to capture viewers' attention (the semantic-inconsistency effect). Some objects fit the identity of a scene more or less than others, yet semantic inconsistencies have hitherto been operationalized as binary (consistent vs. inconsistent). In an eye-tracking experiment (N = 21 adults), we study the semantic-inconsistency effect in a continuous manner by using the linguistic-semantic similarity of an object to the scene category and to other objects in the scene. We found that both highly consistent and highly inconsistent objects are viewed more than other objects (U-shaped relationship), revealing that the (in)consistency effect is more than a simple binary classification.
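The continuous operationalization described in this abstract rests on linguistic-semantic similarity between an object, its scene category, and co-occurring objects. A minimal sketch of that kind of score, using hypothetical toy vectors in place of the pretrained word embeddings such studies typically use:

```python
import math

def cosine_similarity(u, v):
    # Cosine of the angle between two embedding vectors:
    # 1.0 = same direction, 0.0 = orthogonal (unrelated).
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def object_scene_consistency(object_vec, scene_vec, other_object_vecs):
    # Continuous (in)consistency score: similarity of the object to the
    # scene category (object-scene relations), averaged with its mean
    # similarity to the other objects present (object-object relations).
    # The equal weighting of the two terms is an illustrative assumption.
    scene_sim = cosine_similarity(object_vec, scene_vec)
    object_sims = [cosine_similarity(object_vec, o) for o in other_object_vecs]
    mean_object_sim = sum(object_sims) / len(object_sims)
    return (scene_sim + mean_object_sim) / 2
```

A continuous score like this is what allows a U-shaped relationship with viewing time to emerge, which a binary consistent/inconsistent label cannot express.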


Subject(s)
Pattern Recognition, Visual , Semantics , Humans , Adult , Female , Male , Young Adult , Pattern Recognition, Visual/physiology , Attention/physiology , Eye-Tracking Technology , Recognition, Psychology , Visual Perception/physiology
2.
Cognition ; 247: 105788, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38579638

ABSTRACT

In real-world vision, people prioritise the most informative scene regions via eye-movements. According to the cognitive guidance theory of visual attention, viewers allocate visual attention to those parts of the scene that are expected to be the most informative. The expected information of a scene region is coded in the semantic distribution of that scene. Meaning maps have been proposed to capture the spatial distribution of local scene semantics in order to test cognitive guidance theories of attention. Notwithstanding the success of meaning maps, the reason for their success has been contested. This has led to at least two possible explanations for the success of meaning maps in predicting visual attention. On the one hand, meaning maps might measure scene semantics. On the other hand, meaning maps might measure scene features, overlapping with, but distinct from, scene semantics. This study aims to disentangle these two sources of information by considering both conceptual information and non-semantic scene entropy simultaneously. We found that both semantic and non-semantic information is captured by meaning maps, but scene entropy accounted for more unique variance in the success of meaning maps than conceptual information. Additionally, some explained variance was unaccounted for by either source of information. Thus, although meaning maps may index some aspect of semantic information, their success seems to be better explained by non-semantic information. We conclude that meaning maps may not yet be a good tool to test cognitive guidance theories of attention in general, since they capture non-semantic aspects of local semantic density and only a small portion of conceptual information. Rather, we suggest that researchers should better define the exact aspect of cognitive guidance theories they wish to test and then use the tool that best captures that desired semantic information. As it stands, the semantic information contained in meaning maps seems too ambiguous to draw strong conclusions about how and when semantic information guides visual attention.
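The "non-semantic scene entropy" contrasted with conceptual information above can be illustrated by Shannon entropy over a local intensity histogram. A minimal sketch under the assumption that entropy is computed per image patch; the bin count and intensity range are illustrative, not the paper's exact parameters:

```python
import math
from collections import Counter

def patch_entropy(intensities, n_bins=8, max_value=255):
    # Shannon entropy (bits) of the intensity histogram of one patch:
    # higher values indicate a less predictable, more feature-rich
    # region, independent of what the region means.
    bin_width = (max_value + 1) / n_bins
    counts = Counter(int(v // bin_width) for v in intensities)
    total = len(intensities)
    entropy = 0.0
    for c in counts.values():
        p = c / total
        entropy -= p * math.log2(p)
    return entropy
```

A map of such patch entropies is purely image-computable, which is what makes it a useful non-semantic control for meaning maps.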

3.
Trends Cogn Sci ; 27(11): 983-984, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37696691

ABSTRACT

Davis and Bainbridge reveal a consistent memorability signal for artworks, both online and in a museum setting, which is predicted by the intrinsic visual attributes of the paintings. The fusion of artificial intelligence (AI) with artistic intuition emerges as a promising avenue to deepen our understanding of what makes images memorable.

4.
J Vis ; 23(4): 1, 2023 04 03.
Article in English | MEDLINE | ID: mdl-37010831

ABSTRACT

Through the manipulation of color and form, visual abstract art is often used to convey feelings and emotions. Here, we explored how colors and lines are used to express basic emotions and whether non-artists express emotions through art in similar ways as trained artists. Both artists and non-artists created abstract color drawings and line drawings depicting six emotions (i.e., anger, disgust, fear, joy, sadness, and wonder). To test whether people represented basic emotions in similar ways, we computationally predicted the emotion of a given drawing by comparing it to a set of references created by averaging across all other participants' drawings within each emotion category. We found that prediction accuracy was higher for color drawings than line drawings and higher for color drawings by non-artists than by artists. In a behavioral experiment, we found that people (N = 242) could also accurately infer emotions, showing the same pattern of results as our computational predictions. Further computational analyses of the drawings revealed systematic use of certain colors and line features to depict each basic emotion (e.g., anger is generally redder and more densely drawn than other emotions, sadness is more blue and contains more vertical lines). Taken together, these results imply that abstract color and line drawings are able to convey certain emotions based on their visual features, which are also used by human observers to understand the intended emotional connotation of abstract artworks.
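The leave-one-out prediction procedure described above (comparing each drawing to per-emotion averages of all other participants' drawings) can be sketched as nearest-centroid classification. The feature representation and Euclidean metric here are illustrative assumptions, not the paper's exact pipeline:

```python
def centroid(vectors):
    # Element-wise mean of a list of equal-length feature vectors,
    # i.e. the "average drawing" for one emotion category.
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def euclidean(u, v):
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def predict_emotion(drawing, drawings_by_emotion):
    # Leave-one-out: average all *other* drawings within each emotion
    # category, then assign the held-out drawing to the nearest centroid.
    best_label, best_dist = None, float("inf")
    for label, drawings in drawings_by_emotion.items():
        others = [d for d in drawings if d is not drawing]
        if not others:
            continue
        dist = euclidean(drawing, centroid(others))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label
```

Prediction accuracy under this scheme is high exactly when participants depict an emotion in systematically similar ways, which is the question the study asks.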


Subject(s)
Facial Expression , Sadness , Humans , Sadness/psychology , Emotions , Anger , Visual Perception
5.
Sci Rep ; 11(1): 19405, 2021 09 30.
Article in English | MEDLINE | ID: mdl-34593933

ABSTRACT

Quickly scanning an environment to determine relative threat is an essential part of survival. Scene gist extracted rapidly from the environment may help people detect threats. Here, we probed this link between emotional judgements and features of visual scenes. We first extracted curvature, length, and orientation statistics of all images in the International Affective Picture System image set and related them to emotional valence scores. Images containing angular contours were rated as negative, and images containing long contours as positive. We then composed new abstract line drawings with specific combinations of length, angularity, and orientation values and asked participants to rate them as positive or negative, and as safe or threatening. Smooth, long, horizontal contour scenes were rated as positive/safe, while short angular contour scenes were rated as negative/threatening. Our work shows that particular combinations of image features help people make judgements about potential threat in the environment.
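The contour statistics this abstract relates to valence (length, angularity) can be sketched for a contour given as a polyline. The specific measures below, total arc length and mean absolute turning angle, are illustrative assumptions rather than the paper's exact feature definitions:

```python
import math

def contour_length(points):
    # Total arc length of a polyline given as (x, y) points;
    # longer smooth contours were associated with positive ratings.
    return sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))

def mean_turning_angle(points):
    # Mean absolute turning angle (radians) at interior vertices:
    # 0 for a straight line, large for sharply angular contours,
    # which were associated with negative/threatening ratings.
    angles = []
    for i in range(1, len(points) - 1):
        (x0, y0), (x1, y1), (x2, y2) = points[i - 1], points[i], points[i + 1]
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        turn = abs((a2 - a1 + math.pi) % (2 * math.pi) - math.pi)
        angles.append(turn)
    return sum(angles) / len(angles) if angles else 0.0
```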

6.
Cognition ; 212: 104698, 2021 07.
Article in English | MEDLINE | ID: mdl-33798948

ABSTRACT

Current theories propose that our sense of curiosity is determined by the learning progress or information gain that our cognitive system expects to make. However, few studies have explicitly tried to quantify subjective information gain and link it to measures of curiosity. Here, we asked people to report their curiosity about the intrinsically engaging perceptual 'puzzles' known as Mooney images, and to report on the strength of their aha experience upon revealing the solution image (curiosity relief). We also asked our participants (N = 279) to make a guess concerning the solution of the image, and used the distribution of these guesses to compute the crowdsourced semantic entropy (or ambiguity) of the images, as a measure of the potential for information gain. Our results confirm that curiosity and, even more so, aha experience is substantially associated with this semantic information gain measure. These findings support the expected information gain theory of curiosity and suggest that the aha experience or intrinsic reward is driven by the actual information gain. In an unannounced memory part, we also established that the often reported influence of curiosity on memory is fully mediated by the aha experience or curiosity relief. We discuss the implications of our results for the burgeoning fields of curiosity and psychoaesthetics.
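The crowdsourced semantic entropy described above, computed from the distribution of participants' guesses about each Mooney image, is a direct application of Shannon entropy to a label distribution. A minimal sketch, assuming guesses have already been normalized into discrete labels:

```python
import math
from collections import Counter

def semantic_entropy(guesses):
    # Shannon entropy (bits) of the distribution of solution guesses:
    # 0 when every participant guesses the same thing (unambiguous),
    # higher the more the guesses disagree (ambiguous image).
    counts = Counter(guesses)
    total = len(guesses)
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())
```

For example, an image where all guesses agree scores 0 bits, while one drawing four equally frequent guesses scores 2 bits, quantifying the image's potential for information gain.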


Subject(s)
Exploratory Behavior , Memory , Humans , Learning , Reward
7.
Atten Percept Psychophys ; 81(1): 35-46, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30191476

ABSTRACT

Our research has previously shown that scene categories can be predicted from observers' eye movements when they view photographs of real-world scenes. The time course of category predictions reveals the differential influences of bottom-up and top-down information. Here we used these known differences to determine to what extent image features at different representational levels contribute toward guiding gaze in a category-specific manner. Participants viewed grayscale photographs and line drawings of real-world scenes while their gaze was tracked. Scene categories could be predicted from fixation density at all times over a 2-s time course in both photographs and line drawings. We replicated the shape of the prediction curve found previously, with an initial steep decrease in prediction accuracy from 300 to 500 ms, representing the contribution of bottom-up information, followed by a steady increase, representing top-down knowledge of category-specific information. We then computed the low-level features (luminance contrasts and orientation statistics), mid-level features (local symmetry and contour junctions), and Deep Gaze II output from the images, and used that information as a reference in our category predictions in order to assess their respective contributions to category-specific guidance of gaze. We observed that, as expected, low-level salience contributes mostly to the initial bottom-up peak of gaze guidance. Conversely, the mid-level features that describe scene structure (i.e., local symmetry and junctions) split their contributions between bottom-up and top-down attentional guidance, with symmetry contributing to both bottom-up and top-down guidance, while junctions play a more prominent role in the top-down guidance of gaze.
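Predicting scene category from fixation density, as described above, presupposes binning fixations into a spatial grid and comparing the resulting map against category-typical patterns. A minimal sketch; the grid size, normalization, and nearest-template rule are illustrative assumptions, not the authors' exact classifier:

```python
def fixation_density_map(fixations, width, height, grid=4):
    # Bin (x, y) fixations into a grid x grid histogram and normalize
    # it to sum to 1: a discrete fixation density map for one trial.
    counts = [[0] * grid for _ in range(grid)]
    for x, y in fixations:
        col = min(int(x / width * grid), grid - 1)
        row = min(int(y / height * grid), grid - 1)
        counts[row][col] += 1
    total = len(fixations)
    return [c / total for row in counts for c in row]

def predict_category(density, templates):
    # Assign the viewing pattern to the category whose average density
    # template it matches most closely (smallest summed abs. difference).
    return min(templates,
               key=lambda k: sum(abs(a - b)
                                 for a, b in zip(density, templates[k])))
```

Running the same prediction on densities accumulated from successive time windows is what yields the 2-s time course of prediction accuracy reported in the abstract.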


Subject(s)
Attention/physiology , Eye Movements/physiology , Fixation, Ocular/physiology , Orientation, Spatial/physiology , Photic Stimulation/methods , Adolescent , Female , Humans , Male , Vision, Ocular/physiology , Visual Perception/physiology , Young Adult
8.
Cognition ; 184: 119-129, 2019 03.
Article in English | MEDLINE | ID: mdl-30594878

ABSTRACT

A long line of research has shown that vision and memory are closely linked, such that particular eye movement behaviour aids memory performance. In two experiments, we ask whether the positive influence of eye movements on memory is primarily a result of overt visual exploration during the encoding or the recognition phase. Experiment 1 allowed participants to free-view images of scenes, followed by a new-old recognition memory task. Exploratory analyses found that eye movements during study were predictive of subsequent memory performance. Importantly, intrinsic image memorability does not explain this finding. Eye movements during test were only predictive of memory within the first 600 ms of the trial. To examine whether this relationship between eye movements and memory is causal, Experiment 2 manipulated participants' ability to make eye movements during either study or test in a new-old recognition task. Participants were either encouraged to freely explore the scene in both the study and test phases, or had to refrain from making eye movements in either the test phase, the study phase, or both. We found that hit rate was significantly higher when participants moved their eyes during the study phase, regardless of what they did in the test phase. False alarm rate, on the other hand, was affected only by eye movements during the test phase: it decreased when participants were encouraged to explore the scene. Taken together, these results reveal a dissociation of the role of eye movements during the encoding and recognition of scenes. Eye movements during study are instrumental in forming memories, and eye movements during recognition support the judgment of memory veracity.
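The hit-rate and false-alarm-rate measures dissociated in Experiment 2 follow standard old/new recognition scoring. A minimal sketch of that scoring:

```python
def recognition_rates(trials):
    # trials: list of (is_old, said_old) booleans per test item.
    # Hit rate = P("old" response | item was studied);
    # false alarm rate = P("old" response | item was new).
    hits = sum(1 for old, resp in trials if old and resp)
    misses = sum(1 for old, resp in trials if old and not resp)
    fas = sum(1 for old, resp in trials if not old and resp)
    crs = sum(1 for old, resp in trials if not old and not resp)
    return hits / (hits + misses), fas / (fas + crs)
```

Because the two rates are conditioned on different item types, they can dissociate exactly as reported: encoding-phase eye movements raised the hit rate, while test-phase eye movements lowered the false alarm rate.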


Subject(s)
Eye Movements/physiology , Memory/physiology , Mental Recall/physiology , Adult , Female , Humans , Male , Neuropsychological Tests , Photic Stimulation , Recognition, Psychology/physiology