Results 1 - 20 of 49
1.
Psychon Bull Rev ; 2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39020241

ABSTRACT

Temporal Binding (TB) is the subjective compression of action-effect intervals. While the effects of nonsocial actions are highly predictable, this is not the case when interacting with conspecifics, who often act under their own volition, at a time of their choosing. Given the relative differences in action-effect predictability in nonsocial and social interactions, it is plausible that TB and its properties differ across these situations. To examine this, in two experiments we compared the time course of TB in social and nonsocial interactions, systematically varying action-effect intervals (200-2,100 ms). Participants were told they were (a) interacting with another person via a live webcam, who was in fact a confederate (social condition), (b) interacting with pre-recorded videos (nonsocial condition), or (c) observing two pre-recorded videos (control condition; Experiment 2). Results across experiments showed greater TB for social compared to nonsocial conditions, and the difference was proportional to the action-effect intervals. Further, in Experiment 1, TB was consistently observed throughout the experiment for social interactions, whereas nonsocial TB decreased from the first to the second half of the experiment. In Experiment 2, the nonsocial condition did not differ from control, whereas the social condition did, exhibiting enhanced binding. We argue these results suggest that the sociality of an interaction modulates the 'internal clock' of time perception.

2.
Exp Aging Res ; : 1-18, 2022 Dec 26.
Article in English | MEDLINE | ID: mdl-36572660

ABSTRACT

Previous research investigated age differences in gaze following with an attentional cueing paradigm in which participants view a face with averted gaze, and then respond to a target appearing in a location congruent or incongruent with the gaze cue. However, this paradigm is far removed from the way we use gaze cues in everyday settings. Here we recorded the eye movements of younger and older adults while they freely viewed naturalistic scenes in which a person looked at an object or location. Older adults were more likely to fixate the gazed-at location, and made more fixations to it, than younger adults. Our findings suggest that, contrary to what was observed in the traditional gaze-cueing paradigm, in a non-constrained task that uses contextualized stimuli older adults follow gaze as much as, or even more than, younger adults.

3.
Atten Percept Psychophys ; 83(5): 1954-1970, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33748905

ABSTRACT

Searching for an object in a complex scene is influenced by high-level factors such as how much the item would be expected in that setting (semantic consistency). There is also evidence that a person gazing at an object directs our attention towards it. However, little previous research has examined how we integrate top-down cues such as semantic consistency and gaze to direct attention when searching for an object. Also, there are separate lines of evidence suggesting that older adults may be more influenced by semantic factors and less by gaze cues than their younger counterparts, but this has not previously been investigated in an integrated task. In the current study we analysed eye-movements of 34 younger and 30 older adults as they searched for a target object in complex visual scenes. Younger adults were influenced by semantic consistency in their attention to objects, but were more influenced by gaze cues. In contrast, older adults were more guided by semantic consistency in directing their attention, and showed less influence from gaze cues. These age differences in use of high-level cues were apparent early in processing (time to first fixation and probability of immediate fixation) but not in later processing (total time looking at objects and time to make a response). Overall, this pattern of findings indicates that people are influenced by both social cues and prior expectations when processing a complex scene, and the relative importance of these factors depends on age.


Subject(s)
Eye Movements , Fixation, Ocular , Aged , Cues , Humans , Pattern Recognition, Visual , Photic Stimulation , Reaction Time , Semantics , Visual Perception
4.
Cogn Res Princ Implic ; 6(1): 11, 2021 02 18.
Article in English | MEDLINE | ID: mdl-33599890

ABSTRACT

CCTV plays a prominent role in public security, health and safety. Monitoring large arrays of CCTV camera feeds is a visually and cognitively demanding task. Arranging the scenes by geographical proximity in the surveilled environment has been recommended to reduce this demand, but empirical tests of this method have failed to find any benefit. The present study tests an alternative method for arranging scenes, based on psychological principles from literature on visual search and scene perception: grouping scenes by semantic similarity. Searching for a particular scene in the array-a common task in reactive and proactive surveillance-was faster when scenes were arranged by semantic category. This effect was found only when scenes were separated by gaps for participants who were not made aware that scenes in the multiplex were grouped by semantics (Experiment 1), but irrespective of whether scenes were separated by gaps or not for participants who were made aware of this grouping (Experiment 2). When target frequency varied between scene categories-mirroring unequal distributions of crime over space-the benefit of organising scenes by semantic category was enhanced for scenes in the most frequently searched-for category, without any statistical evidence for a cost when searching for rarely searched-for categories (Experiment 3). The findings extend current understanding of the role of within-scene semantics in visual search, to encompass between-scene semantic relationships. Furthermore, the findings suggest that arranging scenes in the CCTV control room by semantic category is likely to assist operators in finding specific scenes during surveillance.


Subject(s)
Semantics , Visual Perception , Awareness , Humans , Organizations , Research Design
5.
Psychon Bull Rev ; 28(2): 434-453, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33289061

ABSTRACT

Sense of Agency, the phenomenology associated with causing one's own actions and corresponding effects, is a cornerstone of human experience. Social Agency can be defined as the Sense of Agency experienced in any situation in which the effects of our actions are related to a conspecific. This can be implemented as the other's reactions being caused by our action, joint action modulating our Sense of Agency, or the other's mere social presence influencing our Sense of Agency. It is currently an open question how such Social Agency can be conceptualized and how it relates to its nonsocial variant. This is because, compared with nonsocial Sense of Agency, the concept of Social Agency has remained oversimplified and underresearched, with disparate empirical paradigms yielding divergent results. Reviewing the empirical evidence and the commonalities and differences between different instantiations of Social Agency, we propose that Social Agency can be conceptualized as a continuum, in which the degree of cooperation is the key dimension that determines our Sense of Agency, and how it relates to nonsocial Sense of Agency. Taking this perspective, we review how the different factors that typically influence Sense of Agency affect Social Agency, and in the process highlight outstanding empirical questions within the field. Finally, concepts from wider research areas are discussed in relation to the ecological validity of Social Agency paradigms, and we provide recommendations for future methodology.


Subject(s)
Self Concept , Social Behavior , Humans
6.
Behav Brain Sci ; 43: e128, 2020 06 19.
Article in English | MEDLINE | ID: mdl-32645807

ABSTRACT

If we consider perceptions as arising from predictive processes, we must consider the manner in which the underlying expectations are formed and how they are applied to the sensory data. We provide examples of cases where expectations give rise to unexpected and unlikely perceptions of the world. These examples may help define bounds for the notion that perceptual hypotheses are direct derivatives of experience and are used to furnish sensible interpretations of sensory data.

7.
Accid Anal Prev ; 138: 105469, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32113007

ABSTRACT

Previous research has demonstrated that the distraction caused by holding a mobile telephone conversation is not limited to the period of the actual conversation (Haigney, 1995; Redelmeier & Tibshirani, 1997; Savage et al., 2013). In a prior study we identified potential eye movement and EEG markers of cognitive distraction during driving hazard perception. However, the extent to which these markers are affected by the demands of the hazard perception task is unclear. Therefore, in the current study we assessed the effects of secondary cognitive task demand on eye movement and EEG metrics separately for periods prior to, during, and after the hazard was visible. We found that when no hazard was present (prior and post hazard windows), distraction resulted in changes to various elements of saccadic eye movements. However, when the target was present, distraction did not affect eye movements. We have previously found evidence that distraction resulted in an overall decrease in theta band output at occipital sites of the brain. This was interpreted as evidence that distraction results in a reduction in visual processing. The current study confirmed this by examining the effects of distraction on the lambda response component of subjects' eye-fixation-related potentials (EFRPs). Furthermore, we demonstrated that although detections of hazards were not affected by distraction, both eye movement and EEG metrics prior to the onset of the hazard were sensitive to changes in cognitive workload. This suggests that changes to specific aspects of the saccadic eye movement system could act as unobtrusive markers of distraction even prior to a breakdown in driving performance.


Subject(s)
Cognition/physiology , Distracted Driving , Visual Perception/physiology , Accidents, Traffic/prevention & control , Adult , Eye Movements/physiology , Female , Fixation, Ocular/physiology , Humans , Male , Young Adult
8.
Vision (Basel) ; 3(2)2019 Jun 10.
Article in English | MEDLINE | ID: mdl-31735829

ABSTRACT

The dynamic nature of the real world poses challenges for predicting where best to allocate gaze during object interactions. The same object may require different visual guidance depending on its current or upcoming state. Here, we explore how object properties (the material and shape of objects) and object state (whether it is full of liquid, or to be set down in a crowded location) influence visual supervision while setting objects down, an element of object interaction that has been relatively neglected in the literature. In a liquid pouring task, we asked participants to move empty glasses to a filling station; to leave them empty, half-fill them, or completely fill them with water; and then move them again to a tray. During the first putdown (when the glasses were all empty), visual guidance was determined only by the type of glass being set down, with more unwieldy champagne flutes more likely to be guided than other types of glasses. However, when the glasses were then filled, glass type no longer mattered, with the material and fill level predicting whether the glasses were set down with visual supervision: full, glass containers were more likely to be guided than empty, plastic ones. The key finding from this research is that the visual system responds flexibly to dynamic changes in object properties, likely based on predictions of the risk associated with setting down the object unsupervised by vision. The factors that govern these mechanisms can vary within the same object as it changes state.

9.
Q J Exp Psychol (Hove) ; 71(10): 2162-2173, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30226438

ABSTRACT

People communicate using verbal and non-verbal cues, including gaze cues. Gaze allocation can be influenced by social factors; however, most research on gaze cueing has not considered these factors. The presence of social roles was manipulated in a natural, everyday collaborative task while eye movements were measured. In pairs, participants worked together to make a cake. Half of the pairs were given roles ("Chef" or "Gatherer") and the other half were not. Across all participants we found, contrary to the results of static-image experiments, that participants spent very little time looking at each other, challenging the generalisability of the conclusions from lab-based paradigms. However, participants were more likely than not to look at their partner when receiving an instruction, highlighting the typical coordination of gaze cues and verbal communication in natural interactions. The mean duration of instances in which the partners looked at each other (partner gaze) was longer in the roles condition, and these participants were quicker to align their gaze with their partners (shared gaze). In addition, we found some indication that when hearing spoken instructions, listeners in the roles condition looked at the speaker more than listeners in the no roles condition. We conclude that social context can affect our gaze behaviour during a social interaction.


Subject(s)
Attention/physiology , Fixation, Ocular/physiology , Interpersonal Relations , Social Behavior , Analysis of Variance , Cues , Eye Movement Measurements , Female , Humans , Male
10.
Vision Res ; 153: 37-46, 2018 12.
Article in English | MEDLINE | ID: mdl-30248367

ABSTRACT

Many aspects of our everyday behaviour require that we search for objects. However, in real situations search is often conducted while internal and external factors compete for our attention resources. Cognitive distraction interferes with our ability to search for targets, increasing search times. Here we consider whether effects of cognitive distraction interfere differentially with three distinct phases of search: initiating search, overtly scanning through items in the display, and verifying that the object is indeed the target of search once it has been fixated. Furthermore, we consider whether strategic components of visual search that emerge when searching items organized into structured arrays are susceptible to cognitive distraction or not. We used Gilchrist and Harvey's (2006) structured and unstructured visual search paradigm with the addition of Savage, Potter, and Tatler's (2013) secondary puzzle task. Cognitive load influenced two phases of search: (1) scanning and (2) verification. Under high load, fixation durations were longer and re-fixations of distracters were more common. In terms of scanning strategy, we replicated Gilchrist and Harvey's (2006) findings of more systematic search for structured arrays than unstructured ones. We also found an effect of cognitive load on this aspect of search, but only in structured arrays. Our findings suggest that our eyes, by default, produce an autonomous scanning pattern that is modulated but not completely eliminated by secondary cognitive load.


Subject(s)
Attention/physiology , Cognition/physiology , Eye Movements/physiology , Visual Perception/physiology , Adolescent , Adult , Female , Humans , Male , Reaction Time , Young Adult
11.
J Vis ; 17(11): 12, 2017 09 01.
Article in English | MEDLINE | ID: mdl-28973565

ABSTRACT

Much effort has been made to explain eye guidance during natural scene viewing. However, a substantial component of fixation placement appears to be a set of consistent biases in eye movement behavior. We introduce the concept of saccadic flow, a generalization of the central bias that describes the image-independent conditional probability of making a saccade to (xi+1, yi+1), given a fixation at (xi, yi). We suggest that saccadic flow can be a useful prior when carrying out analyses of fixation locations, and can be used as a submodule in models of eye movements during scene viewing. We demonstrate the utility of this idea by presenting bias-weighted gaze landscapes, and show that there is a link between the likelihood of a saccade under the flow model, and the salience of the following fixation. We also present a minor improvement to our central bias model (based on using a multivariate truncated Gaussian), and investigate the leftwards and coarse-to-fine biases in scene viewing.


Subject(s)
Attention/physiology , Fixation, Ocular/physiology , Models, Theoretical , Saccades/physiology , Visual Perception/physiology , Humans , Probability
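The image-independent conditional probability described in this abstract can be illustrated with a minimal sketch. The Gaussian terms and parameter values below are illustrative assumptions standing in for the fitted saccadic flow model, combining a short-amplitude preference with an anisotropic central bias:

```python
import math

def gauss2d(x, y, mx, my, sx, sy):
    """Unnormalised bivariate Gaussian with independent axes."""
    return math.exp(-0.5 * (((x - mx) / sx) ** 2 + ((y - my) / sy) ** 2))

def flow_density(cur, nxt, amp_sigma=0.15, cb_sigma_x=0.3, cb_sigma_y=0.2):
    """Toy image-independent prior for a saccade from fixation `cur` to
    `nxt` (normalised coordinates in [0, 1] x [0, 1]): a preference for
    short saccades around the current fixation, combined with an
    anisotropic central bias that is wider horizontally."""
    amplitude_term = gauss2d(nxt[0], nxt[1], cur[0], cur[1],
                             amp_sigma, amp_sigma)
    central_term = gauss2d(nxt[0], nxt[1], 0.5, 0.5,
                           cb_sigma_x, cb_sigma_y)
    return amplitude_term * central_term

# A short saccade near the centre scores higher under this prior than a
# long saccade into a corner:
near = flow_density((0.5, 0.5), (0.55, 0.5))
far = flow_density((0.5, 0.5), (0.95, 0.95))
```

Used as a prior in fixation analyses, a density of this general shape lets one ask how much more likely a fixation is than the image-independent biases alone would predict, which is the kind of bias-weighting the abstract describes.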
12.
J Exp Psychol Hum Percept Perform ; 43(10): 1717-1743, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28967780

ABSTRACT

We examined the extent to which semantic informativeness, consistency with expectations, and perceptual salience contribute to object prioritization in scene viewing and representation. In scene viewing (Experiments 1-2), semantic guidance overshadowed perceptual guidance in determining fixation order, with the greatest prioritization for objects that were diagnostic of the scene's depicted event. Perceptual properties affected selection of consistent objects (regardless of their informativeness) but not of inconsistent objects. Semantic and perceptual properties also interacted in influencing foveal inspection, as inconsistent objects were fixated longer than low but not high salience diagnostic objects. Although inconsistent and marginally informative objects were not studied in direct competition with each other (each was studied in competition with diagnostic objects), inconsistent objects were fixated earlier and for longer than consistent but marginally informative objects. In change detection (Experiment 3), perceptual guidance overshadowed semantic guidance, promoting detection of highly salient changes. A residual advantage for diagnosticity over inconsistency emerged only when selection prioritization could not be based on low-level features. Overall, these findings show that semantic inconsistency is not prioritized within a scene when competing with other relevant information that is essential to scene understanding and respects observers' expectations. Moreover, they reveal that the relative dominance of semantic or perceptual properties during selection depends on ongoing task requirements.


Subject(s)
Fixation, Ocular , Semantics , Visual Perception , Adolescent , Adult , Attention , Female , Humans , Male , Memory , Young Adult
13.
Can J Exp Psychol ; 71(2): 133-145, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28604050

ABSTRACT

Vision and action are tightly coupled in space and time: for many tasks we must look at the right place at the right time to gather the information that we need to complete our behavioural goals. Vision typically leads action by about 0.5 seconds in many natural tasks. However, the factors that influence this temporal coordination are not well understood, and variations have been found previously between two domestic tasks, each with similar constraints: tea making and sandwich making. This study offers a systematic exploration of the factors that govern spatiotemporal coordination of vision and action within complex real-world activities. We found that the temporal coordination of eye movements and action differed between tea making and sandwich making. Longer eye-hand latencies, more "look ahead" fixations and more looks to irrelevant objects were found when making tea than when making a sandwich. Contrary to previous suggestions, we found that the requirement to move around the environment did not influence the coordination of vision and action. We conclude that the dynamics of visual behaviour during motor acts are sensitive to the task and the specific objects and actions required, but not to the spatial demands of moving around an environment.


Subject(s)
Eye Movements/physiology , Motor Activity/physiology , Psychomotor Performance/physiology , Visual Perception/physiology , Activities of Daily Living , Adult , Female , Humans , Male , Young Adult
14.
Iperception ; 8(2): 2041669516689572, 2017.
Article in English | MEDLINE | ID: mdl-28540027

ABSTRACT

Multiplex viewing of static or dynamic scenes is an increasing feature of screen media. Most existing multiplex experiments have examined detection across increasing scene numbers, but currently no systematic evaluation of the factors that might produce difficulty in processing multiplexes exists. Across five experiments we provide such an evaluation. Experiment 1 characterises difficulty in change detection when the number of scenes is increased. Experiment 2 reveals that the total amount of visual information accounts for differences in change detection times, regardless of whether this information is presented across multiple scenes or contained in one scene. Experiment 3 shows that whether quadrants of a display were drawn from the same or different scenes did not affect change detection performance. Experiment 4 demonstrates that knowing which scene the change will occur in means participants can perform at monoplex level. Finally, Experiment 5 finds that changes of central interest in multiplexed scenes are detected far more easily than marginal interest changes, to such an extent that a centrally interesting object removal in nine screens is detected more rapidly than a marginally interesting object removal in four screens. Processing multiple-screen displays therefore seems dependent on the amount of information, and the importance of that information to the task, rather than simply the number of scenes in the display. We discuss the theoretical and applied implications of these findings.

15.
Psychol Rev ; 124(3): 267-300, 2017 04.
Article in English | MEDLINE | ID: mdl-28358564

ABSTRACT

Many of our actions require visual information, and for this it is important to direct the eyes to the right place at the right time. Two or three times every second, we must decide both when and where to direct our gaze. Understanding these decisions can reveal the moment-to-moment information priorities of the visual system and the strategies for information sampling employed by the brain to serve ongoing behavior. Most theoretical frameworks and models of gaze control assume that the spatial and temporal aspects of fixation point selection depend on different mechanisms. We present a single model that can simultaneously account for both when and where we look. Underpinning this model is the theoretical assertion that each decision to move the eyes is an evaluation of the relative benefit expected from moving the eyes to a new location compared with that expected by continuing to fixate the current target. The eyes move when the evidence that favors moving to a new location outweighs that favoring staying at the present location. Our model provides not only an account of when the eyes move, but also of what will be fixated. That is, an analysis of saccade timing alone enables us to predict where people look in a scene. Indeed, our model accounts for fixation selection as well as (and often better than) current computational models of fixation selection in scene viewing.


Subject(s)
Decision Making , Models, Psychological , Saccades , Humans , Linear Models , Time Factors , Visual Perception
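As a rough illustration of the stay-versus-go evidence comparison this abstract describes, the sketch below simulates a single accumulator tracking the relative benefit of moving versus staying, triggering a saccade when it crosses a threshold. All drift rates, noise levels, and the threshold are illustrative assumptions, not the parameters of the published model:

```python
import random

def simulate_fixation_duration(move_drift=0.02, stay_drift=0.01,
                               noise=0.05, threshold=1.0, seed=1):
    """Toy race between evidence for moving the eyes and evidence for
    staying: a saccade is triggered when the relative benefit of moving
    (move minus stay evidence, plus noise) crosses a threshold.
    Returns the number of time steps the current fixation lasted."""
    rng = random.Random(seed)
    relative = 0.0
    steps = 0
    while relative < threshold:
        relative += (move_drift - stay_drift) + rng.gauss(0.0, noise)
        steps += 1
    return steps

# Stronger evidence in favour of moving should, on average, shorten fixations:
short = sum(simulate_fixation_duration(move_drift=0.1, seed=s)
            for s in range(50)) / 50
long_ = sum(simulate_fixation_duration(move_drift=0.02, seed=s)
            for s in range(50)) / 50
```

In this toy version, raising the drift toward "move" (e.g. when the current target has been fully processed) shortens simulated fixation durations, which is the qualitative behaviour the abstract's single when-and-where model relies on.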
16.
Perception ; 46(1): 100-108, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27614664

ABSTRACT

Monocular depth cues can lead not only to illusory depth in two-dimensional patterns but also to perspective reversals in three-dimensional objects. When a viewer perceptually inverts (reverses) a three-dimensional object, stimuli on the inner surfaces of that object also invert. However, the perceptual fate of anything occurring within the space that is enclosed by the walls of a perceptually reversible object is unknown. In the present study, perceptions of the relative vertical heights of stimuli within a truncated pyramidal chute were compared for stimuli placed laterally, on the inner surface of the chute, or centrally, suspended within the volume enclosed by the chute. The typical inversion was obtained for lateral stimuli, but central stimuli did not invert. While central stimuli maintained their veridical vertical order, participants experienced a considerable compression of perceptual depth. These results imply a dilution of the illusion within the centre of the volume of space that it encloses.

17.
Mem Cognit ; 44(1): 114-23, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26335303

ABSTRACT

Following an active task, the memory representations for used and unused objects are different. However, it is not clear whether these differences arise due to prioritizing objects that are task-relevant, objects that are physically interacted with, or a combination of the two factors. The present study allowed us to tease apart the relative importance of task-relevance and physical manipulation on object memory. A paradigm was designed in which objects were either necessary to complete a task (target), moved out of the way (obstructing, but interacted with), or simply present in the environment (background). Participants' eye movements were recorded with a portable tracker during the task, and they received a memory test on the objects after the task was completed. Results showed that manipulating an object is sufficient to change how information is extracted and retained from fixations, compared to background objects. Task-relevance provides an additional influence: information is accumulated and retained differently for manipulated target objects than manipulated obstructing objects. These findings demonstrate that object memory is influenced both by whether we physically interact with an object, and the relevance of that object to our behavioral goals.


Subject(s)
Intention , Memory/physiology , Psychomotor Performance/physiology , Visual Perception/physiology , Adult , Eye Movement Measurements , Female , Humans , Male , Young Adult
18.
J Vis ; 15(2)2015 Feb 10.
Article in English | MEDLINE | ID: mdl-25761330

ABSTRACT

Previous research has suggested that correctly placed objects facilitate eye guidance, but also that objects violating spatial associations within scenes may be prioritized for selection and subsequent inspection. We analyzed the respective eye guidance of spatial expectations and target template (precise picture or verbal label) in visual search, while taking into account any impact of object spatial inconsistency on extrafoveal or foveal processing. Moreover, we isolated search disruption due to misleading spatial expectations about the target from the influence of spatial inconsistency within the scene upon search behavior. Reliable spatial expectations and precise target template improved oculomotor efficiency across all search phases. Spatial inconsistency resulted in preferential saccadic selection when guidance by template was insufficient to ensure effective search from the outset and the misplaced object was bigger than the objects consistently placed in the same scene region. This prioritization emerged principally during early inspection of the region, but the inconsistent object also tended to be preferentially fixated overall across region viewing. These results suggest that objects are first selected covertly on the basis of their relative size and that subsequent overt selection is made considering object-context associations processed in extrafoveal vision. Once the object was fixated, inconsistency resulted in longer first fixation duration and longer total dwell time. As a whole, our findings indicate that observed impairment of oculomotor behavior when searching for an implausibly placed target is the combined product of disruption due to unreliable spatial expectations and prioritization of inconsistent objects before and during object fixation.


Subject(s)
Cues , Eye Movements/physiology , Pattern Recognition, Visual/physiology , Space Perception/physiology , Visual Pathways/physiology , Adolescent , Adult , Female , Humans , Male , Photic Stimulation , Young Adult
19.
J Exp Psychol Hum Percept Perform ; 41(2): 565-75, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25621580

ABSTRACT

Gaze cues are used alongside language to communicate. Lab-based studies have shown that people reflexively follow gaze cue stimuli; however, it is unclear whether this effect is present in real interactions. Language specificity influences the extent to which we utilize gaze cues in real interactions, but it is unclear whether the type of language used can similarly affect gaze cue utilization. We aimed to (a) investigate whether automatic gaze following effects are present in real-world interactions, and (b) explore how gaze cue utilization varies depending on the form of concurrent language used. Wearing a mobile eye-tracker, participants followed instructions to complete a real-world search task. The instructor varied the determiner used (featural or spatial) and the presence of gaze cues (absent, congruent, or incongruent). Congruent gaze cues were used more when provided alongside featural references. Incongruent gaze cues were initially followed no more than chance. However, unlike participants in the no-gaze condition, participants in the incongruent condition did not benefit from receiving spatial instructions over featural instructions. We suggest that although participants selectively use informative gaze cues and ignore unreliable gaze cues, visual search can nevertheless be disrupted when inherently spatial gaze cues are accompanied by contradictory verbal spatial references.


Subject(s)
Attention , Fixation, Ocular , Orientation , Reaction Time , Adolescent , Adult , Female , Humans , Male , Young Adult
20.
Vision Res ; 102: 41-51, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25080387

ABSTRACT

Humans display image-independent viewing biases when inspecting complex scenes. One of the strongest such biases is the central tendency in scene viewing: observers favour making fixations towards the centre of an image, irrespective of its content. Characterising these biases accurately is important for three reasons: (1) they provide a necessary baseline for quantifying the association between visual features in scenes and fixation selection; (2) they provide a benchmark for evaluating models of fixation behaviour when viewing scenes; and (3) they can be included as a component of generative models of eye guidance. In the present study we compare four commonly used approaches to describing image-independent biases and report their ability to describe observed data and correctly classify fixations across 10 eye movement datasets. We propose an anisotropic Gaussian function that can serve as an effective and appropriate baseline for describing image-independent biases without the need to fit functions to individual datasets or subjects.


Subject(s)
Fixation, Ocular/physiology , Visual Perception/physiology , Attention/physiology , Eye Movements/physiology , Humans , Models, Theoretical
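An anisotropic Gaussian baseline of the kind this abstract proposes can be written down directly. The sketch below evaluates its log-density over normalised image coordinates; the sigma values are made-up placeholders (the paper derives its own, and uses a truncated form), and the anisotropy simply means a wider horizontal than vertical spread:

```python
import math

def central_bias_logpdf(x, y, sigma_x=0.25, sigma_y=0.18):
    """Log-density of an (untruncated) anisotropic Gaussian central-bias
    baseline over normalised image coordinates in [0, 1] x [0, 1],
    centred on the image centre. The sigma values are illustrative,
    not fitted values from any eye movement dataset."""
    norm = math.log(2 * math.pi * sigma_x * sigma_y)
    dx = (x - 0.5) / sigma_x
    dy = (y - 0.5) / sigma_y
    return -0.5 * (dx * dx + dy * dy) - norm

# A central fixation is most probable under this baseline, and because
# sigma_x > sigma_y, a horizontal offset is penalised less than an
# equal-sized vertical offset (the anisotropy):
centre = central_bias_logpdf(0.5, 0.5)
horiz = central_bias_logpdf(0.7, 0.5)
vert = central_bias_logpdf(0.5, 0.7)
```

Such a baseline can serve as the image-independent prior against which feature-fixation associations are quantified, which is the first of the three uses the abstract lists.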