1.
HERD ; 15(3): 206-228, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35012375

ABSTRACT

BACKGROUND: Navigating large hospitals can be very challenging due to the functional complexity of such facilities as well as their evolving changes and expansions. Hospital wayfinding issues can lead to stress, negative mood, and poor healthcare experiences among patients, staff, and family members. OBJECTIVES: A survey-embedded experiment was conducted using immersive virtual environment (IVE) techniques to explore people's wayfinding performance, mood, and spatial experience in hospital circulation spaces with or without visible greenspaces. METHODS: Seventy-four participants were randomly assigned to either group to complete wayfinding tasks in a timed session. Participants' wayfinding performance was interpreted using several indicators, including task completion, duration, walking distance, stops, sign viewing, and route selection. Participants' mood states and perceived environmental attractiveness and atmosphere were surveyed; their perceived levels of presence in the IVE hospitals were also reported. RESULTS: The results revealed that participants performed better on high-complexity wayfinding tasks in the IVE hospital with visible greenspaces, as indicated by less time and shorter walking distances to find the correct destination, less frequent stops and sign viewing, and more efficient route selection. Participants also reported better mood states, more favorable spatial experience, and higher perceived aesthetics in the IVE hospital with visible greenspaces than in the same environment without window views. IVE techniques could be an efficient tool to supplement environment-behavior studies, with certain conditions noted. CONCLUSIONS: Hospital greenspaces located at key decision points could serve as landmarks that positively attract people's attention, aid wayfinding, and improve the navigational experience.
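For illustration, several of the wayfinding indicators named above (duration, walking distance, stops) can be derived from a logged position trace. A minimal Python sketch, with thresholds that are illustrative assumptions rather than the study's definitions:

```python
import numpy as np

def wayfinding_indicators(xy, t, stop_speed=0.2, min_stop_s=1.0):
    """Walking distance, duration, and stop count from a 2D position trace.

    xy: (N, 2) positions in metres; t: (N,) timestamps in seconds.
    stop_speed (m/s) and min_stop_s are illustrative, not the study's.
    """
    seg_len = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    speed = seg_len / np.diff(t)
    # A "stop" is a maximal run of sub-threshold speed lasting >= min_stop_s.
    stops, run_start = 0, None
    for i, slow in enumerate(speed < stop_speed):
        if slow and run_start is None:
            run_start = i
        elif not slow and run_start is not None:
            if t[i] - t[run_start] >= min_stop_s:
                stops += 1
            run_start = None
    if run_start is not None and t[-1] - t[run_start] >= min_stop_s:
        stops += 1
    return {"distance_m": seg_len.sum(), "duration_s": t[-1] - t[0], "stops": stops}
```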


Subject(s)
Hospitals; Parks, Recreational; Humans
2.
Personal Disord ; 11(6): 431-439, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32162939

ABSTRACT

Psychopathy is characterized by a lack of empathy, callousness, and a range of severe antisocial behaviors. A deficit in accurately processing social cues, which has been widely documented in psychopathic populations, is assumed to underlie their pathological development. Impaired attention to socially salient cues, such as the eyes of an interaction partner, is a possible mechanism compromising the development of social cognition. Preliminary evidence from static facial stimuli suggests that psychopathy is indeed linked to reduced eye gaze. However, no study to date has investigated whether these mechanisms apply to naturalistic interactions. This study is the first to examine patterns of visual attention during live social interactions and their association with symptom clusters of psychopathy. Eye contact was assessed in a sample of incarcerated offenders (N = 30) during semistructured face-to-face interactions with a mobile eye-tracking headset and analyzed using a novel automated area-of-interest (e.g., eye region) labeling technique. The interactions involved an exchange on neutral, predetermined topics and included conditions in which the participant was active (talking) and passive (listening). The data reveal that, across both listening and talking conditions, higher affective psychopathy is a significant predictor of reduced eye contact (listening: r = -.39; talking: r = -.43). The present findings are in line with previous research suggesting impaired attention to social cues in psychopathy. This study is the first to document these deficits in naturalistic, live social interaction and therefore provides important evidence for their relevance to real-life behavior. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
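For illustration, eye contact in such designs is commonly operationalized as the dwell-time proportion on an eye-region area of interest (AOI). The sketch below assumes fixations have already been labeled with AOIs; the automated labeling step itself is not reproduced, and all names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    duration_ms: float
    aoi: str  # e.g. "eyes", "mouth", "background"; labels are illustrative

def eye_contact_proportion(fixations):
    """Dwell-time proportion on the eye-region AOI, a common
    operationalization of eye contact in mobile eye tracking."""
    total = sum(f.duration_ms for f in fixations)
    on_eyes = sum(f.duration_ms for f in fixations if f.aoi == "eyes")
    return on_eyes / total if total else 0.0

# Example: two of three fixations land on the eyes
print(eye_contact_proportion(
    [Fixation(240, "eyes"), Fixation(180, "mouth"), Fixation(300, "eyes")]
))  # -> 0.75
```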


Subject(s)
Antisocial Personality Disorder/physiopathology; Criminals/psychology; Fixation, Ocular; Prisoners/psychology; Social Interaction; Adolescent; Adult; Aged; Attention; Cues; Empathy; Eye-Tracking Technology; Germany; Humans; Male; Middle Aged; Young Adult
3.
J Eye Mov Res ; 13(5), 2020 May 17.
Article in English | MEDLINE | ID: mdl-33828808

ABSTRACT

A large body of literature documents the sensitivity of pupil response to cognitive load (1) and emotional arousal (2). Recent empirical evidence also showed that microsaccade characteristics and dynamics can be modulated by mental fatigue and cognitive load (3). Very little is known about the sensitivity of microsaccadic characteristics to emotional arousal. The present paper demonstrates, in a controlled experiment, pupillary and microsaccadic responses to information processing during multi-attribute decision making under affective priming. Twenty-one psychology students were randomly assigned to one of three affective priming conditions (neutral, aversive, and erotic). Participants were tasked with making several discriminative decisions based on acquired cues. In line with expectations, results showed microsaccadic rate inhibition and pupillary dilation depending on cognitive effort (number of acquired cues) prior to decision. These effects were moderated by affective priming. Aversive priming strengthened pupillary and microsaccadic responses to information-processing effort. In general, results suggest that pupillary response is more biased by affective priming than microsaccadic rate. The results are discussed in light of the neuropsychological mechanisms of pupillary and microsaccadic behavior generation.
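Microsaccade detection in such studies is commonly performed with the velocity-threshold algorithm of Engbert and Kliegl (2003); whether this particular paper used exactly that method is an assumption here. A compact sketch of that standard approach:

```python
import numpy as np

def detect_microsaccades(x, y, fs, lam=6.0, min_samples=3):
    """Velocity-threshold microsaccade detection (after Engbert & Kliegl, 2003).

    x, y: gaze position (degrees) during fixation; fs: sampling rate (Hz);
    lam: threshold multiplier (commonly 5-6). Returns (onset, offset)
    sample-index pairs.
    """
    dt = 1.0 / fs
    # Five-point moving-window velocity estimate
    vx = (x[4:] + x[3:-1] - x[1:-3] - x[:-4]) / (6 * dt)
    vy = (y[4:] + y[3:-1] - y[1:-3] - y[:-4]) / (6 * dt)
    # Median-based velocity SD defines an elliptical threshold lam * sigma
    sx = np.sqrt(np.median(vx**2) - np.median(vx)**2)
    sy = np.sqrt(np.median(vy**2) - np.median(vy)**2)
    above = (vx / (lam * sx))**2 + (vy / (lam * sy))**2 > 1
    events, start = [], None
    for i, a in enumerate(above):
        if a and start is None:
            start = i
        elif not a and start is not None:
            if i - start >= min_samples:
                events.append((start, i - 1))
            start = None
    if start is not None and len(above) - start >= min_samples:
        events.append((start, len(above) - 1))
    return events
```

Microsaccade rate is then the event count divided by the fixation duration.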

4.
IEEE Trans Vis Comput Graph ; 26(9): 2904-2918, 2020 Sep.
Article in English | MEDLINE | ID: mdl-30835226

ABSTRACT

We develop an approach to using microsaccade dynamics for the measurement of task difficulty/cognitive load imposed by a visual search task of a layered surface. Previous studies provide converging evidence that task difficulty/cognitive load can influence microsaccade activity. We corroborate this notion. Specifically, we explore this relationship during visual search for features embedded in a terrain-like surface, with the eyes allowed to move freely during the task. We make two relevant contributions. First, we validate an approach to distinguishing between the ambient and focal phases of visual search. We show that this spectrum of visual behavior can be quantified by a single previously reported estimator, known as Krejtz's K coefficient. Second, we use ambient/focal segments based on K as a moderating factor for microsaccade analysis in response to task difficulty. We find that during the focal phase of visual search (a) microsaccade magnitude increases significantly, and (b) microsaccade rate decreases significantly, with increased task difficulty. We conclude that the combined use of K and microsaccade analysis may be helpful in building effective tools that provide an indication of the level of cognitive activity within a task while the task is being performed.
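Krejtz's K is the standardized difference between each fixation's duration and the amplitude of the saccade that follows it, read over time windows: K > 0 suggests focal viewing (long fixations, short saccades), K < 0 ambient viewing. A minimal sketch of one common implementation; pooling choices for the standardization statistics vary across studies:

```python
import numpy as np

def k_coefficient(fix_dur, sacc_amp, window=None):
    """Krejtz's K (Krejtz et al., 2016): z-scored fixation duration minus
    z-scored amplitude of the following saccade, standardized over the
    whole series. Averaged over the entire series K is zero by
    construction, so it is interpreted per window or task phase.
    """
    d = (np.asarray(fix_dur) - np.mean(fix_dur)) / np.std(fix_dur)
    a = (np.asarray(sacc_amp) - np.mean(sacc_amp)) / np.std(sacc_amp)
    k_i = d - a
    if window is None:
        return k_i  # per-fixation K_i series
    n = (len(k_i) // window) * window  # mean K per non-overlapping window
    return k_i[:n].reshape(-1, window).mean(axis=1)
```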

5.
J Deaf Stud Deaf Educ ; 25(1): 10-21, 2020 Jan 3.
Article in English | MEDLINE | ID: mdl-31665493

ABSTRACT

The enhancement hypothesis suggests that deaf individuals are more vigilant to visual emotional cues than hearing individuals. The present eye-tracking study examined ambient-focal visual attention when encoding affect from dynamically changing emotional facial expressions. Deaf (n = 17) and hearing (n = 17) individuals watched emotional facial expressions that in 10-s animations morphed from a neutral expression to one of happiness, sadness, or anger. The task was to recognize emotion as quickly as possible. Deaf participants tended to be faster than hearing participants in affect recognition, but the groups did not differ in accuracy. In general, happy faces were more accurately and more quickly recognized than faces expressing anger or sadness. Both groups demonstrated longer average fixation duration when recognizing happiness in comparison to anger and sadness. Deaf individuals directed their first fixations less often to the mouth region than the hearing group. During the last stages of emotion recognition, deaf participants exhibited more focal viewing of happy faces than negative faces. This pattern was not observed among hearing individuals. The analysis of visual gaze dynamics, switching between ambient and focal attention, was useful in studying the depth of cognitive processing of emotional information among deaf and hearing individuals.


Subject(s)
Attention; Deafness/physiopathology; Facial Recognition; Adolescent; Adult; Attention/physiology; Case-Control Studies; Deafness/psychology; Eye-Tracking Technology; Facial Expression; Facial Recognition/physiology; Female; Humans; Male; Persons With Hearing Impairments/psychology; Young Adult
6.
J Eye Mov Res ; 12(7), 2019 Nov 25.
Article in English | MEDLINE | ID: mdl-33828764

ABSTRACT

Wearable mobile eye trackers have great potential as they allow the measurement of eye movements during daily activities such as driving, navigating the world, and doing groceries. Although mobile eye trackers have been around for some time, developing and operating these eye trackers was generally a highly technical affair. As such, mobile eye-tracking research was not feasible for most labs. Nowadays, many mobile eye trackers are available from eye-tracking manufacturers (e.g., Tobii, Pupil Labs, SMI, Ergoneers), and various implementations in virtual/augmented reality have recently been released. The wide availability has caused the number of publications using a mobile eye tracker to increase quickly. Mobile eye tracking is now applied in vision science, educational science, developmental psychology, marketing research (using virtual and real supermarkets), clinical psychology, usability, architecture, medicine, and more. Yet, transitioning from lab-based studies where eye trackers are fixed to the world to studies where eye trackers are fixed to the head presents researchers with a number of problems. These problems range from the conceptual frameworks used in world-fixed and head-fixed eye tracking and how they relate to each other, to the lack of data-quality comparisons and field tests of the different mobile eye trackers, to how the gaze signal can be classified or mapped to the visual stimulus. Such problems need to be addressed in order to understand how world-fixed and head-fixed eye-tracking research can be compared and to understand the full potential and limits of what mobile eye tracking can deliver. In this symposium, we bring together researchers from five institutions (Lund University, Utrecht University, Clemson University, Birkbeck University of London, and Rochester Institute of Technology) addressing problems and innovative solutions across the entire breadth of mobile eye-tracking research. Hooge, presenting Hessels et al.'s paper, focuses on the definitions of fixations and saccades held by researchers in the eye-movement field and argues that they need to be clarified in order to allow comparisons between world-fixed and head-fixed eye-tracking research. Diaz et al. introduce machine-learning techniques for classifying the gaze signal in mobile eye-tracking contexts where head and body are unrestrained. Niehorster et al. compare data quality of mobile eye trackers during natural behavior and discuss the application range of these eye trackers. Duchowski et al. introduce a method for automatically mapping gaze to faces using computer vision techniques. Pelz et al. employ state-of-the-art techniques to map fixations to objects of interest in the scene video and align grasp and eye-movement data in the same reference frame to investigate the guidance of eye movements during manual interaction. Video stream: https://vimeo.com/357473408.

7.
PLoS One ; 13(9): e0203629, 2018.
Article in English | MEDLINE | ID: mdl-30216385

ABSTRACT

Pupil diameter and microsaccades are captured by an eye tracker and compared for their suitability as indicators of cognitive load (as imposed by task difficulty). Specifically, two metrics are tested in response to task difficulty: (1) the change in pupil diameter with respect to an inter- or intra-trial baseline, and (2) the rate and magnitude of microsaccades. Participants performed easy and difficult mental arithmetic tasks while fixating a central target. Inter-trial change in pupil diameter and microsaccade magnitude appear to adequately discriminate task difficulty, and hence cognitive load, if the implied causality can be assumed. This paper's contribution corroborates previous work concerning microsaccade magnitude and extends this work by directly comparing microsaccade metrics to pupillometric measures. To our knowledge, this is the first study to compare the reliability and sensitivity of task-evoked pupillary and microsaccadic measures of cognitive load.
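The first metric, change in pupil diameter relative to a baseline, reduces to a simple subtraction; a minimal sketch in which the one-second baseline window is an illustrative choice rather than the authors' setting:

```python
import numpy as np

def pupil_change(pupil, fs, baseline_s=1.0):
    """Task-evoked pupil response relative to an intra-trial baseline.

    pupil: diameter samples for one trial, baseline period first;
    fs: sampling rate (Hz). Returns mean post-baseline change in the
    input's units. An inter-trial variant would instead take the
    baseline from a preceding rest trial.
    """
    n0 = int(baseline_s * fs)
    return np.nanmean(pupil[n0:]) - np.nanmean(pupil[:n0])
```

Microsaccade rate and magnitude can then be obtained from a velocity-threshold detector such as the one sketched under item 3.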


Subject(s)
Pupil/physiology; Saccades/physiology; Eye Movement Measurements; Female; Fixation, Ocular/physiology; Humans; Male; Visual Perception/physiology
8.
J Eye Mov Res ; 10(3), 2017 May 29.
Article in English | MEDLINE | ID: mdl-33828660

ABSTRACT

A model of the main sequence is proposed based on the logistic function. The model's fit to the peak velocity-amplitude relation resembles an S curve, simultaneously allowing control of the curve's asymptotes at very small and very large amplitudes, as well as its slope over the mid-amplitude range. The proposed inverse-linear logistic model is also able to express the linear relation of duration and amplitude. We demonstrate the utility and robustness of the model when fit to aggregate data at the small- and mid-amplitude ranges, namely when fitting microsaccades, saccades, and the superposition of both. We are confident the model will suitably extend to the large-amplitude range of eye movements.
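For illustration, a logistic fit to the peak velocity-amplitude relation can be sketched as below; the four-parameter form in log-amplitude and the synthetic data are assumptions of this sketch, not the paper's exact inverse-linear logistic parameterization:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic_main_sequence(amp, v_min, v_max, a0, k):
    """S-shaped peak velocity vs. amplitude: asymptotes v_min and v_max
    at very small and very large amplitudes, midpoint a0 and slope k in
    log-amplitude so microsaccades and saccades share one curve."""
    return v_min + (v_max - v_min) / (1 + np.exp(-k * (np.log(amp) - a0)))

# Synthetic, illustrative (amplitude [deg], peak velocity [deg/s]) data
amp = np.array([0.2, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 40.0])
vel = np.array([15.0, 35.0, 60.0, 110.0, 230.0, 400.0, 520.0, 560.0])
params, _ = curve_fit(logistic_main_sequence, amp, vel,
                      p0=[10.0, 600.0, np.log(5.0), 1.5])
print(dict(zip(["v_min", "v_max", "a0", "k"], params)))
```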

9.
Hum Factors ; 48(3): 540-54, 2006.
Article in English | MEDLINE | ID: mdl-17063968

ABSTRACT

OBJECTIVE: A model of semisystematic search was sought that could account for memory retrieval as well as other performance-shaping factors. BACKGROUND: Visual search is an important aspect of many examination and monitoring tasks. As a result, visual search performance has been the topic of many empirical investigations. These investigations have reported that individual search performance depends on participant factors such as search behavior, which has motivated the development of models of visual search that incorporate this behavior. Search behavior ranges from random to strictly systematic; variation in behavior is commonly assumed to be caused by differences in memory retrieval and search strategy. METHODS: The model ultimately took the form of a discrete-time nonstationary Markov process. RESULTS: It yields both performance and process measures, including accuracy, time to perception, task time, and coverage, while avoiding the statistical difficulties inherent in simulations. In particular, as search behavior becomes more systematic, expected coverage and accuracy increase while expected task time decreases. CONCLUSION: In addition to explaining these outcomes and their interrelationships from a theoretical standpoint, the model can predict these outcomes in practice to a certain extent, as it can create an envelope defined by best- and worst-case search performance. APPLICATION: The model can also support assessment: it can be used to assess the effectiveness of an individual's search performance and to provide possible explanations for this performance through one or more of the output measures.
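The analytical character of such a model can be illustrated by propagating a probability distribution over display locations through a transition matrix and accumulating expected coverage, with no Monte Carlo simulation. The matrix construction below, a blend of systematic and random scanning with a stationary (rather than the paper's nonstationary) process, is an illustrative assumption:

```python
import numpy as np

def expected_coverage(n_locs, n_steps, systematicity=0.8):
    """Expected fraction of an n_locs display covered after n_steps
    fixations; systematicity = 0 is random search, 1 strictly systematic.
    Uses an independence approximation for the visit probabilities.
    """
    # With prob `systematicity` step to the next location, else jump uniformly.
    P = np.full((n_locs, n_locs), (1 - systematicity) / n_locs)
    for i in range(n_locs):
        P[i, (i + 1) % n_locs] += systematicity
    state = np.zeros(n_locs)
    state[0] = 1.0                    # search starts at location 0
    p_unvisited = np.ones(n_locs)
    p_unvisited[0] = 0.0
    for _ in range(n_steps):
        state = state @ P
        p_unvisited *= 1.0 - state    # approximate survival probability
    return 1.0 - p_unvisited.mean()

# More systematic search -> higher expected coverage, as the paper reports
print(expected_coverage(25, 20, systematicity=0.2))
print(expected_coverage(25, 20, systematicity=0.9))
```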


Subject(s)
Awareness/physiology; Models, Statistical; Visual Perception; Humans; United States
10.
Cyberpsychol Behav ; 7(6): 621-34, 2004 Dec.
Article in English | MEDLINE | ID: mdl-15687796

ABSTRACT

Gaze-contingent displays (GCDs) attempt to balance the amount of information displayed against the visual information processing capacity of the observer through real-time eye movement sensing. Based on the assumed knowledge of the instantaneous location of the observer's focus of attention, GCD content can be "tuned" through several display processing means. Screen-based displays alter pixel-level information, generally matching the resolvability of the human retina in an effort to maximize bandwidth. Model-based displays alter geometric-level primitives along similar lines. Attentive user interfaces (AUIs) manage object-level entities (e.g., windows, applications) depending on the assumed attentive state of the observer. Such real-time display manipulation is generally achieved through non-contact, unobtrusive tracking of the observer's eye movements. This paper briefly reviews past and present display techniques as well as emerging graphics and eye tracking technology for GCD development.
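As a sketch of the screen-based variety, peripheral detail can be reduced as a function of angular distance from the tracked gaze point. The tile/renderer interface and the falloff constants below are hypothetical, purely for illustration:

```python
import math
from dataclasses import dataclass

@dataclass
class Tile:
    """Hypothetical screen tile; render() stands in for a real renderer."""
    center: tuple
    def render(self, lod):
        print(f"tile at {self.center}: level of detail {lod}")

def lod_for_eccentricity(ecc_deg, levels=4, falloff_deg=8.0):
    """Mipmap-style level of detail from eccentricity: level 0 is full
    resolution at the point of regard, detail dropping with angular
    distance to loosely mimic retinal acuity falloff. The linear falloff
    and the 8-degree constant are illustrative assumptions."""
    return min(levels - 1, int(ecc_deg / falloff_deg))

def update_display(tiles, gaze_xy, px_per_deg):
    """Re-render each tile at a gaze-contingent level of detail."""
    gx, gy = gaze_xy
    for tile in tiles:
        tx, ty = tile.center
        ecc = math.hypot(tx - gx, ty - gy) / px_per_deg  # degrees from gaze
        tile.render(lod_for_eccentricity(ecc))

# One frame: gaze at screen centre, 35 px per visual degree (illustrative)
update_display([Tile((960, 540)), Tile((100, 100))], (960, 540), 35.0)
```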


Subject(s)
Computer Terminals; Visual Perception; Attention; Computers; Eye Movements/physiology; Humans; Retina/physiology; Software
11.
Behav Res Methods Instrum Comput ; 34(4): 455-70, 2002 Nov.
Article in English | MEDLINE | ID: mdl-12564550

ABSTRACT

Eye-tracking applications are surveyed in a breadth-first manner, reporting on work from the following domains: neuroscience, psychology, industrial engineering and human factors, marketing/advertising, and computer science. Following a review of traditionally diagnostic uses, emphasis is placed on interactive applications, differentiating between selective and gaze-contingent approaches.


Subject(s)
Prefrontal Cortex/physiology; Saccades/physiology; Humans; Speech Perception/physiology; Visual Perception/physiology
12.
Appl Ergon ; 33(6): 559-70, 2002 Nov.
Article in English | MEDLINE | ID: mdl-12507340

ABSTRACT

The aircraft maintenance industry is a complex system consisting of several interrelated human and machine components. Recognizing this, the Federal Aviation Administration (FAA) has pursued human factors research. In the maintenance arena, the research has focused on the aircraft inspection process and the aircraft inspector. Training has been identified as the primary intervention strategy to improve the quality and reliability of aircraft inspection. If training is to be successful, it is critical that aircraft inspectors be provided with appropriate training tools and environments. In response to this need, the paper outlines the development of a virtual reality (VR) system for aircraft inspection training. VR has generated much excitement but little formal proof that it is useful. However, since VR interfaces are difficult and expensive to build, the computer graphics community needs to be able to predict which applications will benefit from VR. To address this important issue, this research measured the degree of immersion and presence felt by subjects in a virtual environment simulator. Specifically, it conducted two controlled studies using the VR system developed for the visual inspection task of an aft-cargo bay at the VR Lab of Clemson University. Beyond assembling the visual inspection virtual environment, a significant goal of this project was to explore subjective presence as it affects task performance. The results of this study indicated that the system scored high on issues related to the degree of presence felt by the subjects. As a next logical step, this study compared VR to an existing PC-based aircraft inspection simulator. The results showed that the VR system performed better than, and was preferred over, the PC-based training tool.


Subject(s)
Aircraft/standards; Computer-Assisted Instruction; Inservice Training/methods; Maintenance/methods; User-Computer Interface; Adult; Ergonomics; Humans; Vision, Ocular