Results 1 - 15 of 15
1.
PLoS One ; 19(5): e0298867, 2024.
Article in English | MEDLINE | ID: mdl-38728266

ABSTRACT

U.S. service members maintain constant situational awareness (SA) due to training and experience operating in dynamic and complex environments. Work examining how military experience impacts SA during visual search of a complex naturalistic environment is limited. Here, we compare Active Duty service members' and Civilians' physiological behavior during a navigational visual search task in an open-world virtual environment (VE) while cognitive load was manipulated. We measured eye-tracking and electroencephalogram (EEG) outcomes from Active Duty (N = 21) and Civilian (N = 15) participants while they navigated a desktop VE at a self-regulated pace. Participants searched for and counted targets (N = 15) presented among distractors, while cognitive load was manipulated with an auditory Math Task. Results showed Active Duty participants reported target counts that were significantly higher and closer to the correct number than Civilians'. Overall, Active Duty participants scanned the VE with faster peak saccade velocities and greater average saccade magnitudes than Civilians. Convolutional Neural Network (CNN) response (EEG P300) was weighted significantly more toward initial fixations for the Active Duty group, indicating reduced attentional resources on object refixations compared to Civilians. There were no group differences in fixation outcomes or overall CNN response when comparing targets versus distractor objects. When cognitive load was manipulated, only Civilians significantly decreased their average dwell time on each object, and the Active Duty group had significantly fewer correct answers on the Math Task. Overall, the Active Duty group explored the VE with increased scanning speed and distance and reduced cognitive re-processing on objects, employing a different, perhaps expert, visual search strategy indicative of increased SA. The Active Duty group maintained SA in the main visual search task and did not appear to shift focus to the secondary Math Task. Future work could compare how a stress-inducing environment impacts these groups' physiological and cognitive markers and performance.
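The saccade measures reported above (peak velocity, saccade magnitude) can be illustrated with a simple velocity-threshold detector over calibrated gaze angles. This is a minimal sketch, not the authors' pipeline; the 30 deg/s threshold and the use of `np.gradient` for differentiation are assumptions.

```python
import numpy as np

def saccade_metrics(x, y, t, vel_thresh=30.0):
    """Detect saccades with a simple velocity threshold (I-VT) and
    return a (peak_velocity, amplitude) pair per saccade, in deg/s
    and deg. x, y are gaze angles in degrees; t is time in seconds."""
    vx = np.gradient(x, t)
    vy = np.gradient(y, t)
    speed = np.hypot(vx, vy)            # angular speed, deg/s
    moving = speed > vel_thresh         # candidate saccade samples
    # find contiguous runs of above-threshold samples
    edges = np.diff(moving.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if moving[0]:
        starts = np.r_[0, starts]
    if moving[-1]:
        ends = np.r_[ends, len(moving)]
    metrics = []
    for s, e in zip(starts, ends):
        peak = speed[s:e].max()
        amp = np.hypot(x[e - 1] - x[s], y[e - 1] - y[s])
        metrics.append((peak, amp))
    return metrics
```

Group comparisons like those in the abstract would then reduce to comparing the distributions of these per-saccade values between cohorts.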


Subject(s)
Awareness , Electroencephalography , Military Personnel , Humans , Military Personnel/psychology , Male , Female , Adult , Awareness/physiology , Young Adult , Cognition/physiology , Virtual Reality , Attention/physiology , Spatial Navigation/physiology , Saccades/physiology
2.
J Neural Eng ; 20(4)2023 08 23.
Article in English | MEDLINE | ID: mdl-37552980

ABSTRACT

Objective. Currently, there exist very few ways to isolate cognitive processes, historically defined via highly controlled laboratory studies, in more ecologically valid contexts. Specifically, it remains unclear to what extent patterns of neural activity observed under such constraints actually manifest outside the laboratory in a manner that can be used to make accurate inferences about latent states, associated cognitive processes, or proximal behavior. Improving our understanding of when and how specific patterns of neural activity manifest in ecologically valid scenarios would provide validation for laboratory-based approaches that study similar neural phenomena in isolation and meaningful insight into the latent states that occur during complex tasks. Approach. Domain generalization methods, borrowed from the work of the brain-computer interface community, have the potential to capture high-dimensional patterns of neural activity in a way that can be reliably applied across experimental datasets in order to address this specific challenge. We previously used such an approach to decode phasic neural responses associated with visual target discrimination. Here, we extend that work to more tonic phenomena such as internal latent states. We use data from two highly controlled laboratory paradigms to train two separate domain-generalized models. We apply the trained models to an ecologically valid paradigm in which participants performed multiple, concurrent driving-related tasks while perched atop a six-degrees-of-freedom ride-motion simulator. Main Results. Using the pretrained models, we estimate latent state and the associated patterns of neural activity. As the patterns of neural activity become more similar to those observed in the training data, we find changes in behavior and task performance that are consistent with the observations from the original, laboratory-based paradigms. Significance. These results lend ecological validity to the original, highly controlled experimental designs and provide a methodology for understanding the relationship between neural activity and behavior during complex tasks.
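The cross-dataset transfer idea can be sketched as: fit one decoder on several lab datasets after within-dataset normalization, then apply it unchanged to an unseen dataset. The nearest-class-mean decoder below is a deliberately simple stand-in for the paper's domain-generalized models, and all function names are hypothetical.

```python
import numpy as np

def zscore(X):
    """Standardize features within one dataset (a crude way to
    remove dataset-specific scaling before pooling)."""
    return (X - X.mean(0)) / X.std(0)

def fit_domain_generalized(datasets):
    """Pool several lab datasets after within-dataset z-scoring and
    fit a nearest-class-mean decoder. datasets: list of (X, y)."""
    X = np.vstack([zscore(Xd) for Xd, _ in datasets])
    y = np.concatenate([yd for _, yd in datasets])
    return {c: X[y == c].mean(0) for c in np.unique(y)}

def apply_to_new_domain(model, X_new):
    """Z-score the unseen dataset with its own statistics, then
    assign each trial to the nearest class mean."""
    Xz = zscore(X_new)
    classes = np.array(sorted(model))
    d = np.stack([np.linalg.norm(Xz - model[c], axis=1) for c in classes])
    return classes[d.argmin(0)]
```

The key design choice mirrored here is that the new domain is never used for fitting, only for its own normalization statistics.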


Subject(s)
Brain-Computer Interfaces , Visual Perception , Humans , Task Performance and Analysis , Research Design , Discrimination, Psychological
3.
Eur J Neurosci ; 54(10): 7609-7625, 2021 11.
Article in English | MEDLINE | ID: mdl-34679237

ABSTRACT

It is well established that neural responses to visual stimuli are enhanced at select locations in the visual field. Although spatial selectivity and the effects of spatial attention are well understood for discrete tasks (e.g. visual cueing), little is known for naturalistic experience that involves continuous dynamic visual stimuli (e.g. driving). Here, we assess the strength of neural responses across the visual space during a kart-race game. Given the varying relevance of visual location in this task, we hypothesized that the strength of neural responses to movement would vary across the visual field and would differ between active play and passive viewing. To test this, we measure the correlation strength of scalp-evoked potentials with optical flow magnitude at individual locations on the screen. We find that neural responses are strongly correlated at task-relevant locations in visual space, extending beyond the focus of overt attention. Although the driver's gaze is directed at the heading direction at the centre of the screen, neural responses were robust in peripheral areas (e.g. roads and surrounding buildings). Importantly, neural responses to visual movement are broadly distributed across the scalp, with visual spatial selectivity differing across electrode locations. Moreover, during active gameplay, neural responses are enhanced at select locations in the visual space. Conventionally, spatial selectivity of neural response has been interpreted as an attentional gain mechanism. In the present study, the data suggest that different brain areas focus attention on different portions of the visual field that are task-relevant, beyond the focus of overt attention.
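The central measurement, correlating an EEG signal with optical-flow magnitude at each screen location, can be sketched as a per-pixel Pearson correlation map. This is an illustrative reconstruction under the assumption of a coarse screen grid, not the authors' exact analysis.

```python
import numpy as np

def neural_flow_correlation(eeg, flow):
    """Correlate one EEG channel (n_samples,) with optical-flow
    magnitude at each screen location (n_samples, rows, cols),
    returning a (rows, cols) map of Pearson correlations."""
    e = (eeg - eeg.mean()) / eeg.std()          # standardized EEG
    f = flow - flow.mean(axis=0)                # centred flow per location
    return (e[:, None, None] * f).mean(axis=0) / flow.std(axis=0)
```

Locations with strong task relevance would then appear as hot spots in the returned map, whether or not they coincide with the gaze position.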


Subject(s)
Visual Cortex , Visual Fields , Attention , Brain , Evoked Potentials , Photic Stimulation , Visual Perception
4.
J Vis ; 21(10): 7, 2021 09 01.
Article in English | MEDLINE | ID: mdl-34491271

ABSTRACT

Relatively little is known about visual processing during free-viewing visual search in realistic dynamic environments. Free-viewing is characterized by frequent saccades. During saccades, visual processing is thought to be suppressed, yet we know that the presaccadic visual content can modulate postsaccadic processing. To better understand these processes in a realistic setting, here we study saccades and neural responses elicited by the appearance of visual targets in a realistic virtual environment. While subjects were driven through a 3D virtual town, they were asked to discriminate between targets that appeared on the road. Using a system identification approach, we separated overlapping and correlated activity evoked by visual targets, saccades, and button presses. We found that the presence of a target enhances early occipital as well as late frontocentral saccade-related responses. The earlier potential, shortly after 125 ms post-saccade onset, was enhanced for targets that appeared in the peripheral vision as compared to the central vision, suggesting that fast peripheral processing initiated before saccade onset. The later potential, at 195 ms post-saccade onset, was strongly modulated by the visibility of the target. Together these results suggest that, during natural viewing, neural processing of the presaccadic visual stimulus continues throughout the saccade, apparently unencumbered by saccadic suppression.
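The "system identification approach" for separating overlapping evoked activity is typically a regression deconvolution: build a lagged-impulse design matrix with one column per (event type, latency) pair and solve by least squares, so responses to temporally adjacent events are disentangled rather than averaged together. The sketch below illustrates the idea (an rERP-style simplification, not the authors' exact model).

```python
import numpy as np

def overlap_corrected_erps(signal, event_onsets, win):
    """Estimate event-related responses for several event types at once
    by least squares, separating temporally overlapping responses.
    event_onsets: dict mapping event name -> array of sample indices.
    win: response window length in samples."""
    n = len(signal)
    names = sorted(event_onsets)
    # one column per (event type, lag): a lagged-impulse design matrix
    X = np.zeros((n, len(names) * win))
    for k, name in enumerate(names):
        for onset in event_onsets[name]:
            for lag in range(win):
                if onset + lag < n:
                    X[onset + lag, k * win + lag] = 1.0
    beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
    return {name: beta[k * win:(k + 1) * win] for k, name in enumerate(names)}
```

With noiseless, non-degenerate data this recovers each event type's response exactly even when their windows overlap, which is the property plain averaging lacks.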


Subject(s)
Saccades , Visual Perception , Humans , Photic Stimulation , Vision, Ocular
5.
Front Psychol ; 12: 681042, 2021.
Article in English | MEDLINE | ID: mdl-34434140

ABSTRACT

Eye tracking has been an essential tool within the vision science community for many years. However, the majority of studies involving eye-tracking technology employ a relatively passive approach through the use of static imagery, prescribed motion, or video stimuli. This is in contrast to our everyday interaction with the natural world where we navigate our environment while actively seeking and using task-relevant visual information. For this reason, an increasing number of vision researchers are employing virtual environment platforms, which offer interactive, realistic visual environments while maintaining a substantial level of experimental control. Here, we recorded eye movement behavior while subjects freely navigated through a rich, open-world virtual environment. Within this environment, subjects completed a visual search task where they were asked to find and count occurrences of specific targets among numerous distractor items. We assigned each participant to one of four target conditions: Humvees, motorcycles, aircraft, or furniture. Our results show a statistically significant relationship between gaze behavior and target objects across Target Conditions, with increased visual attention toward assigned targets. Specifically, we see an increase in the number of fixations and an increase in dwell time on target relative to distractor objects. In addition, we included a divided attention task to investigate how search changed with the addition of a secondary task. With increased cognitive load, subjects slowed their speed, decreased gaze on objects, and increased the number of objects scanned in the environment. Overall, our results confirm previous findings and support the use of complex virtual environments for active visual search experimentation, maintaining a high level of precision in the quantification of gaze information and visual attention. This study contributes to our understanding of how individuals search for information in a naturalistic (open-world) virtual environment. Likewise, our paradigm provides an intriguing look into the heterogeneity of individual behaviors when completing an untimed visual search task while actively navigating.
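The fixation-count and dwell-time comparison between targets and distractors reduces to a simple aggregation over labelled fixations. A minimal sketch (the data layout is an assumption, not the study's format):

```python
def gaze_object_stats(fixations):
    """Summarize fixation count and total dwell time per object
    category from a list of (object_label, duration_s) fixations."""
    stats = {}
    for label, dur in fixations:
        n, total = stats.get(label, (0, 0.0))
        stats[label] = (n + 1, total + dur)
    return {label: {"n_fixations": n, "dwell_s": total}
            for label, (n, total) in stats.items()}
```

Comparing `stats["target"]` against `stats["distractor"]` per participant gives the two measures reported in the abstract.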

6.
Front Psychol ; 12: 650693, 2021.
Article in English | MEDLINE | ID: mdl-35035362

ABSTRACT

Using head mounted displays (HMDs) in conjunction with virtual reality (VR), vision researchers are able to capture more naturalistic vision in an experimentally controlled setting. Namely, eye movements can be accurately tracked as they occur in concert with head movements as subjects navigate virtual environments. A benefit of this approach is that, unlike other mobile eye tracking (ET) set-ups in unconstrained settings, the experimenter has precise control over the location and timing of stimulus presentation, making it easier to compare findings between HMD studies and those that use monitor displays, which account for the bulk of previous work in eye movement research and vision sciences more generally. Here, a visual discrimination paradigm is presented as a proof of concept to demonstrate the applicability of collecting eye and head tracking data from an HMD in VR for vision research. The current work's contribution is threefold. First, we present results demonstrating both the strengths and the weaknesses of recording and classifying eye and head tracking data in VR. Second, we offer a highly flexible graphical user interface (GUI), used to generate the current experiment, to lower the software development start-up cost for future researchers transitioning to a VR space. Finally, the dataset analyzed here, comprising behavioral, eye and head tracking data synchronized with environmental variables from a task specifically designed to elicit a variety of eye and head movements, could be an asset in testing future eye movement classification algorithms.
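One family of eye movement classification algorithms such a dataset could exercise is dispersion-based fixation detection (I-DT), which grows a window while the gaze stays spatially compact. The sketch below is a textbook version with assumed thresholds, not the classifier used in the paper.

```python
def idt_fixations(x, y, t, disp_thresh=1.0, min_dur=0.1):
    """Classify fixations with the dispersion-threshold (I-DT)
    algorithm: grow a window while gaze dispersion
    (x-range + y-range, in degrees) stays under disp_thresh, and
    report windows lasting at least min_dur seconds as fixations.
    Returns a list of (start_index, end_index_exclusive) pairs."""
    fixations = []
    i, n = 0, len(x)
    while i < n:
        j = i + 1
        while j < n and (max(x[i:j+1]) - min(x[i:j+1])
                         + max(y[i:j+1]) - min(y[i:j+1])) <= disp_thresh:
            j += 1
        if t[j - 1] - t[i] >= min_dur:
            fixations.append((i, j))
            i = j
        else:
            i += 1
    return fixations
```

In an HMD context, `x` and `y` would typically be gaze-in-world angles (eye plus head), which is exactly where classification gets harder than in monitor-based set-ups.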

7.
Front Psychol ; 12: 748539, 2021.
Article in English | MEDLINE | ID: mdl-34992563

ABSTRACT

Pupil size is influenced by cognitive and non-cognitive factors. One of the strongest modulators of pupil size is scene luminance, which complicates studies of cognitive pupillometry in environments with complex patterns of visual stimulation. To help understand how dynamic visual scene statistics influence pupil size during an active visual search task in a visually rich 3D virtual environment (VE), we analyzed the correlation between pupil size and intensity changes of image pixels in the red, green, and blue (RGB) channels within a large window (~14 degrees) surrounding the gaze position over time. Overall, blue and green channels had a stronger influence on pupil size than the red channel. The correlation maps were not consistent with the hypothesis of a foveal bias for luminance, instead revealing a significant contextual effect, whereby pixels above the gaze point in the green/blue channels had a disproportionate impact on pupil size. We attributed this differential sensitivity to blue light from above to a "blue sky effect," and confirmed this finding in a follow-on experiment with a controlled laboratory task. Pupillary constrictions were significantly stronger when blue was presented above fixation (paired with luminance-matched gray on bottom) compared to below fixation. This effect was specific to the blue color channel and this stimulus orientation. These results highlight the differential sensitivity of pupillary responses to scene statistics, which matters for studies or applications that involve complex visual environments, and suggest blue light as a predominant factor influencing pupil size.
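The above/below-gaze, per-channel correlation analysis can be sketched as follows, assuming gaze-centred image patches have already been extracted; the data layout and window size are assumptions made for illustration.

```python
import numpy as np

def pupil_rgb_correlations(pupil, patches):
    """Correlate pupil size with mean pixel intensity above vs below
    the gaze point, per RGB channel. patches: (n_samples, rows, cols, 3)
    gaze-centred image windows, with gaze at the window centre.
    Returns {channel: (corr_above, corr_below)}."""
    half = patches.shape[1] // 2
    out = {}
    for c, name in enumerate("RGB"):
        above = patches[:, :half, :, c].mean(axis=(1, 2))
        below = patches[:, half:, :, c].mean(axis=(1, 2))
        out[name] = (np.corrcoef(pupil, above)[0, 1],
                     np.corrcoef(pupil, below)[0, 1])
    return out
```

A "blue sky effect" would show up as a markedly stronger (negative) correlation for the blue channel's above-gaze region than for any other channel/region combination.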

8.
IEEE Trans Neural Syst Rehabil Eng ; 28(5): 1081-1090, 2020 05.
Article in English | MEDLINE | ID: mdl-32217478

ABSTRACT

Although several guidelines for best practices in EEG preprocessing have been released, even studies that strictly adhere to those guidelines contain considerable variation in the ways that the recommended methods are applied. An open question for researchers is how sensitive the results of EEG analyses are to variations in preprocessing methods and parameters. To address this issue, we analyze the effect of preprocessing methods on downstream EEG analysis using several simple signal and event-related measures. Signal measures include recording-level channel amplitudes, study-level channel amplitude dispersion, and recording spectral characteristics. Event-related methods include ERPs and ERSPs and their correlations across methods for a diverse set of stimulus events. Our analysis also assesses differences in residual signals both in the time and spectral domains after blink artifacts have been removed. Using fully automated pipelines, we evaluate these measures across 17 EEG studies for two ICA-based preprocessing approaches (LARG, MARA) plus two variations of Artifact Subspace Reconstruction (ASR). Although the general structure of the results is similar across these preprocessing methods, there are significant differences, particularly in the low-frequency spectral features and in the residuals left by blinks. These results argue for detailed reporting of processing details as suggested by most guidelines, but also for using a federation of automated processing pipelines and comparison tools to quantify effects of processing choices as part of the research reporting.
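Two of the simple signal measures named above, recording-level channel amplitude and study-level amplitude dispersion, are commonly computed with robust statistics so that artifacts do not dominate. The sketch below uses the standard 1.4826 x MAD estimator; the exact formulas in the paper may differ.

```python
import numpy as np

def robust_channel_amplitudes(eeg):
    """Robust per-channel amplitude for an (n_channels, n_samples)
    recording: 1.4826 * median absolute deviation, which estimates
    the standard deviation while tolerating artifact outliers."""
    dev = np.abs(eeg - np.median(eeg, axis=1, keepdims=True))
    return 1.4826 * np.median(dev, axis=1)

def amplitude_dispersion(amps):
    """Study-level dispersion of channel amplitudes: a robust
    coefficient of variation across channels."""
    med = np.median(amps)
    return 1.4826 * np.median(np.abs(amps - med)) / med
```

Running such measures on the output of each preprocessing pipeline (LARG, MARA, ASR variants) gives directly comparable numbers for quantifying how much the pipelines disagree.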


Subject(s)
Benchmarking , Electroencephalography , Signal Processing, Computer-Assisted , Artifacts , Blinking , Brain , Humans
9.
Neuroimage ; 207: 116054, 2020 02 15.
Article in English | MEDLINE | ID: mdl-31491523

ABSTRACT

We present the results of a large-scale analysis of event-related responses based on raw EEG data from 17 studies performed at six experimental sites associated with four different institutions. The analysis corpus represents 1,155 recordings containing approximately 7.8 million event instances acquired under several different experimental paradigms. Such large-scale analysis is predicated on consistent data organization and event annotation as well as an effective automated preprocessing pipeline to transform raw EEG into a form suitable for comparative analysis. A key component of this analysis is the annotation of study-specific event codes using a common vocabulary to describe relevant event features. We demonstrate that Hierarchical Event Descriptors (HED tags) capture statistically significant cognitive aspects of EEG events common across multiple recordings, subjects, studies, paradigms, headset configurations, and experimental sites. We use representational similarity analysis (RSA) to show that EEG responses annotated with the same cognitive aspect are significantly more similar than those that do not share that cognitive aspect. These RSA similarity results are supported by visualizations that exploit the non-linear similarities of these associations. We apply temporal overlap regression, reducing confounds caused by adjacent event instances, to extract time and time-frequency EEG features (regressed ERPs and ERSPs) that are comparable across studies and replicate findings from prior, individual studies. Likewise, we use second-level linear regression to separate effects of different cognitive aspects on these features across all studies. This work demonstrates that EEG mega-analysis (pooling of raw data across studies) can enable investigations of brain dynamics in a more generalized fashion than single studies afford. 
A companion paper complements this event-based analysis by addressing commonality of the time and frequency statistical properties of EEG across studies at the channel and dipole level.
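Representational similarity analysis, as used above, is a second-order comparison: build each dataset's condition-by-condition dissimilarity matrix, then correlate the matrices' upper triangles. A minimal sketch (1 - Pearson r as the dissimilarity, which is one common choice among several):

```python
import numpy as np

def rsa_similarity(responses_a, responses_b):
    """RSA between two sets of condition-wise responses, each of
    shape (n_conditions, n_features): build each set's
    representational dissimilarity matrix (1 - Pearson r between
    conditions), then correlate the two upper triangles."""
    def rdm(R):
        return 1.0 - np.corrcoef(R)
    iu = np.triu_indices(len(responses_a), k=1)
    return np.corrcoef(rdm(responses_a)[iu], rdm(responses_b)[iu])[0, 1]
```

The appeal for mega-analysis is that the two response sets may come from different studies or headset configurations: only the geometry of the condition space is compared, not the raw features.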


Subject(s)
Brain Mapping , Brain/physiology , Cognition/physiology , Evoked Potentials/physiology , Adult , Brain Mapping/methods , Electroencephalography/methods , Female , Humans , Male , Young Adult
10.
Neuroimage ; 207: 116361, 2020 02 15.
Article in English | MEDLINE | ID: mdl-31770636

ABSTRACT

Significant achievements have been made in the fMRI field by pooling statistical results from multiple studies (meta-analysis). More recently, fMRI standardization efforts have focused on enabling the joint analysis of raw fMRI data across studies (mega-analysis), with the hope of achieving more detailed insights. However, it has not been clear if such analyses in the EEG field are possible or equally fruitful. Here we present the results of a large-scale EEG mega-analysis using 18 studies from six sites representing several different experimental paradigms. We demonstrate that when meta-data are consistent across studies, both channel-level and source-level EEG mega-analysis are possible and can provide insights unavailable in single studies. The analysis uses a fully-automated processing pipeline to reduce line noise, interpolate noisy channels, perform robust referencing, remove eye-activity, and further identify outlier signals. We define several robust measures based on channel amplitude and dispersion to assess the comparability of data across studies and observe the effect of various processing steps on these measures. Using ICA-based dipolar sources, we also observe consistent differences in overall frequency baseline amplitudes across brain areas. For example, we observe higher alpha in posterior vs anterior regions and higher beta in temporal regions. We also detect consistent differences in the slope of the aperiodic portion of the EEG spectrum across brain areas. In a companion paper, we apply mega-analysis to assess commonalities in event-related EEG features across studies. The continuous raw and preprocessed data used in this analysis are available through the DataCatalog at https://cancta.net.
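The slope of the aperiodic portion of the EEG spectrum is often estimated, to a first approximation, by a straight-line fit of log power against log frequency. The sketch below uses that simplification (ignoring oscillatory peaks, unlike full model-based approaches); the fitting range is an assumption.

```python
import numpy as np

def aperiodic_slope(freqs, psd, fmin=2.0, fmax=40.0):
    """Estimate the aperiodic (1/f) slope of a power spectrum by
    linear regression of log10(power) on log10(frequency) within
    [fmin, fmax]. Returns (slope, intercept)."""
    keep = (freqs >= fmin) & (freqs <= fmax)
    lf, lp = np.log10(freqs[keep]), np.log10(psd[keep])
    slope, intercept = np.polyfit(lf, lp, 1)
    return slope, intercept
```

Comparing the fitted slope across dipolar sources grouped by brain area is one way the cross-area differences mentioned above could be quantified.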


Subject(s)
Brain Mapping , Brain/diagnostic imaging , Electroencephalography , Magnetic Resonance Imaging , Adult , Brain Mapping/methods , Electroencephalography/methods , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Principal Component Analysis/methods
11.
Front Hum Neurosci ; 13: 201, 2019.
Article in English | MEDLINE | ID: mdl-31258469

ABSTRACT

Deep convolutional neural networks (CNN) have previously been shown to be useful tools for signal decoding and analysis in a variety of complex domains, such as image processing and speech recognition. By learning from large amounts of data, the representations encoded by these deep networks are often invariant to moderate changes in the underlying feature spaces. Recently, we proposed a CNN architecture that could be applied to electroencephalogram (EEG) decoding and analysis. In this article, we train our CNN model using data from prior experiments in order to later decode the P300 evoked response from an unseen, hold-out experiment. We analyze the CNN output as a function of the underlying variability in the P300 response and demonstrate that the CNN output is sensitive to the experiment-induced changes in the neural response. We then assess the utility of our approach as a means of improving the overall signal-to-noise ratio in the EEG record. Finally, we show an example of how CNN-based decoding can be applied to the analysis of complex data.
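To make the decoding idea concrete, here is a toy forward pass of a 1-D convolutional decoder for a single EEG epoch: temporal convolution summed over channels, ReLU, global average pooling, and a logistic readout. The authors' CNN is deeper and trained on real data; this sketch only illustrates the computation, and all weights here are placeholders.

```python
import numpy as np

def conv1d_decode(eeg_epoch, kernels, w_out, b_out):
    """Toy 1-D CNN forward pass for one EEG epoch of shape
    (n_channels, n_samples). Each kernel K has shape
    (n_channels, k_len). Returns P(target) from a logistic readout."""
    n_s = eeg_epoch.shape[1]
    feats = []
    for K in kernels:
        k_len = K.shape[1]
        acts = np.zeros(n_s - k_len + 1)
        for i in range(n_s - k_len + 1):
            # spatio-temporal dot product at this time offset
            acts[i] = np.sum(K * eeg_epoch[:, i:i + k_len])
        feats.append(np.maximum(acts, 0).mean())  # ReLU + average pool
    z = np.dot(w_out, feats) + b_out
    return 1.0 / (1.0 + np.exp(-z))
```

An epoch containing a waveform matching a learned kernel (e.g. a P300-like deflection) scores higher than one without it, which is the sensitivity to response variability the article analyzes.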

12.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 5536-5539, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31947108

ABSTRACT

Virtual reality (VR) offers the potential to study brain function in complex, ecologically realistic environments. However, the additional degrees of freedom make analysis more challenging, particularly with respect to evoked neural responses. In this paper we designed a target detection task in VR where we varied the visual angle of targets as subjects moved through a three-dimensional maze. We investigated how the latency and shape of the classic P300 evoked response varied as a function of locking the electroencephalogram data to the target image onset, the target-saccade intersection, and the first fixation on the target. We found, as expected, a systematic shift in the timing of the evoked responses as a function of the type of response locking, as well as a difference in the shape of the waveforms. Interestingly, single-trial analysis showed that the peak discriminability of the evoked responses does not differ between image-locked and saccade-locked analyses, though it decreases significantly when fixation-locked. These results suggest that there is a spread in the perception of visual information in VR environments across time and visual space. Our results point to the importance of considering how information may be perceived in naturalistic environments, specifically those that have more complexity and higher degrees of freedom than in traditional laboratory paradigms.
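Comparing image-locked, saccade-locked, and fixation-locked responses amounts to re-epoching the same continuous record around three different sets of latching samples. A minimal sketch of the epoching step (edge handling and the window sizes are assumptions):

```python
def extract_epochs(eeg, lock_samples, pre, post):
    """Cut fixed-length epochs from a 1-D EEG channel around each
    locking sample (image onset, saccade, or fixation), dropping
    events too close to the record edges."""
    epochs = []
    for s in lock_samples:
        if s - pre >= 0 and s + post <= len(eeg):
            epochs.append(eeg[s - pre:s + post])
    return epochs
```

Running this once per locking-event stream and averaging (or classifying single trials) within each set yields the latency and waveform-shape comparisons described above.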


Subject(s)
Electroencephalography , Virtual Reality , Environment , Evoked Potentials, Auditory , Evoked Potentials, Visual , Humans , Saccades
13.
Front Hum Neurosci ; 11: 264, 2017.
Article in English | MEDLINE | ID: mdl-28559807

ABSTRACT

EEG and eye tracking variables are potential sources of information about the underlying processes of target detection and storage during visual search. Fixation duration, pupil size and event related potentials (ERPs) locked to the onset of fixation or saccade (saccade-related potentials, SRPs) have been reported to differ depending on whether a target or a non-target is currently fixated. Here we focus on the question of whether these variables also differ between targets that are subsequently reported (hits) and targets that are not (misses). Observers were asked to scan 15 locations that were consecutively highlighted for 1 s in pseudo-random order. Highlighted locations displayed either a target or a non-target stimulus with two, three or four targets per trial. After scanning, participants indicated which locations had displayed a target. To induce memory encoding failures, participants concurrently performed an aurally presented math task (high load condition). In a low load condition, participants ignored the math task. As expected, more targets were missed in the high compared with the low load condition. For both conditions, eye tracking features distinguished better between hits and misses than between targets and non-targets (with larger pupil size and shorter fixations for missed compared with correctly encoded targets). In contrast, SRP features distinguished better between targets and non-targets than between hits and misses (with average SRPs showing larger P300 waveforms for targets than for non-targets). Single trial classification results were consistent with these averages. This work suggests complementary contributions of eye and EEG measures in potential applications to support search-and-detect tasks. SRPs may be useful to monitor which objects are relevant to an observer, and eye variables may indicate whether the observer should be reminded of them later.
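Statements like "feature X distinguishes hits from misses better than targets from non-targets" are naturally quantified by the area under the ROC curve of a single feature. A minimal sketch using the rank-sum identity (an assumed analysis choice, not necessarily the paper's classifier):

```python
import numpy as np

def rank_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the rank-sum identity: the
    probability that a random positive example scores higher than
    a random negative one (ties count half)."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    return ((pos > neg).sum() + 0.5 * (pos == neg).sum()) / (pos.size * neg.size)
```

Computing `rank_auc` for, say, pupil size over hit vs miss trials and again over target vs non-target fixations makes the two discriminability claims directly comparable on a 0.5-1.0 scale.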

14.
Front Neurosci ; 9: 270, 2015.
Article in English | MEDLINE | ID: mdl-26347597

ABSTRACT

Brain computer interaction (BCI) technologies have proven effective in utilizing single-trial classification algorithms to detect target images in rapid serial visualization presentation tasks. While many factors contribute to the accuracy of these algorithms, a critical aspect that is often overlooked concerns the feature similarity between target and non-target images. In most real-world environments there are likely to be many shared features between targets and non-targets, resulting in similar neural activity between the two classes. It is unknown how current neural-based target classification algorithms perform when qualitatively similar target and non-target images are presented. This study addresses this question by comparing behavioral and neural classification performance across two conditions: first, when targets were the only infrequent stimulus presented amongst frequent background distracters; and second, when targets were presented together with infrequent non-targets containing similar visual features to the targets. The resulting findings show that behavior is slower and less accurate when targets are presented together with similar non-targets; moreover, single-trial classification yielded high levels of misclassification when infrequent non-targets are included. Furthermore, we present an approach to mitigate this misclassification. We use confidence measures to assess the quality of single-trial classification, and demonstrate that a system in which low confidence trials are reclassified through a secondary process can result in improved performance.
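The confidence-gated reclassification scheme described in the last two sentences can be sketched as a two-stage pipeline: accept the primary classifier's label when its confidence clears a threshold, and route low-confidence trials to a secondary process. The interfaces and threshold below are illustrative assumptions, not the paper's implementation.

```python
def classify_with_fallback(trials, primary, secondary, conf_thresh=0.7):
    """Two-stage classification: primary(trial) returns
    (label, confidence); trials whose confidence falls below
    conf_thresh are reclassified by secondary(trial) -> label."""
    labels = []
    for trial in trials:
        label, conf = primary(trial)
        labels.append(label if conf >= conf_thresh else secondary(trial))
    return labels
```

The design trade-off is the usual one: a higher `conf_thresh` sends more trials to the (presumably costlier) secondary process in exchange for fewer accepted misclassifications.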

15.
Neuron ; 65(1): 107-21, 2010 Jan 14.
Article in English | MEDLINE | ID: mdl-20152117

ABSTRACT

During natural vision, the entire visual field is stimulated by images rich in spatiotemporal structure. Although many visual system studies restrict stimuli to the classical receptive field (CRF), it is known that costimulation of the CRF and the surrounding nonclassical receptive field (nCRF) increases neuronal response sparseness. The cellular and network mechanisms underlying increased response sparseness remain largely unexplored. Here we show that combined CRF + nCRF stimulation increases the sparseness, reliability, and precision of spiking and membrane potential responses in classical regular spiking (RS(C)) pyramidal neurons of cat primary visual cortex. Conversely, fast-spiking interneurons exhibit increased activity and decreased selectivity during CRF + nCRF stimulation. The increased sparseness and reliability of RS(C) neuron spiking is associated with increased inhibitory barrages and narrower visually evoked synaptic potentials. Our experimental observations were replicated with a simple computational model, suggesting that network interactions among neuronal subtypes ultimately sharpen recurrent excitation, producing specific and reliable visual responses.
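Response sparseness, the quantity at the centre of this study, is commonly measured with the Treves-Rolls index over a neuron's firing rates across stimuli. A minimal sketch of that standard measure (one common definition; the paper may use a variant):

```python
import numpy as np

def lifetime_sparseness(rates):
    """Treves-Rolls sparseness of a neuron's firing rates across
    stimuli: (1 - a) / (1 - 1/n), where a = (mean r)^2 / mean(r^2).
    Equals 0 for a flat response profile and 1 when the neuron
    responds to exactly one stimulus."""
    r = np.asarray(rates, dtype=float)
    a = r.mean() ** 2 / np.mean(r ** 2)
    return (1 - a) / (1 - 1 / len(r))
```

An increase in this index under combined CRF + nCRF stimulation, relative to CRF-only stimulation, is the kind of sparseness change the abstract reports.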


Subject(s)
Nerve Net/physiology , Synaptic Transmission/physiology , Visual Cortex/physiology , Visual Fields/physiology , Visual Perception/physiology , Animals , Cats , Excitatory Postsynaptic Potentials/physiology , Female , Inhibitory Postsynaptic Potentials/physiology , Interneurons/metabolism , Membrane Potentials/physiology , Neurons/metabolism , Photic Stimulation/methods