Results 1 - 5 of 5
1.
Front Hum Neurosci ; 11: 310, 2017.
Article in English | MEDLINE | ID: mdl-28670269

ABSTRACT

Mobile electroencephalography (EEG) is a useful tool for investigating the physiological basis of cognition under real-world conditions. However, as experimentation moves into less-constrained environments, the influence of state changes increases. The influence of stress on cortical activity and cognition is an important example. Monitoring the modulation of cortical activity with EEG is a promising approach for assessing acute stress. In this study, we test this hypothesis and combine EEG with independent component analysis and source localization to identify cortical differences between a control condition and a stressful condition. Subjects performed a stationary shooting task using an airsoft rifle with and without the threat of an experimenter firing a different airsoft rifle in their direction. We observed significantly higher skin conductance responses and salivary cortisol levels (p < 0.05 for both) during the stressful condition, indicating that we had successfully induced an adequate level of acute stress. We located independent components in five regions throughout the cortex, most notably in the dorsolateral prefrontal cortex, a region previously shown to be affected by increased levels of stress. This area showed a significant decrease in spectral power in the theta and alpha bands less than a second after the subjects pulled the trigger. Overall, our results suggest that EEG with independent component analysis and source localization has the potential to monitor acute stress in real-world environments.
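As a concrete illustration of this pipeline, here is a minimal sketch of the ICA decomposition and theta/alpha band-power step on synthetic data. It uses scikit-learn's FastICA and a Welch power spectral density as stand-ins; the study's actual ICA variant, its source-localization step, and every parameter below (sampling rate, component count, band edges) are assumptions rather than the authors' settings.

```python
import numpy as np
from scipy.signal import welch
from sklearn.decomposition import FastICA

fs = 256                                   # sampling rate in Hz (assumed)
n_channels, n_samples = 32, fs * 60
rng = np.random.default_rng(0)
eeg = rng.standard_normal((n_samples, n_channels))  # placeholder for real EEG

# Unmix the channel data into statistically independent components.
ica = FastICA(n_components=16, random_state=0)
sources = ica.fit_transform(eeg)           # shape: (n_samples, n_components)

def band_power(x, fs, lo, hi):
    """Mean power spectral density of x within the [lo, hi] Hz band."""
    freqs, psd = welch(x, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

# Theta (4-8 Hz) and alpha (8-13 Hz) power per component; the study compared
# such band power between the stress and control conditions per source.
theta = [band_power(sources[:, k], fs, 4, 8) for k in range(16)]
alpha = [band_power(sources[:, k], fs, 8, 13) for k in range(16)]
```

In the study itself, components were additionally localized to cortical sources (e.g., the dorsolateral prefrontal cortex) before the stress-versus-control spectral comparison; that step is omitted here.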

2.
J Neurosci Methods ; 212(2): 247-58, 2013 Jan 30.
Article in English | MEDLINE | ID: mdl-23085564

ABSTRACT

Detecting significant periods of phase synchronization in EEG recordings is a non-trivial task that is made especially difficult when considering the effects of volume conduction and common sources. In addition, EEG signals are often confounded by non-neural signals, such as artifacts arising from muscle activity or external electrical devices. A variety of phase synchronization analysis methods have been developed with each offering a different approach for dealing with these confounds. We investigate the use of a parametric estimation of the time-frequency transform as a means of improving the detection capability for a range of phase analysis methods. We argue that such an approach offers numerous benefits over using standard nonparametric approaches. We then demonstrate the utility of our technique using both simulated and actual EEG data by showing that the derived phase synchronization estimates are more robust to noise and volume conduction effects.
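For reference, the sketch below implements the standard nonparametric baseline that such parametric approaches are measured against: band-pass filtering plus the Hilbert transform to extract instantaneous phase, followed by the phase-locking value (PLV) and the phase lag index (PLI), the latter commonly used because it discounts the zero-lag coupling produced by volume conduction. The frequency band, filter order, and test signals are assumptions; this is not the parametric method proposed in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_sync(x, y, fs, band=(8.0, 13.0)):
    """Nonparametric PLV and PLI between two signals in a frequency band."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    dphi = phase_x - phase_y
    plv = np.abs(np.mean(np.exp(1j * dphi)))      # phase-locking value
    pli = np.abs(np.mean(np.sign(np.sin(dphi))))  # phase lag index
    return plv, pli

# Example: two noisy signals sharing a phase-lagged 10 Hz component.
fs = 256
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t - 0.8) + rng.standard_normal(t.size)
print(phase_sync(x, y, fs))
```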


Subject(s)
Algorithms , Brain/physiology , Electroencephalography/methods , Models, Neurological , Signal Processing, Computer-Assisted , Artifacts , Brain Mapping/methods , Cortical Synchronization/physiology , Humans , Statistics, Nonparametric
3.
Physiol Meas ; 30(5): N37-51, 2009 May.
Article in English | MEDLINE | ID: mdl-19417238

ABSTRACT

Recently we proposed the use of Tikhonov regularization with temporal smoothness constraints to estimate the BOLD fMRI hemodynamic response function (HRF). The temporal smoothness constraint was imposed on the estimates using second-derivative information, while the regularization parameter was selected based on the generalized cross-validation (GCV) function. Using one-dimensional simulations, we previously found this method to produce reliable estimates of the HRF time course, especially its time to peak (TTP), while also being fast and robust to over-sampling in the HRF estimation. Here, we extend the method to include simultaneous temporal and spatial smoothness constraints. This method does not require Gaussian smoothing as a pre-processing step, as is usually done in fMRI data analysis. We carried out two-dimensional simulations to compare the two methods: Tikhonov regularization with temporal (Tik-GCV-T) and spatio-temporal (Tik-GCV-ST) smoothness constraints on the estimated HRF. We focus on quantifying the influence of Gaussian data smoothing and the presence of edges on the performance of these techniques. Our results suggest that the spatial smoothing introduced by regularization is less severe than that produced by Gaussian smoothing. This allows more accurate estimates of the response amplitudes while producing similar estimates of the TTP. We illustrate these ideas using real data.
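The temporal-only estimator (Tik-GCV-T) described above has a closed form: h(λ) = (XᵀX + λD₂ᵀD₂)⁻¹Xᵀy, where D₂ is the second-difference operator, with λ chosen to minimize the GCV function. Below is a minimal numpy sketch of that formulation under assumed inputs (a design matrix X built from the stimulus timing and a candidate λ grid); it illustrates the stated method, not the authors' code.

```python
import numpy as np

def tikhonov_gcv(X, y, lambdas):
    """Estimate h minimizing ||y - X h||^2 + lam * ||D2 h||^2,
    with lam selected by generalized cross-validation (GCV)."""
    n, p = X.shape
    # Second-difference operator: penalizing curvature enforces
    # temporal smoothness of the estimated HRF.
    D2 = np.diff(np.eye(p), n=2, axis=0)
    best = (np.inf, None, None)
    for lam in lambdas:
        A = np.linalg.solve(X.T @ X + lam * (D2.T @ D2), X.T)
        H = X @ A                      # hat matrix: y_fit = H @ y
        resid = y - H @ y
        gcv = n * (resid @ resid) / (n - np.trace(H)) ** 2
        if gcv < best[0]:
            best = (gcv, lam, A @ y)
    return best                        # (GCV value, chosen lambda, HRF estimate)
```

A logarithmic grid such as `np.logspace(-4, 4, 50)` is a common starting point for the λ search; the spatio-temporal variant (Tik-GCV-ST) adds an analogous spatial-difference penalty across neighboring voxels.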


Subject(s)
Brain/physiology , Hemodynamics , Magnetic Resonance Imaging , Models, Biological , Oxygen/blood , Algorithms , Computer Simulation , Humans
4.
Exp Brain Res ; 158(2): 252-8, 2004 Sep.
Article in English | MEDLINE | ID: mdl-15112119

ABSTRACT

The brain integrates information from multiple sensory modalities and, through this process, generates a coherent and apparently seamless percept of the external world. Although multisensory integration typically binds information that is derived from the same event, when multisensory cues are somewhat discordant they can result in illusory percepts such as the "ventriloquism effect." These biases in stimulus localization are generally accompanied by the perceptual unification of the two stimuli. In the current study, we sought to further elucidate the relationship between localization biases, perceptual unification and measures of a participant's uncertainty in target localization (i.e., variability). Participants performed an auditory localization task in which they were also asked to report on whether they perceived the auditory and visual stimuli to be perceptually unified. The auditory and visual stimuli were delivered at a variety of spatial (0°, 5°, 10°, 15°) and temporal (200, 500, 800 ms) disparities. Localization bias and reports of perceptual unity occurred even with substantial spatial (i.e., 15°) and temporal (i.e., 800 ms) disparities. Trial-by-trial comparison of these measures revealed a striking correlation: regardless of their disparity, whenever the auditory and visual stimuli were perceived as unified, they were localized at or very near the light. In contrast, when the stimuli were perceived as not unified, auditory localization was often biased away from the visual stimulus. Furthermore, localization variability was significantly less when the stimuli were perceived as unified. Intriguingly, on non-unity trials such variability increased with decreasing disparity. Together, these results suggest strong and potentially mechanistic links between the multiple facets of multisensory integration that contribute to our perceptual Gestalt.
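As an illustration of the trial-by-trial comparison described above, the sketch below normalizes each auditory localization response by the audio-visual disparity, so a bias score of 1 means the target was localized exactly at the light and 0 means it was localized at the true auditory position, then splits mean bias and response variability by the unity judgment. The data layout and values are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical trials: true auditory azimuth, visual azimuth, reported
# azimuth (all in degrees), and the participant's unity judgment.
aud = np.array([0.0, 0.0, 0.0, 0.0])
vis = np.array([15.0, 15.0, 15.0, 15.0])
resp = np.array([14.0, 15.5, 4.0, -2.0])
unified = np.array([True, True, False, False])

# Bias score: fraction of the disparity by which the response was
# pulled toward the light (the study reports near-total capture on
# unity trials and occasional repulsion on non-unity trials).
bias = (resp - aud) / (vis - aud)

for label, mask in [("unified", unified), ("not unified", ~unified)]:
    print(label,
          "mean bias:", bias[mask].mean(),
          "variability (SD):", resp[mask].std(ddof=1))
```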


Subject(s)
Acoustic Stimulation , Photic Stimulation , Sound Localization/physiology , Acoustic Stimulation/methods , Adult , Auditory Perception , Female , Humans , Photic Stimulation/methods , Time Factors , Visual Perception
5.
J Cogn Neurosci ; 15(1): 20-9, 2003 Jan 01.
Article in English | MEDLINE | ID: mdl-12590840

ABSTRACT

The ability of a visual signal to influence the localization of an auditory target (i.e., "cross-modal bias") was examined as a function of the spatial disparity between the two stimuli and their absolute locations in space. Three experimental issues were examined: (a) the effect of a spatially disparate visual stimulus on auditory localization judgments; (b) how the ability to localize visual, auditory, and spatially aligned multisensory (visual-auditory) targets is related to cross-modal bias; and (c) the relationship between the magnitude of cross-modal bias and the perception that the two stimuli are spatially "unified" (i.e., originate from the same location). Whereas variability in localization of auditory targets was large and fairly uniform for all tested locations, variability in localizing visual or spatially aligned multisensory targets was much smaller, and increased with increasing distance from the midline. This trend proved to be strongly correlated with biasing effectiveness, for although visual-auditory bias was unexpectedly large in all conditions tested, it decreased progressively (as localization variability increased) with increasing distance from the midline. Thus, central visual stimuli had a substantially greater biasing effect on auditory target localization than did more peripheral visual stimuli. It was also apparent that cross-modal bias decreased as the degree of visual-auditory disparity increased. Consequently, the greatest visual-auditory biases were obtained with small disparities at central locations. In all cases, the magnitude of these biases covaried with judgments of spatial unity. The results suggest that functional properties of the visual system play the predominant role in determining these visual-auditory interactions and that cross-modal biases can be substantially greater than previously noted.
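One standard way to formalize the link between localization variability and bias is a maximum-likelihood cue-combination account from the wider multisensory literature (an interpretive model, not one fit in this paper): the weight given to the visual cue is w_V = σ_A² / (σ_A² + σ_V²), where σ_A and σ_V are the auditory and visual localization standard deviations. Because visual variability grows with eccentricity while auditory variability here was large and roughly uniform, the predicted visual weight, and hence the bias, falls toward the periphery, matching the pattern reported above. A minimal sketch with assumed variability values:

```python
import numpy as np

def visual_weight(sd_aud, sd_vis):
    """Predicted visual weight under maximum-likelihood cue combination:
    w_V = var_A / (var_A + var_V); bias toward the light scales with w_V."""
    return sd_aud**2 / (sd_aud**2 + sd_vis**2)

# Assumed localization SDs (degrees): visual variability increases with
# eccentricity, auditory variability stays large and roughly constant.
eccentricity = np.array([0, 15, 30, 45])
sd_vis = np.array([1.0, 2.0, 3.5, 5.0])
sd_aud = np.full(4, 6.0)

print(visual_weight(sd_aud, sd_vis))   # visual weight shrinks peripherally
```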


Subject(s)
Auditory Perception/physiology , Psychomotor Performance/physiology , Sound Localization , Visual Fields/physiology , Adult , Bias , Female , Humans , Male , Orientation , Photic Stimulation/methods , Reaction Time , Space Perception , Visual Perception