Results 1 - 5 of 5
1.
Behav Res Methods; 54(5): 2545-2564, 2022 Oct.
Article in English | MEDLINE | ID: mdl-34918232

ABSTRACT

Interest in applications that simultaneously acquire data from different devices is growing. In neuroscience, for example, co-registration complements individual methods and overcomes some of their shortcomings. However, precise synchronization of the different data streams involved is required before joint data analysis. Our article presents and evaluates a synchronization method that maximizes the alignment of information across time. Synchronization through common triggers is used by all existing methods because it is simple and effective. However, this solution has been found to fail in certain practical situations, namely when triggers are spuriously detected and/or when the timestamps of the triggers sampled by each acquisition device are not linearly related over the entire duration of an experiment. We propose two additional mechanisms, the "Longest Common Subsequence" algorithm and a piecewise linear regression, to overcome the limitations of the classical common-trigger method. The proposed synchronization method was evaluated on both real and artificial data. For real data, we used co-registrations of electroencephalographic (EEG) signals and eye movements. We compared the effectiveness of our method to another open-source method implemented in the EYE-EEG toolbox. Overall, we show that our method, implemented in C++ as a DOS application, is very fast, robust, and fully automatic.
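The two mechanisms proposed above lend themselves to a compact illustration. The following Python sketch (not the authors' C++ implementation) shows one plausible way to pair trigger codes across devices with a Longest Common Subsequence and to map one device clock onto the other with segment-wise linear fits; the trigger codes, the breakpoint list, and the requirement of at least two matched triggers per segment are assumptions of the example, not details taken from the article.

import numpy as np

def lcs_match(codes_a, codes_b):
    """Index pairs along one longest common subsequence of two trigger-code lists."""
    n, m = len(codes_a), len(codes_b)
    dp = np.zeros((n + 1, m + 1), dtype=int)
    for i in range(n):
        for j in range(m):
            dp[i + 1, j + 1] = (dp[i, j] + 1 if codes_a[i] == codes_b[j]
                                else max(dp[i, j + 1], dp[i + 1, j]))
    pairs, i, j = [], n, m
    while i > 0 and j > 0:  # backtrack through the DP table
        if codes_a[i - 1] == codes_b[j - 1]:
            pairs.append((i - 1, j - 1)); i -= 1; j -= 1
        elif dp[i - 1, j] >= dp[i, j - 1]:
            i -= 1
        else:
            j -= 1
    return pairs[::-1]  # spurious triggers on either side are simply left unmatched

def piecewise_linear_map(t_a, t_b, breakpoints):
    """Fit one linear clock mapping t_b ~ slope * t_a + intercept per segment."""
    edges = [t_a.min()] + sorted(breakpoints) + [t_a.max() + 1]
    fits = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (t_a >= lo) & (t_a < hi)          # needs at least 2 matched triggers per segment
        slope, intercept = np.polyfit(t_a[sel], t_b[sel], 1)
        fits.append((lo, hi, slope, intercept))
    return fits

# Illustrative use: a spurious code (7) in stream A and (9) in stream B are skipped.
pairs = lcs_match([1, 2, 7, 3, 2], [1, 2, 3, 2, 9])   # -> [(0, 0), (1, 1), (3, 2), (4, 3)]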


Subject(s)
Electroencephalography, Eye Movements, Humans, Electroencephalography/methods, Algorithms
2.
J Vis; 21(11): 19, 2021 Oct 05.
Article in English | MEDLINE | ID: mdl-34698810

ABSTRACT

Retinal motion of the visual scene is not consciously perceived during ocular saccades under normal everyday conditions. It has been suggested that extra-retinal signals actively suppress intra-saccadic motion perception to preserve a stable perception of the visual world. However, using stimuli optimized to preferentially activate the M-pathway, Castet and Masson (2000) demonstrated that motion can be perceived during a saccade. Based on this psychophysical paradigm, we used electroencephalography and eye-tracking recordings to investigate the neural correlates of the conscious perception of intra-saccadic motion. We demonstrated that the cortical areas V1-V2 and MT-V5, which convey motion information along the M-pathway, are effectively involved during saccades. We also showed that individual motion perception was related to retinal temporal frequency.


Subject(s)
Motion Perception, Visual Cortex, Humans, Motion, Photic Stimulation, Retina, Saccades, Visual Perception
3.
Front Psychol; 9: 1190, 2018.
Article in English | MEDLINE | ID: mdl-30050487

ABSTRACT

This study examines the precise temporal dynamics of emotional face decoding as it unfolds in the brain, according to the emotion displayed. To characterize this processing as it occurs in ecological settings, we focused on unconstrained visual exploration of natural emotional faces (i.e., free eye movements). The General Linear Model (GLM; Smith and Kutas, 2015a,b; Kristensen et al., 2017a) enables such a depiction: it deconvolves the adjacent, overlapping eye fixation-related potentials (EFRPs) elicited by successive fixations from the event-related potentials (ERPs) elicited at stimulus onset. Nineteen participants were shown spontaneous static facial expressions of emotion (Neutral, Disgust, Surprise, and Happiness) from the DynEmo database (Tcherkassof et al., 2013). Behavioral results on participants' eye movements show the usual diagnostic features in emotional decoding (the eyes for negative facial displays and the mouth for positive ones), consistent with the literature. The impact of emotional category on both the ERPs and the EFRPs elicited by free exploration of the emotional faces reveals the temporal dynamics of emotional facial expression processing. Regarding the ERP at stimulus onset, the ERPs computed by averaging show a significant emotion-dependent modulation of the amplitude of the P2-P3 complex and the LPP component at the left frontal site. Yet the GLM reveals the impact of subsequent fixations on the ERPs time-locked to stimulus onset. Results are also in line with the valence hypothesis. The observed differences between the two estimation methods (average vs. GLM) suggest the predominance of the right hemisphere at stimulus onset and the implication of the left hemisphere in processing the information encoded by subsequent fixations. Concerning the first EFRP, the Lambda response and the P2 component are modulated by surprise compared with the neutral expression at parieto-occipital sites, suggesting an impact of high-level factors. Moreover, no difference is observed for the second and subsequent EFRPs. Taken together, the results stress the significant gain obtained by analyzing the EFRPs with the GLM method and pave the way toward efficient analyses of ecological, dynamic emotional stimuli.
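For readers unfamiliar with the deconvolution idea behind this GLM approach, a minimal Python sketch follows. It is not the authors' analysis pipeline: the sampling rate, epoch length, event latencies, and the single noise channel standing in for real EEG are all invented for illustration.

import numpy as np

def build_design_matrix(n_samples, event_onsets, window):
    """One block of shifted unit impulses (stick functions) per event type."""
    blocks = []
    for onsets in event_onsets:               # e.g. [stimulus onsets, fixation onsets]
        X = np.zeros((n_samples, window))
        for t0 in onsets:
            # each event contributes a diagonal of ones, so overlapping
            # responses add up in the model exactly as they do in the data
            for lag in range(window):
                if t0 + lag < n_samples:
                    X[t0 + lag, lag] = 1.0
        blocks.append(X)
    return np.hstack(blocks)

fs, window = 500, 300                          # 500 Hz sampling, 600-ms window (assumed)
stim_onsets = [0]                              # image onset at the start of the epoch
fix_onsets = [120, 260, 430]                   # fixation onsets in samples (assumed)
X = build_design_matrix(1000, [stim_onsets, fix_onsets], window)
eeg = np.random.randn(1000)                    # stand-in for one EEG channel
betas, *_ = np.linalg.lstsq(X, eeg, rcond=None)
erp, efrp = betas[:window], betas[window:]     # jointly estimated, overlap-corrected waveforms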

4.
Behav Res Methods; 49(6): 2255-2274, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28275950

ABSTRACT

The usual event-related potential (ERP) estimate is the average across epochs time-locked to the stimuli of interest. These stimuli are repeated several times to improve the signal-to-noise ratio (SNR), and only one evoked potential is estimated within the temporal window of interest. Consequently, the average estimation does not take into account other neural responses within the same epoch that are due to short inter-stimulus intervals. These adjacent neural responses may overlap with and distort the evoked potential of interest. This overlap is a significant issue for the eye fixation-related potential (EFRP) technique, in which the epochs are time-locked to ocular fixations: inter-fixation intervals are not experimentally controlled and can be shorter than the latency of the neural response. First, Tikhonov regularization was applied to the classical average estimation to improve the SNR for a given number of trials, with generalized cross-validation used to select the optimal value of the ridge parameter. Then, to deal with the overlap issue, the general linear model (GLM) was used to extract all neural responses within an epoch. Finally, the regularization was also applied to the GLM. The models (the classical average and the GLM, with and without regularization) were compared on both simulated data and real datasets, one from a visual scene exploration co-registered with an eye-tracker and one from a P300 Speller experiment. The regularization was found to improve the average estimation for a given number of trials. The GLM was more robust and efficient, and its efficiency was further reinforced by the regularization.
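As a rough illustration of the regularization step described above, the Python sketch below computes a ridge (Tikhonov) estimate and selects the ridge parameter by generalized cross-validation; the simulated design matrix, data, and lambda grid are assumptions of the example rather than material from the article.

import numpy as np

def ridge_gcv(X, y, lambdas):
    """Return (gcv score, lambda, beta) for the lambda minimizing the GCV criterion."""
    n = X.shape[0]
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Uty = U.T @ y
    best = (np.inf, None, None)
    for lam in lambdas:
        shrink = s**2 / (s**2 + lam)           # filter factors of the ridge smoother
        y_hat = U @ (shrink * Uty)
        df = shrink.sum()                      # effective degrees of freedom (trace of the hat matrix)
        gcv = n * np.sum((y - y_hat) ** 2) / (n - df) ** 2
        if gcv < best[0]:
            beta = Vt.T @ ((s / (s**2 + lam)) * Uty)
            best = (gcv, lam, beta)
    return best

X = np.random.randn(200, 50)                   # stand-in design matrix (e.g., a GLM of overlapping epochs)
y = X @ np.random.randn(50) + 0.5 * np.random.randn(200)
score, lam, beta = ridge_gcv(X, y, np.logspace(-3, 3, 25))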


Subject(s)
Data Interpretation, Statistical, Electroencephalography/methods, Evoked Potentials/physiology, Fixation, Ocular/physiology, Linear Models, Brain-Computer Interfaces, Humans
5.
J Eye Mov Res; 10(1), 2017 Oct 07.
Article in English | MEDLINE | ID: mdl-33828644

ABSTRACT

The Eye Fixation-Related Potential (EFRP) estimate is the average of EEG signals across epochs time-locked to ocular fixation onset. Its main limitation is the overlap issue. Inter-Fixation Intervals (IFI), typically around 300 ms in the case of unrestricted eye movements, depend on participants' oculomotor patterns and can be shorter than the latency of the components of the evoked potential. If the duration of an epoch is longer than the IFI, more than one fixation can occur within it, and some overlap between adjacent neural responses ensues. The classical average takes into account neither the presence of several fixations within an epoch nor the resulting overlap. The Adjacent Response algorithm (ADJAR), which is popular for event-related potential estimation, was compared to the General Linear Model (GLM) on a real dataset from a joint EEG and eye-tracking experiment to address the overlap issue. The results showed that the ADJAR algorithm is based on assumptions that are too restrictive for EFRP estimation; the General Linear Model appeared to be more robust and efficient. Different configurations of this model were compared to estimate the potential elicited at image onset, as well as the EFRPs at the beginning of exploration. These configurations took into account the overlap between the event-related potential at stimulus presentation and the following EFRP, and the distinction between the potential elicited by the first fixation onset and those elicited by subsequent ones. The choice of the GLM configuration was a tradeoff between assumptions about the expected behavior and the quality of the EFRP estimation: the number of different potentials estimated by a given model must be controlled to avoid erroneous estimates with large variances.
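To make the configuration tradeoff concrete, here is a hedged Python sketch (under assumed onsets, window length, and recording length) of how two such GLM configurations might differ only in whether the first fixation gets its own regressor block; it illustrates the idea rather than reproducing the models compared in the article.

import numpy as np

def regressor_block(n_samples, onsets, window):
    """Block of shifted unit impulses for one event class (stimulus or fixation)."""
    X = np.zeros((n_samples, window))
    for t0 in onsets:
        lags = np.arange(window)[t0 + np.arange(window) < n_samples]
        X[t0 + lags, lags] = 1.0
    return X

n, window = 2000, 350
stim = regressor_block(n, [0], window)                    # potential at image onset
first_fix = regressor_block(n, [150], window)             # first-fixation EFRP
later_fix = regressor_block(n, [320, 470, 650], window)   # subsequent-fixation EFRP

# Configuration A: three potentials (onset ERP, first EFRP, later EFRP); more
# assumptions about distinct responses, higher-variance estimates.
X_a = np.hstack([stim, first_fix, later_fix])
# Configuration B: two potentials, pooling all fixations into a single EFRP;
# fewer parameters, lower variance, but no first-vs-subsequent distinction.
X_b = np.hstack([stim, first_fix + later_fix])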
