1.
Sci Rep ; 10(1): 6922, 2020 04 24.
Article in English | MEDLINE | ID: mdl-32332791

ABSTRACT

Many active neuroimaging paradigms rely on the assumption that the participant sustains attention to a task. In practice, however, momentary distractions occur and can influence the results. We investigated the effect of focal attention, objectively quantified using a measure of brain signal entropy, on cortical tracking of the speech envelope, a measure of neural processing of naturalistic speech. Participants listened to 44 minutes of natural speech while their electroencephalogram was recorded, and we quantified both entropy and cortical envelope tracking. Focal attention affected the later brain responses to speech, between 100 and 300 ms latency. By taking into account only periods of higher attention, the measured cortical speech tracking improved by 47%. This illustrates the impact of the participant's active engagement on the modeling of the brain response to speech and the importance of accounting for it. Our results suggest a cortico-cortical loop that initiates during the early stages of auditory processing, propagates through parieto-occipital and frontal areas, and finally influences the later-latency auditory processes in a top-down fashion. The proposed framework could be transposed to other active electrophysiological paradigms (visual, somatosensory, etc.) and help to control for the impact of participants' engagement on the results.


Subject(s)
Attention/physiology , Cerebral Cortex/physiology , Speech/physiology , Databases as Topic , Humans , Male , Statistics, Nonparametric , Time Factors , Young Adult
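The cortical envelope tracking measured in studies like this one is commonly obtained with a linear forward model (temporal response function): the EEG is predicted from time-lagged copies of the speech envelope, and the correlation between predicted and recorded EEG quantifies tracking. The sketch below illustrates that idea on synthetic single-channel data; the sampling rate, lag window, regularization, and data-generation details are assumptions for illustration, not this study's pipeline.

```python
import numpy as np

def lagged_matrix(stim, n_lags):
    """Design matrix of time-lagged copies of the stimulus (lags 0..n_lags-1)."""
    n = len(stim)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stim[:n - lag]
    return X

def fit_trf(stim, eeg, n_lags, alpha=1.0):
    """Ridge-regression forward model mapping the stimulus to the EEG."""
    X = lagged_matrix(stim, n_lags)
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ eeg)

# Synthetic data: a smoothed-noise "envelope" and one EEG channel that
# responds to it through a short kernel, plus additive noise.
rng = np.random.default_rng(0)
fs, n_lags = 64, 16                       # assumed sampling rate (Hz) and lag window
envelope = np.convolve(rng.standard_normal(fs * 120), np.ones(8) / 8, mode="same")
kernel = np.hanning(n_lags)               # assumed "true" brain response
eeg = lagged_matrix(envelope, n_lags) @ kernel + rng.standard_normal(len(envelope))

# Fit on the first half, evaluate tracking on the held-out second half.
half = len(envelope) // 2
w = fit_trf(envelope[:half], eeg[:half], n_lags)
pred = lagged_matrix(envelope[half:], n_lags) @ w
tracking = np.corrcoef(pred, eeg[half:])[0, 1]
```

Restricting the fit or evaluation to high-attention periods, as the study does, simply amounts to computing this correlation over a subset of the time samples.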
2.
J Neural Eng ; 16(6): 066017, 2019 10 25.
Article in English | MEDLINE | ID: mdl-31426053

ABSTRACT

OBJECTIVE: Measurement of the cortical tracking of continuous speech from electroencephalography (EEG) recordings using a forward model is an important tool in auditory neuroscience. Usually the stimulus is represented by its temporal envelope; recently, a phonetic representation of speech was also successfully introduced for English. We aim to show that EEG prediction from phoneme-related speech features is possible in Dutch as well. The standard method requires a manual channel selection, based on visual inspection or prior knowledge, to obtain a summary measure of cortical tracking. We evaluate a method to (1) remove non-stimulus-related activity from the EEG signals to be predicted and (2) automatically select the channels of interest. APPROACH: Eighteen participants listened to a Flemish story while their EEG was recorded. Subject-specific and grand-average temporal response functions were determined between the EEG activity in different frequency bands and several stimulus features: the envelope, spectrogram, phonemes, phonetic features, or a combination. The temporal response functions were used to predict EEG from the stimulus, and the predicted EEG was compared with the recorded EEG, yielding a measure of cortical tracking of stimulus features. A spatial filter was calculated based on the generalized eigenvalue decomposition (GEVD), and its effect on EEG prediction accuracy was determined. MAIN RESULTS: A model including both low- and high-level speech representations predicted the brain responses to speech better than a model including only low-level features. The inclusion of a GEVD-based spatial filter increased the prediction accuracy of cortical responses to each speech feature at both the single-subject level (270% improvement) and the group level (310%).
SIGNIFICANCE: We showed that the inclusion of acoustical and phonetic speech information and the addition of a data-driven spatial filter allow improved modelling of the relationship between speech and its brain responses and offer an automatic channel selection.


Subject(s)
Acoustic Stimulation/methods , Auditory Cortex/physiology , Brain Mapping/methods , Data Analysis , Electroencephalography/methods , Speech Perception/physiology , Adult , Brain Mapping/instrumentation , Electroencephalography/instrumentation , Female , Humans , Male , Speech/physiology , Young Adult
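A GEVD-based spatial filter of the kind described above finds channel weights that maximize the ratio of stimulus-related power to total power, by jointly diagonalizing two covariance matrices. In this minimal sketch the stimulus-related component is known by construction (in practice it would be estimated from the data, e.g. from forward-model predictions); the channel count, noise level, and seed are arbitrary assumptions.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n_ch, n_t = 8, 5000
source = rng.standard_normal(n_t)          # stimulus-related time course
mix = rng.standard_normal(n_ch)            # how the source projects onto channels
eeg = np.outer(mix, source) + 2.0 * rng.standard_normal((n_ch, n_t))

# Covariance of the stimulus-related part (known here by construction) and
# covariance of the full recording.
S = np.cov(np.outer(mix, source))
R = np.cov(eeg)

# Generalized eigendecomposition S v = lambda R v; eigenvalues are ascending,
# so the last eigenvector maximizes the signal-to-total power ratio.
evals, evecs = eigh(S, R)
w = evecs[:, -1]
component = w @ eeg                         # spatially filtered signal
r = abs(np.corrcoef(component, source)[0, 1])
```

The filtered component recovers the source time course much better than any raw channel would, which is the mechanism behind the reported gains in prediction accuracy.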
3.
Hear Res ; 380: 1-9, 2019 09 01.
Article in English | MEDLINE | ID: mdl-31167150

ABSTRACT

OBJECTIVE: To objectively measure speech intelligibility of individual subjects from the EEG, based on cortical tracking of different representations of speech: low-level acoustical, higher-level discrete, or a combination of both; and to compare each model's prediction of the speech reception threshold (SRT) for each individual with the behaviorally measured SRT. METHODS: Nineteen participants listened to Flemish Matrix sentences presented at different signal-to-noise ratios (SNRs), corresponding to different levels of speech understanding. For each EEG frequency band (delta, theta, alpha, beta, or low gamma), a model was built to predict the EEG signal from various speech representations: the envelope, spectrogram, phonemes, phonetic features, or a combination of phonetic Features and Spectrogram (FS). The same model was used for all subjects. The model predictions were then compared to the actual EEG of each subject at the different SNRs, and the prediction accuracy as a function of SNR was used to predict the SRT. RESULTS: The model based on the FS speech representation and the theta EEG band yielded the best SRT predictions, with a difference between the behavioral and objective SRT below 1 dB for 53% and below 2 dB for 89% of the subjects. CONCLUSION: A model including low- and higher-level speech features makes it possible to predict the speech reception threshold from the EEG of people listening to natural speech. It has potential applications in diagnostics of the auditory system.


Subject(s)
Acoustics , Auditory Cortex/physiology , Electroencephalography , Evoked Potentials, Auditory , Phonetics , Speech Intelligibility , Speech Perception , Speech Reception Threshold Test , Adult , Auditory Pathways/physiology , Female , Humans , Male , Predictive Value of Tests , Theta Rhythm , Young Adult
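Deriving an SRT from prediction accuracy as a function of SNR can be sketched as a sigmoid fit whose midpoint is read off as the objective threshold. Everything below is a hypothetical illustration: the SNR grid, the logistic parameterization, and the synthetic tracking scores are assumptions, not the study's values or procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(snr, floor, ceil, midpoint, slope):
    """Logistic growth of tracking accuracy with SNR; midpoint = estimated SRT."""
    return floor + (ceil - floor) / (1 + np.exp(-slope * (snr - midpoint)))

snrs = np.array([-12.5, -9.5, -6.5, -3.5, -0.5, 2.5])   # assumed test SNRs (dB)
true_srt = -7.0                                          # assumed ground truth
acc = sigmoid(snrs, 0.01, 0.12, true_srt, 0.8)           # synthetic tracking scores
acc = acc + 0.002 * np.random.default_rng(2).standard_normal(len(snrs))

# Fit the sigmoid and take its midpoint as the objective SRT estimate.
p0 = [0.0, 0.1, -5.0, 1.0]                               # rough initial guess
params, _ = curve_fit(sigmoid, snrs, acc, p0=p0, maxfev=10000)
srt_estimate = params[2]
```

With clean data the recovered midpoint lands close to the ground-truth SRT, mirroring the sub-1-dB agreement reported for most subjects.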
4.
J Neural Eng ; 11(3): 035002, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24838215

ABSTRACT

OBJECTIVE: Steady-state visually evoked potential (SSVEP)-based brain-computer interfaces (BCIs) allow healthy subjects to communicate. However, their dependence on gaze control prevents their use with severely disabled patients. Gaze-independent SSVEP-BCIs have been designed but have shown a drop in accuracy and have not been tested in brain-injured patients. In the present paper, we propose a novel independent SSVEP-BCI based on covert attention with an improved classification rate. We study the influence of feature-extraction algorithms and of the number of harmonics. Finally, we test online communication in healthy volunteers and in patients with locked-in syndrome (LIS). APPROACH: Twenty-four healthy subjects and six LIS patients participated in this study. An independent covert two-class SSVEP paradigm was used with a newly developed portable light-emitting-diode-based 'interlaced squares' stimulation pattern. MAIN RESULTS: Mean offline and online accuracies in healthy subjects were respectively 85 ± 2% and 74 ± 13%, with eight out of twelve subjects succeeding in communicating efficiently, with 80 ± 9% accuracy. Two out of six LIS patients reached an offline accuracy above chance level, indicating a response to a command. One out of four LIS patients could communicate online. SIGNIFICANCE: We have demonstrated the feasibility of online communication with a covert SSVEP paradigm that is truly independent of all neuromuscular functions. The potential clinical use of the presented BCI system as a diagnostic (i.e., detecting command-following) and communication tool for severely brain-injured patients will need to be further explored.


Subject(s)
Algorithms , Brain-Computer Interfaces , Communication Aids for Disabled , Quadriplegia/physiopathology , Quadriplegia/rehabilitation , Speech Disorders/rehabilitation , Visual Perception , Adult , Aged , Electroencephalography/instrumentation , Electroencephalography/methods , Equipment Design , Equipment Failure Analysis , Humans , Man-Machine Systems , Middle Aged , Neurofeedback/instrumentation , Photic Stimulation/instrumentation , Photic Stimulation/methods , Speech Disorders/physiopathology , Support Vector Machine , Treatment Outcome , User-Computer Interface , Young Adult
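The role of harmonics in SSVEP classification can be sketched with a simple frequency-domain decision rule: sum the spectral power at each stimulation frequency and its first few harmonics, and pick the class with more power. This FFT-based rule is a hypothetical minimal baseline, not the feature-extraction algorithms compared in the paper; the flicker rates, window length, and noise level are arbitrary.

```python
import numpy as np

def harmonic_power(x, fs, freq, n_harmonics=3):
    """Sum spectral power at a stimulation frequency and its harmonics."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    total = 0.0
    for h in range(1, n_harmonics + 1):
        idx = np.argmin(np.abs(freqs - h * freq))  # nearest FFT bin
        total += spectrum[idx]
    return total

def classify_ssvep(x, fs, f_left, f_right, n_harmonics=3):
    """Two-class decision: which stimulation frequency dominates the EEG?"""
    p_left = harmonic_power(x, fs, f_left, n_harmonics)
    p_right = harmonic_power(x, fs, f_right, n_harmonics)
    return "left" if p_left > p_right else "right"

# Synthetic trial: covert attention to the 'left' stimulus drives a response
# at its flicker rate, buried in noise.
rng = np.random.default_rng(3)
fs, dur = 256, 4.0                  # assumed sampling rate (Hz) and trial length (s)
t = np.arange(int(fs * dur)) / fs
f_left, f_right = 10.0, 13.0        # assumed LED flicker rates (Hz)
trial = np.sin(2 * np.pi * f_left * t) + 0.8 * rng.standard_normal(len(t))
decision = classify_ssvep(trial, fs, f_left, f_right)
```

Varying `n_harmonics` in such a rule is one simple way to probe the influence of the number of harmonics that the study investigates.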