Results 1 - 20 of 50
1.
Ear Hear ; 45(3): 721-729, 2024.
Article in English | MEDLINE | ID: mdl-38287477

ABSTRACT

OBJECTIVES: Background noise and linguistic violations have been shown to increase listening effort. The present study aims to examine the effects of the interaction between background noise and linguistic violations on subjective listening effort and frontal theta oscillations during effortful listening. DESIGN: Thirty-two normal-hearing listeners participated in this study. The linguistic violation was operationalized as sentences versus random words (strings). Behavioral and electroencephalography data were collected while participants listened to sentences and strings in background noise at different signal to noise ratios (SNRs) (-9, -6, -3, 0 dB), maintained them in memory for about 3 sec in the presence of background noise, and then chose the correct sequence of words from a base matrix of words. RESULTS: Results showed interaction effects of SNR and speech type on effort ratings. Although strings were inherently more effortful than sentences, decreasing SNR from 0 to -9 dB (in 3 dB steps) increased effort ratings more for sentences than for strings at each step, suggesting a more pronounced effect of noise on sentence processing than on string processing at low SNRs. Results also showed a significant interaction between SNR and speech type on frontal theta event-related synchronization during the retention interval. This interaction indicated that strings exhibited higher frontal theta event-related synchronization than sentences at an SNR of 0 dB, suggesting increased verbal working memory demand for strings under challenging listening conditions. CONCLUSIONS: The study demonstrated that the interplay between linguistic violation and background noise shapes perceived effort and cognitive load during speech comprehension under challenging listening conditions. The differential impact of noise on processing sentences versus strings highlights the influential role of context and cognitive resource allocation in the processing of speech.


Subject(s)
Speech Perception , Humans , Noise , Linguistics , Hearing Tests , Memory, Short-Term
2.
Clin EEG Neurosci ; 55(2): 185-191, 2024 Mar.
Article in English | MEDLINE | ID: mdl-36945785

ABSTRACT

Background. Depressive disorder has been associated with altered oscillatory brain activity. The common methods to quantify oscillatory activity are the Fourier and wavelet transforms. Both methods have difficulties distinguishing synchronized oscillatory activity from nonrhythmic and large-amplitude artifacts. Here we propose a method called the self-synchronization index (SSI) to quantify synchronized oscillatory activities in neural data. The method considers temporal characteristics of neural oscillations, amplitude and cycles, to estimate the synchronization value for a specific frequency band. Method. The recorded electroencephalography (EEG) data of 45 depressed and 55 healthy individuals were used. The SSI method was applied to each EEG electrode filtered in the alpha frequency band (8-13 Hz). A multiple linear regression model was used to predict depression severity (Beck Depression Inventory-II scores) from alpha SSI values. Results. Patients with severe depression showed lower alpha SSI than those with moderate depression and healthy controls in all brain regions. Moreover, the alpha SSI values correlated negatively with depression severity in all brain regions. The regression model predicted depression severity from alpha SSI with significant performance. Conclusion. The findings support the SSI measure as a powerful tool for quantifying synchronous oscillatory activity. The data examined in this article support the idea that there is a strong link between the synchronization of alpha oscillatory neural activities and the level of depression. These findings yield an objective, quantitative approach to predicting depression severity.
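As a hedged illustration of the prediction step described above (not the SSI itself, whose computation is specific to the paper), the sketch below regresses depression severity on per-electrode alpha-band features. Alpha band power is used as a stand-in feature, and the sampling rate, subject counts and scores are illustrative assumptions, not the study's data.

```python
# Illustrative sketch: per-electrode alpha-band features + multiple linear regression
# predicting BDI-II scores. Alpha power is a stand-in for the paper's SSI measure.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def alpha_power(eeg, fs, lo=8.0, hi=13.0):
    """eeg: (n_electrodes, n_samples); returns one alpha feature per electrode."""
    f, pxx = welch(eeg, fs=fs, nperseg=min(eeg.shape[1], 2 * fs), axis=1)
    return pxx[:, (f >= lo) & (f <= hi)].mean(axis=1)

rng = np.random.default_rng(0)
fs, n_subjects, n_electrodes = 250, 100, 19          # assumed recording setup
features = np.array([alpha_power(rng.standard_normal((n_electrodes, 60 * fs)), fs)
                     for _ in range(n_subjects)])     # (subjects, electrodes)
bdi_scores = rng.integers(0, 45, n_subjects)          # toy BDI-II scores

r2 = cross_val_score(LinearRegression(), features, bdi_scores, cv=5, scoring="r2")
print(f"cross-validated R^2 = {r2.mean():.2f}")
```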


Subject(s)
Depressive Disorder , Electroencephalography , Humans , Electroencephalography/methods , Brain
3.
Eur J Neurosci ; 58(11): 4357-4370, 2023 12.
Article in English | MEDLINE | ID: mdl-37984406

ABSTRACT

Listening effort can be defined as a measure of the cognitive resources listeners use to perform a listening task. Various methods have been proposed to measure this effort, yet their reliability remains unestablished, a crucial step before their application in research or clinical settings. This study encompassed 32 participants undertaking speech-in-noise tasks across two sessions, approximately a week apart. They listened to sentences and word lists at varying signal-to-noise ratios (SNRs) (-9, -6, -3 and 0 dB), then retained them for roughly 3 s. We evaluated the test-retest reliability of self-reported effort ratings and of theta (4-7 Hz) and alpha (8-13 Hz) oscillatory power, previously suggested as neural markers of listening effort. Additionally, we examined the reliability of correct word percentages. Both relative and absolute reliability were assessed using intraclass correlation coefficients (ICC) and Bland-Altman analysis. We also computed the standard error of measurement (SEM) and the smallest detectable change (SDC). Our findings indicated heightened frontal midline theta power for word lists compared to sentences during the retention phase under high SNRs (0 dB, -3 dB), likely indicating a greater memory load for word lists. We observed an effect of SNR on alpha power in the right central region during the listening phase and on frontal theta power during the retention phase for sentences. Overall, the reliability analysis demonstrated satisfactory between-session reliability for correct word percentages and effort ratings. However, the neural measures (frontal midline theta power and right central alpha power) displayed substantial variability, even though group-level outcomes appeared consistent across sessions.
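The reliability statistics named above follow standard definitions, so a small sketch may help make them concrete: ICC(2,1) from a two-way decomposition, SEM = SD * sqrt(1 - ICC), and SDC = 1.96 * sqrt(2) * SEM. The subject count, score range and noise level in the example are assumptions, not the study's data.

```python
# Minimal sketch of ICC(2,1), SEM and SDC for two sessions of the same measure.
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """Two-way random, single-measures, absolute-agreement ICC.
    `scores` has shape (n_subjects, n_sessions)."""
    n, k = scores.shape
    grand_mean = scores.mean()
    ss_rows = k * ((scores.mean(axis=1) - grand_mean) ** 2).sum()   # between subjects
    ss_cols = n * ((scores.mean(axis=0) - grand_mean) ** 2).sum()   # between sessions
    ss_error = ((scores - grand_mean) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n)

rng = np.random.default_rng(0)
session1 = rng.normal(50, 10, 32)                 # e.g. effort ratings, session 1
session2 = session1 + rng.normal(0, 5, 32)        # session 2 with measurement noise
scores = np.column_stack([session1, session2])

icc = icc_2_1(scores)
sem = scores.std(ddof=1) * np.sqrt(1 - icc)       # pooled SD * sqrt(1 - ICC)
sdc = 1.96 * np.sqrt(2) * sem                     # smallest detectable change
print(f"ICC(2,1)={icc:.2f}, SEM={sem:.2f}, SDC={sdc:.2f}")
```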


Subject(s)
Listening Effort , Speech Perception , Humans , Self Report , Reproducibility of Results , Noise
4.
J Cogn Neurosci ; 35(8): 1301-1311, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37379482

ABSTRACT

The envelope of a speech signal is tracked by neural activity in the cerebral cortex. This cortical tracking occurs mainly in two frequency bands, theta (4-8 Hz) and delta (1-4 Hz). Tracking in the faster theta band has been mostly associated with lower-level acoustic processing, such as the parsing of syllables, whereas the slower tracking in the delta band relates to higher-level linguistic information of words and word sequences. However, much regarding the more specific association between cortical tracking and acoustic as well as linguistic processing remains to be uncovered. Here, we recorded EEG responses to both meaningful sentences and random word lists at different signal-to-noise ratios (SNRs) that lead to different levels of speech comprehension and listening effort. We then related the neural signals to the acoustic stimuli by computing the phase-locking value (PLV) between the EEG recordings and the speech envelope. We found that the PLV in the delta band increases with increasing SNR for sentences but not for random word lists, showing that the PLV in this frequency band reflects linguistic information. When attempting to disentangle the effects of SNR, speech comprehension, and listening effort, we observed a trend that the PLV in the delta band might reflect listening effort rather than the other two variables, although the effect was not statistically significant. In summary, our study shows that the PLV in the delta band reflects linguistic information and might be related to listening effort.
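The PLV used here is a standard quantity: the magnitude of the average phase difference between two band-limited signals. The sketch below computes a delta-band PLV between a toy EEG channel and a toy speech envelope; the filter design, sampling rate and signals are illustrative assumptions rather than the study's pipeline.

```python
# Minimal sketch: delta-band phase-locking value between EEG and a speech envelope.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def plv(eeg, envelope, fs, lo=1.0, hi=4.0):
    """Phase-locking value between two signals in a given band."""
    phase_eeg = np.angle(hilbert(bandpass(eeg, lo, hi, fs)))
    phase_env = np.angle(hilbert(bandpass(envelope, lo, hi, fs)))
    return np.abs(np.mean(np.exp(1j * (phase_eeg - phase_env))))

fs = 128                                          # assumed rate after downsampling
t = np.arange(0, 60, 1 / fs)
envelope = np.abs(np.sin(2 * np.pi * 2 * t))              # toy speech envelope
eeg = 0.5 * envelope + np.random.randn(t.size)            # toy EEG tracking the envelope
print(f"delta-band PLV = {plv(eeg, envelope, fs):.2f}")
```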


Subject(s)
Auditory Cortex , Speech Perception , Humans , Speech/physiology , Electroencephalography , Speech Perception/physiology , Auditory Cortex/physiology , Linguistics , Acoustic Stimulation
5.
IEEE Trans Biomed Eng ; 70(4): 1264-1273, 2023 04.
Article in English | MEDLINE | ID: mdl-36227816

ABSTRACT

OBJECTIVE: The purpose of this study was to investigate alpha power as an objective measure of effortful listening to continuous speech with scalp and ear-EEG. METHODS: Scalp and ear-EEG were recorded simultaneously during presentation of a 33-s news clip in the presence of 16-talker babble noise. Four different signal-to-noise ratios (SNRs) were used to manipulate task demand. The effects of changes in SNR were investigated on alpha event-related synchronization (ERS) and desynchronization (ERD). Alpha activity was extracted from scalp EEG using different referencing methods (common average and symmetrical bi-polar) in different regions of the brain (parietal and temporal) and from ear-EEG. RESULTS: Alpha ERS decreased with decreasing SNR (i.e., increasing task demand) in both scalp and ear-EEG. Alpha ERS was also positively correlated with behavioural performance, which was assessed with questions about the content of the speech. CONCLUSION: Alpha ERS/ERD appears better suited to tracking performance in a continuous-speech task than to tracking listening effort. SIGNIFICANCE: EEG alpha power during continuous speech may indicate how well the speech was perceived, and it can be measured with both scalp and ear-EEG.
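Alpha ERS/ERD is conventionally expressed as the percent change in band power relative to a baseline interval. The following sketch illustrates that computation for a single epoch; the window boundaries, sampling rate and spectral settings are assumptions for illustration, not the study's exact parameters.

```python
# Minimal sketch: alpha ERS/ERD as percent power change from a baseline window.
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo=8.0, hi=13.0):
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), fs))
    return pxx[(f >= lo) & (f <= hi)].mean()

def ers_erd(eeg, fs, baseline=(0.0, 1.0), task=(1.0, 4.0)):
    """Positive values = ERS (power increase), negative = ERD (power decrease)."""
    b0, b1 = (int(s * fs) for s in baseline)
    t0, t1 = (int(s * fs) for s in task)
    p_base = band_power(eeg[b0:b1], fs)
    p_task = band_power(eeg[t0:t1], fs)
    return 100.0 * (p_task - p_base) / p_base

fs = 250
eeg = np.random.randn(5 * fs)                     # toy single-channel EEG epoch
print(f"alpha ERS/ERD = {ers_erd(eeg, fs):.1f} %")
```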


Subject(s)
Scalp , Speech , Electroencephalography , Auditory Perception , Auscultation
6.
Front Neurosci ; 16: 932959, 2022.
Article in English | MEDLINE | ID: mdl-36017182

ABSTRACT

Objectives: Comprehension of speech in adverse listening conditions is challenging for hearing-impaired (HI) individuals. Noise reduction (NR) schemes in hearing aids (HAs) have demonstrated the capability to help HI individuals overcome these challenges. The objective of this study was to investigate the effect of NR processing (inactive, where the NR feature was switched off, vs. active, where the NR feature was switched on) on correlates of listening effort across two different background noise levels [+3 dB signal-to-noise ratio (SNR) and +8 dB SNR] by using a phase synchrony analysis of electroencephalogram (EEG) signals. Design: The EEG was recorded while 22 HI participants fitted with HAs performed a continuous speech-in-noise (SiN) task in the presence of background noise and a competing talker. The phase synchrony within eight regions of interest (ROIs) and four conventional EEG bands was computed by using a multivariate phase synchrony measure. Results: The results demonstrated that the activation of NR in HAs affects the EEG phase synchrony in the parietal ROI at low SNR differently than at high SNR. The relationship between the conditions of the listening task and phase synchrony in the parietal ROI was nonlinear. Conclusion: We showed that the activation of NR schemes in HAs can nonlinearly reduce correlates of listening effort as estimated by EEG-based phase synchrony. We contend that investigation of the phase synchrony within ROIs can reflect the effects of HAs in HI individuals in ecological listening conditions.

7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 531-534, 2021 11.
Article in English | MEDLINE | ID: mdl-34891349

ABSTRACT

Comprehension of speech in noise is a challenge for hearing-impaired (HI) individuals. Electroencephalography (EEG) provides a tool to investigate the effect of different speech signal-to-noise ratios (SNRs). Most EEG studies have focused on spectral power in well-defined frequency bands such as the alpha band. In this study, we investigate how local functional connectivity, i.e. functional connectivity within a localized region of the brain, is affected by two levels of SNR. Twenty-two HI participants performed a continuous speech-in-noise task at two different SNRs (+3 dB and +8 dB). The local connectivity within eight regions of interest was computed by using a multivariate phase synchrony measure on EEG data. The results showed that phase synchrony increased in the parietal and frontal areas as a response to increasing SNR. We contend that local connectivity measures can be used to discriminate between speech-evoked EEG responses at different SNRs.
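As a rough stand-in for the multivariate phase synchrony measure used in these two studies (which is not reproduced here), the sketch below summarizes within-ROI synchrony as the mean pairwise PLV across an ROI's channels; the band, channel count and data are assumptions.

```python
# Simplified stand-in for within-ROI phase synchrony: mean pairwise PLV over channels.
import numpy as np
from itertools import combinations
from scipy.signal import butter, sosfiltfilt, hilbert

def instantaneous_phase(x, lo, hi, fs, order=4):
    sos = butter(order, [lo, hi], btype="band", fs=fs, output="sos")
    return np.angle(hilbert(sosfiltfilt(sos, x)))

def roi_phase_synchrony(roi_data, fs, lo=8.0, hi=13.0):
    """roi_data: (n_channels, n_samples) EEG from one region of interest."""
    phases = np.array([instantaneous_phase(ch, lo, hi, fs) for ch in roi_data])
    plvs = [np.abs(np.mean(np.exp(1j * (phases[i] - phases[j]))))
            for i, j in combinations(range(len(phases)), 2)]
    return float(np.mean(plvs))

fs = 256
roi = np.random.randn(6, 30 * fs)                 # toy 6-channel parietal ROI, 30 s
print(f"alpha ROI synchrony = {roi_phase_synchrony(roi, fs):.2f}")
```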


Subject(s)
Speech Perception , Speech , Electroencephalography Phase Synchronization , Humans , Noise , Signal-To-Noise Ratio
8.
Ear Hear ; 42(6): 1590-1601, 2021.
Article in English | MEDLINE | ID: mdl-33950865

ABSTRACT

OBJECTIVES: The investigation of auditory cognitive processes recently moved from strictly controlled, trial-based paradigms toward the presentation of continuous speech. This also allows the investigation of listening effort on larger time scales (i.e., sustained listening effort). Here, we investigated the modulation of sustained listening effort by a noise reduction algorithm as applied in hearing aids in a listening scenario with noisy continuous speech. The investigated directional noise reduction algorithm mainly suppresses noise from the background. DESIGN: We recorded the pupil size and the EEG in 22 participants with hearing loss who listened to audio news clips in the presence of background multi-talker babble noise. We estimated how noise reduction (off, on) and signal-to-noise ratio (SNR; +3 dB, +8 dB) affect pupil size, the power in the parietal EEG alpha band (i.e., parietal alpha power), and behavioral performance. RESULTS: Our results show that noise reduction reduces pupil size, while there was no significant effect of the SNR. Importantly, we found interactions of SNR and noise reduction, suggesting that noise reduction reduces pupil size predominantly at the lower SNR. Parietal alpha power showed a similar yet nonsignificant pattern, with increased power under easier conditions. In line with the participants' reports that one of the two presented talkers was more intelligible, we found a reduced pupil size, increased parietal alpha power, and better performance when people listened to the more intelligible talker. CONCLUSIONS: We show that the modulation of sustained listening effort (e.g., by hearing aid noise reduction) as indicated by pupil size and parietal alpha power can be studied under more ecologically valid conditions. Based mainly on pupil size, we demonstrate that hearing aid noise reduction lowers sustained listening effort. Our study approximates real-world listening scenarios and evaluates the benefit of the signal processing found in a modern hearing aid.


Subject(s)
Hearing Aids , Hearing Loss , Speech Perception , Electroencephalography , Humans , Listening Effort , Speech Intelligibility
9.
Front Neurosci ; 15: 636060, 2021.
Article in English | MEDLINE | ID: mdl-33841081

ABSTRACT

OBJECTIVES: Previous research using non-invasive (magnetoencephalography, MEG) and invasive (electrocorticography, ECoG) neural recordings has demonstrated the progressive and hierarchical representation and processing of complex multi-talker auditory scenes in the auditory cortex. Early responses (<85 ms) in primary-like areas appear to represent the individual talkers with almost equal fidelity and are independent of attention in normal-hearing (NH) listeners. However, late responses (>85 ms) in higher-order non-primary areas selectively represent the attended talker with significantly higher fidelity than unattended talkers in NH and hearing-impaired (HI) listeners. Motivated by these findings, the objective of this study was to investigate the effect of a noise reduction scheme (NR) in a commercial hearing aid (HA) on the representation of complex multi-talker auditory scenes in distinct hierarchical stages of the auditory cortex by using high-density electroencephalography (EEG). DESIGN: We addressed this issue by investigating early (<85 ms) and late (>85 ms) EEG responses recorded in 34 HI subjects fitted with HAs. The HA noise reduction (NR) was either on or off while the participants listened to a complex auditory scene. Participants were instructed to attend to one of two simultaneous talkers in the foreground while multi-talker babble noise played in the background (+3 dB SNR). After each trial, a two-choice question about the content of the attended speech was presented. RESULTS: Using a stimulus reconstruction approach, our results suggest that the attention-related enhancement of neural representations of target and masker talkers located in the foreground, as well as suppression of the background noise in distinct hierarchical stages is significantly affected by the NR scheme. We found that the NR scheme contributed to the enhancement of the foreground and of the entire acoustic scene in the early responses, and that this enhancement was driven by better representation of the target speech. We found that the target talker in HI listeners was selectively represented in late responses. We found that use of the NR scheme resulted in enhanced representations of the target and masker speech in the foreground and a suppressed representation of the noise in the background in late responses. We found a significant effect of EEG time window on the strengths of the cortical representation of the target and masker. CONCLUSION: Together, our analyses of the early and late responses obtained from HI listeners support the existing view of hierarchical processing in the auditory cortex. Our findings demonstrate the benefits of a NR scheme on the representation of complex multi-talker auditory scenes in different areas of the auditory cortex in HI listeners.

10.
Entropy (Basel) ; 22(10)2020 Oct 03.
Article in English | MEDLINE | ID: mdl-33286893

ABSTRACT

We propose a new estimator to measure directed dependencies in time series. The dimensionality of the data is first reduced using a new non-uniform embedding technique, where the variables are ranked according to a weighted sum of the amount of new information and the improvement in prediction accuracy provided by the variables. Then, using a greedy approach, the most informative subsets are selected in an iterative way. The algorithm terminates when the highest ranked variable is not able to significantly improve the accuracy of the prediction as compared to that obtained using the already selected subsets. In a simulation study, we compare our estimator to existing state-of-the-art methods at different data lengths and directed dependency strengths. It is demonstrated that the proposed estimator has a significantly higher accuracy than existing methods, especially for the difficult case where the data are highly correlated and coupled. Moreover, we show that its rate of falsely detected directed dependencies due to instantaneous coupling effects is lower than that of existing measures. We also demonstrate the applicability of the proposed estimator on real intracranial electroencephalography data.
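A heavily simplified sketch of the greedy, prediction-driven selection at the heart of such non-uniform embedding is given below: lagged candidate variables are added one at a time as long as they reduce the cross-validated prediction error of the target. The actual estimator combines an information-based ranking with a significance-based stopping rule; the fixed improvement threshold and linear model here are only illustrative stand-ins.

```python
# Simplified greedy forward selection of lagged variables (non-uniform embedding in spirit).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def build_candidates(series, max_lag):
    """series: (n_samples, n_vars). Returns lagged candidates and the aligned target."""
    n, v = series.shape
    target = series[max_lag:, 0]                              # predict variable 0
    cands = {(var, lag): series[max_lag - lag:n - lag, var]
             for var in range(v) for lag in range(1, max_lag + 1)}
    return cands, target

def greedy_embedding(series, max_lag=3, min_gain=1e-3):
    cands, y = build_candidates(series, max_lag)
    selected_keys, selected_cols, best_err = [], [], np.inf
    while cands:
        scored = []
        for key, col in cands.items():
            X = np.column_stack(selected_cols + [col])
            err = -cross_val_score(LinearRegression(), X, y, cv=5,
                                   scoring="neg_mean_squared_error").mean()
            scored.append((err, key))
        err, key = min(scored)
        if best_err - err < min_gain:                         # no meaningful gain left
            break
        selected_keys.append(key)
        selected_cols.append(cands.pop(key))
        best_err = err
    return selected_keys                                      # list of (variable, lag)

rng = np.random.default_rng(1)
x = rng.standard_normal(500)
y = np.roll(x, 2) + 0.1 * rng.standard_normal(500)            # y driven by x at lag 2
print(greedy_embedding(np.column_stack([y, x])))
```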

11.
Front Neurosci ; 14: 846, 2020.
Article in English | MEDLINE | ID: mdl-33071722

ABSTRACT

OBJECTIVES: Selectively attending to a target talker while ignoring multiple interferers (competing talkers and background noise) is more difficult for hearing-impaired (HI) individuals compared to normal-hearing (NH) listeners. Such tasks also become more difficult as background noise levels increase. To overcome these difficulties, hearing aids (HAs) offer noise reduction (NR) schemes. The objective of this study was to investigate the effect of NR processing (inactive, where the NR feature was switched off, vs. active, where the NR feature was switched on) on the neural representation of speech envelopes across two different background noise levels [+3 dB signal-to-noise ratio (SNR) and +8 dB SNR] by using a stimulus reconstruction (SR) method. DESIGN: To explore how NR processing supports the listeners' selective auditory attention, we recruited 22 HI participants fitted with HAs. To investigate the interplay between NR schemes, background noise, and neural representation of the speech envelopes, we used electroencephalography (EEG). The participants were instructed to listen to a target talker in front while ignoring a competing talker in front in the presence of multi-talker background babble noise. RESULTS: The results show that the neural representation of the attended speech envelope was enhanced by the active NR scheme for both background noise levels. The neural representation of the attended speech envelope at lower (+3 dB) SNR was shifted, approximately by 5 dB, toward the higher (+8 dB) SNR when the NR scheme was turned on. The neural representation of the ignored speech envelope was modulated by the NR scheme and was mostly enhanced in the conditions with more background noise. The neural representation of the background noise was modulated (i.e., reduced) by the NR scheme and was significantly reduced in the conditions with more background noise. The neural representation of the net sum of the ignored acoustic scene (ignored talker and background babble) was not modulated by the NR scheme but was significantly reduced in the conditions with a reduced level of background noise. Taken together, we showed that the active NR scheme enhanced the neural representation of both the attended and the ignored speakers and reduced the neural representation of background noise, while the net sum of the ignored acoustic scene was not enhanced. CONCLUSION: Altogether our results support the hypothesis that the NR schemes in HAs serve to enhance the neural representation of speech and reduce the neural representation of background noise during a selective attention task. We contend that these results provide a neural index that could be useful for assessing the effects of HAs on auditory and cognitive processing in HI populations.
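Stimulus reconstruction in this sense is typically a backward model: a time-lagged linear (ridge) regression mapping multichannel EEG back to the attended speech envelope, scored by the correlation between the reconstructed and actual envelopes. The sketch below shows that idea; the lag range, regularization and data shapes are illustrative assumptions, not the study's exact decoder.

```python
# Minimal sketch of a backward (stimulus-reconstruction) decoder with ridge regression.
import numpy as np
from numpy.linalg import solve

def lag_matrix(eeg, max_lag):
    """Stack time-lagged copies of each channel; eeg is (n_samples, n_channels)."""
    n, c = eeg.shape
    X = np.zeros((n, c * (max_lag + 1)))
    for lag in range(max_lag + 1):
        X[lag:, lag * c:(lag + 1) * c] = eeg[:n - lag]
    return X

def train_decoder(eeg, envelope, max_lag=32, alpha=1e2):
    X = lag_matrix(eeg, max_lag)
    return solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ envelope)

def reconstruction_score(eeg, envelope, weights, max_lag=32):
    rec = lag_matrix(eeg, max_lag) @ weights
    return np.corrcoef(rec, envelope)[0, 1]          # Pearson r = reconstruction accuracy

fs, n_ch = 64, 32
eeg = np.random.randn(60 * fs, n_ch)                 # toy 60 s of 32-channel EEG
envelope = eeg[:, 0] * 0.3 + np.random.randn(60 * fs)        # toy attended envelope
w = train_decoder(eeg, envelope)
print(f"reconstruction accuracy r = {reconstruction_score(eeg, envelope, w):.2f}")
```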

12.
Ear Hear ; 41 Suppl 1: 39S-47S, 2020.
Article in English | MEDLINE | ID: mdl-33105258

ABSTRACT

To increase the ecological validity of outcomes from laboratory evaluations of hearing and hearing devices, it is desirable to introduce more realistic outcome measures in the laboratory. This article presents and discusses three outcome measures that have been designed to go beyond traditional speech-in-noise measures and better reflect realistic everyday challenges. The outcome measures reviewed are: the Sentence-final Word Identification and Recall (SWIR) test, which measures working memory performance while listening to speech in noise at ceiling performance; a neural tracking method that produces a quantitative measure of selective speech attention in noise; and pupillometry, which measures changes in pupil dilation to assess listening effort while listening to speech in noise. According to evaluation data, the SWIR test provides a sensitive measure in situations where speech perception performance might be unaffected. Similarly, pupil dilation has also shown sensitivity in situations where traditional speech-in-noise measures are insensitive. Changes in working memory capacity and effort mobilization were found at positive signal-to-noise ratios (SNRs), that is, at SNRs that might reflect everyday situations. Using stimulus reconstruction, it has been demonstrated that neural tracking is a robust method for determining to what degree a listener is attending to a specific talker in a typical cocktail party situation. Using both established and commercially available noise reduction schemes, data have further shown that all three measures are sensitive to variation in SNR. In summary, the new outcome measures seem suitable for testing hearing and hearing devices under more realistic and demanding everyday conditions than traditional speech-in-noise tests.


Subject(s)
Communication , Outcome Assessment, Health Care , Speech Perception , Cognition , Humans , Noise
13.
PLoS One ; 15(7): e0235782, 2020.
Article in English | MEDLINE | ID: mdl-32649733

ABSTRACT

Individuals with hearing loss allocate cognitive resources to comprehend noisy speech in everyday life scenarios. Such a scenario could be when they are exposed to ongoing speech and need to sustain their attention for a rather long period of time, which requires listening effort. Two well-established physiological methods that have been found sensitive for identifying changes in listening effort are pupillometry and electroencephalography (EEG). However, these measurements have mainly been used for momentary, evoked or episodic effort. The aim of this study was to investigate how sustained effort manifests in pupillometry and EEG, using continuous speech with varying signal-to-noise ratio (SNR). Eight hearing-aid users participated in this exploratory study and performed a continuous speech-in-noise task. The speech material consisted of 30-second continuous streams that were presented from loudspeakers to the right and left side of the listener (±30° azimuth) in the presence of 4-talker background noise (+180° azimuth). The participants were instructed to attend either to the right or the left speaker and ignore the other, in a randomized order, under two SNR conditions: 0 dB and -5 dB (the level difference between the target and the competing talker). The effects of SNR on listening effort were explored objectively using pupillometry and EEG. The results showed larger mean pupil dilation and decreased EEG alpha power in the parietal lobe during the more effortful condition. This study demonstrates that both measures are sensitive to changes in SNR during continuous speech.


Subject(s)
Hearing Aids , Pupil/physiology , Speech Perception , Aged , Aged, 80 and over , Auditory Perception , Electroencephalography , Female , Hearing , Hearing Tests , Humans , Male , Middle Aged , Signal-To-Noise Ratio
14.
Front Neurosci ; 13: 1294, 2019.
Article in English | MEDLINE | ID: mdl-31920477

ABSTRACT

People with hearing impairment typically have difficulties following conversations in multi-talker situations. Previous studies have shown that utilizing eye gaze to steer audio through beamformers could be a solution in those situations. Recent studies have shown that in-ear electrodes that capture electrooculography in the ear (EarEOG) can estimate the eye-gaze relative to the head when the head is fixed. Head movement can be estimated using motion sensors around the ear to create an estimate of the absolute eye-gaze in the room. In this study, an experiment was designed to mimic a multi-talker situation in order to study and model the EarEOG signal when participants attempted to follow a conversation. Eleven hearing-impaired participants were presented speech from the DAT speech corpus (Bo Nielsen et al., 2014), with three targets positioned at -30°, 0° and +30° azimuth. The experiment was run in two setups: one where the participants had their head fixed in a chinrest, and one where they were free to move their head. The participants' task was to focus their visual attention on an LED-indicated target that changed regularly. A model was developed for the relative eye-gaze estimation, taking saccades, fixations, head movement and drift from the electrode-skin half-cell potential into account. This model explained 90.5% of the variance of the EarEOG when the head was fixed, and 82.6% when the head was free. The absolute eye-gaze was also estimated using that model. When the head was fixed, the estimation of the absolute eye-gaze was reliable. However, due to hardware issues, the estimation of the absolute eye-gaze when the head was free had a variance that was too large to reliably estimate the attended target. Overall, this study demonstrated the potential of estimating absolute eye-gaze using EarEOG and motion sensors around the ear.

15.
Scand J Pain ; 2(3): 95-104, 2018 Jul 01.
Article in English | MEDLINE | ID: mdl-29913746

ABSTRACT

During the last decades there has been a tremendous development of non-invasive methods for the assessment of brain activity following visceral pain. Improved neurophysiological and brain imaging techniques have vastly increased our understanding of the central processing of gastrointestinal sensation and pain in both healthy volunteers and patients suffering from gastrointestinal disorders. The techniques used are functional magnetic resonance imaging (fMRI), positron emission tomography (PET), electroencephalography (EEG)/evoked brain potentials (EPs), magnetoencephalography (MEG), single photon emission computed tomography (SPECT), and multimodal combinations of these techniques. The use of these techniques has brought new insight into the complex brain processes underlying pain perception, involving a number of subcortical and cortical regions, and has paved new ways in our understanding of acute and chronic pain. The pathways are dynamic, with a delicate balance between facilitatory and inhibitory pain mechanisms, and with modulation of the response to internal or external stressors with a high degree of plasticity. Hence, the ultimate goal in imaging of pain is to follow the stimulus response throughout the neuraxis. Brain activity measured by fMRI is based on subtracting regional changes in blood oxygenation during a resting condition from the signal during a stimulus condition, and has high spatial resolution but low temporal resolution. SPECT and PET are nuclear imaging techniques in which radiolabeled molecules are injected and the distribution, density and activity of receptors in the brain are visualized, allowing not only assessment of brain activity but also study of receptor sites. EEG is based on assessment of electrical activity in the brain; recordings of the resting EEG and of evoked potentials following an external stimulus are used to study normal visceral pain processing, alterations of pain processing in different patient groups, and the effect of pharmacological intervention. EEG has high temporal resolution but relatively poor spatial resolution, which, however, can to some extent be overcome by applying inverse modelling algorithms and signal decomposition procedures. MEG is based on recording the magnetic fields produced by electrical currents in the brain, has high spatial resolution, and is especially suitable for the study of cortical activation. The treatment of chronic abdominal pain is often ineffective and disappointing, which motivates the search for optimized treatments based on a better understanding of the underlying pain mechanisms. Applying the recent improvements in neuroimaging to the visceral pain system may in the near future contribute substantially to our understanding of the functional and structural pathophysiology underlying chronic visceral pain disorders, and pave the road for optimized individual and mechanism-based treatments. The purpose of this review is to give a state-of-the-art overview of these methods, with focus on EEG, and especially of the advantages and limitations of the individual methods in clinical gastrointestinal pain research, including examples from relevant studies.

16.
Brain Res ; 1664: 37-47, 2017 06 01.
Article in English | MEDLINE | ID: mdl-28366617

ABSTRACT

Studies of the antidepressant vortioxetine have demonstrated beneficial effects on cognitive dysfunction associated with depression. To elucidate how vortioxetine modulates neuronal activity during cognitive processing, we investigated the effects of vortioxetine (3 and 10 mg/kg) in rats performing an auditory oddball (deviant target) task. We investigated neuronal activity in target vs. non-target tone responses in vehicle-treated animals using electroencephalographic (EEG) recordings. Furthermore, we characterized task performance and EEG changes in target tone responses for vortioxetine vs. controls. Quantification of event-related potentials (ERPs) was supplemented by analyses of spectral power and inter-trial phase-locking. The assessed brain regions included the prelimbic cortex, the hippocampus, and the thalamus. Compared to correct rejection of non-target tones, correct target tone responses elicited increased EEG power in all regions. Additionally, neuronal synchronization was increased in vehicle-treated rats during both early and late ERP responses to target tones. This indicates a significant consistency of local phases across trials during high attentional load. During early sensory processing, vortioxetine increased both thalamic and frontal synchronized gamma band activity and EEG power in all brain regions measured. Finally, vortioxetine increased the amplitude of late hippocampal P3-like ERPs, the rodent correlate of the human P300 ERP. These findings suggest differential effects of vortioxetine during early sensory registration and late endogenous processing of auditory discrimination. The strengthened P3-like ERP response may relate to the pro-cognitive profile of vortioxetine in rodents. Further investigations are warranted to explore the mechanism by which vortioxetine increases network synchronization during attentive and cognitive processing.


Subject(s)
Antidepressive Agents/administration & dosage , Attention/drug effects , Brain/drug effects , Brain/physiology , Cognition/drug effects , Evoked Potentials, Auditory/drug effects , Piperazines/administration & dosage , Sulfides/administration & dosage , Acoustic Stimulation , Animals , Attention/physiology , Auditory Perception/drug effects , Auditory Perception/physiology , Cerebral Cortex/drug effects , Cerebral Cortex/physiology , Cognition/physiology , Electroencephalography , Hippocampus/drug effects , Hippocampus/physiology , Male , Rats, Sprague-Dawley , Thalamus/drug effects , Thalamus/physiology , Vortioxetine
17.
J Neural Eng ; 14(3): 036020, 2017 06.
Article in English | MEDLINE | ID: mdl-28384124

ABSTRACT

OBJECTIVE: Conventional, multi-channel scalp electroencephalography (EEG) allows the identification of the attended speaker in concurrent-listening ('cocktail party') scenarios. This implies that EEG might provide valuable information for complementing hearing aids with some form of EEG-based neuro-feedback. APPROACH: To investigate whether a listener's attentional focus can be detected from single-channel hearing-aid-compatible EEG configurations, we recorded EEG from three electrodes inside the ear canal ('in-Ear-EEG') and additionally from 64 electrodes on the scalp. In two different concurrent listening tasks, participants (n = 7) were fitted with individualized in-Ear-EEG pieces and were asked to attend either to one of two dichotically-presented, concurrent tone streams or to one of two diotically-presented, concurrent audiobooks. A forward encoding model was trained to predict the EEG response at single EEG channels. MAIN RESULTS: Each individual participant's attentional focus could be detected from the single-channel EEG response recorded from short-distance configurations consisting only of a single in-Ear-EEG electrode and an adjacent scalp-EEG electrode. The differences in neural responses to attended and ignored stimuli were consistent in morphology (i.e. polarity and latency of components) across subjects. SIGNIFICANCE: In sum, our findings show that the EEG response from a single-channel, hearing-aid-compatible configuration provides valuable information for identifying a listener's focus of attention.
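A forward encoding model in this sense maps the stimulus (e.g., its envelope) to the response at a single EEG channel via time-lagged regression; comparing how well the attended versus the ignored stream predicts the measured EEG is one way to read out attention. The sketch below illustrates that idea under assumed lags, regularization and toy data; it is not the study's exact model.

```python
# Minimal sketch of a forward (encoding) model for a single EEG channel.
import numpy as np

def lagged(stimulus, max_lag):
    n = stimulus.size
    X = np.zeros((n, max_lag + 1))
    for lag in range(max_lag + 1):
        X[lag:, lag] = stimulus[:n - lag]
    return X

def fit_trf(stimulus, eeg_channel, max_lag=32, alpha=1.0):
    X = lagged(stimulus, max_lag)
    return np.linalg.solve(X.T @ X + alpha * np.eye(max_lag + 1), X.T @ eeg_channel)

def prediction_score(stimulus, eeg_channel, trf, max_lag=32):
    pred = lagged(stimulus, max_lag) @ trf
    return np.corrcoef(pred, eeg_channel)[0, 1]       # correlation of predicted vs. measured EEG

fs = 64
attended = np.abs(np.random.randn(60 * fs))           # toy attended-stream envelope
ignored = np.abs(np.random.randn(60 * fs))            # toy ignored-stream envelope
eeg = 0.4 * attended + np.random.randn(60 * fs)       # toy single in-ear EEG channel
trf = fit_trf(attended, eeg)
print("attended r =", round(prediction_score(attended, eeg, trf), 2),
      "| ignored r =", round(prediction_score(ignored, eeg, trf), 2))
```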


Subject(s)
Attention/physiology , Electroencephalography/methods , Evoked Potentials, Auditory/physiology , Pattern Recognition, Physiological/physiology , Pitch Perception/physiology , Speech Perception/physiology , Speech Production Measurement/methods , Adult , Algorithms , Female , Humans , Male , Middle Aged , Reproducibility of Results , Sensitivity and Specificity
18.
J Neural Eng ; 14(2): 026012, 2017 04.
Article in English | MEDLINE | ID: mdl-28177924

ABSTRACT

OBJECTIVE: Active auditory oddball paradigms are simple tone discrimination tasks used to study the P300 deflection of event-related potentials (ERPs). These ERPs may be quantified by time-frequency analysis. As auditory stimuli cause early high-frequency and late low-frequency ERP oscillations, the continuous wavelet transform (CWT) is often chosen for decomposition due to its multi-resolution properties. However, as the conventional CWT traditionally applies only one mother wavelet to represent the entire spectrum, the time-frequency resolution is not optimal across all scales. To account for this, we developed and validated a novel method specifically refined to analyse P300-like ERPs in rats. APPROACH: An adapted CWT (aCWT) was implemented to preserve high time-frequency resolution across all scales by employing multiple wavelets operating at different scales. First, decomposition of simulated ERPs was illustrated using the classical CWT and the aCWT. Next, the two methods were applied to EEG recordings obtained from the prefrontal cortex in rats performing a two-tone auditory discrimination task. MAIN RESULTS: While only early ERP frequency changes between responses to target and non-target tones were detected by the CWT, both early and late changes were successfully described with strong accuracy by the aCWT in rat ERPs. Increased frontal power and phase synchrony were observed, particularly within the theta and gamma frequency bands, during deviant tones. SIGNIFICANCE: The study suggests superior performance of the aCWT over the CWT in terms of detailed quantification of the time-frequency properties of ERPs. Our methodological investigation indicates that accurate and complete assessment of time-frequency components of short-time neural signals is feasible with the novel analysis approach, which may be advantageous for the characterisation of several types of evoked potentials, particularly in rodents.
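For orientation, the conventional single-wavelet CWT that the aCWT is compared against can be sketched as a bank of complex Morlet wavelets convolved with the ERP; the aCWT itself is the paper's contribution and is not reproduced here. The wavelet width and frequency grid below are illustrative assumptions.

```python
# Minimal sketch: conventional Morlet-based CWT giving a time-frequency power map.
import numpy as np
from scipy.signal import fftconvolve

def morlet_cwt(signal, fs, freqs, n_cycles=6):
    """Return time-frequency power, shape (len(freqs), len(signal))."""
    power = np.zeros((len(freqs), signal.size))
    for i, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)                    # temporal width
        t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))        # energy normalisation
        power[i] = np.abs(fftconvolve(signal, wavelet, mode="same")) ** 2
    return power

fs = 1000
t = np.arange(0, 1, 1 / fs)
# toy ERP: early 40 Hz burst followed by late 5 Hz activity
erp = np.sin(2 * np.pi * 40 * t) * (t < 0.2) + np.sin(2 * np.pi * 5 * t) * (t > 0.5)
freqs = np.arange(2, 80, 2)                                     # 2-78 Hz grid
tf_power = morlet_cwt(erp, fs, freqs)
print(tf_power.shape)                                           # (39, 1000)
```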


Subject(s)
Algorithms , Auditory Perception/physiology , Electroencephalography/methods , Event-Related Potentials, P300/physiology , Evoked Potentials, Auditory/physiology , Wavelet Analysis , Acoustic Stimulation/methods , Animals , Male , Rats , Rats, Sprague-Dawley , Reproducibility of Results , Sensitivity and Specificity
19.
Pancreas ; 46(2): 170-176, 2017 02.
Article in English | MEDLINE | ID: mdl-28060186

ABSTRACT

OBJECTIVES: Many patients with painful chronic pancreatitis (CP) have an insufficient effect of treatment, and the prevalence of adverse effects is high. Consequently, alternatives to conventional management are needed. We aimed to study the effect of acupuncture in painful CP. METHODS: This was a prospective, single-blinded, randomized crossover trial. Fifteen patients with CP were assigned to a session of acupuncture followed by sham stimulation or vice versa. Patients rated clinical pain scores daily on a 0 to 10 visual analogue scale (VAS) and completed the Patient Global Impression of Change. For mechanistic linkage, resting state electroencephalograms were recorded and quantified by spectral power analysis to explore effects on central pain processing. RESULTS: Acupuncture, compared with sham stimulation, caused more pain relief (2.0 ± 1.5 VAS vs 0.7 ± 0.8 VAS; P = 0.009). The effect, however, was short-lived, and after 1 week of follow-up there was no difference in clinical pain scores between groups (P = 1.0) or in the rating of the Patient Global Impression of Change (P = 0.8). Electroencephalogram spectral power distributions were comparable between sham and acupuncture (all P > 0.6). CONCLUSIONS: The study presents proof-of-concept for the analgesic effect of acupuncture in pancreatic pain. Although the effect was short-lasting, the framework may be used to conceptualize future trials of acupuncture in visceral pain.


Subject(s)
Acupuncture Therapy/methods , Pain Measurement/methods , Pancreatitis, Chronic/complications , Visceral Pain/therapy , Adult , Aged , Cross-Over Studies , Female , Humans , Male , Middle Aged , Prospective Studies , Single-Blind Method , Treatment Outcome , Visceral Pain/diagnosis , Visceral Pain/etiology
20.
J Diabetes Complications ; 31(2): 400-406, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27884662

ABSTRACT

Diabetes mellitus (DM) is associated with structural and functional changes of the central nervous system. We used electroencephalography (EEG) to assess resting state cortical activity and explored associations to relevant clinical features. Multichannel resting state EEG was recorded in 27 healthy controls and 24 patients with longstanding DM and signs of autonomic dysfunction. The power distribution based on wavelet analysis was summarized into frequency bands with corresponding topographic mapping. Source localization analysis was applied to explore the electrical cortical sources underlying the EEG. Compared to controls, DM patients had an overall decreased EEG power in the delta (1-4 Hz) and gamma (30-45 Hz) bands. Topographic analysis revealed that these changes were confined to the frontal region for the delta band and to central cortical areas for the gamma band. Source localization analysis identified sources with reduced activity in the left postcentral gyrus for the gamma band and in the right superior parietal lobule for the alpha1 (8-10 Hz) band. DM patients with clinical signs of autonomic dysfunction and gastrointestinal symptoms had evidence of altered resting state cortical processing. This may reflect metabolic, vascular or neuronal changes associated with diabetes.


Subject(s)
Central Nervous System Diseases/physiopathology , Central Nervous System/physiopathology , Cerebral Cortex/physiopathology , Diabetes Mellitus, Type 1/complications , Diabetes Mellitus, Type 2/complications , Diabetic Neuropathies/physiopathology , Adult , Autonomic Nervous System/physiopathology , Autonomic Nervous System Diseases/complications , Autonomic Nervous System Diseases/physiopathology , Brain Mapping , Central Nervous System Diseases/complications , Electroencephalography , Female , Frontal Lobe/physiopathology , Gastrointestinal Diseases/complications , Gastrointestinal Diseases/physiopathology , Gastrointestinal Tract/innervation , Gastrointestinal Tract/physiopathology , Humans , Male , Middle Aged , Parietal Lobe/physiopathology , Somatosensory Cortex/physiopathology