Results 1 - 20 of 40
1.
Hear Res ; 439: 108879, 2023 11.
Article in English | MEDLINE | ID: mdl-37826916

ABSTRACT

We demonstrate how the structure of auditory cortex (AC) can be investigated by combining computational modelling with advanced optimisation methods. We optimise a well-established auditory cortex model by means of an evolutionary algorithm. The model describes AC in terms of multiple core, belt, and parabelt fields. The optimisation process finds the optimal connections between individual fields of AC so that the model reproduces experimental magnetoencephalographic (MEG) data. In the current study, these data comprised the auditory event-related fields (ERFs) recorded from a human subject in an MEG experiment where the stimulus-onset interval between consecutive tones was varied. The quality of the match between synthesised and experimental waveforms was 98%. The results suggest that neural activity caused by feedback connections plays a particularly important role in shaping ERF morphology. Further, ERFs reflect activity of the entire auditory cortex, and response adaptation due to stimulus repetition emerges from a complete reorganisation of AC dynamics rather than a reduction of activity in discrete sources. Our findings constitute the first stage in establishing a new non-invasive method for uncovering the organisation of the human auditory cortex.
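
The paper's cortical model and optimisation pipeline are not reproduced here; the sketch below only illustrates the general idea of evolutionary optimisation against a target waveform. The synthesised waveform is a toy weighted sum of fixed basis responses, and all names (`synthesise`, `goodness`, the basis parameters) are illustrative stand-ins rather than the authors' code.

```python
# Minimal sketch of evolutionary optimisation against a target waveform.
# The "model" is a toy: the synthesised waveform is a weighted sum of fixed
# damped-oscillation basis responses, standing in for the cortical simulation.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.5, 500)                       # 0-500 ms

# Toy basis responses with different frequencies (Hz) and decay times (s).
basis = np.stack([np.exp(-t / tau) * np.sin(2 * np.pi * f * t)
                  for f, tau in [(8, 0.05), (12, 0.08), (5, 0.12), (20, 0.03)]])

def synthesise(weights):
    """Stand-in for running the model with a given set of connection weights."""
    return weights @ basis

def goodness(weights, target):
    """Match between synthesised and target waveform (squared correlation, %)."""
    r = np.corrcoef(synthesise(weights), target)[0, 1]
    return 100.0 * r ** 2

# A hidden "true" parameter set generates the experimental target waveform.
target = synthesise(np.array([1.0, -0.6, 0.3, 0.2]))
target += 0.05 * rng.standard_normal(t.size)

# (mu + lambda) evolutionary loop: mutate, evaluate, keep the fittest.
pop = rng.standard_normal((40, 4))
for _ in range(200):
    children = pop + 0.1 * rng.standard_normal(pop.shape)    # Gaussian mutation
    candidates = np.vstack([pop, children])
    scores = np.array([goodness(w, target) for w in candidates])
    pop = candidates[np.argsort(scores)[-40:]]               # best 40 survive

print(f"best match: {goodness(pop[-1], target):.1f}%")
```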


Subject(s)
Auditory Cortex , Animals , Humans , Auditory Cortex/physiology , Brain Mapping , Magnetoencephalography , Macaca mulatta/physiology , Computer Simulation , Evoked Potentials, Auditory , Auditory Perception/physiology , Acoustic Stimulation
2.
PLoS One ; 18(4): e0280566, 2023.
Article in English | MEDLINE | ID: mdl-37079604

ABSTRACT

Lifetime experiences and lifestyle, such as education and engaging in leisure activities, contribute to cognitive reserve (CR), which delays the onset of age-related cognitive decline. Word-finding difficulties have been identified as the most prominent cognitive problem in older age. Whether CR mitigates age-related word-finding difficulties is currently unknown. Using picture-naming and verbal fluency tasks, this online study aimed to investigate the effect of CR on word-finding ability in younger, middle-aged, and older adults. All participants were right-handed, monolingual speakers of British English. CR for both the period preceding and coinciding with the COVID-19 pandemic was measured through years of education and questionnaires concerning the frequency of engagement in cognitive, leisure, and physical activities. Linear mixed-effects models demonstrated that older adults were less accurate at action and object naming than middle-aged and younger adults. Higher CR in middle age predicted higher accuracy in action and object naming. Hence, high CR might not only be beneficial in older age, but also in middle age. This benefit will depend on multiple factors: the underlying cognitive processes, individual general cognitive processing abilities, and whether task demands are high. Moreover, younger and middle-aged adults displayed faster object naming than older adults. There were no differences between CR scores for the period preceding and coinciding with the pandemic. However, the effect of the COVID-19 pandemic on CR and, subsequently, on word-finding ability might only become apparent in the long term. This article discusses the implications of CR in healthy ageing as well as suggestions for conducting language production studies online.
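
As a rough illustration of the kind of analysis described (a linear mixed-effects model of naming accuracy with age group, CR and their interaction as fixed effects and a per-participant random intercept), the following sketch fits such a model on simulated data with statsmodels. All variable names and effect sizes are invented for illustration and do not reproduce the study's data or exact model specification.

```python
# Illustrative linear mixed-effects analysis of naming accuracy: fixed effects
# of age group, CR and their interaction, random intercept per participant.
# Data are simulated; variable names and effect sizes are ours, not the study's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
groups, n_per_group, n_items = ["young", "middle", "older"], 60, 20

rows = []
for g_idx, group in enumerate(groups):
    for p in range(n_per_group):
        cr = rng.normal(0.0, 1.0)                 # standardised CR score
        subj_intercept = rng.normal(0.0, 0.05)    # participant-level variation
        for _ in range(n_items):
            acc = (0.9 - 0.05 * g_idx + 0.02 * cr + subj_intercept
                   + rng.normal(0.0, 0.1))
            rows.append(dict(participant=f"{group}{p}", age_group=group,
                             cr=cr, accuracy=acc))

data = pd.DataFrame(rows)
model = smf.mixedlm("accuracy ~ age_group * cr", data,
                    groups=data["participant"])
print(model.fit().summary())
```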


Subject(s)
COVID-19 , Cognitive Reserve , Healthy Aging , Middle Aged , Humans , Aged , Pandemics , Neuropsychological Tests , COVID-19/epidemiology , Brain
3.
J Gerontol B Psychol Sci Soc Sci ; 78(5): 777-788, 2023 05 11.
Article in English | MEDLINE | ID: mdl-36546399

ABSTRACT

The World Health Organization (WHO) aims to improve our understanding of the factors that promote healthy cognitive aging and combat dementia. Aging theories that consider individual aging trajectories are of paramount importance to meet the WHO's aim. Both the revised Scaffolding Theory of Aging and Cognition (STAC-r) and Cognitive Reserve theory (CR) offer theoretical frameworks for the mechanisms of cognitive aging and the positive influence of an engaged lifestyle. STAC-r additionally considers adverse factors, such as depression. The two theories explain different though partly overlapping aspects of cognitive aging. Currently, it is unclear where the theories agree and differ and what compensation mechanism of age-related cognitive decline might be better explained by either STAC-r, CR, or by both. This review provides an essential discussion of the similarities and differences between these prominent cognitive aging theories, their implications for intervention methods and neurodegenerative disease, and significant shortcomings that have not yet been addressed. This review will direct researchers to common insights in the field and to intervention targets and testable hypotheses for future research. Future research should investigate the potential use of STAC-r in neurodegenerative diseases and provide clarity as to what combination of factors build CR, including their relative importance and when in life they are most effective.


Subject(s)
Cognitive Aging , Cognitive Reserve , Neurodegenerative Diseases , Humans , Brain , Cognition , Aging/psychology , Life Style
4.
Biol Cybern ; 116(4): 475-499, 2022 08.
Article in English | MEDLINE | ID: mdl-35718809

ABSTRACT

Adaptation, the reduction of neuronal responses by repetitive stimulation, is a ubiquitous feature of auditory cortex (AC). It is not clear what causes adaptation, but short-term synaptic depression (STSD) is a potential candidate for the underlying mechanism. In such a case, adaptation can be directly linked with the way AC produces context-sensitive responses such as mismatch negativity and stimulus-specific adaptation observed on the single-unit level. We examined this hypothesis via a computational model based on AC anatomy, which includes serially connected core, belt, and parabelt areas. The model replicates the event-related field (ERF) of the magnetoencephalogram as well as ERF adaptation. The model dynamics are described by excitatory and inhibitory state variables of cell populations, with the excitatory connections modulated by STSD. We analysed the system dynamics by linearising the firing rates and solving the STSD equation using time-scale separation. This allows for characterisation of AC dynamics as a superposition of damped harmonic oscillators, so-called normal modes. We show that repetition suppression of the N1m is due to a mixture of causes, with stimulus repetition modifying both the amplitudes and the frequencies of the normal modes. In this view, adaptation results from a complete reorganisation of AC dynamics rather than a reduction of activity in discrete sources. Further, both the network structure and the balance between excitation and inhibition contribute significantly to the rate with which AC recovers from adaptation. This lifetime of adaptation is longer in the belt and parabelt than in the core area, despite the time constants of STSD being spatially homogeneous. Finally, we critically evaluate the use of a single exponential function to describe recovery from adaptation.
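
As a minimal illustration of how short-term synaptic depression produces adaptation to a tone train, the sketch below integrates a single mean-field resource variable (a Tsodyks-Markram-style reduction) driven by repeated tones; the per-tone peak drive decreases with repetition and recovers over the stimulus-onset interval. The parameters and the single-synapse simplification are ours, not the paper's multi-field AC model.

```python
# Mean-field sketch of short-term synaptic depression (STSD): a resource
# variable q depletes in proportion to presynaptic activity and recovers with
# time constant tau_rec, so the effective drive (q * rate) adapts with tone
# repetition and recovers over the stimulus-onset interval. Parameters are
# illustrative, not the paper's.
import numpy as np

dt, tau_rec, U = 0.001, 0.8, 0.5          # step (s), recovery (s), usage
soa, tone_dur, n_tones = 0.5, 0.1, 8      # onset interval, tone duration (s)

time = np.arange(0.0, n_tones * soa + 1.0, dt)
rate = np.zeros_like(time)                # presynaptic firing rate (a.u.)
for k in range(n_tones):
    rate[(time >= k * soa) & (time < k * soa + tone_dur)] = 50.0

q = np.ones_like(time)                    # fraction of available resources
for i in range(1, time.size):
    dq = ((1.0 - q[i - 1]) / tau_rec - U * q[i - 1] * rate[i - 1]) * dt
    q[i] = q[i - 1] + dq

drive = q * rate                          # depressing synaptic drive
peaks = [drive[(time >= k * soa) & (time < k * soa + tone_dur)].max()
         for k in range(n_tones)]
print("per-tone peak drive:", np.round(peaks, 1))   # adapts, then saturates
```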


Subject(s)
Auditory Cortex , Acoustic Stimulation , Adaptation, Physiological/physiology , Auditory Cortex/physiology , Neurons/physiology
5.
Front Hum Neurosci ; 15: 721574, 2021.
Article in English | MEDLINE | ID: mdl-34867238

ABSTRACT

An unpredictable stimulus elicits a stronger event-related response than a high-probability stimulus. This differential in response magnitude is termed the mismatch negativity (MMN). Over the past decade, it has become increasingly popular to explain the MMN in terms of predictive coding, a proposed general principle for the way the brain realizes Bayesian inference when it interprets sensory information. This perspective article is a reminder that the issue of MMN generation is far from settled, and that an alternative model in terms of adaptation continues to lurk in the wings. The adaptation model has been discounted because of the unrealistic and simplistic fashion in which it tends to be set up. Here, simulations of auditory cortex incorporating a modern version of the adaptation model are presented. These show that locally operating short-term synaptic depression accounts both for adaptation due to stimulus repetition and for MMN responses. This happens even in cases where adaptation has been ruled out as an explanation of the MMN (e.g., in the stimulus omission paradigm and the multi-standard control paradigm). Simulation models that would demonstrate the viability of predictive coding in a similarly multifaceted way are currently missing from the literature, and the reason for this is discussed in light of the current results.
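
A deliberately crude illustration of the adaptation account: two frequency channels with depressing gains are driven by an oddball sequence, and the rarely stimulated deviant channel stays unadapted, so deviant minus standard yields an MMN-like difference. This caricature is ours and does not reproduce the cortical simulations reported above; the probabilities and depression parameters are arbitrary.

```python
# Toy adaptation account of MMN: two frequency channels, each with its own
# depressing gain. The standard tone drives channel 0 on most trials, so that
# channel adapts; the rare deviant drives the fresher channel 1 and therefore
# evokes a larger response. Parameters are arbitrary and purely illustrative.
import numpy as np

rng = np.random.default_rng(2)
n_trials, p_deviant = 400, 0.1
tau_trials, U = 8.0, 0.4          # recovery (in trials) and per-trial depletion

gain = np.ones(2)                 # adaptive gain of the two channels
resp_std, resp_dev = [], []
for _ in range(n_trials):
    ch = 1 if rng.random() < p_deviant else 0
    response = gain[ch]                          # response of the driven channel
    (resp_dev if ch == 1 else resp_std).append(response)
    gain[ch] *= (1.0 - U)                        # depress the driven channel
    gain += (1.0 - gain) / tau_trials            # both channels recover a little

mmn = np.mean(resp_dev) - np.mean(resp_std)
print(f"standard: {np.mean(resp_std):.2f}  deviant: {np.mean(resp_dev):.2f}  "
      f"MMN-like difference: {mmn:.2f}")
```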

6.
Psychophysiology ; 58(4): e13769, 2021 04.
Article in English | MEDLINE | ID: mdl-33475173

ABSTRACT

Auditory event-related fields (ERFs) measured with magnetoencephalography (MEG) are useful for studying the neuronal underpinnings of auditory cognition in human cortex. They have a highly subject-specific morphology, although certain characteristic deflections (e.g., P1m, N1m, and P2m) can be identified in most subjects. Here, we explore the reason for this subject-specificity through a combination of MEG measurements and computational modeling of auditory cortex. We test whether ERF subject-specificity can predominantly be explained in terms of each subject having an individual cortical gross anatomy, which modulates the MEG signal, or whether individual cortical dynamics is also at play. To our knowledge, this is the first time that tools to address this question are being presented. The effects of anatomical and dynamical variation on the MEG signal are simulated in a model describing the core-belt-parabelt structure of the auditory cortex, and with the dynamics based on the leaky-integrator neuron model. The experimental and simulated ERFs are characterized in terms of the N1m amplitude, latency, and width. Also, we examine the waveform grand-averaged across subjects, and the standard deviation of this grand average. The results show that the intersubject variability of the ERF arises out of both the anatomy and the dynamics of auditory cortex being specific to each subject. Moreover, our results suggest that the latency variation of the N1m is largely related to subject-specific dynamics. The findings are discussed in terms of how learning, plasticity, and sound detection are reflected in the auditory ERFs. The notion of the grand-averaged ERF is critically evaluated.
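
For orientation, here is a minimal leaky-integrator sketch of the kind of dynamics referred to above: each "column" has excitatory and inhibitory state variables obeying tau * du/dt = -u + W g(u) + input, integrated with Euler, and two columns are chained to hint at a core-to-belt feedforward link. The weights, time constants and the two-column reduction are illustrative assumptions, not the paper's network.

```python
# Minimal leaky-integrator sketch: excitatory (E) and inhibitory (I) state
# variables per "column", tau * du/dt = -u + W.g(u) + input, Euler-integrated.
# Two columns are chained (core -> belt); all parameters are illustrative.
import numpy as np

def g(u):
    """Saturating, non-negative firing-rate nonlinearity."""
    return np.tanh(np.clip(u, 0.0, None))

dt, tau = 0.001, 0.03                     # s
time = np.arange(0.0, 0.6, dt)
stim = np.where((time > 0.05) & (time < 0.15), 1.0, 0.0)   # 100 ms tone to core

# State order: [core_E, core_I, belt_E, belt_I]
W = np.array([[ 0.8, -1.2,  0.0,  0.0],   # core E <- core E, core I
              [ 1.0, -0.2,  0.0,  0.0],   # core I <- core E, core I
              [ 1.1,  0.0,  0.6, -1.2],   # belt E <- core E, belt E, belt I
              [ 0.0,  0.0,  1.0, -0.2]])  # belt I <- belt E, belt I
u = np.zeros(4)
trace = np.zeros((time.size, 4))
for i in range(time.size):
    inp = np.array([stim[i], 0.0, 0.0, 0.0])
    u = u + dt / tau * (-u + W @ g(u) + inp)
    trace[i] = u

print("peak core E:", trace[:, 0].max().round(3),
      " peak belt E:", trace[:, 2].max().round(3))
```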


Subject(s)
Auditory Cortex/anatomy & histology , Auditory Cortex/physiology , Biological Variation, Population/physiology , Computer Simulation , Evoked Potentials, Auditory/physiology , Magnetoencephalography , Neural Networks, Computer , Humans
7.
Biol Cybern ; 113(3): 321-345, 2019 06.
Article in English | MEDLINE | ID: mdl-30820663

ABSTRACT

Event-related fields of the magnetoencephalogram are triggered by sensory stimuli and appear as a series of waves extending hundreds of milliseconds after stimulus onset. They reflect the processing of the stimulus in cortex and have a highly subject-specific morphology. However, we still have an incomplete picture of how event-related fields are generated, what the various waves signify, and why they are so subject-specific. Here, we focus on this problem through the lens of a computational model which describes auditory cortex in terms of interconnected cortical columns as part of hierarchically placed fields of the core, belt, and parabelt areas. We develop an analytical approach arriving at solutions to the system dynamics in terms of normal modes: damped harmonic oscillators emerging out of the coupled excitation and inhibition in the system. Each normal mode is a global feature which depends on the anatomical structure of the entire auditory cortex. Further, normal modes are fundamental dynamical building blocks, in that the activity of each cortical column represents a combination of all normal modes. This approach allows us to replicate a typical auditory event-related response as a weighted sum of the single-column activities. Our work offers an alternative to the view that the event-related field arises out of spatially discrete, local generators. Rather, there is only a single generator process distributed over the entire network of the auditory cortex. We present predictions for testing to what degree subject-specificity is due to cross-subject variations in dynamical parameters rather than in the cortical surface morphology.
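
The paper's derivation is not reproduced here; the following is a compact sketch, in our own notation, of the standard normal-mode picture the abstract refers to: linearise the coupled excitation-inhibition dynamics around a fixed point, take the eigen-decomposition, and read each complex eigenpair as a damped harmonic oscillator whose mixtures make up every column's activity.

```latex
% Our notation, not the paper's. Linearise around a fixed point u_0:
\begin{align*}
  \tau\,\dot{\mathbf{u}} &= -\mathbf{u} + \mathbf{W}\,g(\mathbf{u}) + \mathbf{I}(t)
  \quad\Longrightarrow\quad
  \dot{\mathbf{x}} \approx \mathbf{A}\,\mathbf{x} + \tilde{\mathbf{I}}(t),
  \qquad
  \mathbf{A} = \tfrac{1}{\tau}\Bigl(\mathbf{W}\,\operatorname{diag}\!\bigl(g'(\mathbf{u}_0)\bigr) - \mathbb{1}\Bigr),\\
  \mathbf{A}\,\mathbf{v}_k &= \lambda_k\,\mathbf{v}_k,
  \qquad \lambda_k = -\gamma_k \pm i\,\omega_k
  \quad\text{(each eigenpair: a damped harmonic oscillator, i.e.\ a normal mode)},\\
  x_j(t) &= \sum_k c_{jk}\, e^{-\gamma_k t}\cos\!\bigl(\omega_k t + \varphi_{jk}\bigr)
  \quad\text{(free response of column } j\text{: a superposition of all modes)}.
\end{align*}
```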


Subject(s)
Auditory Cortex/physiology , Computer Simulation , Evoked Potentials, Auditory/physiology , Models, Neurological , Animals , Humans , Magnetoencephalography
8.
J Neurophysiol ; 120(2): 703-719, 2018 08 01.
Article in English | MEDLINE | ID: mdl-29718805

ABSTRACT

Receptive field (RF) models are an important tool for deciphering neural responses to sensory stimuli. The two currently popular RF models are multifilter linear-nonlinear (LN) models and context models. Models are, however, never correct, and they rely on assumptions to keep them simple enough to be interpretable. As a consequence, different models describe different stimulus-response mappings, which may or may not be good approximations of real neural behavior. In the current study, we take up two tasks: 1) we introduce new ways to estimate context models with realistic nonlinearities, that is, with logistic and exponential functions, and 2) we evaluate context models and multifilter LN models in terms of how well they describe recorded data from complex cells in cat primary visual cortex. Our results, based on single-spike information and correlation coefficients, indicate that context models outperform corresponding multifilter LN models of equal complexity (measured in terms of number of parameters), with the best increase in performance being achieved by the novel context models. Consequently, our results suggest that the multifilter LN-model framework is suboptimal for describing the behavior of complex cells: the context-model framework is clearly superior while still providing interpretable quantifications of neural behavior. NEW & NOTEWORTHY We used data from complex cells in primary visual cortex to estimate a wide variety of receptive field models from two frameworks that have previously not been compared with each other. The models included traditionally used multifilter linear-nonlinear models and novel variants of context models. Using mutual information and correlation coefficients as performance measures, we showed that context models are superior for describing complex cells and that the novel context models performed the best.
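
For readers unfamiliar with the baseline being compared against, here is a minimal single-filter LN sketch with a logistic output nonlinearity: simulate Gaussian stimuli and Bernoulli spikes from a known filter, recover the filter with a spike-triggered average, and estimate the nonlinearity by binning. This is not the paper's context-model estimation; all names and parameters are illustrative.

```python
# Minimal single-filter LN sketch: a linear filter followed by a logistic
# nonlinearity mapping filter output to spike probability. The filter is
# recovered with a spike-triggered average (valid up to scale for Gaussian
# stimuli and monotonic nonlinearities); the nonlinearity is binned.
import numpy as np

rng = np.random.default_rng(3)
n_frames, dim = 100_000, 16

true_filter = np.sin(np.linspace(0, np.pi, dim))          # the "RF"
true_filter /= np.linalg.norm(true_filter)

stim = rng.standard_normal((n_frames, dim))               # Gaussian stimulus
drive = stim @ true_filter
p_spike = 1.0 / (1.0 + np.exp(-(2.0 * drive - 1.0)))      # logistic nonlinearity
spikes = rng.random(n_frames) < p_spike

# 1) Linear stage: spike-triggered average.
sta = stim[spikes].mean(axis=0)
sta /= np.linalg.norm(sta)
print("filter recovery (cosine similarity):", round(float(sta @ true_filter), 3))

# 2) Nonlinear stage: P(spike | filter output), estimated in projection deciles.
proj = stim @ sta
bins = np.quantile(proj, np.linspace(0, 1, 11))
which = np.clip(np.digitize(proj, bins) - 1, 0, 9)
rate_per_bin = [spikes[which == b].mean() for b in range(10)]
print("P(spike) per projection decile:", np.round(rate_per_bin, 2))
```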


Subject(s)
Models, Neurological , Neurons/physiology , Visual Cortex/physiology , Action Potentials , Animals , Cats , Linear Models , Neural Networks, Computer , Nonlinear Dynamics , Photic Stimulation , Reproducibility of Results , Visual Fields
9.
Brain Behav ; 7(9): e00789, 2017 09.
Article in English | MEDLINE | ID: mdl-28948083

ABSTRACT

INTRODUCTION: We examined which brain areas are involved in the comprehension of acoustically distorted speech using an experimental paradigm where the same distorted sentence can be perceived at different levels of intelligibility. This change in intelligibility occurs via a single intervening presentation of the intact version of the sentence, and the effect lasts at least on the order of minutes. Since the acoustic structure of the distorted stimulus is kept fixed and only intelligibility is varied, this allows one to study brain activity related to speech comprehension specifically. METHODS: In a functional magnetic resonance imaging (fMRI) experiment, a stimulus set contained a block of six distorted sentences. This was followed by the intact counterparts of the sentences, after which the sentences were presented in distorted form again. A total of 18 such sets were presented to 20 human subjects. RESULTS: The blood oxygenation level dependent (BOLD)-responses elicited by the distorted sentences which came after the disambiguating, intact sentences were contrasted with the responses to the sentences presented before disambiguation. This revealed increased activity in the bilateral frontal pole, the dorsal anterior cingulate/paracingulate cortex, and the right frontal operculum. Decreased BOLD responses were observed in the posterior insula, Heschl's gyrus, and the posterior superior temporal sulcus. CONCLUSIONS: The brain areas that showed BOLD-enhancement for increased sentence comprehension have been associated with executive functions and with the mapping of incoming sensory information to representations stored in episodic memory. Thus, the comprehension of acoustically distorted speech may be associated with the engagement of memory-related subsystems. Further, activity in the primary auditory cortex was modulated by prior experience, possibly in a predictive coding framework. Our results suggest that memory biases the perception of ambiguous sensory information toward interpretations that have the highest probability to be correct based on previous experience.


Subject(s)
Auditory Cortex/physiology , Brain/physiology , Comprehension/physiology , Speech Perception/physiology , Speech/physiology , Acoustic Stimulation/methods , Adult , Auditory Cortex/diagnostic imaging , Brain/diagnostic imaging , Brain Mapping/methods , Female , Humans , Magnetic Resonance Imaging , Male , Young Adult
10.
Hear Res ; 339: 195-210, 2016 09.
Article in English | MEDLINE | ID: mdl-27473504

ABSTRACT

Spectro-temporal receptive fields (STRFs) are thought to provide descriptive images of the computations performed by neurons along the auditory pathway. However, their validity can be questioned because they rely on a set of assumptions that are probably not fulfilled by real neurons exhibiting contextual effects, that is, nonlinear interactions in the time or frequency dimension that cannot be described with a linear filter. We used a novel approach to investigate how a variety of contextual effects, due to facilitating nonlinear interactions and synaptic depression, affect different STRF models, and if these effects can be captured with a context field (CF). Contextual effects were incorporated in simulated networks of spiking neurons, allowing one to define the true STRFs of the neurons. This, in turn, made it possible to evaluate the performance of each STRF model by comparing the estimations with the true STRFs. We found that currently used STRF models are particularly poor at estimating inhibitory regions. Specifically, contextual effects make estimated STRFs dependent on stimulus density in a contrasting fashion: inhibitory regions are underestimated at lower densities while artificial inhibitory regions emerge at higher densities. The CF was found to provide a solution to this dilemma, but only when it is used together with a generalized linear model. Our results therefore highlight the limitations of the traditional STRF approach and provide useful recipes for how different STRF models and stimuli can be used to arrive at reliable quantifications of neural computations in the presence of contextual effects. The results therefore push the purpose of STRF analysis from simply finding an optimal stimulus toward describing context-dependent computations of neurons along the auditory pathway.
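
As background for the STRF models being evaluated, the sketch below performs a common linear STRF estimate by ridge regression on lagged spectrogram features, using a made-up stimulus and a made-up "true" STRF. It does not implement the paper's context field or its GLM variants; the stimulus statistics, the penalty value and all names are illustrative.

```python
# Basic linear STRF estimate via ridge regression on lagged spectrogram
# features. Stimulus, "true" STRF and parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(4)
n_t, n_freq, n_lag = 20_000, 12, 10

spec = rng.standard_normal((n_t, n_freq))                  # random spectrogram

true_strf = np.zeros((n_lag, n_freq))
true_strf[2, 5] = 1.0                                       # excitatory region
true_strf[5, 7] = -0.6                                      # inhibitory region

# Design matrix of lagged spectrogram slices (rows: time; columns: lag x freq).
X = np.zeros((n_t, n_lag * n_freq))
for lag in range(n_lag):
    X[lag:, lag * n_freq:(lag + 1) * n_freq] = spec[:n_t - lag]

resp = X @ true_strf.ravel() + 0.5 * rng.standard_normal(n_t)   # noisy response

lam = 10.0                                                  # ridge penalty
strf_hat = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ resp)
strf_hat = strf_hat.reshape(n_lag, n_freq)

corr = np.corrcoef(strf_hat.ravel(), true_strf.ravel())[0, 1]
print("correlation between estimated and true STRF:", round(float(corr), 3))
```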


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Models, Neurological , Acoustic Stimulation , Action Potentials/physiology , Animals , Auditory Pathways , Computer Simulation , Humans , Linear Models , Neuronal Plasticity , Neurons/physiology , Nonlinear Dynamics
11.
Neuroimage ; 129: 214-223, 2016 Apr 01.
Article in English | MEDLINE | ID: mdl-26774614

ABSTRACT

Efficient speech perception requires the mapping of highly variable acoustic signals to distinct phonetic categories. How the brain overcomes this many-to-one mapping problem has remained unresolved. To infer the cortical location, latency, and dependency on attention of categorical speech sound representations in the human brain, we measured stimulus-specific adaptation of neuromagnetic responses to sounds from a phonetic continuum. The participants attended to the sounds while performing a non-phonetic listening task and, in a separate recording condition, ignored the sounds while watching a silent film. Neural adaptation indicative of phoneme category selectivity was found only during the attentive condition in the pars opercularis (POp) of the left inferior frontal gyrus, where the degree of selectivity correlated with the ability of the participants to categorize the phonetic stimuli. Importantly, these category-specific representations were activated at an early latency of 115-140 ms, which is compatible with the speed of perceptual phonetic categorization. Further, concurrent functional connectivity was observed between POp and posterior auditory cortical areas. These novel findings suggest that when humans attend to speech, the left POp mediates phonetic categorization through integration of auditory and motor information via the dorsal auditory stream.


Subject(s)
Prefrontal Cortex/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Female , Humans , Magnetoencephalography , Male , Signal Processing, Computer-Assisted , Young Adult
12.
Neuroimage ; 125: 131-143, 2016 Jan 15.
Article in English | MEDLINE | ID: mdl-26477651

ABSTRACT

Recent studies have shown that acoustically distorted sentences can be perceived as either unintelligible or intelligible depending on whether one has previously been exposed to the undistorted, intelligible versions of the sentences. This allows studying processes specifically related to speech intelligibility since any change between the responses to the distorted stimuli before and after the presentation of their undistorted counterparts cannot be attributed to acoustic variability but, rather, to the successful mapping of sensory information onto memory representations. To estimate how the complexity of the message is reflected in speech comprehension, we applied this rapid change in perception to behavioral and magnetoencephalography (MEG) experiments using vowels, words and sentences. In the experiments, stimuli were initially presented to the subject in a distorted form, after which undistorted versions of the stimuli were presented. Finally, the original distorted stimuli were presented once more. The resulting increase in intelligibility observed for the second presentation of the distorted stimuli depended on the complexity of the stimulus: vowels remained unintelligible (behaviorally measured intelligibility 27%) whereas the intelligibility of the words increased from 19% to 45% and that of the sentences from 31% to 65%. This increase in the intelligibility of the degraded stimuli was reflected as an enhancement of activity in the auditory cortex and surrounding areas at early latencies of 130-160 ms. In the same regions, increasing stimulus complexity attenuated mean currents at latencies of 130-160 ms whereas at latencies of 200-270 ms the mean currents increased. These modulations in cortical activity may reflect feedback from top-down mechanisms enhancing the extraction of information from speech. The behavioral results suggest that memory-driven expectancies can have a significant effect on speech comprehension, especially in acoustically adverse conditions where the bottom-up information is decreased.


Subject(s)
Brain/physiology , Comprehension/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Female , Humans , Magnetoencephalography , Male , Signal Processing, Computer-Assisted , Speech Intelligibility/physiology , Young Adult
13.
Neural Comput ; 28(2): 327-53, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26654206

ABSTRACT

Robust representations of sounds with a complex spectrotemporal structure are thought to emerge in hierarchically organized auditory cortex, but the computational advantage of this hierarchy remains unknown. Here, we used computational models to study how such hierarchical structures affect temporal binding in neural networks. We equipped individual units in different types of feedforward networks with local memory mechanisms storing recent inputs and observed how this affected the ability of the networks to process stimuli context dependently. Our findings illustrate that these local memories stack up in hierarchical structures and hence allow network units to exhibit selectivity to spectral sequences longer than the time spans of the local memories. We also illustrate that short-term synaptic plasticity is a potential local memory mechanism within the auditory cortex, and we show that it can bring robustness to context dependence against variation in the temporal rate of stimuli, while introducing nonlinearities to response profiles that are not well captured by standard linear spectrotemporal receptive field models. The results therefore indicate that short-term synaptic plasticity might provide hierarchically structured auditory cortex with computational capabilities important for robust representations of spectrotemporal patterns.


Subject(s)
Memory/physiology , Models, Neurological , Neural Networks, Computer , Neurons/physiology , Action Potentials/physiology , Association Learning , Brain/cytology , Brain/physiology , Computer Simulation , Humans , Synapses/physiology
14.
Eur J Neurosci ; 41(5): 615-30, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25728180

ABSTRACT

Incoming sounds are represented in the context of preceding events, and this requires a memory mechanism that integrates information over time. Here, it was demonstrated that response adaptation, the suppression of neural responses due to stimulus repetition, might reflect a computational solution that auditory cortex uses for temporal integration. Adaptation is observed in single-unit measurements as two-tone forward masking effects and as stimulus-specific adaptation (SSA). In non-invasive observations, the amplitude of the auditory N1m response adapts strongly with stimulus repetition, and it is followed by response recovery (the so-called mismatch response) to rare deviant events. The current computational simulations described the serial core-belt-parabelt structure of auditory cortex, and included synaptic adaptation, the short-term, activity-dependent depression of excitatory corticocortical connections. It was found that synaptic adaptation is sufficient for columns to respond selectively to tone pairs and complex tone sequences. These responses were defined as combination sensitive, thus reflecting temporal integration, when a strong response to a stimulus sequence was coupled with weaker responses both to the time-reversed sequence and to the isolated sequence elements. The temporal complexity of the stimulus seemed to be reflected in the proportion of combination-sensitive columns across the different regions of the model. Our results suggest that while synaptic adaptation produces facilitation and suppression effects, including SSA and the modulation of the N1m response, its functional significance may actually be in its contribution to temporal integration. This integration seems to benefit from the serial structure of auditory cortex.
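
The operational definition of combination sensitivity used above can be written as a small check, sketched below; the ratio threshold and the example response values are ours and purely illustrative.

```python
# The definition above as a small check: a column counts as combination
# sensitive when its response to a sequence clearly exceeds its responses to
# the time-reversed sequence and to each element presented alone. The ratio
# threshold and the example numbers are illustrative.
def is_combination_sensitive(r_sequence, r_reversed, r_elements, ratio=2.0):
    return (r_sequence >= ratio * r_reversed and
            all(r_sequence >= ratio * r for r in r_elements))

# Responds at 1.0 to "A then B", weakly to "B then A" and to A or B alone:
print(is_combination_sensitive(1.0, 0.3, [0.25, 0.4]))   # True
print(is_combination_sensitive(1.0, 0.8, [0.25, 0.4]))   # False (reversal too strong)
```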


Subject(s)
Adaptation, Physiological , Auditory Cortex/physiology , Models, Neurological , Synapses/physiology , Animals , Humans , Time
15.
Front Comput Neurosci ; 7: 152, 2013.
Article in English | MEDLINE | ID: mdl-24223549

ABSTRACT

The ability to represent and recognize naturally occurring sounds such as speech depends not only on spectral analysis carried out by the subcortical auditory system but also on the ability of the cortex to bind spectral information over time. In primates, these temporal binding processes are mirrored as selective responsiveness of neurons to species-specific vocalizations. Here, we used computational modeling of auditory cortex to investigate how selectivity to spectrally and temporally complex stimuli is achieved. A set of 208 microcolumns was arranged in a serial core-belt-parabelt structure documented in both humans and animals. Stimulus material comprised multiple consonant-vowel (CV) pseudowords. Selectivity to the spectral structure of the sounds was commonly found in all regions of the model (N = 122 columns out of 208), and this selectivity was only weakly affected by manipulating the structure and dynamics of the model. In contrast, temporal binding was rarer (N = 39), found mostly in the belt and parabelt regions. Thus, the serial core-belt-parabelt structure of auditory cortex is necessary for temporal binding. Further, adaptation due to synaptic depression-rendering the cortical network malleable by stimulus history-was crucial for the emergence of neurons sensitive to the temporal structure of the stimuli. Both spectral selectivity and temporal binding required that a sufficient proportion of the columns interacted in an inhibitory manner. The model and its structural modifications had a small-world structure (i.e., columns formed clusters and were within short node-to-node distances from each other). However, simulations showed that a small-world structure is not a necessary condition for spectral selectivity and temporal binding to emerge. In summary, this study suggests that temporal binding arises out of (1) the serial structure typical to the auditory cortex, (2) synaptic adaptation, and (3) inhibitory interactions between microcolumns.
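
The small-world property mentioned above (high clustering combined with short node-to-node distances, relative to a size- and edge-matched random graph) can be checked with standard graph metrics; the sketch below does so with networkx on a generic 208-node Watts-Strogatz graph standing in for the columnar network, whose actual connectivity is not reproduced here.

```python
# Check of the small-world property: clustering well above a size- and
# edge-matched random graph, with a comparable average path length. A 208-node
# Watts-Strogatz graph stands in for the columnar network here.
import networkx as nx

G = nx.connected_watts_strogatz_graph(n=208, k=8, p=0.1, seed=0)
R = nx.gnm_random_graph(n=G.number_of_nodes(), m=G.number_of_edges(), seed=0)
if not nx.is_connected(R):                   # path length needs a connected graph
    R = R.subgraph(max(nx.connected_components(R), key=len)).copy()

C, C_rand = nx.average_clustering(G), nx.average_clustering(R)
L, L_rand = (nx.average_shortest_path_length(G),
             nx.average_shortest_path_length(R))

print(f"clustering  {C:.3f}  vs random  {C_rand:.3f}")
print(f"path length {L:.2f}  vs random  {L_rand:.2f}")
print("small-world-like:", C > 2 * C_rand and L < 2 * L_rand)
```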

16.
Neuroscientist ; 18(6): 602-12, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22492193

ABSTRACT

The auditory system codes spatial locations in a way that deviates from the spatial representations found in other modalities. This difference is especially striking in the cortex, where neurons form topographical maps of visual and tactile space but where auditory space is represented through a population rate code. In this hemifield code, sound source location is represented in the activity of two widely tuned opponent populations, one tuned to the right and the other to the left side of auditory space. Scientists are only beginning to uncover how this coding strategy adapts to various spatial processing demands. This review presents the current understanding of auditory spatial processing in the cortex. To this end, the authors consider how various implementations of the hemifield code may exist within the auditory cortex and how these may be modulated by the stimulation and task context. As a result, a coherent set of neural strategies for auditory spatial processing emerges.
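
A toy instance of the opponent-channel (hemifield) idea described above: two broadly tuned populations, one preferring leftward and one rightward azimuths, with source location read out from the difference of their rates. The sigmoidal tuning curves, noise level and the linear read-out gain are illustrative assumptions, not a fitted model of cortical responses.

```python
# Toy opponent-channel (hemifield) code: two broadly tuned populations, one
# preferring the left and one the right hemifield; source azimuth is read out
# from the difference of their firing rates. Tuning and read-out gain are
# illustrative; the read-out is only approximately linear near the midline.
import numpy as np

def channel_rates(azimuth_deg):
    """Sigmoidal tuning of the left- and right-preferring populations."""
    x = azimuth_deg / 30.0
    return 1.0 / (1.0 + np.exp(x)), 1.0 / (1.0 + np.exp(-x))   # (left, right)

def decode(left, right, gain=60.0):
    """Read azimuth out of the opponent rate difference."""
    return gain * (right - left)

rng = np.random.default_rng(5)
for az in (-60, -20, 0, 20, 60):
    left, right = channel_rates(az)
    left += rng.normal(0.0, 0.02)            # a little rate noise
    right += rng.normal(0.0, 0.02)
    print(f"true {az:+4d} deg -> decoded {decode(left, right):+6.1f} deg")
```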


Subject(s)
Auditory Cortex/physiology , Sound Localization/physiology , Space Perception/physiology , Auditory Perception/physiology , Functional Laterality/physiology , Humans
17.
Neuroimage ; 60(2): 1036-45, 2012 Apr 02.
Article in English | MEDLINE | ID: mdl-22289805

ABSTRACT

Human speech perception is highly resilient to acoustic distortions. In addition to distortions from external sound sources, degradation of the acoustic structure of the sound itself can substantially reduce the intelligibility of speech. The degradation of the internal structure of speech happens, for example, when the digital representation of the signal is impoverished by reducing its amplitude resolution. Further, the perception of speech is also influenced by whether the distortion is transient, coinciding with speech, or is heard continuously in the background. However, the complex effects of the acoustic structure and continuity of the distortion on the cortical processing of degraded speech are unclear. In the present magnetoencephalography study, we investigated how the cortical processing of degraded speech sounds as measured through the auditory N1m response is affected by variation of both the distortion type (internal, external) and the continuity of distortion (transient, continuous). We found that when the distortion was continuous, the N1m was significantly delayed, regardless of the type of distortion. The N1m amplitude, in turn, was affected only when speech sounds were degraded with transient internal distortion, which resulted in larger response amplitudes. The results suggest that external and internal distortions of speech result in divergent patterns of activity in the auditory cortex, and that the effects are modulated by the temporal continuity of the distortion.


Subject(s)
Auditory Cortex/physiology , Phonetics , Speech Perception/physiology , Adult , Female , Humans , Magnetoencephalography , Male , Time Factors , Young Adult
18.
BMC Neurosci ; 13: 157, 2012 Dec 31.
Article in English | MEDLINE | ID: mdl-23276297

ABSTRACT

BACKGROUND: The robustness of speech perception in the face of acoustic variation is founded on the ability of the auditory system to integrate the acoustic features of speech and to segregate them from background noise. This auditory scene analysis process is facilitated by top-down mechanisms, such as recognition memory for speech content. However, the cortical processes underlying these facilitatory mechanisms remain unclear. The present magnetoencephalography (MEG) study examined how the activity of auditory cortical areas is modulated by acoustic degradation and intelligibility of connected speech. The experimental design allowed for the comparison of cortical activity patterns elicited by acoustically identical stimuli which were perceived as either intelligible or unintelligible. RESULTS: In the experiment, a set of sentences was presented to the subject in distorted, undistorted, and again in distorted form. The intervening exposure to undistorted versions of sentences rendered the initially unintelligible, distorted sentences intelligible, as evidenced by an increase from 30% to 80% in the proportion of sentences reported as intelligible. These perceptual changes were reflected in the activity of the auditory cortex, with the auditory N1m response (~100 ms) being more prominent for the distorted stimuli than for the intact ones. In the time range of auditory P2m response (>200 ms), auditory cortex as well as regions anterior and posterior to this area generated a stronger response to sentences which were intelligible than unintelligible. During the sustained field (>300 ms), stronger activity was elicited by degraded stimuli in auditory cortex and by intelligible sentences in areas posterior to auditory cortex. CONCLUSIONS: The current findings suggest that the auditory system comprises bottom-up and top-down processes which are reflected in transient and sustained brain activity. It appears that analysis of acoustic features occurs during the first 100 ms, and sensitivity to speech intelligibility emerges in auditory cortex and surrounding areas from 200 ms onwards. The two processes are intertwined, with the activity of auditory cortical areas being modulated by top-down processes related to memory traces of speech and supporting speech intelligibility.


Subject(s)
Auditory Cortex/physiology , Brain Mapping/psychology , Speech Intelligibility/physiology , Speech Perception/physiology , Speech/physiology , Acoustic Stimulation/methods , Adult , Brain Mapping/methods , Evoked Potentials, Auditory/physiology , Humans , Image Processing, Computer-Assisted/methods , Magnetoencephalography/methods , Magnetoencephalography/psychology
19.
Neuroimage ; 55(3): 1252-9, 2011 Apr 01.
Article in English | MEDLINE | ID: mdl-21215807

ABSTRACT

Most speech sounds are periodic due to the vibration of the vocal folds. Non-invasive studies of the human brain have revealed a periodicity-sensitive population in the auditory cortex which might contribute to the encoding of speech periodicity. Since the periodicity of natural speech varies from (almost) periodic to aperiodic, one may argue that speech aperiodicity could similarly be represented by a dedicated neuron population. In the current magnetoencephalography study, cortical sensitivity to periodicity was probed with natural periodic vowels and their aperiodic counterparts in a stimulus-specific adaptation paradigm. The effects of intervening adaptor stimuli on the N1m elicited by the probe stimuli (the actual effective stimuli) were studied under interstimulus intervals (ISIs) of 800 and 200 ms. The results indicated a periodicity-dependent release from adaptation which was observed for aperiodic probes alternating with periodic adaptors under both ISIs. Such release from adaptation can be attributed to the activation of a distinct neural population responsive to aperiodic (probe) but not to periodic (adaptor) stimuli. Thus, the current results suggest that the aperiodicity of speech sounds may be represented not only by decreased activation of the periodicity-sensitive population but, additionally, by the activation of a distinct cortical population responsive to speech aperiodicity.


Subject(s)
Cerebral Cortex/cytology , Cerebral Cortex/physiology , Neurons/physiology , Speech Perception/physiology , Acoustic Stimulation , Adaptation, Physiological/physiology , Data Interpretation, Statistical , Female , Functional Laterality/physiology , Humans , Magnetoencephalography , Male , Speech , Young Adult
20.
Brain Res ; 1367: 298-309, 2011 Jan 07.
Article in English | MEDLINE | ID: mdl-20969833

ABSTRACT

The cortical mechanisms underlying human speech perception in acoustically adverse conditions remain largely unknown. Besides distortions from external sources, degradation of the acoustic structure of the sound itself poses further demands on perceptual mechanisms. We conducted a magnetoencephalography (MEG) study to reveal whether the perceptual differences between these distortions are reflected in cortically generated auditory evoked fields (AEFs). To mimic the degradation of the internal structure of sound and external distortion, we degraded speech sounds by reducing the amplitude resolution of the signal waveform and by using additive noise, respectively. Since both distortion types increase the relative strength of high frequencies in the signal spectrum, we also used versions of the stimuli which were low-pass filtered to match the tilted spectral envelope of the undistorted speech sound. This enabled us to examine whether the changes in the overall spectral shape of the stimuli affect the AEFs. We found that the auditory N1m response was substantially enhanced as the amplitude resolution was reduced. In contrast, the N1m was insensitive to distorted speech with additive noise. Changing the spectral envelope had no effect on the N1m. We propose that the observed amplitude enhancements are due to an increase in noisy spectral harmonics produced by the reduction of the amplitude resolution, which activates the periodicity-sensitive neuronal populations participating in pitch extraction processes. The current findings suggest that the auditory cortex processes speech sounds in a differential manner when the internal structure of sound is degraded compared with the speech distorted by external noise.
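
For concreteness, the two degradation types described above can be sketched as follows: reducing the amplitude resolution of the waveform by uniform quantisation ("internal" distortion) and adding white noise at a chosen SNR ("external" distortion), with a Butterworth low-pass standing in for the spectral-envelope matching. The bit depth, SNR, cutoff and the synthetic "speech-like" test signal are illustrative, not the study's stimulus parameters.

```python
# Two degradation types: (1) reduced amplitude resolution via a uniform
# (mid-tread) quantiser, (2) additive white noise at a chosen SNR; plus a
# Butterworth low-pass as a stand-in for spectral-envelope matching.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def quantise(x, n_bits):
    """Reduce amplitude resolution: uniform quantiser with step 2 / 2**n_bits."""
    step = 2.0 / (2 ** n_bits)
    return np.clip(np.round(x / step) * step, -1.0, 1.0)

def add_noise(x, snr_db, rng):
    """Add white noise scaled to the requested signal-to-noise ratio (dB)."""
    noise = rng.standard_normal(x.size)
    noise *= np.sqrt(np.mean(x**2) / (np.mean(noise**2) * 10 ** (snr_db / 10)))
    return x + noise

def lowpass(x, cutoff_hz, fs):
    """Zero-phase 4th-order Butterworth low-pass filter."""
    sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

fs = 16_000
t = np.arange(0.0, 0.5, 1.0 / fs)
speech_like = 0.5 * np.sin(2 * np.pi * 150 * t) + 0.3 * np.sin(2 * np.pi * 1200 * t)

rng = np.random.default_rng(6)
internal = quantise(speech_like, n_bits=2)              # degraded internal structure
external = add_noise(speech_like, snr_db=0, rng=rng)    # external (background) noise
matched = lowpass(internal, cutoff_hz=3000, fs=fs)      # envelope-matched version
print("RMS:", [round(float(np.sqrt(np.mean(x**2))), 3)
               for x in (speech_like, internal, external, matched)])
```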


Subject(s)
Auditory Cortex/physiology , Brain Waves/physiology , Evoked Potentials, Auditory/physiology , Noise , Sound Localization/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Brain Mapping , Humans , Magnetoencephalography , Male , Phonetics , Psychoacoustics , Reaction Time/physiology , Spectrum Analysis , Statistics, Nonparametric , Young Adult