Results 1 - 20 of 44
1.
Brain Behav ; 7(9): e00789, 2017 09.
Article in English | MEDLINE | ID: mdl-28948083

ABSTRACT

INTRODUCTION: We examined which brain areas are involved in the comprehension of acoustically distorted speech using an experimental paradigm where the same distorted sentence can be perceived at different levels of intelligibility. This change in intelligibility occurs via a single intervening presentation of the intact version of the sentence, and the effect lasts at least on the order of minutes. Since the acoustic structure of the distorted stimulus is kept fixed and only intelligibility is varied, this allows one to study brain activity related to speech comprehension specifically. METHODS: In a functional magnetic resonance imaging (fMRI) experiment, each stimulus set contained a block of six distorted sentences. This was followed by the intact counterparts of the sentences, after which the sentences were presented in distorted form again. A total of 18 such sets were presented to 20 human subjects. RESULTS: The blood oxygenation level dependent (BOLD) responses elicited by the distorted sentences which came after the disambiguating, intact sentences were contrasted with the responses to the sentences presented before disambiguation. This revealed increased activity in the bilateral frontal pole, the dorsal anterior cingulate/paracingulate cortex, and the right frontal operculum. Decreased BOLD responses were observed in the posterior insula, Heschl's gyrus, and the posterior superior temporal sulcus. CONCLUSIONS: The brain areas that showed BOLD enhancement for increased sentence comprehension have been associated with executive functions and with the mapping of incoming sensory information to representations stored in episodic memory. Thus, the comprehension of acoustically distorted speech may be associated with the engagement of memory-related subsystems. Further, activity in the primary auditory cortex was modulated by prior experience, possibly in a predictive coding framework.
Our results suggest that memory biases the perception of ambiguous sensory information toward interpretations that have the highest probability to be correct based on previous experience.


Subject(s)
Auditory Cortex/physiology , Brain/physiology , Comprehension/physiology , Speech Perception/physiology , Speech/physiology , Acoustic Stimulation/methods , Adult , Auditory Cortex/diagnostic imaging , Brain/diagnostic imaging , Brain Mapping/methods , Female , Humans , Magnetic Resonance Imaging , Male , Young Adult
2.
Neuroimage ; 129: 214-223, 2016 Apr 01.
Article in English | MEDLINE | ID: mdl-26774614

ABSTRACT

Efficient speech perception requires the mapping of highly variable acoustic signals to distinct phonetic categories. How the brain overcomes this many-to-one mapping problem has remained unresolved. To infer the cortical location, latency, and dependency on attention of categorical speech sound representations in the human brain, we measured stimulus-specific adaptation of neuromagnetic responses to sounds from a phonetic continuum. The participants attended to the sounds while performing a non-phonetic listening task and, in a separate recording condition, ignored the sounds while watching a silent film. Neural adaptation indicative of phoneme category selectivity was found only during the attentive condition in the pars opercularis (POp) of the left inferior frontal gyrus, where the degree of selectivity correlated with the ability of the participants to categorize the phonetic stimuli. Importantly, these category-specific representations were activated at an early latency of 115-140 ms, which is compatible with the speed of perceptual phonetic categorization. Further, concurrent functional connectivity was observed between POp and posterior auditory cortical areas. These novel findings suggest that when humans attend to speech, the left POp mediates phonetic categorization through integration of auditory and motor information via the dorsal auditory stream.


Subject(s)
Prefrontal Cortex/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Female , Humans , Magnetoencephalography , Male , Signal Processing, Computer-Assisted , Young Adult
3.
Neuroimage ; 125: 131-143, 2016 Jan 15.
Article in English | MEDLINE | ID: mdl-26477651

ABSTRACT

Recent studies have shown that acoustically distorted sentences can be perceived as either unintelligible or intelligible depending on whether one has previously been exposed to the undistorted, intelligible versions of the sentences. This allows studying processes specifically related to speech intelligibility since any change between the responses to the distorted stimuli before and after the presentation of their undistorted counterparts cannot be attributed to acoustic variability but, rather, to the successful mapping of sensory information onto memory representations. To estimate how the complexity of the message is reflected in speech comprehension, we applied this rapid change in perception to behavioral and magnetoencephalography (MEG) experiments using vowels, words and sentences. In the experiments, stimuli were initially presented to the subject in a distorted form, after which undistorted versions of the stimuli were presented. Finally, the original distorted stimuli were presented once more. The resulting increase in intelligibility observed for the second presentation of the distorted stimuli depended on the complexity of the stimulus: vowels remained unintelligible (behaviorally measured intelligibility 27%) whereas the intelligibility of the words increased from 19% to 45% and that of the sentences from 31% to 65%. This increase in the intelligibility of the degraded stimuli was reflected as an enhancement of activity in the auditory cortex and surrounding areas at early latencies of 130-160 ms. In the same regions, increasing stimulus complexity attenuated mean currents at latencies of 130-160 ms whereas at latencies of 200-270 ms the mean currents increased. These modulations in cortical activity may reflect feedback from top-down mechanisms enhancing the extraction of information from speech.
The behavioral results suggest that memory-driven expectancies can have a significant effect on speech comprehension, especially in acoustically adverse conditions where the bottom-up information is decreased.


Subject(s)
Brain/physiology , Comprehension/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Female , Humans , Magnetoencephalography , Male , Signal Processing, Computer-Assisted , Speech Intelligibility/physiology , Young Adult
4.
Neural Comput ; 28(2): 327-53, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26654206

ABSTRACT

Robust representations of sounds with a complex spectrotemporal structure are thought to emerge in hierarchically organized auditory cortex, but the computational advantage of this hierarchy remains unknown. Here, we used computational models to study how such hierarchical structures affect temporal binding in neural networks. We equipped individual units in different types of feedforward networks with local memory mechanisms storing recent inputs and observed how this affected the ability of the networks to process stimuli context dependently. Our findings illustrate that these local memories stack up in hierarchical structures and hence allow network units to exhibit selectivity to spectral sequences longer than the time spans of the local memories. We also illustrate that short-term synaptic plasticity is a potential local memory mechanism within the auditory cortex, and we show that it can bring robustness to context dependence against variation in the temporal rate of stimuli, while introducing nonlinearities to response profiles that are not well captured by standard linear spectrotemporal receptive field models. The results therefore indicate that short-term synaptic plasticity might provide hierarchically structured auditory cortex with computational capabilities important for robust representations of spectrotemporal patterns.
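The central mechanism described above, local memories that stack up across a hierarchy so that units become selective to sequences longer than any single memory span, can be sketched with a toy unit. This is an illustration only; the tone names, the decay constant, and the detector itself are invented for the sketch, not taken from the paper's networks:

```python
def pair_detector(sequence, decay=0.5):
    """Toy temporal-binding unit. A layer-1 unit detects tone 'A' and
    keeps an exponentially decaying trace of it (its local memory); a
    layer-2 unit fires only when 'B' arrives while that trace is still
    alive, making it selective for the ordered pair A->B but not B->A."""
    trace_a = 0.0
    peak = 0.0
    for tone in sequence:
        act_a = 1.0 if tone == "A" else 0.0
        act_b = 1.0 if tone == "B" else 0.0
        # conjunction of the current B input with the remembered A
        peak = max(peak, act_b * trace_a)
        # local memory update: decay plus new input
        trace_a = decay * trace_a + act_a
    return peak
```

Stacking such detectors, for example by keeping a decaying trace of the A->B detector's own output and gating it with a third tone, extends selectivity to triplets; this is the sense in which local memories can accumulate across hierarchical layers to cover spans longer than any single trace.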


Subject(s)
Memory/physiology , Models, Neurological , Neural Networks, Computer , Neurons/physiology , Action Potentials/physiology , Association Learning , Brain/cytology , Brain/physiology , Computer Simulation , Humans , Synapses/physiology
5.
Eur J Neurosci ; 41(5): 615-30, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25728180

ABSTRACT

Incoming sounds are represented in the context of preceding events, and this requires a memory mechanism that integrates information over time. Here, it was demonstrated that response adaptation, the suppression of neural responses due to stimulus repetition, might reflect a computational solution that auditory cortex uses for temporal integration. Adaptation is observed in single-unit measurements as two-tone forward masking effects and as stimulus-specific adaptation (SSA). In non-invasive observations, the amplitude of the auditory N1m response adapts strongly with stimulus repetition, and it is followed by response recovery (the so-called mismatch response) to rare deviant events. The current computational simulations described the serial core-belt-parabelt structure of auditory cortex, and included synaptic adaptation, the short-term, activity-dependent depression of excitatory corticocortical connections. It was found that synaptic adaptation is sufficient for columns to respond selectively to tone pairs and complex tone sequences. These responses were defined as combination sensitive, thus reflecting temporal integration, when a strong response to a stimulus sequence was coupled with weaker responses both to the time-reversed sequence and to the isolated sequence elements. The temporal complexity of the stimulus seemed to be reflected in the proportion of combination-sensitive columns across the different regions of the model. Our results suggest that while synaptic adaptation produces facilitation and suppression effects, including SSA and the modulation of the N1m response, its functional significance may actually be in its contribution to temporal integration. This integration seems to benefit from the serial structure of auditory cortex.
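As a rough illustration of how activity-dependent synaptic depression yields both repetition suppression and a recovered response to a rare deviant (stimulus-specific adaptation), consider a minimal resource model in the spirit of Tsodyks-Markram dynamics. The parameter values and the per-stimulus "channel" bookkeeping are simplifying assumptions for this sketch, not the model simulated in the paper:

```python
import math

def channel_responses(stimuli, u=0.5, tau_rec=5.0, dt=1.0):
    """Each stimulus type drives its own input channel. A presentation
    releases a fraction u of the channel's synaptic resources r (this is
    the response), depletes them, and every channel then recovers toward
    1 with time constant tau_rec before the next event."""
    resources = {}
    responses = []
    recovery = 1.0 - math.exp(-dt / tau_rec)
    for s in stimuli:
        r = resources.get(s, 1.0)
        responses.append(u * r)        # response proportional to released resources
        resources[s] = r * (1.0 - u)   # depletion by this presentation
        for k in resources:            # exponential recovery between events
            resources[k] += (1.0 - resources[k]) * recovery
    return responses
```

Running this on five repeated standards followed by one deviant shows the standard's response shrinking with each repetition while the deviant, arriving on a fresh channel, evokes a full-sized response, mirroring the adaptation and mismatch-like recovery described above.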


Subject(s)
Adaptation, Physiological , Auditory Cortex/physiology , Models, Neurological , Synapses/physiology , Animals , Humans , Time
6.
Front Psychol ; 5: 394, 2014.
Article in English | MEDLINE | ID: mdl-24834062

ABSTRACT

The cortical dorsal auditory stream has been proposed to mediate mapping between auditory and articulatory-motor representations in speech processing. Whether this sensorimotor integration contributes to speech perception remains an open question. Here, magnetoencephalography was used to examine connectivity between auditory and motor areas while subjects were performing a sensorimotor task involving speech sound identification and overt repetition. Functional connectivity was estimated with inter-areal phase synchrony of electromagnetic oscillations. Structural equation modeling was applied to determine the direction of information flow. Compared to passive listening, engagement in the sensorimotor task enhanced connectivity within 200 ms after sound onset bilaterally between the temporoparietal junction (TPJ) and ventral premotor cortex (vPMC), with the left-hemisphere connection showing directionality from vPMC to TPJ. Passive listening to noisy speech elicited stronger connectivity than clear speech between left auditory cortex (AC) and vPMC at ~100 ms, and between left TPJ and dorsal premotor cortex (dPMC) at ~200 ms. Information flow was estimated from AC to vPMC and from dPMC to TPJ. Connectivity strength among the left AC, vPMC, and TPJ correlated positively with the identification of speech sounds within 150 ms after sound onset, with information flowing from AC to TPJ, from AC to vPMC, and from vPMC to TPJ. Taken together, these findings suggest that sensorimotor integration mediates the categorization of incoming speech sounds through reciprocal auditory-to-motor and motor-to-auditory projections.

7.
Front Comput Neurosci ; 7: 152, 2013.
Article in English | MEDLINE | ID: mdl-24223549

ABSTRACT

The ability to represent and recognize naturally occurring sounds such as speech depends not only on spectral analysis carried out by the subcortical auditory system but also on the ability of the cortex to bind spectral information over time. In primates, these temporal binding processes are mirrored as selective responsiveness of neurons to species-specific vocalizations. Here, we used computational modeling of auditory cortex to investigate how selectivity to spectrally and temporally complex stimuli is achieved. A set of 208 microcolumns was arranged in the serial core-belt-parabelt structure documented in both humans and animals. Stimulus material comprised multiple consonant-vowel (CV) pseudowords. Selectivity to the spectral structure of the sounds was commonly found in all regions of the model (N = 122 columns out of 208), and this selectivity was only weakly affected by manipulating the structure and dynamics of the model. In contrast, temporal binding was rarer (N = 39), found mostly in the belt and parabelt regions. Thus, the serial core-belt-parabelt structure of auditory cortex is necessary for temporal binding. Further, adaptation due to synaptic depression, which renders the cortical network malleable by stimulus history, was crucial for the emergence of neurons sensitive to the temporal structure of the stimuli. Both spectral selectivity and temporal binding required that a sufficient proportion of the columns interacted in an inhibitory manner. The model and its structural modifications had a small-world structure (i.e., columns formed clusters and were within short node-to-node distances from each other). However, simulations showed that a small-world structure is not a necessary condition for spectral selectivity and temporal binding to emerge. In summary, this study suggests that temporal binding arises out of (1) the serial structure typical of the auditory cortex, (2) synaptic adaptation, and (3) inhibitory interactions between microcolumns.

8.
Nat Commun ; 4: 2585, 2013.
Article in English | MEDLINE | ID: mdl-24121634

ABSTRACT

Neurophysiological animal models suggest that anterior auditory cortex (AC) areas process sound identity information, whereas posterior ACs specialize in sound location processing. In humans, inconsistent neuroimaging results and insufficient causal evidence have challenged the existence of such parallel AC organization. Here we transiently inhibit bilateral anterior or posterior AC areas using MRI-guided paired-pulse transcranial magnetic stimulation (TMS) while subjects listen to Reference/Probe sound pairs and perform either sound location or identity discrimination tasks. The targeting of TMS pulses, delivered 55-145 ms after Probes, is confirmed with individual-level cortical electric-field estimates. Our data show that TMS to posterior AC regions delays reaction times (RT) significantly more during sound location than identity discrimination, whereas TMS to anterior AC regions delays RTs significantly more during sound identity than location discrimination. This double dissociation provides direct causal support for parallel processing of sound identity features in anterior AC and sound location in posterior AC.


Subject(s)
Auditory Perception/physiology , Pattern Recognition, Physiological/physiology , Sound Localization/physiology , Space Perception/physiology , Acoustic Stimulation/methods , Adult , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Psychomotor Performance/physiology , Reaction Time , Sound , Transcranial Magnetic Stimulation
9.
Neuroscientist ; 18(6): 602-12, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22492193

ABSTRACT

The auditory system codes spatial locations in a way that deviates from the spatial representations found in other modalities. This difference is especially striking in the cortex, where neurons form topographical maps of visual and tactile space but where auditory space is represented through a population rate code. In this hemifield code, sound source location is represented in the activity of two widely tuned opponent populations, one tuned to the right and the other to the left side of auditory space. Scientists are only beginning to uncover how this coding strategy adapts to various spatial processing demands. This review presents the current understanding of auditory spatial processing in the cortex. To this end, the authors consider how various implementations of the hemifield code may exist within the auditory cortex and how these may be modulated by the stimulation and task context. As a result, a coherent set of neural strategies for auditory spatial processing emerges.
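The opponent-population scheme described in this review can be illustrated with two broadly tuned rate channels. This is a deliberately simplified sketch; the sigmoid tuning curves and the slope value are assumptions chosen for clarity, not fits to cortical data:

```python
import math

def hemifield_rates(azimuth_deg, slope=0.05):
    """Firing rates of two opponent populations, one broadly tuned to
    the right hemifield and one to the left (0 degrees = straight
    ahead). No unit is narrowly tuned to a particular place; source
    location is carried by the relative activity of the two channels."""
    right = 1.0 / (1.0 + math.exp(-slope * azimuth_deg))
    left = 1.0 / (1.0 + math.exp(slope * azimuth_deg))
    return left, right

def decode_azimuth(left, right, slope=0.05):
    """Read the source direction back out of the population rate code:
    with the sigmoids above, log(right/left) is linear in azimuth."""
    return math.log(right / left) / slope
```

The point of the example is that a downstream reader never needs a topographic map of auditory space: the ratio of two widely tuned opponent channels already determines azimuth uniquely.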


Subject(s)
Auditory Cortex/physiology , Sound Localization/physiology , Space Perception/physiology , Auditory Perception/physiology , Functional Laterality/physiology , Humans
10.
Neuroimage ; 60(2): 1036-45, 2012 Apr 02.
Article in English | MEDLINE | ID: mdl-22289805

ABSTRACT

Human speech perception is highly resilient to acoustic distortions. In addition to distortions from external sound sources, degradation of the acoustic structure of the sound itself can substantially reduce the intelligibility of speech. The degradation of the internal structure of speech happens, for example, when the digital representation of the signal is impoverished by reducing its amplitude resolution. Further, the perception of speech is also influenced by whether the distortion is transient, coinciding with speech, or is heard continuously in the background. However, the complex effects of the acoustic structure and continuity of the distortion on the cortical processing of degraded speech are unclear. In the present magnetoencephalography study, we investigated how the cortical processing of degraded speech sounds as measured through the auditory N1m response is affected by variation of both the distortion type (internal, external) and the continuity of distortion (transient, continuous). We found that when the distortion was continuous, the N1m was significantly delayed, regardless of the type of distortion. The N1m amplitude, in turn, was affected only when speech sounds were degraded with transient internal distortion, which resulted in larger response amplitudes. The results suggest that external and internal distortions of speech result in divergent patterns of activity in the auditory cortex, and that the effects are modulated by the temporal continuity of the distortion.


Subject(s)
Auditory Cortex/physiology , Phonetics , Speech Perception/physiology , Adult , Female , Humans , Magnetoencephalography , Male , Time Factors , Young Adult
11.
Neuroimage ; 60(4): 1937-46, 2012 May 01.
Article in English | MEDLINE | ID: mdl-22361165

ABSTRACT

Sensory-motor interactions between auditory and articulatory representations in the dorsal auditory processing stream are suggested to contribute to speech perception, especially when bottom-up information alone is insufficient for purely auditory perceptual mechanisms to succeed. Here, we hypothesized that the dorsal stream responds more vigorously to auditory syllables when one is engaged in a phonetic identification/repetition task subsequent to perception compared to passive listening, and that this effect is further augmented when the syllables are embedded in noise. To this end, we recorded magnetoencephalography while twenty subjects listened to speech syllables, with and without noise masking, in four conditions: passive perception; overt repetition; covert repetition; and overt imitation. Compared to passive listening, left-hemispheric N100m equivalent current dipole responses were amplified and shifted posteriorly when perception was followed by a covert repetition task. Cortically constrained minimum-norm estimates showed amplified left supramarginal and angular gyri responses in the covert repetition condition at ~100 ms from stimulus onset. Longer-latency responses at ~200 ms were amplified in the covert repetition condition in the left angular gyrus and in all three active conditions in the left premotor cortex, with further enhancements when the syllables were embedded in noise. Phonetic categorization accuracy and magnitude of voice pitch change between overt repetition and imitation conditions correlated with left premotor cortex responses at ~100 and ~200 ms, respectively. Together, these results suggest that the dorsal stream involvement in speech perception is dependent on perceptual task demands and that phonetic categorization performance is influenced by the left premotor cortex.


Subject(s)
Brain Mapping , Cerebral Cortex/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Female , Functional Laterality/physiology , Humans , Magnetoencephalography , Male , Middle Aged , Phonetics , Young Adult
12.
BMC Neurosci ; 13: 157, 2012 Dec 31.
Article in English | MEDLINE | ID: mdl-23276297

ABSTRACT

BACKGROUND: The robustness of speech perception in the face of acoustic variation is founded on the ability of the auditory system to integrate the acoustic features of speech and to segregate them from background noise. This auditory scene analysis process is facilitated by top-down mechanisms, such as recognition memory for speech content. However, the cortical processes underlying these facilitatory mechanisms remain unclear. The present magnetoencephalography (MEG) study examined how the activity of auditory cortical areas is modulated by acoustic degradation and intelligibility of connected speech. The experimental design allowed for the comparison of cortical activity patterns elicited by acoustically identical stimuli which were perceived as either intelligible or unintelligible. RESULTS: In the experiment, a set of sentences was presented to the subject in distorted, undistorted, and again in distorted form. The intervening exposure to undistorted versions of sentences rendered the initially unintelligible, distorted sentences intelligible, as evidenced by an increase from 30% to 80% in the proportion of sentences reported as intelligible. These perceptual changes were reflected in the activity of the auditory cortex, with the auditory N1m response (~100 ms) being more prominent for the distorted stimuli than for the intact ones. In the time range of auditory P2m response (>200 ms), auditory cortex as well as regions anterior and posterior to this area generated a stronger response to sentences which were intelligible than unintelligible. During the sustained field (>300 ms), stronger activity was elicited by degraded stimuli in auditory cortex and by intelligible sentences in areas posterior to auditory cortex. CONCLUSIONS: The current findings suggest that the auditory system comprises bottom-up and top-down processes which are reflected in transient and sustained brain activity. 
It appears that analysis of acoustic features occurs during the first 100 ms, and sensitivity to speech intelligibility emerges in auditory cortex and surrounding areas from 200 ms onwards. The two processes are intertwined, with the activity of auditory cortical areas being modulated by top-down processes related to memory traces of speech and supporting speech intelligibility.


Subject(s)
Auditory Cortex/physiology , Brain Mapping/psychology , Speech Intelligibility/physiology , Speech Perception/physiology , Speech/physiology , Acoustic Stimulation/methods , Adult , Brain Mapping/methods , Evoked Potentials, Auditory/physiology , Humans , Image Processing, Computer-Assisted/methods , Magnetoencephalography/methods , Magnetoencephalography/psychology
13.
Neuroimage ; 55(3): 1252-9, 2011 Apr 01.
Article in English | MEDLINE | ID: mdl-21215807

ABSTRACT

Most speech sounds are periodic due to the vibration of the vocal folds. Non-invasive studies of the human brain have revealed a periodicity-sensitive population in the auditory cortex which might contribute to the encoding of speech periodicity. Since the periodicity of natural speech varies from (almost) periodic to aperiodic, one may argue that speech aperiodicity could similarly be represented by a dedicated neuron population. In the current magnetoencephalography study, cortical sensitivity to periodicity was probed with natural periodic vowels and their aperiodic counterparts in a stimulus-specific adaptation paradigm. The effects of intervening adaptor stimuli on the N1m elicited by the probe stimuli (the actual effective stimuli) were studied under interstimulus intervals (ISIs) of 800 and 200 ms. The results indicated a periodicity-dependent release from adaptation which was observed for aperiodic probes alternating with periodic adaptors under both ISIs. Such release from adaptation can be attributed to the activation of a distinct neural population responsive to aperiodic (probe) but not to periodic (adaptor) stimuli. Thus, the current results suggest that the aperiodicity of speech sounds may be represented not only by decreased activation of the periodicity-sensitive population but, additionally, by the activation of a distinct cortical population responsive to speech aperiodicity.


Subject(s)
Cerebral Cortex/cytology , Cerebral Cortex/physiology , Neurons/physiology , Speech Perception/physiology , Acoustic Stimulation , Adaptation, Physiological/physiology , Data Interpretation, Statistical , Female , Functional Laterality/physiology , Humans , Magnetoencephalography , Male , Speech , Young Adult
14.
Brain Res ; 1367: 298-309, 2011 Jan 07.
Article in English | MEDLINE | ID: mdl-20969833

ABSTRACT

The cortical mechanisms underlying human speech perception in acoustically adverse conditions remain largely unknown. Besides distortions from external sources, degradation of the acoustic structure of the sound itself poses further demands on perceptual mechanisms. We conducted a magnetoencephalography (MEG) study to reveal whether the perceptual differences between these distortions are reflected in cortically generated auditory evoked fields (AEFs). To mimic the degradation of the internal structure of sound and external distortion, we degraded speech sounds by reducing the amplitude resolution of the signal waveform and by using additive noise, respectively. Since both distortion types increase the relative strength of high frequencies in the signal spectrum, we also used versions of the stimuli which were low-pass filtered to match the tilted spectral envelope of the undistorted speech sound. This enabled us to examine whether the changes in the overall spectral shape of the stimuli affect the AEFs. We found that the auditory N1m response was substantially enhanced as the amplitude resolution was reduced. In contrast, the N1m was insensitive to distorted speech with additive noise. Changing the spectral envelope had no effect on the N1m. We propose that the observed amplitude enhancements are due to an increase in noisy spectral harmonics produced by the reduction of the amplitude resolution, which activates the periodicity-sensitive neuronal populations participating in pitch extraction processes. The current findings suggest that the auditory cortex processes speech sounds in a differential manner when the internal structure of sound is degraded compared with the speech distorted by external noise.


Subject(s)
Auditory Cortex/physiology , Brain Waves/physiology , Evoked Potentials, Auditory/physiology , Noise , Sound Localization/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Brain Mapping , Humans , Magnetoencephalography , Male , Phonetics , Psychoacoustics , Reaction Time/physiology , Spectrum Analysis , Statistics, Nonparametric , Young Adult
15.
J Acoust Soc Am ; 128(1): 224-34, 2010 Jul.
Article in English | MEDLINE | ID: mdl-20649218

ABSTRACT

Cortical sensitivity to the periodicity of speech sounds has been evidenced by larger, more anterior responses to periodic than to aperiodic vowels in several non-invasive studies of the human brain. The current study investigated the temporal integration underlying the cortical sensitivity to speech periodicity by studying the increase in periodicity-specific cortical activation with growing stimulus duration. Periodicity-specific activation was estimated from magnetoencephalography as the differences between the N1m responses elicited by periodic and aperiodic vowel stimuli. The duration of the vowel stimuli with a fundamental frequency (F0=106 Hz) representative of typical male speech was varied in units corresponding to the vowel fundamental period (9.4 ms) and ranged from one to ten units. Cortical sensitivity to speech periodicity, as reflected by larger and more anterior responses to periodic than to aperiodic stimuli, was observed when stimulus duration was 3 cycles or more. Further, for stimulus durations of 5 cycles and above, response latency was shorter for the periodic than for the aperiodic stimuli. Together the current results define a temporal window of integration for the periodicity of speech sounds in the F0 range of typical male speech. The length of this window is 3-5 cycles, or 30-50 ms.
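The numbers quoted above are easy to verify: a 106 Hz fundamental has a period of 1000/106, approximately 9.4 ms, and 3-5 such cycles span roughly 28-47 ms, consistent with the reported 30-50 ms integration window:

```python
F0 = 106.0                                   # fundamental frequency of the vowels, Hz
period_ms = 1000.0 / F0                      # one fundamental period: ~9.4 ms, as stated
window_ms = (3 * period_ms, 5 * period_ms)   # 3-5 cycles: roughly 28-47 ms
```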


Subject(s)
Auditory Cortex/physiology , Periodicity , Speech Acoustics , Speech Perception , Time Perception , Acoustic Stimulation , Adult , Evoked Potentials, Auditory , Female , Humans , Magnetoencephalography , Male , Models, Statistical , Reaction Time , Signal Processing, Computer-Assisted , Sound Spectrography , Time Factors
16.
Clin Neurophysiol ; 121(6): 912-20, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20457006

ABSTRACT

OBJECTIVE: To investigate the effects of cortical ischemic stroke and aphasic symptoms on auditory processing abilities in humans as indicated by the transient brain response, a recently documented cortical deflection which has been shown to accurately predict behavioral sound detection. METHODS: Using speech and sinusoidal stimuli in the active (attend) and the passive (ignore) recording condition, cortical activity of ten aphasic stroke patients and ten control subjects was recorded with whole-head MEG and behavioral measurements. RESULTS: Stroke patients exhibited significantly diminished neuromagnetic transient responses for both sinusoidal and speech stimulation when compared to the control subjects. The attention-related increase of response amplitude was slightly more pronounced in the control subjects than in the stroke patients but this difference did not reach statistical significance. CONCLUSIONS: Left-hemispheric ischemic stroke impairs the processing of sinusoidal and speech sounds. This deficit seems to depend on the severity and location of stroke. SIGNIFICANCE: Directly observable, non-invasive brain measures can be used in assessing the effects of stroke which are related to the behavioral symptoms patients manifest.


Subject(s)
Aphasia/physiopathology , Auditory Pathways/physiopathology , Auditory Perception/physiology , Cerebral Cortex/physiopathology , Evoked Potentials, Auditory/physiology , Stroke/physiopathology , Acoustic Stimulation , Aged , Aged, 80 and over , Analysis of Variance , Aphasia/complications , Female , Functional Laterality/physiology , Humans , Magnetoencephalography , Male , Middle Aged , Phonetics , Reaction Time/physiology , Severity of Illness Index , Stroke/complications
17.
Clin Neurophysiol ; 121(6): 902-11, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20359943

ABSTRACT

OBJECTIVE: The aim of the study was to investigate the effects of aging on human cortical auditory processing of rising-intensity sinusoids and speech sounds. We also aimed to evaluate the suitability of a recently discovered transient brain response for applied research. METHODS: In young and aged adults, magnetic fields produced by cortical activity elicited by a 570-Hz pure-tone and a speech sound (Finnish vowel /a/) were measured using MEG. The stimuli rose smoothly in intensity from an inaudible to an audible level over 750 ms. We used both the active (attended) and the passive recording condition. In the attended condition, behavioral reaction times were measured. RESULTS: The latency of the transient brain response was prolonged in the aged compared to the young and the accuracy of behavioral responses to sinusoids was diminished among the aged. In response amplitudes, no differences were found between the young and the aged. In both groups, spectral complexity of the stimuli enhanced response amplitudes. CONCLUSIONS: Aging seems to affect the temporal dynamics of cortical auditory processing. The transient brain response is sensitive both to spectral complexity and aging-related changes in the timing of cortical activation. SIGNIFICANCE: The transient brain responses elicited by rising-intensity sounds could be useful in revealing differences in auditory cortical processing in applied research.


Subject(s)
Aging/physiology , Auditory Cortex/physiology , Auditory Pathways/physiology , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Acoustic Stimulation , Adult , Age Factors , Aged , Analysis of Variance , Attention/physiology , Female , Humans , Magnetoencephalography , Male , Middle Aged , Psychomotor Performance/physiology , Reaction Time/physiology
18.
BMC Neurosci ; 11: 24, 2010 Feb 22.
Article in English | MEDLINE | ID: mdl-20175890

ABSTRACT

BACKGROUND: Recent studies have shown that the human right-hemispheric auditory cortex is particularly sensitive to reduction in sound quality, with an increase in distortion resulting in an amplification of the auditory N1m response measured with magnetoencephalography (MEG). Here, we examined whether this sensitivity is specific to the processing of acoustic properties of speech or whether it can also be observed in the processing of sounds with a simple spectral structure. We degraded speech stimuli (vowel /a/), complex non-speech stimuli (a composite of five sinusoids), and sinusoidal tones by decreasing the amplitude resolution of the signal waveform. The amplitude resolution was impoverished by reducing the number of bits used to represent the signal samples. Auditory evoked magnetic fields (AEFs) were measured in the left and right hemisphere of sixteen healthy subjects. RESULTS: We found that the AEF amplitudes increased significantly with stimulus distortion for all stimulus types, which indicates that the right-hemispheric N1m sensitivity is not related exclusively to degradation of acoustic properties of speech. In addition, the P1m and P2m responses were amplified with increasing distortion similarly in both hemispheres. The AEF latencies were not systematically affected by the distortion. CONCLUSIONS: We propose that the increased AEF amplitudes reflect cortical processing of acoustic properties common to both speech and non-speech stimuli. More specifically, the enhancement is most likely caused by spectral changes brought about by the decrease in amplitude resolution, in particular the introduction of periodic, signal-dependent distortion to the original sound. Converging evidence suggests that the observed AEF amplification could reflect cortical sensitivity to periodic sounds.
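The degradation method described above, reducing the number of bits per sample, amounts to uniform re-quantization of the waveform. A minimal sketch follows; the exact quantizer used in the study is not specified, so the mid-rise scheme and the stand-in stimulus here are assumptions for illustration.

```python
import numpy as np

def reduce_bit_depth(signal, n_bits):
    """Re-quantize a waveform in [-1, 1] to n_bits of amplitude resolution.

    Uniform quantizer; the study's exact quantization scheme is not stated,
    so the details here are illustrative.
    """
    levels = 2 ** n_bits
    step = 2.0 / levels
    # Round each sample to the nearest quantization level.
    quantized = np.round(signal / step) * step
    return np.clip(quantized, -1.0, 1.0)

fs = 16000
t = np.arange(int(0.2 * fs)) / fs
vowel_like = 0.8 * np.sin(2 * np.pi * 150 * t)  # stand-in for the /a/ stimulus
degraded = reduce_bit_depth(vowel_like, n_bits=4)
```

The rounding produces quantization error that is correlated with the signal, which is the "periodic, signal-dependent distortion" the conclusion refers to.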


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Evoked Potentials, Auditory , Female , Functional Laterality , Humans , Magnetoencephalography , Male , Neuropsychological Tests , Pattern Recognition, Physiological/physiology , Psychoacoustics , Reaction Time , Speech , Time Factors
19.
J Acoust Soc Am ; 127(2): EL60-5, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20136180

ABSTRACT

A magnetoencephalography study was conducted to reveal the neural code of interaural time difference (ITD) in the human cortex. Widely used cross-correlator models predict that the code consists of narrow receptive fields distributed across all ITDs. The present findings are, however, more in line with a neural code formed by two opponent neural populations: one tuned to the left and the other to the right hemifield. The results are consistent with models of ITD extraction in the auditory brainstem of small mammals and, therefore, suggest that similar computational principles underlie human sound source localization.
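The cross-correlator account mentioned above reads ITD off an array of internal delays, each acting like a narrowly tuned coincidence detector: the delay whose cross-correlation is largest indicates the ITD. A minimal sketch under assumed parameters (sample rate, lag range, sign convention) is:

```python
import numpy as np

def estimate_itd_crosscorr(left, right, fs, max_itd_s=1e-3):
    """Estimate ITD as the lag maximizing left/right cross-correlation.

    Classic cross-correlator (Jeffress-style) reading; parameters are
    illustrative. Circular shifts are used for simplicity. With this
    convention a negative value means the right-ear signal lags.
    """
    max_lag = int(max_itd_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    # One "coincidence detector" per internal delay.
    corr = [np.dot(left, np.roll(right, lag)) for lag in lags]
    return lags[int(np.argmax(corr))] / fs

fs = 16000
rng = np.random.default_rng(0)
s = rng.standard_normal(fs)
left, right = s, np.roll(s, 8)   # right-ear copy delayed by 8 samples
itd = estimate_itd_crosscorr(left, right, fs)
```

The opponent-population model favored by the findings replaces this dense delay array with just two broadly tuned channels whose relative activity encodes laterality.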


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Ear , Models, Neurological , Acoustic Stimulation , Adult , Evoked Potentials, Auditory , Female , Functional Laterality , Head , Humans , Magnetoencephalography , Male , Photic Stimulation , Time Factors , Visual Perception/physiology
20.
Brain Res ; 1306: 93-9, 2010 Jan 08.
Article in English | MEDLINE | ID: mdl-19799877

ABSTRACT

Recent single-neuron recordings in monkeys and magnetoencephalography (MEG) data on humans suggest that auditory space is represented in cortex as a population rate code whereby spatial receptive fields are wide and centered at locations to the far left or right of the subject. To explore the details of this code in the human brain, we conducted an MEG study utilizing realistic spatial sound stimuli presented in a stimulus-specific adaptation paradigm. In this paradigm, the spatial selectivity of cortical neurons is measured as the effect the location of a preceding adaptor has on the response to a subsequent probe sound. Two types of stimuli were used: a wideband noise sound and a speech sound. The cortical hemispheres differed in the effects the adaptors had on the response to a probe sound presented in front of the subject. The right-hemispheric responses were attenuated more by an adaptor to the left than by an adaptor to the right of the subject. In contrast, the left-hemispheric responses were similarly affected by adaptors in these two locations. When interpreted in terms of single-neuron spatial receptive fields, these results support a population rate code model where neurons in the right hemisphere are more often tuned to the left than to the right of the perceiver while in the left hemisphere these two neuronal populations are of equal size.


Subject(s)
Auditory Perception/physiology , Cerebral Cortex/physiology , Sound Localization/physiology , Space Perception/physiology , Acoustic Stimulation , Adult , Analysis of Variance , Evoked Potentials, Auditory , Female , Functional Laterality , Humans , Magnetoencephalography , Male , Models, Neurological , Neurons/physiology , Speech , Speech Perception/physiology