Results 1 - 20 of 35
1.
J Acoust Soc Am ; 153(6): 3350, 2023 06 01.
Article in English | MEDLINE | ID: mdl-37328948

ABSTRACT

A model of early auditory processing is proposed in which each peripheral channel is processed by a delay-and-subtract cancellation filter, tuned independently for each channel with a criterion of minimum power. For a channel dominated by a pure tone or a resolved partial of a complex tone, the optimal delay is its period. For a channel responding to harmonically related partials, the optimal delay is their common fundamental period. Each peripheral channel is thus split into two subchannels-one that is cancellation-filtered and the other that is not. Perception can involve either or both, depending on the task. The model is illustrated by applying it to the masking asymmetry between pure tones and narrowband noise: a noise target masked by a tone is more easily detectable than a tone target masked by noise. The model is one of a wider class of models, monaural or binaural, that cancel irrelevant stimulus dimensions to attain invariance to competing sources. Similar to occlusion in the visual domain, cancellation yields sensory evidence that is incomplete, thus requiring Bayesian inference of an internal model of the world along the lines of Helmholtz's doctrine of unconscious inference.
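The delay-and-subtract filter with a minimum-power criterion lends itself to a compact sketch (an illustrative reconstruction, not the paper's code; the exhaustive delay search and the signal parameters are assumptions):

```python
# Sketch of a delay-and-subtract cancellation filter tuned by minimum power.
# For a channel dominated by a pure tone, the optimal delay is its period.
import numpy as np

def cancellation_filter(x, max_delay):
    """Return the minimum-power output y[t] = x[t] - x[t-d] and the best delay d."""
    best_d, best_y, best_power = None, None, np.inf
    for d in range(1, max_delay + 1):
        y = x[d:] - x[:-d]           # delay-and-subtract
        p = np.mean(y ** 2)          # output-power criterion
        if p < best_power:
            best_d, best_y, best_power = d, y, p
    return best_y, best_d

# A pure tone with a 100-sample period is cancelled at delay d = 100.
fs, f0 = 8000, 80                    # 8 kHz rate, 80 Hz tone -> period 100 samples
t = np.arange(4000) / fs
tone = np.sin(2 * np.pi * f0 * t)
y, d = cancellation_filter(tone, max_delay=150)
```

For a channel carrying harmonically related partials, the same search settles on the common fundamental period instead.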


Subjects
Auditory Perception, Perceptual Masking, Auditory Threshold, Bayes Theorem, Noise/adverse effects
2.
J Acoust Soc Am ; 153(5): 2600, 2023 05 01.
Article in English | MEDLINE | ID: mdl-37129672

ABSTRACT

This paper suggests an explanation for listeners' greater tolerance to positive than negative mistuning of the higher tone within an octave pair. It hypothesizes a neural circuit tuned to cancel the lower tone that also cancels the higher tone if that tone is in tune. Imperfect cancellation is the cue to mistuning of the octave. The circuit involves two neural pathways, one delayed with respect to the other, that feed a coincidence-sensitive neuron via excitatory and inhibitory synapses. A mismatch between the time constants of these two synapses results in an asymmetry in sensitivity to mismatch. Specifically, if the time constant of the delayed pathway is greater than that of the direct pathway, there is a greater tolerance to positive mistuning than to negative mistuning. The model is directly applicable to the harmonic octave (concurrent tones) but extending it to the melodic octave (successive tones) requires additional assumptions that are discussed. The paper reviews evidence from auditory psychophysics and physiology in favor-or against-this explanation.


Subjects
Brain Stem, Neurons, Neurons/physiology, Auditory Perception/physiology, Acoustic Stimulation
3.
J Assoc Res Otolaryngol ; 23(2): 167-181, 2022 04.
Article in English | MEDLINE | ID: mdl-35132510

ABSTRACT

We investigated the effect of a biasing tone close to 5, 15, or 30 Hz on the response to higher-frequency probe tones, behaviorally, and by measuring distortion-product otoacoustic emissions (DPOAEs). The amplitude of the biasing tone was adjusted for criterion suppression of cubic DPOAE elicited by probe tones presented between 0.7 and 8 kHz, or criterion loudness suppression of a train of tone-pip probes in the range 0.125-8 kHz. For DPOAEs, the biasing-tone level for criterion suppression increased with probe-tone frequency by 8-9 dB/octave, consistent with an apex-to-base gradient of biasing-tone-induced basilar membrane displacement, as we verified by computational simulation. In contrast, the biasing-tone level for criterion loudness suppression increased with probe frequency by only 1-3 dB/octave, reminiscent of previously published data on low-side suppression of auditory nerve responses to characteristic frequency tones. These slopes were independent of biasing-tone frequency, but the biasing-tone sensation level required for criterion suppression was ~ 10 dB lower for the two infrasound biasing tones than for the 30-Hz biasing tone. On average, biasing-tone sensation levels as low as 5 dB were sufficient to modulate the perception of higher frequency sounds. Our results are relevant for recent debates on perceptual effects of environmental noise with very low-frequency content and might offer insight into the mechanism underlying low-side suppression.


Subjects
Cochlea, Spontaneous Otoacoustic Emissions, Acoustic Stimulation, Basilar Membrane, Cochlea/physiology, Noise, Spontaneous Otoacoustic Emissions/physiology, Sound
4.
Trends Hear ; 25: 23312165211041422, 2021.
Article in English | MEDLINE | ID: mdl-34698574

ABSTRACT

This paper reviews the hypothesis of harmonic cancellation according to which an interfering sound is suppressed or canceled on the basis of its harmonicity (or periodicity in the time domain) for the purpose of Auditory Scene Analysis. It defines the concept, discusses theoretical arguments in its favor, and reviews experimental results that support it, or not. If correct, the hypothesis may draw on time-domain processing of temporally accurate neural representations within the brainstem, as required also by the classic equalization-cancellation model of binaural unmasking. The hypothesis predicts that a target sound corrupted by interference will be easier to hear if the interference is harmonic than inharmonic, all else being equal. This prediction is borne out in a number of behavioral studies, but not all. The paper reviews those results, with the aim to understand the inconsistencies and come up with a reliable conclusion for, or against, the hypothesis of harmonic cancellation within the auditory system.
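As a toy illustration of the hypothesis (not code from any of the reviewed studies), a delay-and-subtract comb at the interferer's fundamental period nulls all of its harmonics while sparing an inharmonic target:

```python
# Harmonic cancellation sketch: a delay of one fundamental period cancels
# every harmonic of the interferer; the inharmonic target is only comb-filtered.
import numpy as np

fs = 16000
t = np.arange(8000) / fs
T = 160                                      # delay = period of the 100 Hz fundamental
harmonics = sum(np.sin(2 * np.pi * 100 * k * t) for k in range(1, 6))
target = np.sin(2 * np.pi * 317 * t)         # inharmonic target
mix = target + harmonics

out = mix[T:] - mix[:-T]                     # cancels everything periodic at T

# Every harmonic of 100 Hz is nulled (to numerical precision), so the
# target-to-interferer ratio at the output improves drastically.
residual_interferer = harmonics[T:] - harmonics[:-T]
```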

5.
J Neural Eng ; 18(4)2021 05 04.
Article in English | MEDLINE | ID: mdl-33849003

ABSTRACT

Objective. An auditory stimulus can be related to the brain response that it evokes by a stimulus-response model fit to the data. This offers insight into perceptual processes within the brain and is also of potential use for devices such as brain-computer interfaces (BCIs). The quality of the model can be quantified by measuring the fit with a regression problem, or by applying it to a classification task and measuring its performance. Approach. Here we focus on a match-mismatch (MM) task that entails deciding whether a segment of brain signal matches, via a model, the auditory stimulus that evoked it. Main results. Using these metrics, we describe a range of models of increasing complexity that we compare to methods in the literature, showing state-of-the-art performance. We document in detail one particular implementation, calibrated on a publicly available database, that can serve as a robust reference to evaluate future developments. Significance. The MM task allows stimulus-response models to be evaluated in the limit of very high model accuracy, making it an attractive alternative to the more commonly used task of auditory attention detection. The MM task does not require class labels, so it is immune to mislabeling, and it is applicable to data recorded in listening scenarios with only one sound source, so it is cheap to obtain large quantities of training and testing data. Performance metrics from this task, associated with regression accuracy, provide complementary insights into the relation between stimulus and response, as well as information about discriminatory power directly applicable to BCI applications.
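A minimal sketch of the MM decision under a toy linear stimulus-response model (the envelope representation, the weight, and the noise level are all assumptions, not the paper's implementation):

```python
# Match-mismatch (MM) sketch: a segment is labeled "match" if its correlation
# with the response, via the model, beats that of a mismatched segment.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
stim_a = rng.standard_normal(n)            # envelope of stimulus A
stim_b = rng.standard_normal(n)            # envelope of stimulus B (mismatched)
w = 0.8                                    # toy stimulus-response weight
response = w * stim_a + 0.5 * rng.standard_normal(n)   # "EEG" evoked by A

def corr(x, y):
    return float(np.corrcoef(x, y)[0, 1])

# MM decision: which candidate stimulus better explains the response?
match_score = corr(response, stim_a)
mismatch_score = corr(response, stim_b)
decision = "match A" if match_score > mismatch_score else "match B"
```

No class labels are needed: the mismatched segment serves as its own control, which is what makes the task cheap to scale.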


Subjects
Brain-Computer Interfaces, Electroencephalography, Attention, Auditory Perception, Brain
6.
J Neurosci ; 41(23): 4991-5003, 2021 06 09.
Article in English | MEDLINE | ID: mdl-33824190

ABSTRACT

Seeing a speaker's face benefits speech comprehension, especially in challenging listening conditions. This perceptual benefit is thought to stem from the neural integration of visual and auditory speech at multiple stages of processing, whereby movement of a speaker's face provides temporal cues to auditory cortex, and articulatory information from the speaker's mouth can aid recognizing specific linguistic units (e.g., phonemes, syllables). However, it remains unclear how the integration of these cues varies as a function of listening conditions. Here, we sought to provide insight on these questions by examining EEG responses in humans (males and females) to natural audiovisual (AV), audio, and visual speech in quiet and in noise. We represented our speech stimuli in terms of their spectrograms and their phonetic features and then quantified the strength of the encoding of those features in the EEG using canonical correlation analysis (CCA). The encoding of both spectrotemporal and phonetic features was shown to be more robust in AV speech responses than what would have been expected from the summation of the audio and visual speech responses, suggesting that multisensory integration occurs at both spectrotemporal and phonetic stages of speech processing. We also found evidence to suggest that the integration effects may change with listening conditions; however, this was an exploratory analysis and future work will be required to examine this effect using a within-subject design. These findings demonstrate that integration of audio and visual speech occurs at multiple stages along the speech processing hierarchy. SIGNIFICANCE STATEMENT: During conversation, visual cues impact our perception of speech. Integration of auditory and visual speech is thought to occur at multiple stages of speech processing and vary flexibly depending on the listening conditions. Here, we examine audiovisual (AV) integration at two stages of speech processing using the speech spectrogram and a phonetic representation, and test how AV integration adapts to degraded listening conditions. We find significant integration at both of these stages regardless of listening conditions. These findings reveal neural indices of multisensory interactions at different stages of processing and provide support for the multistage integration framework.


Subjects
Brain/physiology, Comprehension/physiology, Cues (Psychology), Speech Perception/physiology, Visual Perception/physiology, Acoustic Stimulation, Brain Mapping, Electroencephalography, Female, Humans, Male, Phonetics, Photic Stimulation
7.
Elife ; 9, 2020 03 03.
Article in English | MEDLINE | ID: mdl-32122465

ABSTRACT

Human engagement in music rests on underlying elements such as the listeners' cultural background and interest in music. These factors modulate how listeners anticipate musical events, a process inducing instantaneous neural responses as the music confronts these expectations. Measuring such neural correlates would represent a direct window into high-level brain processing. Here we recorded cortical signals as participants listened to Bach melodies. We assessed the relative contributions of acoustic versus melodic components of the music to the neural signal. Melodic features included information on pitch progressions and their tempo, which were extracted from a predictive model of musical structure based on Markov chains. We related the music to brain activity with temporal response functions demonstrating, for the first time, distinct cortical encoding of pitch and note-onset expectations during naturalistic music listening. This encoding was most pronounced at response latencies up to 350 ms, and in both planum temporale and Heschl's gyrus.


Subjects
Auditory Perception/physiology, Music, Temporal Lobe/physiology, Acoustic Stimulation, Electroencephalography, Auditory Evoked Potentials/physiology, Humans, Reaction Time
8.
Neuroimage ; 207: 116356, 2020 02 15.
Article in English | MEDLINE | ID: mdl-31786167

ABSTRACT

Power line artifacts are the bane of animal and human electrophysiology. A number of methods are available to help attenuate or eliminate them, but each has its own set of drawbacks. In this brief note I present a simple method that combines the advantages of spectral and spatial filtering, while minimizing their downsides. A perfect-reconstruction filterbank is used to split the data into two parts, one noise-free and the other contaminated by line artifact. The artifact-contaminated stream is processed by a spatial filter to project out line components, and added to the noise-free part to obtain clean data. This method is applicable to multichannel data such as electroencephalography (EEG), magnetoencephalography (MEG), or multichannel local field potentials (LFP). I briefly review past methods, pointing out their drawbacks, describe the new method, and evaluate the outcome using synthetic and real data.
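The two-stage idea can be sketched as follows (simplified: a plain one-period moving average stands in for the perfect-reconstruction filterbank, and a plain SVD for the spatial-filter stage):

```python
# Spectral-plus-spatial line removal sketch on toy multichannel data.
import numpy as np

fs, fline = 1000, 50
nt, nch = 5000, 4
rng = np.random.default_rng(1)
t = np.arange(nt) / fs
brain = rng.standard_normal((nt, nch))                    # broadband activity
gains = 3 * rng.standard_normal(nch)                      # line spatial pattern
line = np.outer(np.sin(2 * np.pi * fline * t), gains)
data = brain + line                                       # contaminated recording

# 1) Spectral split: a boxcar of one line period nulls 50 Hz and its harmonics,
# so `smooth` is (nearly) line-free and `residual` carries the artifact.
period = fs // fline
kernel = np.ones(period) / period
smooth = np.apply_along_axis(lambda x: np.convolve(x, kernel, "same"), 0, data)
residual = data - smooth

# 2) Spatial filter: project the dominant residual component (the line) out,
# then recombine with the noise-free stream.
u, s, vt = np.linalg.svd(residual, full_matrices=False)
line_dir = vt[0]                                          # line's spatial direction
cleaned = smooth + residual - np.outer(residual @ line_dir, line_dir)

s50 = np.sin(2 * np.pi * fline * t)
attenuation = float(np.sum((cleaned.T @ s50) ** 2) / np.sum((data.T @ s50) ** 2))
```

Because only the residual stream is spatially filtered, the broadband content below the line frequency passes through untouched.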


Subjects
Brain/abnormalities, Electroencephalography, Magnetoencephalography, Computer-Assisted Signal Processing/instrumentation, Algorithms, Artifacts, Brain/physiology, Electroencephalography/methods, Humans, Magnetoencephalography/methods, Noise
9.
Neuron ; 102(2): 280-293, 2019 04 17.
Article in English | MEDLINE | ID: mdl-30998899

ABSTRACT

Filters are commonly used to reduce noise and improve data quality. Filter theory is part of a scientist's training, yet the impact of filters on interpreting data is not always fully appreciated. This paper reviews the issue and explains what a filter is, what problems are to be expected when using them, how to choose the right filter, and how to avoid filtering by using alternative tools. Time-frequency analysis shares some of the same problems that filters have, particularly in the case of wavelet transforms. We recommend reporting filter characteristics with sufficient details, including a plot of the impulse or step response as an inset.
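Reporting the impulse and step responses, as recommended, takes only a few lines (illustrative example with an arbitrary Butterworth design, not a filter from the paper):

```python
# Compute a filter's impulse and step responses so they can be plotted
# as an inset alongside the filtered data.
import numpy as np
from scipy.signal import butter, lfilter

fs = 500.0                          # sampling rate, Hz (arbitrary example)
b, a = butter(4, 30.0 / (fs / 2))   # 4th-order low-pass, 30 Hz cutoff

n = 200
impulse = np.zeros(n)
impulse[0] = 1.0
step = np.ones(n)
impulse_response = lfilter(b, a, impulse)
step_response = lfilter(b, a, step)
# e.g. plt.plot(np.arange(n) / fs, impulse_response) for the inset figure
```

The responses make the filter's temporal smearing and ringing visible at a glance, which is exactly the information a cutoff frequency alone hides.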


Subjects
Artifacts, Data Accuracy, Computer-Assisted Signal Processing, Signal-to-Noise Ratio, Causality, Fourier Analysis, Humans, Neurosciences, Wavelet Analysis
10.
Neuroimage ; 196: 237-247, 2019 08 01.
Article in English | MEDLINE | ID: mdl-30991126

ABSTRACT

Humans comprehend speech despite various challenges, such as mispronunciation and noisy environments. Our auditory system is robust to these thanks to the integration of the sensory input with prior knowledge and expectations built on language-specific regularities. One such regularity concerns the permissible phoneme sequences, which determine the likelihood that a word belongs to a given language (phonotactic probability; "blick" is more likely to be an English word than "bnick"). Previous research demonstrated that violations of these rules modulate brain-evoked responses. However, several fundamental questions remain unresolved, especially regarding the neural encoding and integration strategy of phonotactics in naturalistic conditions, when there are no (or few) violations. Here, we used linear modelling to assess the influence of phonotactic probabilities on the brain responses to narrative speech measured with non-invasive EEG. We found that the relationship between continuous speech and EEG responses is best described when the stimulus descriptor includes phonotactic probabilities. This indicates that low-frequency cortical signals (<9 Hz) reflect the integration of phonotactic information during natural speech perception, providing us with a measure of phonotactic processing at the individual-subject level. Furthermore, phonotactics-related signals showed the strongest speech-EEG interactions at latencies of 100-500 ms, supporting a pre-lexical role of phonotactic information.
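A toy version of phonotactic probability (assumed formulation: summed log bigram probabilities over a tiny lexicon, with letters standing in for phonemes; the study used a proper phonemic model):

```python
# Toy phonotactic probability: how likely is a sound sequence given the
# bigram statistics of a (tiny, illustrative) lexicon?
import math

lexicon = ["blick", "black", "block", "brick", "click"]
bigrams = {}
for word in lexicon:
    w = "#" + word                 # '#' marks word onset
    for i in range(len(w) - 1):
        pair = w[i:i + 2]
        bigrams[pair] = bigrams.get(pair, 0) + 1
total = sum(bigrams.values())

def phonotactic_logprob(word, floor=0.5):
    """Sum of log bigram probabilities, with add-0.5 smoothing for unseen pairs."""
    w = "#" + word
    return sum(math.log((bigrams.get(w[i:i + 2], 0) + floor) / (total + floor))
               for i in range(len(w) - 1))

# "blick" obeys the toy lexicon's phonotactics; "bnick" contains unseen bigrams.
likely = phonotactic_logprob("blick")
unlikely = phonotactic_logprob("bnick")
```

Feeding such a value per phoneme into the stimulus descriptor is what lets a linear model test whether the EEG tracks phonotactic information.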


Subjects
Cerebral Cortex/physiology, Phonetics, Speech Perception/physiology, Acoustic Stimulation, Adult, Auditory Evoked Potentials, Female, Humans, Male, Young Adult
11.
Neuroimage ; 186: 728-740, 2019 02 01.
Article in English | MEDLINE | ID: mdl-30496819

ABSTRACT

Brain data recorded with electroencephalography (EEG), magnetoencephalography (MEG) and related techniques often have poor signal-to-noise ratios due to the presence of multiple competing sources and artifacts. A common remedy is to average responses over repeats of the same stimulus, but this is not applicable for temporally extended stimuli that are presented only once (speech, music, movies, natural sound). An alternative is to average responses over multiple subjects that were presented with identical stimuli, but differences in geometry of brain sources and sensors reduce the effectiveness of this solution. Multiway canonical correlation analysis (MCCA) brings a solution to this problem by allowing data from multiple subjects to be fused in such a way as to extract components common to all. This paper reviews the method, offers application examples that illustrate its effectiveness, and outlines the caveats and risks entailed by the method.
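The two steps of MCCA described above, whitening each subject's data and then applying PCA to the concatenation, can be sketched on toy data (dimensions and SNR are arbitrary assumptions):

```python
# MCCA sketch: extract the component common to all subjects.
import numpy as np

rng = np.random.default_rng(2)
nt, nch, nsubj = 1000, 5, 3
shared = np.sin(2 * np.pi * 7 * np.arange(nt) / nt)       # component common to all

subjects = []
for _ in range(nsubj):
    mixing = rng.standard_normal(nch)                     # subject-specific geometry
    subjects.append(np.outer(shared, mixing) + rng.standard_normal((nt, nch)))

# Step 1: whiten each subject's data (PCA, unit-norm components).
whitened = []
for x in subjects:
    u, s, vt = np.linalg.svd(x - x.mean(0), full_matrices=False)
    whitened.append(u)

# Step 2: PCA of the concatenation; the first component maximizes the
# summed correlation across subjects, i.e. the shared signal.
big = np.hstack(whitened)
u, s, vt = np.linalg.svd(big - big.mean(0), full_matrices=False)
common = u[:, 0]

score = float(abs(np.corrcoef(common, shared)[0, 1]))
```

Differences in sensor geometry per subject are absorbed by the per-subject whitening, which is why a plain average across subjects would fail here while MCCA succeeds.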


Subjects
Brain/physiology, Statistical Data Interpretation, Electroencephalography/methods, Magnetoencephalography/methods, Theoretical Models, Adult, Humans
12.
Front Neurosci ; 12: 531, 2018.
Article in English | MEDLINE | ID: mdl-30131670

ABSTRACT

The decoding of selective auditory attention from noninvasive electroencephalogram (EEG) data is of interest in brain computer interface and auditory perception research. The current state-of-the-art approaches for decoding the attentional selection of listeners are based on linear mappings between features of sound streams and EEG responses (forward model), or vice versa (backward model). It has been shown that when the envelope of attended speech and EEG responses are used to derive such mapping functions, the model estimates can be used to discriminate between attended and unattended talkers. However, the predictive/reconstructive performance of the models is dependent on how the model parameters are estimated. There exist a number of model estimation methods that have been published, along with a variety of datasets. It is currently unclear if any of these methods perform better than others, as they have not yet been compared side by side on a single standardized dataset in a controlled fashion. Here, we present a comparative study of the ability of different estimation methods to classify attended speakers from multi-channel EEG data. The performance of the model estimation methods is evaluated using different performance metrics on a set of labeled EEG data from 18 subjects listening to mixtures of two speech streams. We find that when forward models predict the EEG from the attended audio, regularized models do not improve regression or classification accuracies. When backward models decode the attended speech from the EEG, regularization provides higher regression and classification accuracies.
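A regularized backward model of the kind compared in the study can be sketched as ridge regression on toy data (the parameters, noise level, and envelope representation are illustrative assumptions, not any of the benchmarked implementations):

```python
# Backward (decoding) model with ridge regularization: reconstruct the
# attended envelope from multichannel "EEG", then classify attention.
import numpy as np

rng = np.random.default_rng(3)
nt, nch = 3000, 16
attended = rng.standard_normal(nt)
unattended = rng.standard_normal(nt)
forward = rng.standard_normal(nch)                 # toy forward mixing
eeg = np.outer(attended, forward) + 4 * rng.standard_normal((nt, nch))

# w = (X'X + lambda I)^{-1} X'y, estimated on a training half.
lam = 1e2
Xtr, ytr = eeg[: nt // 2], attended[: nt // 2]
w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(nch), Xtr.T @ ytr)

# Attention decoding on the held-out half: compare reconstruction
# correlations with the attended vs the unattended envelope.
rec = eeg[nt // 2:] @ w
r_att = float(np.corrcoef(rec, attended[nt // 2:])[0, 1])
r_unatt = float(np.corrcoef(rec, unattended[nt // 2:])[0, 1])
decoded = "attended" if r_att > r_unatt else "unattended"
```

The study's finding is about where regularization matters: here, in the backward direction, lambda controls how the many noisy EEG channels are combined.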

13.
Neuroimage ; 172: 903-912, 2018 05 15.
Article in English | MEDLINE | ID: mdl-29448077

ABSTRACT

Electroencephalography (EEG), magnetoencephalography (MEG) and related techniques are prone to glitches, slow drift, steps, etc., that contaminate the data and interfere with the analysis and interpretation. These artifacts are usually addressed in a preprocessing phase that attempts to remove them or minimize their impact. This paper offers a set of useful techniques for this purpose: robust detrending, robust rereferencing, outlier detection, data interpolation (inpainting), step removal, and filter ringing artifact removal. These techniques provide a less wasteful alternative to discarding corrupted trials or channels, and they are relatively immune to artifacts that disrupt alternative approaches such as filtering. Robust detrending allows slow drifts and common mode signals to be factored out while avoiding the deleterious effects of glitches. Robust rereferencing reduces the impact of artifacts on the reference. Inpainting allows corrupt data to be interpolated from intact parts based on the correlation structure estimated over the intact parts. Outlier detection allows the corrupt parts to be identified. Step removal fixes the high-amplitude flux jump artifacts that are common with some MEG systems. Ringing removal allows the ringing response of the antialiasing filter to glitches (steps, pulses) to be suppressed. The performance of the methods is illustrated and evaluated using synthetic data and data from real EEG and MEG systems. These methods, which are mainly automatic and require little tuning, can greatly improve the quality of the data.
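Robust detrending, for example, can be sketched as an iteratively reweighted polynomial fit (a simplified stand-in for the toolbox implementation; order, iterations, and threshold are assumed values):

```python
# Robust detrending: fit a slow polynomial trend while downweighting
# glitch samples, so the glitch does not corrupt the trend estimate.
import numpy as np

def robust_detrend(x, order=3, iters=3, thresh=3.0):
    t = np.linspace(-1, 1, len(x))
    basis = np.vander(t, order + 1)           # polynomial basis
    w = np.ones(len(x))                       # per-sample weights
    for _ in range(iters):
        coef, *_ = np.linalg.lstsq(basis * w[:, None], x * w, rcond=None)
        resid = x - basis @ coef
        sigma = np.std(resid[w > 0]) + 1e-12
        w = (np.abs(resid) < thresh * sigma).astype(float)   # drop outliers
    return x - basis @ coef

rng = np.random.default_rng(4)
n = 1000
drift = np.linspace(0, 5, n) ** 2 / 5         # slow quadratic drift
x = drift + 0.1 * rng.standard_normal(n)
x[300:310] += 50                              # a large glitch
detrended = robust_detrend(x)
```

The drift is removed cleanly because the glitch samples are excluded from the fit; a plain least-squares detrend would bend the trend toward the glitch and leave residual drift everywhere else.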


Subjects
Artifacts, Brain/physiology, Electroencephalography/methods, Magnetoencephalography/methods, Computer-Assisted Signal Processing, Brain Mapping/methods, Humans
14.
Neuroimage ; 172: 206-216, 2018 05 15.
Article in English | MEDLINE | ID: mdl-29378317

ABSTRACT

The relation between a stimulus and the evoked brain response can shed light on perceptual processes within the brain. Signals derived from this relation can also be harnessed to control external devices for Brain Computer Interface (BCI) applications. While the classic event-related potential (ERP) is appropriate for isolated stimuli, more sophisticated "decoding" strategies are needed to address continuous stimuli such as speech, music or environmental sounds. Here we describe an approach based on Canonical Correlation Analysis (CCA) that finds the optimal transform to apply to both the stimulus and the response to reveal correlations between the two. Compared to prior methods based on forward or backward models for stimulus-response mapping, CCA finds significantly higher correlation scores, thus providing increased sensitivity to relatively small effects, and supports classifier schemes that yield higher classification scores. CCA strips the brain response of variance unrelated to the stimulus, and the stimulus representation of variance that does not affect the response, and thus improves observations of the relation between stimulus and response.
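The core of CCA, whitening both sides and correlating the whitened bases, fits in a few lines (a bare-bones sketch without the multichannel lag structure used in the paper):

```python
# Minimal CCA: first pair of canonical correlates via whitening + SVD.
import numpy as np

def cca_first(X, Y):
    """First canonical correlation of column-centered X and Y."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    ux, _, _ = np.linalg.svd(X, full_matrices=False)   # whitened basis of X
    uy, _, _ = np.linalg.svd(Y, full_matrices=False)   # whitened basis of Y
    u, s, vt = np.linalg.svd(ux.T @ uy)                # correlate whitened data
    return ux @ u[:, 0], uy @ vt[0], float(s[0])

rng = np.random.default_rng(5)
nt = 2000
latent = rng.standard_normal(nt)                       # shared stimulus-response signal
stim = np.outer(latent, rng.standard_normal(4)) + rng.standard_normal((nt, 4))
resp = np.outer(latent, rng.standard_normal(8)) + 2 * rng.standard_normal((nt, 8))
a, b, r = cca_first(stim, resp)
```

Each side is stripped of variance irrelevant to the other, which is why the canonical correlation r exceeds any correlation obtainable from a single raw channel pair.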


Subjects
Brain Mapping/methods, Brain/physiology, Computer-Assisted Signal Processing, Acoustic Stimulation, Electroencephalography/methods, Auditory Evoked Potentials/physiology, Humans, Magnetoencephalography/methods
15.
Hear Res ; 358: 98-110, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29107413

ABSTRACT

The auditory system processes temporal information at multiple scales, and disruptions to this temporal processing may lead to deficits in auditory tasks such as detecting and discriminating sounds in a noisy environment. Here, a modelling approach is used to study the temporal regularity of firing by chopper cells in the ventral cochlear nucleus, in both the normal and impaired auditory system. Chopper cells, which have a strikingly regular firing response, divide into two classes, sustained and transient, based on the time course of this regularity. Several hypotheses have been proposed to explain the behaviour of chopper cells, and the difference between sustained and transient cells in particular. However, there is no conclusive evidence so far. Here, a reduced mathematical model is developed and used to compare and test a wide range of hypotheses with a limited number of parameters. Simulation results show a continuum of cell types and behaviours: chopper-like behaviour arises for a wide range of parameters, suggesting that multiple mechanisms may underlie this behaviour. The model accounts for systematic trends in regularity as a function of stimulus level that have previously only been reported anecdotally. Finally, the model is used to predict the effects of a reduction in the number of auditory nerve fibres (deafferentation due to, for example, cochlear synaptopathy). An interactive version of this paper in which all the model parameters can be changed is available online.
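A generic sketch of the modelling approach (not the paper's reduced model; all parameters are assumed): a leaky integrate-and-fire unit driven by many independent input fibres fires with the low interspike-interval variability characteristic of chopper cells.

```python
# Leaky integrate-and-fire neuron with convergent Poisson input:
# averaging over many fibres yields regular, chopper-like firing.
import numpy as np

rng = np.random.default_rng(9)
dt, tau, thresh = 1e-4, 5e-3, 1.0
n_steps, n_fibres, rate = 5000, 60, 150.0          # 0.5 s, 60 fibres at 150 sp/s

drive = rng.random((n_steps, n_fibres)) < rate * dt   # Poisson input spikes
v, spikes = 0.0, []
for i in range(n_steps):
    v += dt / tau * (-v) + 0.04 * drive[i].sum()      # leak + summed input
    if v >= thresh:
        spikes.append(i * dt)
        v = 0.0                                       # reset after a spike

isi = np.diff(spikes)
cv = float(isi.std() / isi.mean())    # coefficient of variation: choppers << 1
```

Varying input count, synaptic strength, and membrane time constant in such a model is one way to explore the sustained-to-transient continuum the paper describes.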

16.
J Acoust Soc Am ; 142(5): 3047, 2017 11.
Article in English | MEDLINE | ID: mdl-29195443

ABSTRACT

Studies that measure pitch discrimination relate a subject's response on each trial to the stimuli presented on that trial, but there is evidence that behavior depends also on earlier stimulation. Here, listeners heard a sequence of tones and reported after each tone whether it was higher or lower in pitch than the previous tone. Frequencies were determined by an adaptive staircase targeting 75% correct, with interleaved tracks to ensure independence between consecutive frequency changes. Responses for this specific task were predicted by a model that took into account the frequency interval on the current trial, as well as the interval and response on the previous trial. This model was superior to simpler models. The dependence on the previous interval was positive (assimilative) for all subjects, consistent with persistence of the sensory trace. The dependence on the previous response was either positive or negative, depending on the subject, consistent with a subject-specific suboptimal response strategy. It is argued that a full stimulus + response model is necessary to account for effects of stimulus history and obtain an accurate estimate of sensory noise.
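A staircase that targets 75% correct can be implemented with Kaernbach's weighted up-down rule (whether the study used this exact rule is an assumption; the psychometric function below is a toy listener):

```python
# Weighted up-down staircase: step up = 3 * step down converges where
# p(correct) * down = p(incorrect) * up, i.e. at 75% correct.
import numpy as np

rng = np.random.default_rng(6)

def p_correct(delta):
    """Toy psychometric function: 75% correct at delta = 0.5 semitones."""
    return 0.5 + 0.5 / (1.0 + np.exp(-(delta - 0.5) / 0.1))

delta = 2.0                     # starting frequency interval, semitones
down, up = 0.05, 0.15           # up = 3 * down  ->  converges at p = 0.75
history = []
for _ in range(400):
    correct = rng.random() < p_correct(delta)
    delta = max(delta - down, 0.01) if correct else delta + up
    history.append(delta)

converged = float(np.mean(history[-200:]))   # estimate of the 75% threshold
```

Interleaving two or more such tracks, as the study did, decorrelates consecutive frequency changes so that history effects can be modelled rather than confounded.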


Subjects
Psychological Discrimination, Judgment, Pitch Discrimination, Acoustic Stimulation, Psychological Adaptation, Adult, Pure-Tone Audiometry, Female, Humans, Male, Psychoacoustics, Young Adult
17.
J Acoust Soc Am ; 142(1): 167, 2017 07.
Article in English | MEDLINE | ID: mdl-28764422

ABSTRACT

Studies that measure frequency discrimination often use 2, 3, or 4 tones per trial. This paper investigates a two-alternative forced choice (2AFC) task in which each tone of a series is judged relative to the previous tone ("sliding 2AFC"). Potential advantages are a greater yield (number of responses per unit time), and a more uniform history of stimulation for the study of context effects, or to relate time-varying performance to cortical activity. The new task was evaluated relative to a classic 2-tone-per-trial 2AFC task with similar stimulus parameters. For each task, conditions with different stimulus parameters were compared. The main results were as follows: (1) thresholds did not differ significantly between tasks when similar parameters were used. (2) Thresholds did differ between conditions for the new task, showing a deleterious effect of inserting relatively large steps in the frequency sequence. (3) Thresholds also differed between conditions for the classic task, showing an advantage for a fixed frequency standard. There was no indication that results were more variable with either task, and no reason was found not to use the new sliding 2AFC task in lieu of the classic 2-tone-per-trial 2AFC task.

18.
J Neurosci Methods ; 262: 14-20, 2016 Mar 15.
Article in English | MEDLINE | ID: mdl-26778608

ABSTRACT

BACKGROUND: Muscle artifacts and electrode noise are an obstacle to interpretation of EEG and other electrophysiological signals. They are often channel-specific and do not fully benefit from component analysis techniques such as ICA, and their presence reduces the dimensionality needed by those techniques. Their high-frequency content may mask or masquerade as gamma band cortical activity. NEW METHOD: The sparse time artifact removal (STAR) algorithm removes artifacts that are sparse in space and time. The time axis is partitioned into an artifact-free and an artifact-contaminated part, and the correlation structure of the data is estimated from the covariance matrix of the artifact-free part. Artifacts are then corrected by projection of each channel onto the subspace spanned by the other channels. RESULTS: The method is evaluated with both simulated and real data, and found to be highly effective in removing or attenuating typical channel-specific artifacts. COMPARISON WITH EXISTING METHODS: In contrast to the widespread practice of trial removal or channel removal or interpolation, very few data are lost. In contrast to ICA or other linear techniques, processing is local in time and affects only the artifact part, so most of the data are identical to the unprocessed data and the full dimensionality of the data is preserved. CONCLUSIONS: STAR complements other linear component analysis techniques, and can enhance their ability to discover weak sources of interest by increasing the number of effective noise-free channels.
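The projection step at the heart of STAR can be sketched for a single known bad channel (simplified: the published algorithm detects the artifact samples automatically and handles multiple channels):

```python
# STAR-style repair: estimate a contaminated channel from the other channels,
# using the correlation structure learned on the artifact-FREE samples,
# and overwrite only the contaminated samples.
import numpy as np

rng = np.random.default_rng(7)
nt, nch = 2000, 6
source = rng.standard_normal(nt)
mixing = rng.standard_normal(nch)
data = np.outer(source, mixing) + 0.3 * rng.standard_normal((nt, nch))

bad = data.copy()
artifact_idx = np.arange(500, 600)            # artifact-contaminated samples
bad[artifact_idx, 0] += 20 * rng.standard_normal(len(artifact_idx))

clean_idx = np.setdiff1d(np.arange(nt), artifact_idx)
X = bad[clean_idx][:, 1:]                     # other channels, clean part
y = bad[clean_idx, 0]                         # bad channel, clean part
w, *_ = np.linalg.lstsq(X, y, rcond=None)     # cross-channel regression
fixed = bad.copy()
fixed[artifact_idx, 0] = bad[artifact_idx][:, 1:] @ w
```

Because processing is local in time, every sample outside `artifact_idx` is bit-identical to the input, and the data keep their full dimensionality.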


Subjects
Artifacts, Brain Mapping, Brain/physiology, Computer-Assisted Signal Processing, Algorithms, Electroencephalography, Humans
19.
J Neural Eng ; 12(6): 066020, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26501393

ABSTRACT

OBJECTIVE: Oscillations are an important aspect of brain activity, but they often have a low signal-to-noise ratio (SNR) due to source-to-electrode mixing with competing brain activity and noise. Filtering can improve the SNR of narrowband signals, but it introduces ringing effects that may masquerade as genuine oscillations, leading to uncertainty as to the true oscillatory nature of the phenomena. Likewise, time-frequency analysis kernels have a temporal extent that blurs the time course of narrowband activity, introducing uncertainty as to timing and causal relations between events and/or frequency bands. APPROACH: Here, we propose a methodology that reveals narrowband activity within multichannel data such as electroencephalography, magnetoencephalography, electrocorticography or local field potential. The method exploits the between-channel correlation structure of the data to suppress competing sources by joint diagonalization of the covariance matrices of narrowband filtered and unfiltered data. MAIN RESULTS: Applied to synthetic and real data, the method effectively extracts narrowband components at unfavorable SNR. SIGNIFICANCE: Oscillatory components of brain activity, including weak sources that are hard or impossible to observe using standard methods, can be detected and their time course plotted accurately. The method avoids the temporal artifacts of standard filtering and time-frequency analysis methods with which it remains complementary.
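The joint-diagonalization step reduces to a generalized eigendecomposition of the narrowband-filtered covariance against the raw covariance (toy sketch; the oscillation frequency, SNR, and filter design are assumptions):

```python
# Extract a weak narrowband component by generalized eigendecomposition
# of filtered vs unfiltered covariance matrices.
import numpy as np
from scipy.linalg import eigh
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(8)
fs, nt, nch = 250, 5000, 8
t = np.arange(nt) / fs
osc = np.sin(2 * np.pi * 10 * t)                 # weak 10 Hz oscillation
mixing = rng.standard_normal(nch)
data = 0.5 * np.outer(osc, mixing) + rng.standard_normal((nt, nch))

b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
narrow = filtfilt(b, a, data, axis=0)            # zero-phase narrowband copy

C_raw = data.T @ data / nt
C_nb = narrow.T @ narrow / nt
evals, evecs = eigh(C_nb, C_raw)                 # joint diagonalization
w = evecs[:, -1]                                 # max narrowband-to-total ratio
component = data @ w                             # applied to the RAW data

score = float(abs(np.corrcoef(component, osc)[0, 1]))
```

Crucially, the spatial filter is applied to the unfiltered data, so the recovered time course is free of the filter's ringing and temporal smearing.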


Subjects
Brain Waves/physiology, Brain/physiology, Magnetoencephalography/methods, Acoustic Stimulation/methods, Electroencephalography/methods, Humans
20.
J Neurophysiol ; 112(12): 3053-65, 2014 Dec 15.
Article in English | MEDLINE | ID: mdl-25231619

ABSTRACT

In animal models, single-neuron response properties such as stimulus-specific adaptation have been described as possible precursors to mismatch negativity, a human brain response to stimulus change. In the present study, we attempted to bridge the gap between human and animal studies by characterising responses to changes in the frequency of repeated tone series in the anesthetised guinea pig using small-animal magnetoencephalography (MEG). We showed that 1) auditory evoked fields (AEFs) qualitatively similar to those observed in human MEG studies can be detected noninvasively in rodents using small-animal MEG; 2) guinea pig AEF amplitudes reduce rapidly with tone repetition, and this AEF reduction is largely complete by the second tone in a repeated series; and 3) differences between responses to the first (deviant) and later (standard) tones after a frequency transition resemble those previously observed in awake humans using a similar stimulus paradigm.


Subjects
Auditory Perception/physiology, Auditory Evoked Potentials, Magnetoencephalography, Neural Inhibition, Acoustic Stimulation, Animals, Guinea Pigs, Humans, Male