Results 1 - 18 of 18
1.
PLoS Comput Biol ; 19(11): e1011595, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37934766

ABSTRACT

Natural speech perception requires processing the ongoing acoustic input while keeping in mind the preceding one and predicting the next. This complex computational problem could be handled by a dynamic multi-timescale hierarchical inferential process that coordinates the information flow up and down the language network hierarchy. Using a predictive coding computational model (Precoss-ß) that identifies online individual syllables from continuous speech, we address the advantage of a rhythmic modulation of up and down information flows, and whether beta oscillations could be optimal for this. In the model, and consistent with experimental data, theta and low-gamma neural frequency scales ensure syllable-tracking and phoneme-level speech encoding, respectively, while the beta rhythm is associated with inferential processes. We show that a rhythmic alternation of bottom-up and top-down processing regimes improves syllable recognition, and that optimal efficacy is reached when the alternation of bottom-up and top-down regimes, via oscillating prediction error precisions, is in the beta range (around 20-30 Hz). These results not only demonstrate the advantage of a rhythmic alternation of up- and down-going information, but also that the low-beta range is optimal given sensory analysis at theta and low-gamma scales. While specific to speech processing, the notion of alternating bottom-up and top-down processes with frequency multiplexing might generalize to other cognitive architectures.
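The alternation mechanism described above can be caricatured in a few lines: a gain oscillating at a beta frequency arbitrates between trusting bottom-up evidence and trusting the top-down prediction. This is a toy sketch, not the Precoss-ß model itself; the names, values, and blending rule are all illustrative.

```python
import math

def precision_gain(t, f_beta=25.0):
    """Oscillating precision weight in [0, 1] at a beta frequency (Hz)."""
    return 0.5 * (1.0 + math.sin(2.0 * math.pi * f_beta * t))

def update_estimate(estimate, sensory, prior, t, lr=0.5):
    """One inference step: beta phase arbitrates between bottom-up
    evidence (sensory) and the top-down prediction (prior)."""
    g = precision_gain(t)                     # high g -> trust sensory input
    target = g * sensory + (1.0 - g) * prior  # blended teaching signal
    return estimate + lr * (target - estimate)

# The estimate drifts toward the sensory value at beta peaks
# and back toward the prior at beta troughs.
est = 0.0
dt = 0.001
for step in range(1000):          # 1 s of simulated time
    est = update_estimate(est, sensory=1.0, prior=0.2, t=step * dt)
```

The point of the sketch is only that precision (here, `g`) can rhythmically reweight the two information flows rather than mixing them at a fixed ratio.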


Subjects
Speech Perception; Speech; Beta Rhythm; Language; Recognition, Psychology
2.
PLoS Biol ; 21(3): e3002046, 2023 03.
Article in English | MEDLINE | ID: mdl-36947552

ABSTRACT

Understanding speech requires mapping fleeting and often ambiguous soundwaves to meaning. While humans are known to exploit their capacity to contextualize to facilitate this process, how internal knowledge is deployed online remains an open question. Here, we present a model that extracts multiple levels of information from continuous speech online. The model applies linguistic and nonlinguistic knowledge to speech processing, by periodically generating top-down predictions and incorporating bottom-up incoming evidence in a nested temporal hierarchy. We show that a nonlinguistic context level provides semantic predictions informed by sensory inputs, which are crucial for disambiguating among multiple meanings of the same word. The explicit knowledge hierarchy of the model enables a more holistic account of the neurophysiological responses to speech compared to using lexical predictions generated by a neural network language model (GPT-2). We also show that hierarchical predictions reduce peripheral processing via minimizing uncertainty and prediction error. With this proof-of-concept model, we demonstrate that the deployment of hierarchical predictions is a possible strategy for the brain to dynamically utilize structured knowledge and make sense of the speech input.


Subjects
Comprehension; Speech Perception; Humans; Comprehension/physiology; Speech; Speech Perception/physiology; Brain/physiology; Language
3.
Sci Rep ; 10(1): 18009, 2020 10 22.
Article in English | MEDLINE | ID: mdl-33093570

ABSTRACT

In face-to-face communication, audio-visual (AV) stimuli can be fused, combined or perceived as mismatching. While the left superior temporal sulcus (STS) is presumably the locus of AV integration, the process leading to combination is unknown. Based on previous modelling work, we hypothesize that combination results from a complex dynamic originating in a failure to integrate AV inputs, followed by a reconstruction of the most plausible AV sequence. In two different behavioural tasks and one MEG experiment, we observed that combination is more time-demanding than fusion. Using time-/source-resolved human MEG analyses with linear and dynamic causal models, we show that both fusion and combination involve early detection of AV incongruence in the STS, whereas combination is further associated with enhanced activity of AV asynchrony-sensitive regions (auditory and inferior frontal cortices). Based on neural signal decoding, we finally show that only combination can be decoded from IFG activity, and that combination is decoded later than fusion in the STS. These results indicate that the AV speech integration outcome primarily depends on whether or not the STS converges onto an existing multimodal syllable representation, and that combination results from subsequent temporal processing, presumably the off-line re-ordering of incongruent AV stimuli.

4.
Nat Commun ; 11(1): 3117, 2020 06 19.
Article in English | MEDLINE | ID: mdl-32561726

ABSTRACT

On-line comprehension of natural speech requires segmenting the acoustic stream into discrete linguistic elements. This process is argued to rely on theta-gamma oscillation coupling, which can parse syllables and encode them in decipherable neural activity. Speech comprehension also strongly depends on contextual cues that help predict speech structure and content. To explore the effects of theta-gamma coupling on bottom-up/top-down dynamics during on-line syllable identification, we designed a computational model (Precoss: predictive coding and oscillations for speech) that can recognise syllable sequences in continuous speech. The model uses predictions from internal spectro-temporal representations of syllables, together with theta oscillations that signal syllable onsets and durations. Syllable recognition is best when theta-gamma coupling is used to temporally align spectro-temporal predictions with the acoustic input. This neurocomputational modelling work demonstrates that the notions of predictive coding and neural oscillations can be brought together to account for on-line dynamic sensory processing.
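The theta-gamma nesting invoked here is an instance of phase-amplitude coupling, which can be illustrated with a toy signal. The assumed frequencies (5 Hz theta, 40 Hz low gamma) and the cosine envelope are illustrative; this is not the Precoss model, only the coupling concept it builds on.

```python
import math

def theta_gamma_signal(t, f_theta=5.0, f_gamma=40.0):
    """Low-gamma carrier whose amplitude is gated by theta phase:
    gamma bursts ride on the excitable phase of each theta cycle."""
    theta_phase = 2.0 * math.pi * f_theta * t
    envelope = 0.5 * (1.0 + math.cos(theta_phase))   # max at theta peak
    return envelope * math.sin(2.0 * math.pi * f_gamma * t)

# Gamma amplitude is largest at the theta peaks (t = 0, 0.2 s, ...)
# and vanishes at the theta troughs (t = 0.1 s, 0.3 s, ...).
samples = [theta_gamma_signal(n / 1000.0) for n in range(1000)]
```

In such a scheme each theta cycle delimits a syllable-sized window, and the gamma cycles nested inside it provide the finer phoneme-scale sampling.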


Subjects
Auditory Cortex/physiology; Gamma Rhythm/physiology; Models, Neurological; Speech Perception/physiology; Theta Rhythm/physiology; Acoustic Stimulation; Comprehension/physiology; Computer Simulation; Cues; Humans; Phonetics
5.
Neuroimage ; 218: 116882, 2020 09.
Article in English | MEDLINE | ID: mdl-32439539

ABSTRACT

Neural oscillations in auditory cortex are argued to support parsing and representing speech constituents at their corresponding temporal scales. Yet, how incoming sensory information interacts with ongoing spontaneous brain activity, what features of the neuronal microcircuitry underlie spontaneous and stimulus-evoked spectral fingerprints, and what these fingerprints entail for stimulus encoding, remain largely open questions. We used a combination of human invasive electrophysiology, computational modeling and decoding techniques to assess the information encoding properties of brain activity and to relate them to a plausible underlying neuronal microarchitecture. We analyzed intracortical auditory EEG activity from 10 patients while they were listening to short sentences. Pre-stimulus neural activity in early auditory cortical regions often exhibited power spectra with a shoulder in the delta range and a small bump in the beta range. Speech decreased power in the beta range, and increased power in the delta-theta and gamma ranges. Using multivariate machine learning techniques, we assessed the spectral profile of information content for two aspects of speech processing: detection and discrimination. We obtained better phase than power information decoding, and a bimodal spectral profile of information content with better decoding at low (delta-theta) and high (gamma) frequencies than at intermediate (beta) frequencies. These experimental data were reproduced by a simple rate model made of two subnetworks with different timescales, each composed of coupled excitatory and inhibitory units, and connected via a negative feedback loop. Modeling and experimental results were similar in terms of pre-stimulus spectral profile (except for the iEEG beta bump), spectral modulations with speech, and spectral profile of information content. Altogether, we provide converging evidence from both univariate spectral analysis and decoding approaches for a dual timescale processing infrastructure in human auditory cortex, and show that it is consistent with the dynamics of a simple rate model.
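A drastically reduced caricature of such a rate model can show the key dynamical ingredient. The study's model has full excitatory-inhibitory pairs per subnetwork; here each subnetwork collapses to a single leaky unit, and all constants are arbitrary. A slow and a fast unit coupled through a negative feedback loop produce a damped resonance that, with these values, falls near 20 Hz (low beta).

```python
def simulate(drive, tau_slow=0.05, tau_fast=0.01, w=3.0, dt=0.001, steps=3000):
    """Euler-integrate two leaky units on different timescales coupled in
    a negative feedback loop (the fast unit inhibits the slow one).  With
    these constants the linearized loop resonates near 20 Hz before
    settling to its fixed point drive/(1 + w**2), w*drive/(1 + w**2)."""
    x_slow, x_fast = 0.0, 0.0
    ei_trace = []
    for _ in range(steps):
        dx_slow = (-x_slow + drive - w * x_fast) / tau_slow
        dx_fast = (-x_fast + w * x_slow) / tau_fast
        x_slow += dt * dx_slow
        x_fast += dt * dx_fast
        ei_trace.append((x_slow, x_fast))
    return ei_trace

# "Stimulus on": both units ring transiently, then settle.
ei_trace = simulate(drive=1.0)
```

Under ongoing noisy drive, a damped resonance of this kind shows up as a bump in the power spectrum, which is the flavor of mechanism the abstract invokes for the beta-range fingerprint.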


Subjects
Auditory Cortex/physiology; Computer Simulation; Speech Perception/physiology; Adult; Electrocorticography; Female; Humans; Male; Signal Processing, Computer-Assisted
6.
Elife ; 9, 2020 03 30.
Article in English | MEDLINE | ID: mdl-32223894

ABSTRACT

Speech perception presumably arises from internal models of how specific sensory features are associated with speech sounds. These features change constantly (e.g., different speakers, articulation modes, etc.), and listeners need to recalibrate their internal models by appropriately weighing new versus old evidence. Models of speech recalibration classically ignore this volatility. The effect of volatility in tasks where sensory cues were associated with arbitrary experimenter-defined categories was well described by models that continuously adapt the learning rate while keeping a single representation of the category. Using neurocomputational modelling, we show that recalibration of natural speech sound categories is better described by representing the latter at different time scales. We illustrate our proposal by modeling fast recalibration of speech sounds after experiencing the McGurk effect. We propose that working representations of speech categories are driven both by their current environment and by their long-term memory representations.


People can distinguish words or syllables even though they may sound different with every speaker. This striking ability reflects the fact that our brain is continually modifying the way we recognise and interpret the spoken word based on what we have heard before, by comparing past experience with the most recent one to update expectations. This phenomenon also occurs in the McGurk effect: an auditory illusion in which someone hears one syllable but sees a person saying another syllable and ends up perceiving a third distinct sound. Abstract models, which provide a functional rather than a mechanistic description of what the brain does, can test how humans use expectations and prior knowledge to interpret the information delivered by the senses at any given moment. Olasagasti and Giraud have now built an abstract model of how brains recalibrate perception of natural speech sounds. By fitting the model with existing experimental data using the McGurk effect, the results suggest that, rather than using a single sound representation that is adjusted with each sensory experience, the brain recalibrates sounds at two different timescales. Over and above slow "procedural" learning, the findings show that there is also rapid recalibration of how different sounds are interpreted. This working representation of speech enables adaptation to changing or noisy environments and illustrates that the process is far more dynamic and flexible than previously thought.
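The two-timescale idea can be sketched with a pair of delta-rule learners. This is a minimal caricature, not the fitted model from the paper; the feature values and learning rates are invented.

```python
def recalibrate(mean_slow, mean_fast, observation, lr_slow=0.01, lr_fast=0.3):
    """One exposure: the fast (working) representation of the category
    shifts quickly toward the new evidence, while the slow (long-term)
    representation barely moves."""
    mean_fast += lr_fast * (observation - mean_fast)
    mean_slow += lr_slow * (observation - mean_slow)
    return mean_slow, mean_fast

# Example: a stored category at feature value 0.0 meets repeated
# McGurk-like evidence at feature value 1.0 (units are made up).
slow, fast = 0.0, 0.0
for _ in range(10):
    slow, fast = recalibrate(slow, fast, observation=1.0)
```

After a few exposures the working representation has moved most of the way to the new evidence while the long-term one has not, which is the qualitative behavior the abstract describes: rapid recalibration on top of slow learning.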


Subjects
Computer Simulation; Phonetics; Speech Perception; Speech/classification; Acoustic Stimulation; Auditory Perception; Humans; Speech/physiology; Time Factors
7.
Cortex ; 68: 61-75, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26009260

ABSTRACT

The McGurk effect is a textbook illustration of the automaticity with which the human brain integrates audio-visual speech. It shows that even incongruent audiovisual (AV) speech stimuli can be combined into percepts that correspond neither to the auditory nor to the visual input, but to a mix of both. Typically, when presented with, e.g., visual /aga/ and acoustic /aba/ we perceive an illusory /ada/. In the inverse situation, however, when acoustic /aga/ is paired with visual /aba/, we perceive a combination of both stimuli, i.e., /abga/ or /agba/. Here we assessed the role of dynamic cross-modal predictions in the outcome of AV speech integration using a computational model that processes continuous audiovisual speech sensory inputs in a predictive coding framework. The model involves three processing levels: sensory units, units that encode the dynamics of stimuli, and multimodal recognition/identity units. The model exhibits a dynamic prediction behavior because evidence about speech tokens can be asynchronous across sensory modality, allowing for updating the activity of the recognition units from one modality while sending top-down predictions to the other modality. We explored the model's response to congruent and incongruent AV stimuli and found that, in the two-dimensional feature space spanned by the speech second formant and lip aperture, fusion stimuli are located in the neighborhood of congruent /ada/, which therefore provides a valid match. Conversely, stimuli that lead to combination percepts do not have a unique valid neighbor. In that case, acoustic and visual cues are both highly salient and generate conflicting predictions in the other modality that cannot be fused, forcing the elaboration of a combinatorial solution. We propose that dynamic predictive mechanisms play a decisive role in the dichotomous perception of incongruent audiovisual inputs.


Subjects
Auditory Perception/physiology; Computer Simulation; Models, Neurological; Sensation/physiology; Speech Perception/physiology; Acoustic Stimulation; Algorithms; Communication Aids for Disabled; Cues; Humans; Lip; Photic Stimulation; Visual Perception/physiology
8.
Front Hum Neurosci ; 9: 171, 2015.
Article in English | MEDLINE | ID: mdl-25870556

ABSTRACT

Subjects with autism often show language difficulties, but it is unclear how they relate to neurophysiological anomalies of cortical speech processing. We used combined EEG and fMRI in 13 subjects with autism and 13 control participants and show that in autism, gamma and theta cortical activity do not engage synergistically in response to speech. Theta activity in left auditory cortex fails to track speech modulations, and to down-regulate gamma oscillations in the group with autism. This deficit predicts the severity of both verbal impairment and autism symptoms in the affected sample. Finally, we found that oscillation-based connectivity between auditory and other language cortices is altered in autism. These results suggest that the verbal disorder in autism could be associated with an altered balance of slow and fast auditory oscillations, and that this anomaly could compromise the mapping between sensory input and higher-level cognitive representations.

9.
Invest Ophthalmol Vis Sci ; 55(4): 2297-306, 2014 Apr 09.
Article in English | MEDLINE | ID: mdl-24595381

ABSTRACT

PURPOSE: The optokinetic system in healthy humans is a negative-feedback system that stabilizes gaze: slow-phase eye movements (i.e., the output signal) minimize retinal slip (i.e., the error signal). A positive-feedback optokinetic system may exist due to the misrouting of optic fibers. Previous studies have shown that, in a zebrafish mutant with a high degree of misrouting, the optokinetic response (OKR) is reversed. As a result, slow-phase eye movements amplify retinal slip, forming a positive-feedback optokinetic loop. The positive-feedback optokinetic system cannot stabilize gaze, thus leading to spontaneous eye oscillations (SEOs). Because the misrouting in human patients (e.g., with albinism or achiasmia) is partial, positive- and negative-feedback loops co-exist. How this co-existence affects human ocular motor behavior remains unclear.

METHODS: We presented a visual environment consisting of two stimuli in different parts of the visual field to healthy subjects. One mimicked positive-feedback optokinetic signals; the other preserved negative-feedback optokinetic signals. By changing the ratio and visual-field position of these stimuli, various optic-nerve misrouting patterns were simulated. Eye-movement responses to stationary and moving stimuli were measured and compared with computer simulations, and the SEOs were correlated with the magnitude of the virtual positive-feedback optokinetic effect.

RESULTS: We found a correlation among the simulated misrouting, the corresponding OKR, and the SEOs in humans. The proportion of simulated misrouting needed to be greater than 50% to reverse the OKR and at least 70% to evoke SEOs. Once SEOs were evoked, their magnitude correlated positively with the strength of the positive-feedback OKR.

CONCLUSIONS: This study provides a mechanism by which the misrouting of optic fibers in humans could lead to SEOs, offering a possible explanation for a subtype of infantile nystagmus syndrome (INS).
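The 50% threshold has a simple signal-flow reading: if a fraction m of the slip signal drives the eye with reversed sign, the effective loop gain scales with (1 - 2m), so the feedback turns positive once m exceeds 0.5. A toy discrete-time simulation (all constants invented, not the study's model) illustrates this:

```python
def okr_eye_velocity(misrouting, stimulus_velocity=10.0, gain=0.5,
                     dt=0.01, steps=500):
    """Toy optokinetic loop: a fraction `misrouting` of the slip signal
    drives the eye with reversed sign, so the effective feedback gain is
    (1 - 2*misrouting) -- the loop flips from negative to positive
    feedback when misrouting exceeds 50%."""
    eye = 0.0
    for _ in range(steps):
        slip = stimulus_velocity - eye               # retinal error signal
        eye += dt * gain * (1.0 - 2.0 * misrouting) * slip
    return eye

normal = okr_eye_velocity(misrouting=0.0)       # tracks the stimulus
reversed_okr = okr_eye_velocity(misrouting=0.7) # runs away from it
```

With no misrouting the eye velocity converges onto the stimulus velocity; with 70% misrouting it diverges in the opposite direction, the instability that in the paper's framework underlies the SEOs.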


Subjects
Feedback, Sensory/physiology; Nystagmus, Optokinetic/physiology; Optic Flow/physiology; Adult; Humans; Middle Aged; Nystagmus, Pathologic/physiopathology; Photic Stimulation; Reference Values; Video Recording; Young Adult
10.
J Physiol ; 592(1): 203-14, 2014 Jan 01.
Article in English | MEDLINE | ID: mdl-24218543

ABSTRACT

The optokinetic reflex (OKR) and the angular vestibulo-ocular reflex (aVOR) complement each other to stabilize images on the retina despite self- or world motion, a joint mechanism that is critical for effective vision. It is currently hypothesized that signals from both systems integrate, in a mathematical sense, in a network of neurons operating as a velocity storage mechanism (VSM). When exposed to a rotating visual surround, subjects display the OKR, slow following eye movements frequently interrupted by fast resetting eye movements. Subsequent to light-off during optokinetic stimulation, eye movements do not stop abruptly, but decay slowly, a phenomenon referred to as the optokinetic after-response (OKAR). The OKAR is most likely generated by the VSM. In this study, we observed the OKAR in developing larval zebrafish before the horizontal aVOR emerged. Our results suggest that the VSM develops prior to and without the need for a functional aVOR. It may be critical to ocular motor control in early development as it increases the efficiency of the OKR.
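The charge-and-discharge behavior attributed to the VSM is that of a leaky integrator: it charges while the optokinetic stimulus is on and discharges exponentially after light-off, producing the OKAR. A minimal sketch (the time constant and unit drive are arbitrary, not values fitted to zebrafish data):

```python
def velocity_storage(retinal_input, tau=2.0, dt=0.01):
    """Leaky integrator: the stored velocity signal charges while the
    stimulus is on and decays exponentially (time constant tau, in s)
    after light-off -- the optokinetic after-response (OKAR)."""
    s, okan_trace = 0.0, []
    for x in retinal_input:
        s += dt * (-s + x) / tau
        okan_trace.append(s)
    return okan_trace

# 10 s of unit optokinetic drive followed by 10 s of darkness.
stimulus = [1.0] * 1000 + [0.0] * 1000
okan_trace = velocity_storage(stimulus)
```

After light-off the stored signal falls by a factor of about e over each time constant, which is the slow decay of eye velocity the abstract describes.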


Subjects
Eye Movements; Locomotion; Reflex, Vestibulo-Ocular; Animals; Larva/physiology; Photic Stimulation; Zebrafish
11.
Neuron ; 79(5): 987-1000, 2013 Sep 04.
Article in English | MEDLINE | ID: mdl-24012010

ABSTRACT

Although many studies have identified neural correlates of memory, relatively little is known about the circuit properties connecting single-neuron physiology to behavior. Here we developed a modeling framework to bridge this gap and identify circuit interactions capable of maintaining short-term memory. Unlike typical studies that construct a phenomenological model and test whether it reproduces select aspects of neuronal data, we directly fit the synaptic connectivity of an oculomotor memory circuit to a broad range of anatomical, electrophysiological, and behavioral data. Simultaneous fits to all data, combined with sensitivity analyses, revealed complementary roles of synaptic and neuronal recruitment thresholds in providing the nonlinear interactions required to generate the observed circuit behavior. This work provides a methodology for identifying the cellular and synaptic mechanisms underlying short-term memory and demonstrates how the anatomical structure of a circuit may belie its functional organization.


Subjects
Memory, Short-Term/physiology; Nerve Net/physiology; Neurons/physiology; Recruitment, Neurophysiological/physiology; Synaptic Transmission/physiology; Animals; Goldfish; Models, Neurological
12.
Eur J Neurosci ; 38(8): 3248-60, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23899270

ABSTRACT

We investigated the effect of eye-in-head and head-on-trunk direction on heading discrimination. Participants were passively translated in darkness along linear trajectories in the horizontal plane deviating 2° or 5° to the right or left of straight-ahead as defined by the subject's trunk. Participants had to report whether the experienced translation was to the right or left of the trunk straight-ahead. In a first set of experiments, the head was centered on the trunk and fixation lights directed the eyes 16° either left or right. Although eye position was not correlated with the direction of translation, rightward reports were more frequent when looking right than when looking left, a shift of the point of subjective equivalence in the direction opposite to eye direction (two of the 38 participants showed the opposite effect). In a second experiment, subjects had to judge the same trunk-referenced trajectories with head-on-trunk deviated 16° left. Comparison with the performance in the head-centered paradigms showed an effect of the head in the same direction as the effect of eye eccentricity. These results can be qualitatively described by biases reflecting statistical regularities present in human behaviors such as the alignment of gaze and path. Given the known effects of gaze on auditory localization and perception of straight-ahead, we also expect contributions from a general influence of gaze on the head-to-trunk reference frame transformations needed to bring motion-related information from the head-centered otoliths into a trunk-referenced representation.


Subjects
Discrimination, Psychological; Eye Movements; Head Movements; Adult; Humans; Locomotion/physiology; Middle Aged; Psychomotor Performance; Visual Perception
13.
PLoS One ; 8(4): e61389, 2013.
Article in English | MEDLINE | ID: mdl-23637824

ABSTRACT

Eccentric gaze in darkness evokes minor centripetal eye drifts in healthy subjects, as cerebellar control sufficiently compensates for the inherent deficiencies of the brainstem gaze-holding network. This behavior is commonly described using a leaky integrator model, which assumes that eye velocity grows linearly with gaze eccentricity. Results from previous studies in patients and healthy subjects suggest caution when this assumption is applied to eye eccentricities larger than 20 degrees. To obtain a detailed characterization of the centripetal gaze-evoked drift, we recorded horizontal eye position in 20 healthy subjects. With their head fixed, they were asked to fixate a flashing dot (50 ms every 2 s) that was displaced quasi-statically (0.5 deg/s) between ±40 deg horizontally in otherwise complete darkness. Drift velocity was weak at all angles tested. Linearity was assessed by dividing the range of gaze eccentricity into four bins of 20 deg each and comparing the slopes of a linear function fitted to the horizontal velocity in each bin. The slopes of single subjects for gaze eccentricities of ±0-20 deg were, in median, 0.41 times the slopes obtained for gaze eccentricities of ±20-40 deg. By smoothing the individual subjects' eye velocity as a function of gaze eccentricity, we derived a population of position-velocity curves. We show that a tangent function provides a better fit to the mean of these curves when large eccentricities are considered. This implies that the quasi-linear behavior within the typical ocular motor range is the result of a tuning procedure that is optimized in the most commonly used range of gaze. We hypothesize that the observed non-linearity at eccentric gaze results from a saturation of the input that each neuron in the integrating network receives from the others. As a consequence, gaze-holding performance declines more rapidly at large eccentricities.
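The two competing position-velocity relationships can be written down directly. The time constant and the tangent's scale parameter below are illustrative, not the values fitted in the study; the only claim is the qualitative one, that the tangent model is near-linear close to straight-ahead but steepens at large eccentricities.

```python
import math

def drift_linear(ecc_deg, tau=20.0):
    """Leaky-integrator prediction: centripetal drift velocity (deg/s)
    grows linearly with eccentricity; the slope is 1/tau."""
    return -ecc_deg / tau

def drift_tangent(ecc_deg, tau=20.0, e0_deg=45.0):
    """Tangent alternative: matches the linear model near straight-ahead
    but drift accelerates at large eccentricities (e0_deg sets the bend)."""
    return -(e0_deg / tau) * math.tan(ecc_deg / e0_deg)

# Near straight-ahead the two models almost coincide...
small = abs(drift_tangent(5.0) - drift_linear(5.0))
# ...but the tangent model's slope is steeper in the 20-40 deg bin
inner_slope = abs(drift_tangent(20.0)) / 20.0
outer_slope = abs(drift_tangent(40.0) - drift_tangent(20.0)) / 20.0
```

Comparing `inner_slope` with `outer_slope` mirrors the bin-wise slope comparison used in the study to reject strict linearity.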


Subjects
Fixation, Ocular/physiology; Health; Adult; Aged; Computer Simulation; Female; Humans; Male; Middle Aged; Neurons/physiology; Nonlinear Dynamics; Synapses/physiology; Young Adult
14.
Cerebellum ; 12(1): 97-107, 2013 Feb.
Article in English | MEDLINE | ID: mdl-22777507

ABSTRACT

Vestibular velocity storage enhances the efficacy of the angular vestibulo-ocular reflex (VOR) during relatively low-frequency head rotations. This function is modulated by GABA-mediated inhibitory cerebellar projections. Velocity storage also exists in the perceptual pathway and follows functional principles similar to those of the VOR, but it is not known whether the neural substrates for perception and the VOR overlap. We propose two possibilities: first, the same velocity storage serves both the VOR and perception; second, there are nonoverlapping neural networks, one involved in perception and the other in the VOR. We investigated these possibilities by measuring VOR and perceptual responses in healthy human subjects during whole-body, constant-velocity rotation steps about all three dimensions (yaw, pitch, and roll) before and after 10 mg of 4-aminopyridine (4-AP). 4-AP, a selective blocker of inward-rectifier potassium conductance, can increase the synchronization and precision of Purkinje neuron discharge and possibly enhance GABAergic action. Hence 4-AP could reduce the decay time constant of the perceived angular velocity and of the VOR. We found that 4-AP reduced the decay time constant, but the amount of reduction in the two processes, perception and VOR, was not the same, suggesting nonoverlapping or partially overlapping neural substrates for the VOR and perception. We also noted that, unlike the VOR, the perceived angular velocity gradually built up and plateaued prior to decaying. Hence, the perceptual pathway may have an additional mechanism that shapes the dynamics of perceived angular velocity beyond the velocity storage. 4-AP had no effect on the duration of this build-up, suggesting that the higher-order processing of perception, beyond the velocity storage, may not depend on mechanisms that 4-AP can influence.


Subjects
4-Aminopyridine/administration & dosage; Cerebellum/physiology; Eye Movements/physiology; Motion Perception/physiology; Reflex, Vestibulo-Ocular/physiology; Adult; Brain Stem/physiology; Cerebellum/drug effects; Female; Humans; Male; Middle Aged; Motion Perception/drug effects; Potassium Channel Blockers/administration & dosage; Purkinje Cells/drug effects; Purkinje Cells/physiology; Reflex, Vestibulo-Ocular/drug effects; Rotation; Vestibule, Labyrinth/drug effects; Vestibule, Labyrinth/physiology
15.
J Neurophysiol ; 107(11): 3095-106, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22442575

ABSTRACT

Gravicentric visual alignments become less precise when the head is roll-tilted relative to gravity, most likely due to decreasing otolith sensitivity. To align a luminous line with the perceived gravity vector (gravicentric task) or the perceived body-longitudinal axis (egocentric task), the roll orientation of the line on the retina and the torsional position of the eyes relative to the head must be integrated to obtain the line orientation relative to the head. Whether otolith input contributes to egocentric tasks, and whether the modulation of variability is restricted to vision-dependent paradigms, is unknown. In nine subjects we compared the precision and accuracy of gravicentric and egocentric alignments at various roll positions (upright, 45°, and 75° right-ear down) using a luminous line in darkness (visual paradigm). Trial-to-trial variability doubled for both egocentric and gravicentric alignments when roll-tilted. Two mechanisms might explain the roll-angle-dependent modulation in egocentric tasks: 1) variability in estimated ocular torsion, which reflects the roll-dependent precision of otolith signals, affects the precision of estimating the line orientation relative to the head; this hypothesis predicts that the modulation of variability is restricted to vision-dependent alignments. 2) The estimate of the body-longitudinal axis reflects the roll-dependent variability of the perceived earth-vertical; gravicentric cues are thereby integrated regardless of the task's reference frame. To test the two hypotheses, the visual paradigm was repeated using a rod instead (haptic paradigm). As with the visual paradigm, precision decreased significantly with increasing head roll for both tasks. These findings suggest that the CNS integrates input coded in a gravicentric frame to solve egocentric tasks. In analogy to gravicentric tasks, where trial-to-trial variability is mainly influenced by the properties of the otolith afferents, egocentric tasks may also integrate otolith input. Such a shared mechanism for both paradigms and frames of reference is supported by the significantly correlated trial-to-trial variabilities.


Subjects
Ego; Gravity Sensing/physiology; Motion Perception/physiology; Orientation/physiology; Otolithic Membrane/physiology; Psychomotor Performance/physiology; Adult; Female; Head Movements/physiology; Humans; Male; Photic Stimulation/methods; Rotation; Young Adult
16.
J Neurosci ; 31(27): 9991-7, 2011 Jul 06.
Article in English | MEDLINE | ID: mdl-21734290

ABSTRACT

When humans are accelerated along the body vertical, the right and left eyes show oppositely directed torsional modulation (cyclovergence). The origin of this paradoxical response is unknown. We studied cyclovergence during linear sinusoidal vertical motion in healthy humans. A small head-fixed visual target minimized horizontal and vertical motion of the eyes and therefore isolated the torsional component. For stimuli between 1 and 2 Hz (near the natural range of head motion), the phase of cyclovergence with respect to inertial acceleration was 8.7 ± 2.4° (mean ± 95% CI) and the sensitivity (in degrees per second per g) showed a small but statistically significant increase with frequency. These characteristics contrast with those of cycloversion (conjugate torsion) during horizontal (interaural) inertial stimuli at similar frequencies. From these and previous results, we propose that cyclovergence during vertical translation has two sources, one, like cycloversion, from the low-frequency component of linear acceleration, and another, which we term dynamic cyclovergence, with high-pass characteristics. Furthermore, we suggest that this cyclovergence response in humans is a vestige of the response of lateral-eyed animals to vertical linear acceleration of the head.


Subjects
Acceleration; Eye Movements/physiology; Motion Perception/physiology; Nonlinear Dynamics; Reflex, Vestibulo-Ocular/physiology; Rotation; Acoustic Stimulation; Functional Laterality; Humans
17.
Invest Ophthalmol Vis Sci ; 50(3): 1158-67, 2009 Mar.
Article in English | MEDLINE | ID: mdl-19011016

ABSTRACT

PURPOSE: On close inspection, most saccadic trajectories are not straight but curve slightly; in other words, they are not single-axis ocular rotations. The authors asked whether saccade curvatures are systematically influenced by static ocular counterroll (OCR).

METHODS: OCR was elicited by static whole-body roll position. Eight healthy human subjects performed horizontal and vertical saccades (10 degrees amplitude; 0 degrees and 10 degrees eccentricity; head-fixed coordinate system) in upright and ear-down whole-body roll positions (45 degrees right, 45 degrees left). Three-dimensional eye movements were recorded with modified dual-search coils at 1000 Hz.

RESULTS: Saccade curvature was systematically modulated by OCR depending on saccade direction. In the horizontal-vertical plane, primarily vertical saccades were modulated, with downward saccades curving toward the upper ear and upward saccades curving toward the lower ear. Modulation of saccade curvature in the torsional direction correlated significantly with OCR only in abducting saccades.

CONCLUSIONS: No universal mechanism, such as visual-motor coordinate transformation or kinematic characteristics of the saccadic burst generator, could alone explain the complex modulation pattern of saccade curvature. OCR-induced changes of the ocular motor plant, including transient force imbalances between agonist eye muscles (vertical rectus and oblique muscles) and shifting eye-muscle pulleys, can explain the observed direction-dependent modulation pattern.


Subjects
Convergence, Ocular/physiology; Posture/physiology; Saccades/physiology; Adult; Biomechanical Phenomena; Female; Humans; Male; Middle Aged; Rotation; Torsion Abnormality/physiopathology; Video Recording; Vision, Binocular/physiology
18.
Nat Neurosci ; 10(4): 494-504, 2007 Apr.
Article in English | MEDLINE | ID: mdl-17369822

ABSTRACT

In neural integrators, transient inputs are accumulated into persistent firing rates that are a neural correlate of short-term memory. Integrators often contain two opposing cell populations that increase and decrease sustained firing as a stored parameter value rises. A leading hypothesis for the mechanism of persistence is positive feedback through mutual inhibition between these opposing populations. We tested predictions of this hypothesis in the goldfish oculomotor velocity-to-position integrator by measuring the eye position and firing rates of one population, while pharmacologically silencing the opposing one. In complementary experiments, we measured responses in a partially silenced single population. Contrary to predictions, induced drifts in neural firing were limited to half of the oculomotor range. We built network models with synaptic-input thresholds to demonstrate a new hypothesis suggested by these data: mutual inhibition between the populations does not provide positive feedback in support of integration, but rather coordinates persistent activity intrinsic to each population.


Subjects
Action Potentials/physiology; Models, Neurological; Nerve Net/physiology; Neurons/physiology; Ocular Physiological Phenomena; Anesthetics, Local/pharmacology; Animals; Behavior, Animal; Brain/cytology; Computer Simulation; Dominance, Ocular; Eye Movements/physiology; Feedback; Goldfish; Lidocaine/pharmacology; Nerve Block/methods; Nerve Net/drug effects; Neurons/classification