Results 1 - 6 of 6
1.
Conscious Cogn; 107: 103455, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36586291

ABSTRACT

It remains unclear whether multisensory interaction can occur implicitly at the abstract level. To address this issue, a same-different task was used to select comparable images and sounds in Experiment 1. Stimuli with various levels of discrimination difficulty were then adopted in a modified same-different task in Experiments 2, 3, and 4. The results showed that a consistency effect could be observed in the testing phase only when the irrelevant stimuli were easily distinguishable. Moreover, when easily distinguishable irrelevant stimuli were presented simultaneously with difficult target stimuli, irrelevant auditory stimuli facilitated responses to visual targets, whereas irrelevant visual stimuli interfered with responses to auditory targets in the training phase, indicating an asymmetry between the roles of vision and audition in abstract multisensory integration. The results suggest that abstract multisensory information can be integrated implicitly and that the inverse effectiveness principle might not apply to high-level processing of abstract multisensory integration.
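
The inverse effectiveness principle referenced above is conventionally quantified with the multisensory enhancement index of Meredith and Stein, which predicts proportionally larger gains as unisensory effectiveness drops. The sketch below is a generic illustration with invented response values, not an analysis from this article.

# Hypothetical illustration of the inverse effectiveness principle
# (Meredith & Stein index); the values are invented, not from the article.
def multisensory_enhancement(r_av, r_a, r_v):
    """Percent enhancement of the audiovisual response over the best unisensory response."""
    best_unisensory = max(r_a, r_v)
    return 100.0 * (r_av - best_unisensory) / best_unisensory

# Inverse effectiveness: weaker unisensory responses yield proportionally larger gains.
print(multisensory_enhancement(r_av=90, r_a=80, r_v=75))  # strong inputs -> 12.5
print(multisensory_enhancement(r_av=40, r_a=20, r_v=15))  # weak inputs   -> 100.0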


Subjects
Auditory Perception, Visual Perception, Humans, Visual Perception/physiology, Auditory Perception/physiology, Acoustic Stimulation, Photic Stimulation
2.
Atten Percept Psychophys; 82(8): 3973-3992, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32935292

ABSTRACT

Correctly assessing the emotional state of others is a crucial part of social interaction. While facial expressions provide much information, faces are often not viewed in isolation, but occur with concurrent sounds, usually voices, which also provide information about the emotion being portrayed. Many studies have examined the crossmodal processing of faces and sounds, but results have been mixed, with different paradigms yielding different results. Using a psychophysical adaptation paradigm, we carried out a series of four experiments to determine whether there is a perceptual advantage when faces and voices match in emotion (congruent), versus when they do not match (incongruent). We presented a single face and a crowd of voices, a crowd of faces and a crowd of voices, a single face of reduced salience and a crowd of voices, and tested this last condition with and without attention directed to the emotion in the face. While we observed aftereffects in the hypothesized direction (adaptation to faces conveying positive emotion yielded negative, contrastive, perceptual aftereffects), we only found a congruent advantage (stronger adaptation effects) when faces were attended and of reduced salience, in line with the theory of inverse effectiveness.


Subjects
Emotions, Voice, Attention, Facial Expression, Humans, Visual Perception
3.
Hum Brain Mapp; 39(3): 1313-1326, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29235185

ABSTRACT

Object recognition benefits maximally from multimodal sensory input when stimulus presentation is noisy or degraded. Whether this advantage can be attributed specifically to the extent of overlap in object-related information, or rather to object-unspecific enhancement due to the mere presence of additional sensory stimulation, remains unclear. Further, the cortical processing differences driving increased multisensory integration (MSI) for degraded compared with clear information remain poorly understood. Here, two consecutive studies first compared the behavioral benefits of audio-visual overlap in object-related information relative to conditions in which one channel carried information and the other carried noise. A hierarchical drift diffusion model indicated performance enhancement when auditory and visual object-related information was simultaneously present for degraded stimuli. A subsequent fMRI study revealed visual dominance, on both the behavioral and the neural level, for clear stimuli, whereas degraded stimulus processing was mainly characterized by activation of a frontoparietal multisensory network including the intraparietal sulcus (IPS). Connectivity analyses indicated that integration of degraded object-related information relied on IPS input, whereas clear stimuli were integrated through direct information exchange between visual and auditory sensory cortices. These results indicate that the inverse effectiveness observed for identification of degraded relative to clear objects, in behavior and brain activation, might be facilitated by selective recruitment of an executive cortical network that uses the IPS as a relay mediating crossmodal sensory information exchange.
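
The hierarchical drift diffusion model named above treats decisions as noisy evidence accumulation toward a response boundary. The following is a minimal, non-hierarchical simulation sketch with invented parameters, meant only to show how a higher drift rate (e.g., from redundant audio-visual input for degraded stimuli) yields faster and more accurate responses; it is not the authors' model fit.

# Minimal drift-diffusion simulation sketch; parameters are invented.
import random

def simulate_ddm(drift, boundary=1.0, noise=1.0, dt=0.001, max_t=3.0):
    """Return (response_time, correct) for one evidence-accumulation trial.
    Trials that time out before reaching a boundary count as incorrect."""
    evidence, t = 0.0, 0.0
    while abs(evidence) < boundary and t < max_t:
        evidence += drift * dt + noise * random.gauss(0.0, dt ** 0.5)
        t += dt
    return t, evidence >= boundary

random.seed(0)
for label, drift in [("audio-visual", 1.5), ("unisensory", 0.7)]:
    trials = [simulate_ddm(drift) for _ in range(500)]
    mean_rt = sum(t for t, _ in trials) / len(trials)
    accuracy = sum(c for _, c in trials) / len(trials)
    print(f"{label}: mean RT {mean_rt:.2f} s, accuracy {accuracy:.2f}")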


Subjects
Auditory Perception/physiology, Parietal Lobe/physiology, Recognition, Psychology/physiology, Visual Perception/physiology, Adult, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Parietal Lobe/diagnostic imaging
4.
Perception; 46(12): 1356-1370, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28718747

ABSTRACT

Recent findings have shown that sounds improve visual detection in individuals with low vision when the auditory and visual stimuli are presented simultaneously and from the same spatial position. The present study investigates the temporal aspects of this previously reported audiovisual enhancement effect. Participants with low vision were asked to detect the presence of a visual stimulus (yes/no task) presented either alone or together with an auditory stimulus at different stimulus onset asynchronies (SOAs). In the first experiment, the sound was presented either simultaneously with or before the visual stimulus (SOAs of 0, 100, 250, and 400 ms). The results show that the presence of a task-irrelevant auditory stimulus produced a significant enhancement of visual detection in all conditions. In the second experiment, the sound was either synchronized with the visual stimulus or randomly preceded/lagged behind it (SOAs of 0, ±250, and ±400 ms). The visual detection enhancement was reduced in magnitude and limited to the synchronous condition and to the condition in which the sound was presented 250 ms before the visual stimulus. Taken together, the evidence of the present study suggests that audiovisual interaction in individuals with low vision is strongly modulated by top-down mechanisms.
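
For a yes/no detection task like the one described, sensitivity is commonly summarized as d', the difference between the z-transformed hit and false-alarm rates. The sketch below uses invented rates for an illustrative visual-only versus sound-leading comparison, not data from the study.

# Sensitivity (d') sketch for a yes/no detection task; rates are invented.
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

print(d_prime(0.60, 0.10))  # visual stimulus alone
print(d_prime(0.75, 0.10))  # sound presented 250 ms before the visual stimulus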


Subjects
Sound Localization/physiology, Vision Disparity/physiology, Vision, Low/physiopathology, Visual Perception/physiology, Acoustic Stimulation/methods, Adult, Aged, Female, Humans, Male, Middle Aged, Photic Stimulation/methods, Reaction Time, Young Adult
5.
Front Psychol; 7: 595, 2016.
Article in English | MEDLINE | ID: mdl-27199829

ABSTRACT

Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion.
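
The key comparison above is the proportional reduction in speed variability when congruent auditory cues are added. A minimal sketch of that arithmetic, with invented standard deviations rather than the study's values:

# Proportional reduction in speed variability; the SDs below are invented.
def proportional_reduction(sd_visual_only, sd_audiovisual):
    """Fraction by which speed variability drops when auditory cues are added."""
    return (sd_visual_only - sd_audiovisual) / sd_visual_only

# A proportionally larger reduction for older adults would indicate
# heightened multisensory integration with age.
print(proportional_reduction(sd_visual_only=8.0, sd_audiovisual=7.2))  # younger: 0.10
print(proportional_reduction(sd_visual_only=9.0, sd_audiovisual=7.2))  # older:   0.20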

6.
Neuroscience; 247: 145-151, 2013 Sep 05.
Article in English | MEDLINE | ID: mdl-23673276

ABSTRACT

In the present study, we investigated two measures of multisensory gain in different noise environments: the difference in speech recognition accuracy between the audio-visual (AV) and auditory-only (A) conditions, and the difference between the event-related potentials (ERPs) evoked under the AV condition and the sum of the ERPs evoked under the A and visual-only (V) conditions. Videos of a female speaker articulating Chinese monosyllabic words, accompanied by different levels of pink noise, were used as stimulus materials. The selected signal-to-noise ratios (SNRs) were -16, -12, -8, -4, and 0 dB. Speech recognition accuracy was measured and the evoked ERPs were analyzed under the A, V, and AV conditions. The behavioral results showed that the gain in speech recognition accuracy between the AV and A conditions was maximal at the -12 dB SNR. The ERP results showed that, in the 130-200 ms time window over the frontal-to-central region, the multisensory gain computed as the difference between the AV ERPs and the sum of the A and V ERPs was significantly higher at the -12 dB SNR than at the other SNRs. The multisensory gains in audio-visual speech recognition at different SNRs were not fully consistent with the principle of inverse effectiveness but instead conformed to cross-modal stochastic resonance.
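
Both gain measures defined above reduce to simple differences. The sketch below illustrates them with invented accuracies and a placeholder additive-model ERP function; none of the numbers are the study's data.

# Sketch of the two multisensory gain measures; all values are invented.
# Behavioral gain: AV accuracy minus A-only accuracy, per signal-to-noise ratio.
snrs   = [-16, -12, -8, -4, 0]                 # dB
acc_av = [0.35, 0.70, 0.80, 0.88, 0.93]        # hypothetical AV accuracies
acc_a  = [0.20, 0.35, 0.55, 0.75, 0.88]        # hypothetical A-only accuracies
behavioral_gain = [av - a for av, a in zip(acc_av, acc_a)]
print(dict(zip(snrs, behavioral_gain)))        # peaks at an intermediate SNR (-12 dB)

# ERP gain under the additive model: AV minus (A + V), applied sample-by-sample,
# e.g., to mean amplitudes in the 130-200 ms window at fronto-central electrodes.
def erp_gain(erp_av, erp_a, erp_v):
    return [av - (a + v) for av, a, v in zip(erp_av, erp_a, erp_v)]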


Subjects
Acoustic Stimulation/methods, Electroencephalography, Photic Stimulation/methods, Signal-To-Noise Ratio, Speech Perception/physiology, Visual Perception/physiology, Adult, Auditory Perception/physiology, Electroencephalography/methods, Evoked Potentials, Auditory/physiology, Evoked Potentials, Visual/physiology, Female, Humans, Male, Speech/physiology, Young Adult