Results 1 - 8 of 8
1.
Percept Mot Skills ; 131(4): 1321-1340, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38758033

ABSTRACT

Our aim in this study was to investigate the effects of motionless interventions, based on visual-auditory integration with a sonification technique, on the learning of a complex rhythmic motor skill. We recruited 22 male participants with high physical fitness and provided them with four acquisition sessions in which to practice hurdle running, based on a visual-auditory instructional pattern. Next, we divided participants into three groups: visual-auditory, auditory, and control. In six sessions of motionless interventions, with no physical practice, participants in the visual-auditory group received a visual-auditory pattern similar to their experience during the acquisition period. The auditory group only listened to the sound of the sonified movements of an expert hurdler, and the control group received no instructional interventions. Finally, participants in all three groups underwent post-intervention and transfer tests to determine their errors in the spatial and relative timing of their leading leg's knee angular displacement. Both the visual-auditory and auditory groups had significantly less spatial error than the control group. However, there were no significant group differences in relative timing in any test phase. These results indicate that the use of the sonification technique, in the form of visual-auditory instruction adapted to the athletes' needs, benefitted perceptual-sensory capacities and thereby improved motor skill learning.


Subjects
Auditory Perception, Learning, Motor Skills, Visual Perception, Humans, Male, Motor Skills/physiology, Adult, Young Adult, Auditory Perception/physiology, Visual Perception/physiology, Learning/physiology, Psychomotor Performance/physiology, Running/physiology
2.
Front Hum Neurosci ; 17: 1237395, 2023.
Article in English | MEDLINE | ID: mdl-37810763

ABSTRACT

Introduction: Speech communication is multi-sensory in nature. Seeing a speaker's head and face movements may significantly influence listeners' speech processing, especially when the auditory information is not clear enough. However, research on visual-auditory integration in speech processing has left prosodic perception less well investigated than segmental perception. Furthermore, while native Japanese speakers tend to use fewer visual cues in segmental perception than speakers of other Western languages, the extent to which visual cues are used in Japanese focus perception by native and non-native listeners remains unknown. To fill these gaps, we tested focus perception in Japanese among native Japanese speakers and Cantonese speakers learning Japanese, using auditory-only and auditory-visual sentences as stimuli. Methodology: Thirty native Tokyo Japanese speakers and thirty Cantonese-speaking Japanese learners who had passed the Japanese-Language Proficiency Test at level N2 or N3 were asked to judge the naturalness of 28 question-answer pairs, made up of broad-focus-eliciting questions and three-word answers carrying broad focus, or contrastive or non-contrastive narrow focus on the middle object words. Question-answer pairs were presented in two sensory modalities, auditory-only and visual-auditory, in two separate experimental sessions. Results: Both the Japanese and Cantonese groups showed weak integration of visual cues in the judgement of naturalness. The visual-auditory modality significantly influenced Japanese participants' perception only when the questions and answers were mismatched, and when the answers carried non-contrastive narrow focus, the visual cues impeded rather than facilitated their judgement. Likewise, the influences of specific visual cues, such as eyebrow displacement or head movements, on both Japanese and Cantonese participants' responses were significant only when the questions and answers were mismatched. While Japanese participants consistently relied on the left eyebrow for focus perception, the Cantonese participants referred to head movements more often. Discussion: The lack of visual-auditory integration found in the Japanese-speaking population for segmental perception also exists in the prosodic perception of focus. Little foreign-language effect was found among the Cantonese-speaking learners either, suggesting a limited use of facial expressions in focus marking by both native and non-native Japanese speakers. Overall, the present findings indicate that the integration of visual cues in the perception of focus may be language-specific rather than universal, adding to our understanding of multisensory speech perception.

3.
Philos Trans R Soc Lond B Biol Sci ; 378(1886): 20220340, 2023 09 25.
Article in English | MEDLINE | ID: mdl-37545299

ABSTRACT

Auditory and visual information involve different coordinate systems, with auditory spatial cues anchored to the head and visual spatial cues anchored to the eyes. Information about eye movements is therefore critical for reconciling visual and auditory spatial signals. The recent discovery of eye movement-related eardrum oscillations (EMREOs) suggests that this process could begin as early as the auditory periphery. How this reconciliation might happen remains poorly understood. Because humans and monkeys both have mobile eyes and therefore both must perform this shift of reference frames, comparison of the EMREO across species can provide insight into the shared, and therefore important, parameters of the signal. Here we show that rhesus monkeys, like humans, have a consistent, significant EMREO signal that carries parametric information about eye displacement as well as the onset times of eye movements. The dependence of the EMREO on the horizontal displacement of the eye is its most consistent feature and is shared across behavioural tasks, subjects, and species. Differences chiefly involve the waveform frequency (higher in monkeys than in humans), the patterns of individual variation (more prominent in monkeys than in humans), and the waveform of the EMREO when factors due to horizontal and vertical eye displacements were controlled for. This article is part of the theme issue 'Decision and control processes in multisensory perception'.


Subjects
Eye Movements, Tympanic Membrane, Humans, Cues, Movement
4.
Front Neuroanat ; 14: 610324, 2020.
Article in English | MEDLINE | ID: mdl-33584207

ABSTRACT

The middle longitudinal fascicle (MdLF) is a long associative white matter tract connecting the superior temporal gyrus (STG) with the parietal and occipital lobes. Previous studies have reported differing cortical terminations and a possible segmentation pattern of the tract. In this study, we performed a post-mortem white matter dissection of 12 human hemispheres and in vivo deterministic fiber tracking of 24 subjects from the Human Connectome Project to establish whether a constant organization of fibers exists among the MdLF subcomponents and to acquire anatomical information on each subcomponent. Moreover, two clinical cases of brain tumors impinging on MdLF territories are reported to further discuss the anatomical results in light of previously published data on the functional involvement of this bundle. The main finding is that the MdLF is consistently organized into two layers: an antero-ventral segment (aMdLF) connecting the anterior STG (including the temporal pole and planum polare) with the extrastriate lateral occipital cortex, and a posterior-dorsal segment (pMdLF) connecting the posterior STG, anterior transverse temporal gyrus, and planum temporale with the superior parietal lobule and lateral occipital cortex. The anatomical connectivity pattern and the quantitative differences between the MdLF subcomponents, along with the clinical cases reported in this paper, support the role of the MdLF in high-order functions related to acoustic information. We suggest that the pMdLF may contribute to the learning process associated with verbal-auditory stimuli, especially on the left side, while the aMdLF may play a role in processing and retrieving auditory information already consolidated within the temporal lobe.

5.
Front Neural Circuits ; 13: 80, 2019.
Article in English | MEDLINE | ID: mdl-32038178

ABSTRACT

Methamphetamine (meth) can greatly damage the prefrontal cortex of the brain and trigger dysfunction of the cognitive control loop, which leads not only to drug dependence but also to emotional disorders. The imbalance between the cognitive and emotional systems can lead to crossmodal emotional deficits. Until now, the negative impact of meth dependence on crossmodal emotional processing has not received attention. Therefore, the present study first examined the differences in crossmodal emotional processing between healthy controls and meth dependents (MADs) and then investigated the role of visual- or auditory-leading cues in promoting crossmodal emotional processing. Experiment 1 found that MADs showed a visual-auditory integration disorder for fearful emotion, which may be related to defects in information transmission between the visual and auditory cortices. Experiment 2 found that MADs had a crossmodal disorder pertaining to fear under visual-leading cues, but fearful sounds improved their detection of facial emotions. Experiment 3 reconfirmed that, for MADs, auditory-leading cues could induce crossmodal integration more readily than visual-leading ones. These findings provide quantitative evidence that meth dependence is associated with crossmodal integration disorders and that auditory-leading cues enhance the recognition ability of MADs for complex emotions (all results are available at: https://osf.io/x6rv5/). These results contribute to a better understanding of complex crossmodal emotional integration in individuals who use drugs.


Subjects
Acoustic Stimulation/psychology, Emotions/physiology, Methamphetamine/adverse effects, Photic Stimulation, Reaction Time/physiology, Substance-Related Disorders/psychology, Acoustic Stimulation/methods, Adult, Auditory Perception/physiology, Forecasting, Humans, Male, Photic Stimulation/methods, Substance-Related Disorders/physiopathology, Visual Perception/physiology
6.
Front Psychol ; 7: 595, 2016.
Article in English | MEDLINE | ID: mdl-27199829

ABSTRACT

Recent evidence suggests that visual-auditory cue integration may change as a function of age such that integration is heightened among older adults. Our goal was to determine whether these changes in multisensory integration are also observed in the context of self-motion perception under realistic task constraints. Thus, we developed a simulated driving paradigm in which we provided older and younger adults with visual motion cues (i.e., optic flow) and systematically manipulated the presence or absence of congruent auditory cues to self-motion (i.e., engine, tire, and wind sounds). Results demonstrated that the presence or absence of congruent auditory input had different effects on older and younger adults. Both age groups demonstrated a reduction in speed variability when auditory cues were present compared to when they were absent, but older adults demonstrated a proportionally greater reduction in speed variability under combined sensory conditions. These results are consistent with evidence indicating that multisensory integration is heightened in older adults. Importantly, this study is the first to provide evidence to suggest that age differences in multisensory integration may generalize from simple stimulus detection tasks to the integration of the more complex and dynamic visual and auditory cues that are experienced during self-motion.

7.
Autism Res ; 8(5): 534-44, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25735657

ABSTRACT

Recent research has shown that adults with autism spectrum disorders (ASD) have difficulty interpreting others' emotional responses in order to work out what actually happened to them. It is unclear what underlies this difficulty; important cues may be missed in fast-paced dynamic stimuli, or spontaneous emotional responses may be too complex for those with ASD to successfully recognise. To explore these possibilities, 17 adolescents and adults with ASD and 17 neurotypical controls viewed 21 videos and pictures of people's emotional responses to gifts (chocolate, a handmade novelty, or Monopoly money), then inferred what gift the person received and the emotion expressed by the person while their eye movements were measured. Participants with ASD were significantly more accurate at distinguishing who received a chocolate or homemade gift from static (compared to dynamic) stimuli, but significantly less accurate when inferring who received Monopoly money from static (compared to dynamic) stimuli. Both groups made similar emotion attributions to each gift in both conditions (positive for chocolate, feigned positive for homemade, and confused for Monopoly money). Participants with ASD made only marginally significantly fewer fixations to the eyes and the face of the person than typical controls in both conditions. Results suggest that adolescents and adults with ASD can distinguish subtle emotion cues for certain emotions (genuine from feigned positive) when given sufficient processing time; however, dynamic cues are informative for recognising emotion blends (e.g., smiling in confusion). This indicates difficulties processing complex emotional responses in ASD.


Subjects
Autism Spectrum Disorder/physiopathology, Autism Spectrum Disorder/psychology, Emotions/physiology, Facial Expression, Photic Stimulation/methods, Social Perception, Adolescent, Adult, Cues, Female, Humans, Male, Young Adult
8.
J Exerc Rehabil ; 9(2): 316-25, 2013 Apr.
Article in English | MEDLINE | ID: mdl-24278878

ABSTRACT

We investigated the effects of a complex treatment using visual and auditory stimuli on the symptoms of attention deficit/hyperactivity disorder (ADHD) in children. Forty-seven male children (7-13 yr old), who were clinically diagnosed with ADHD at the Balance Brain Center in Seoul, Korea, were included in this study. The complex treatment consisted of visual and auditory stimuli, core muscle exercise, targeting ball exercise, ocular motor exercise, and visual motor integration. All subjects completed the complex treatment for 60 min/day, 2-3 times/week, for more than 12 weeks. Data on visual and auditory reaction time and cognitive function were obtained using the Neurosync program, the Stroop Color-Word Test, and the Test of Nonverbal Intelligence (TONI) at pre- and post-treatment. The complex treatment significantly decreased the total reaction time and increased the number of combo actions in response to visual and auditory stimuli (P < 0.05). The Stroop color, word, and color-word scores were significantly higher at post-treatment than at pretreatment (P < 0.05). There was no significant change in the TONI scores, although a tendency toward an increase was observed. In conclusion, the complex treatment using visual and auditory stimuli alleviated the symptoms of ADHD and improved cognitive function in children. In addition, visual and auditory function might be useful indicators of the effectiveness of ADHD interventions.
