1.
PLoS One ; 18(10): e0292059, 2023.
Article in English | MEDLINE | ID: mdl-37812651

ABSTRACT

Musical performers synchronize with each other despite differences in sound-onset timing that reflect each musician's sense of the beat. A dynamical system of Kuramoto oscillators can simulate this spread of onsets at varying levels of temporal alignment, with a variety of tempi and sound densities that also influence individual abilities for beat extraction. Here, we examined how people's sense of beat emerges when tapping with Kuramoto oscillators of varying coupling strengths, which distribute onsets around periodic moments in time. We hypothesized that people tap regularly, close to the sound-onset density peaks, when coupling is strong. When weaker coupling produces multiple inter-onset intervals that are more widely spread, people may interpret their variety and distributions differently in order to form a sense of beat. Experiment 1, with a small in-person cohort, indeed showed a few individuals who responded with high-frequency tapping to slightly weakly coupled stimuli, although the rest found regular beats. Experiment 2, with a larger online cohort, revealed three groups based on characteristics of inter-tap intervals analyzed by k-means clustering: a Regular group (about 1/3 of the final sample) with the most robust beat extraction, a Fast group (1/6) who maintained frequent tapping except at the strongest coupling, and a Hybrid group (1/2) who maintained beats except at the weakest coupling. Furthermore, the adaptation time course of tap-interval variability was slowest in the Regular group. We suggest that people's internal criterion for forming beats may involve different perceptual timescales, over which multiple stimulus intervals could be integrated or processed sequentially as is, and that the highly frequent tapping may reflect an approach of actively seeking synchronization. Our study provides the first documentation of these limits of sensorimotor synchronization and individual differences, using coupled-oscillator dynamics as a generative model of collective behavior.


Subjects
Auditory Perception, Music, Humans, Individuality, Psychomotor Performance, Sound
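The coupled-oscillator stimuli described in this abstract can be sketched with the standard Kuramoto model. This is an illustrative reconstruction, not the study's code: the oscillator count, frequencies, and coupling values below are assumptions, and `order_parameter` is the conventional measure of phase alignment rather than anything taken from the paper.

```python
import numpy as np

def simulate_kuramoto(n=8, coupling=2.0, freq_hz=2.0, freq_sd=0.2,
                      dt=0.01, steps=2000, seed=0):
    """Euler-integrate n Kuramoto phase oscillators:
    d(theta_i)/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i).
    Returns the phase history, shape (steps, n), in radians."""
    rng = np.random.default_rng(seed)
    omega = 2 * np.pi * rng.normal(freq_hz, freq_sd, n)  # natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)                 # random initial phases
    history = np.empty((steps, n))
    for t in range(steps):
        pairwise = np.sin(theta[None, :] - theta[:, None])  # [i, j] = sin(theta_j - theta_i)
        theta = theta + dt * (omega + (coupling / n) * pairwise.sum(axis=1))
        history[t] = theta
    return history

def order_parameter(theta):
    """Kuramoto order parameter R in [0, 1]; 1 means perfectly aligned phases."""
    return np.abs(np.mean(np.exp(1j * theta), axis=-1))

# Stronger coupling tightens the spread of phases (i.e., of simulated sound onsets),
# here summarised by the time-averaged order parameter over the final 5 s.
weak = order_parameter(simulate_kuramoto(coupling=0.1)[-500:]).mean()
strong = order_parameter(simulate_kuramoto(coupling=8.0)[-500:]).mean()
```

With strong coupling the onsets cluster tightly around periodic density peaks; with weak coupling they spread, which is the stimulus manipulation the experiments exploit.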
3.
Front Psychol ; 12: 707090, 2021.
Article in English | MEDLINE | ID: mdl-34630213

ABSTRACT

Today's audio, visual, and internet technologies allow people to interact despite physical distance, whether for casual conversation, group workouts, or musical performance. Musical ensemble performance is unique because the integrity of the interaction depends critically on the timing between each performer's actions and the arrival of their acoustic outcomes. Acoustic transmission latency (ATL) between players is substantially longer for networked music performance (NMP) than in traditional in-person spaces, where musicians can easily adapt. Previous work has shown that longer ATLs slow the average tempo in ensemble performance, and that asymmetric co-actor roles and empathy-related traits affect coordination patterns in joint action. We were therefore interested in how musicians collectively adapt to a given latency and how such adaptation patterns vary with task-related and person-related asymmetries. Here, we examined how two pianists performed duets while hearing each other's auditory outcomes with an ATL of 10, 20, or 40 ms. To test hypotheses regarding task-related asymmetries, we designed duets such that pianists had (1) a starting or joining role and (2) a similar or dissimilar musical part compared to their co-performer, with respect to pitch range and melodic contour. Results replicated previous clapping-duet findings showing that longer ATLs are associated with greater temporal asynchrony between partners and greater average tempo slowing. While co-performer asynchronies were not affected by performer role or part similarity, at the longer ATLs starting performers displayed slower tempi and smaller tempo variability than joining performers. This asymmetry of stability vs. flexibility between starters and joiners may sustain coordination, consistent with recent joint action findings. Our data also suggest that relative independence between musical parts may mitigate ATL-related challenges. Additionally, co-performer differences in empathy-related personality traits such as locus of control may be related to coordination during performance under ATL. Incorporating the emergent coordinative dynamics between performers could help drive further innovation in music technologies and composition techniques for NMP.

4.
Sci Rep ; 11(1): 4777, 2021 02 26.
Article in English | MEDLINE | ID: mdl-33637784

ABSTRACT

Recent studies have reported evidence that listeners' brains process meaning differently in speech with an in-group as compared to an out-group accent. However, among studies that have used electroencephalography (EEG) to examine neural correlates of semantic processing of speech in different accents, the details of findings are often in conflict, potentially reflecting critical variations in experimental design and/or data analysis parameters. To determine which of these factors might be driving inconsistencies in results across studies, we systematically investigate how analysis parameter sets from several of these studies impact results obtained from our own EEG data set. Data were collected from forty-nine monolingual North American English listeners in an event-related potential (ERP) paradigm as they listened to semantically congruent and incongruent sentences spoken in an American accent and an Indian accent. Several key effects of in-group as compared to out-group accent were robust across the range of parameters found in the literature, including more negative scalp-wide responses to incongruence in the N400 range, more positive posterior responses to congruence in the N400 range, and more positive posterior responses to incongruence in the P600 range. These findings, however, are not fully consistent with the reported observations of the studies whose parameters we used, indicating variation in experimental design may be at play. Other reported effects only emerged under a subset of the analytical parameters tested, suggesting that analytical parameters also drive differences. We hope this spurs discussion of analytical parameters and investigation of the contributions of individual study design variables in this growing field.


Subjects
Speech Perception, Speech, Brain/physiology, Electroencephalography, Evoked Potentials, Humans, Language, Semantics
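One analysis parameter this abstract highlights is the latency window over which ERP amplitudes are averaged. The sketch below shows only that generic mean-amplitude step on made-up data; the 300-500 ms window, channel counts, and effect size are hypothetical and none of this is the study's actual pipeline.

```python
import numpy as np

def mean_amplitude(epochs, times, window, channels=None):
    """Mean ERP amplitude in a latency window.
    epochs:   (n_trials, n_channels, n_times) array, e.g. in microvolts
    times:    (n_times,) array, seconds relative to stimulus onset
    window:   (start, end) in seconds, e.g. (0.3, 0.5) for a typical N400 range
    channels: optional channel indices (e.g. a centro-parietal cluster)"""
    mask = (times >= window[0]) & (times <= window[1])
    data = epochs if channels is None else epochs[:, channels, :]
    return data[:, :, mask].mean()

# Hypothetical demo: incongruent trials carry an extra negativity at 300-500 ms.
rng = np.random.default_rng(1)
times = np.linspace(-0.2, 0.8, 501)
congruent = rng.normal(0.0, 1.0, (40, 32, times.size))
incongruent = congruent.copy()
incongruent[:, :, (times >= 0.3) & (times <= 0.5)] -= 2.0  # simulated N400 effect
effect = (mean_amplitude(incongruent, times, (0.3, 0.5))
          - mean_amplitude(congruent, times, (0.3, 0.5)))
```

Shifting `window` or `channels` changes what counts as "the" N400 or P600 effect, which is exactly the kind of parameter variation the study investigates.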
5.
Int J Psychophysiol ; 153: 53-64, 2020 07.
Article in English | MEDLINE | ID: mdl-32325078

ABSTRACT

Sound predictability resulting from repetitive patterns can be implicitly learned and often neither requires nor captures our conscious attention. Recently, predictive coding theory has been used as a framework to explain how predictable or expected stimuli evoke and gradually attenuate obligatory neural responses over time compared to those elicited by unpredictable events. However, these results were obtained using the repetition of simple auditory objects such as pairs of tones or phonemes. Here we examined whether the same principle would hold for more abstract temporal structures of sounds. If this is the case, we hypothesized that a regular repetition schedule of a set of musical patterns would reduce neural processing over the course of listening compared to stimuli with an irregular repetition schedule (and the same set of musical patterns). Electroencephalography (EEG) was recorded while participants passively listened to 6-8 min stimulus sequences in which five different four-tone patterns with temporally regular or irregular repetition were presented successively in a randomized order. N1 amplitudes in response to the first tone of each musical pattern were significantly less negative at the end of the regular sequence compared to the beginning, while such reduction was absent in the irregular sequence. These results extend previous findings by showing that N1 reflects automatic learning of the predictable higher-order structure of sound sequences, while continuous engagement of preattentive auditory processing is necessary for the unpredictable structure.


Subjects
Anticipation, Psychological/physiology, Auditory Perception/physiology, Evoked Potentials, Auditory/physiology, Music, Adult, Electroencephalography, Female, Humans, Male, Young Adult
6.
Clin Neurophysiol ; 131(5): 1102-1118, 2020 05.
Article in English | MEDLINE | ID: mdl-32200092

ABSTRACT

OBJECTIVE: Stroke lesions in non-auditory areas may affect higher-order central auditory processing. We sought to characterize auditory functions in chronic stroke survivors with unilateral arm/hand impairment using auditory evoked responses (AERs) together with lesion and perception metrics. METHODS: AERs in 29 stroke survivors and 14 controls were recorded with single tones, active and passive frequency oddballs, and a dual oddball with pitch-contour and time-interval deviants. Performance in speech-in-noise, mistuning detection, and moving-sound detection was assessed. Relationships between AERs, behaviour, and lesion overlap with functional networks were examined. RESULTS: Despite normal hearing, eight patients showed unilateral AERs in the hemisphere ipsilateral to the affected hand, with reduced amplitude compared to those with bilateral AERs. Both groups showed increasing attenuation of later components. Hemispheric asymmetry of AER sources was reduced in bilateral-AER patients. The N1 wave (100 ms latency) and P2 (200 ms) were delayed in individuals with lesions in the basal ganglia and white matter, while lesions in the attention network reduced frequency-MMN (mismatch negativity) responses and increased the pitch-contour P3a response. Patients' impaired speech-in-noise perception was explained by AER measures and frequency-deviant detection performance in a multiple regression. CONCLUSION: AERs reflect disruption of auditory functions due to damage outside of the temporal lobe, and further clarify the complexity of the neural mechanisms underlying higher-order auditory perception. SIGNIFICANCE: Stroke survivors without obvious hearing problems may benefit from rehabilitation for central auditory processing.


Subjects
Auditory Perception/physiology, Evoked Potentials, Auditory/physiology, Hearing Loss, Magnetoencephalography/methods, Stroke/diagnostic imaging, Stroke/physiopathology, Acoustic Stimulation/methods, Adult, Aged, Brain/diagnostic imaging, Brain/physiopathology, Chronic Disease, Female, Humans, Magnetic Resonance Imaging/methods, Male, Middle Aged, Reaction Time/physiology
7.
Sci Rep ; 10(1): 4207, 2020 03 06.
Article in English | MEDLINE | ID: mdl-32144306

ABSTRACT

Induced beta-band power modulations in auditory and motor-related brain areas have been associated with automatic temporal processing of isochronous beats and explicit, temporally-oriented attention. Here, we investigated how explicit top-down anticipation before upcoming tempo changes, a sustained process commonly required during music performance, changed beta power modulations during listening to isochronous beats. Musicians' electroencephalograms were recorded during the task of anticipating accelerating, decelerating, or steady beats after direction-specific visual cues. In separate behavioural testing for tempo-change onset detection, such cues were found to facilitate faster responses, thus effectively inducing high-level anticipation. In the electroencephalograms, periodic beta power reductions in a frontocentral topographic component with seed-based source contributions from auditory and sensorimotor cortices were apparent after isochronous beats with anticipation in all conditions, generally replicating patterns found previously during passive listening to isochronous beats. With anticipation before accelerations, the magnitude of the power reduction was significantly weaker than in the steady condition. Between the accelerating and decelerating conditions, no differences were found, suggesting that the observed beta patterns may represent an aspect of high-level anticipation common before both tempo changes, like increased attention. Overall, these results indicate that top-down anticipation influences ongoing auditory beat processing in beta-band networks.
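Induced beta power of the kind analyzed here is commonly estimated by band-pass filtering and taking the squared Hilbert envelope. The sketch below shows only that generic step on synthetic data; the filter order, band edges, and all signal parameters are assumptions, not the study's pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_power(signal, fs, band=(13.0, 30.0)):
    """Instantaneous beta-band power: band-pass filter, then the squared
    Hilbert envelope (a common step before baseline-normalising induced
    power around beat onsets)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, signal))) ** 2

# Demo: a 20 Hz burst between 0.8 and 1.2 s stands out against weak noise.
fs = 500
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(2)
x = 0.1 * rng.standard_normal(t.size)
burst_mask = (t >= 0.8) & (t <= 1.2)
x[burst_mask] += np.sin(2 * np.pi * 20 * t[burst_mask])
power = beta_power(x, fs)
burst = power[(t >= 0.9) & (t <= 1.1)].mean()
baseline = power[t <= 0.5].mean()
```

Averaging such envelopes time-locked to beat onsets, rather than to a single burst, yields the periodic beta modulations the abstract describes.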

8.
Front Neurosci ; 13: 1088, 2019.
Article in English | MEDLINE | ID: mdl-31680824

ABSTRACT

Recent work in interpersonal coordination has revealed that neural oscillations, occurring spontaneously in the human brain, are modulated during the sensory, motor, and cognitive processes involved in interpersonal interactions. In particular, alpha-band (8-12 Hz) activity, linked to attention in general, is related to coordination dynamics and empathy traits. Researchers have also identified an association between each individual's attentiveness to their co-actor and the relative similarity in the co-actors' roles, influencing their behavioral synchronization patterns. We employed music ensemble performance to evaluate patterns of behavioral and neural activity when roles between co-performers are systematically varied with complete counterbalancing. Specifically, we designed a piano duet task, with three types of co-actor dissimilarity, or asymmetry: (1) musical role (starting vs. joining), (2) musical task similarity (similar vs. dissimilar melodic parts), and (3) performer animacy (human-to-human vs. human-to-non-adaptive computer). We examined how the experience of these asymmetries in four initial musical phrases, alternatingly played by the co-performers, influenced the pianists' performance of a subsequent unison phrase. Electroencephalography was recorded simultaneously from both performers while playing keyboards. We evaluated note-onset timing and alpha modulation around the unison phrase. We also investigated whether each individual's self-reported empathy was related to behavioral and neural activity. Our findings revealed closer behavioral synchronization when pianists played with a human vs. computer partner, likely because the computer was non-adaptive. When performers played with a human partner, or a joining performer played with a computer partner, having a similar vs. dissimilar musical part did not have a significant effect on their alpha modulation immediately prior to unison. However, when starting performers played with a computer partner with a dissimilar vs. similar part there was significantly greater alpha synchronization. In other words, starting players attended less to the computer partner playing a similar accompaniment, operating in a solo-like mode. Moreover, this alpha difference based on melodic similarity was related to a difference in note-onset adaptivity, which was in turn correlated with performer trait empathy. Collectively our results extend previous findings by showing that musical ensemble performance gives rise to a socialized context whose lasting effects encompass attentiveness, perceptual-motor coordination, and empathy.

9.
PLoS Comput Biol ; 15(10): e1007371, 2019 10.
Article in English | MEDLINE | ID: mdl-31671096

ABSTRACT

Dancing and playing music require people to coordinate actions with auditory rhythms. In laboratory perception-action coordination tasks, people are asked to synchronize taps with a metronome. When synchronizing with a metronome, people tend to anticipate stimulus onsets, tapping slightly before the stimulus. This anticipation tendency increases with longer stimulus periods of up to 3500 ms, but is less pronounced in trained individuals, such as musicians, than in non-musicians. Furthermore, external factors influence the timing of tapping, including the presence of auditory feedback from one's own taps, the presence of a partner performing coordinated joint tapping, and transmission latencies (TLs) between coordinating partners. Phenomena like the anticipation tendency can be explained by delay-coupled systems, which may be inherent to the sensorimotor system during perception-action coordination. Here we tested whether a dynamical systems model based on this hypothesis reproduces observed patterns of human synchronization. We simulated behavior with a model consisting of an oscillator receiving its own delayed activity as input. Three simulation experiments were conducted using previously published behavioral data from (1) simple tapping, (2) two-person alternating beat-tapping, and (3) two-person alternating rhythm-clapping in the presence of a range of constant auditory TLs. In Experiment 1, our model replicated the larger anticipation observed for longer stimulus intervals, and adjusting the amplitude of the delayed feedback reproduced the difference between musicians and non-musicians. In Experiment 2, by connecting two models we replicated the smaller anticipation observed in human joint tapping with bi-directional auditory feedback compared to joint tapping without feedback. In Experiment 3, we varied TLs between two models alternately receiving signals from one another. Results showed reciprocal lags at points of alternation, consistent with behavioral patterns. Overall, our model explains various anticipatory behaviors and has the potential to inform theories of adaptive human synchronization.


Subjects
Acoustic Stimulation/methods, Auditory Perception/physiology, Time Perception/physiology, Activity Cycles, Anticipation, Psychological/physiology, Biobehavioral Sciences, Computer Simulation, Feedback, Feedback, Sensory/physiology, Humans, Music, Periodicity, Psychomotor Performance
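The delay-coupled idea in this abstract can be illustrated with a toy phase oscillator whose *delayed* phase is what gets compared against the metronome, a crude stand-in for hearing one's own action after a delay. This is a schematic reconstruction of the modeling idea, not the published model; every parameter value below is an assumption.

```python
import numpy as np

def simulate_delay_coupled(period=0.6, delay=0.05, k=8.0, dt=0.001, secs=30.0):
    """Toy phase oscillator compared with a metronome through its own delay:
    d(theta)/dt = omega + k * sin(phi_stim(t) - theta(t - delay)).
    Synchronising the delayed signal (e.g. delayed feedback of one's own taps)
    with the beat makes the oscillator's events lead the metronome, i.e. a
    negative mean asynchrony. Returns that mean asynchrony in seconds."""
    omega = 2 * np.pi / period
    n = int(secs / dt)
    lag = int(delay / dt)
    theta = np.zeros(n)
    for i in range(1, n):
        delayed = theta[i - 1 - lag] if i - 1 - lag >= 0 else theta[0]
        theta[i] = theta[i - 1] + dt * (omega + k * np.sin(omega * (i - 1) * dt - delayed))
    # "Tap" times: upward crossings of multiples of 2*pi.
    taps = np.flatnonzero(np.diff(np.floor(theta / (2 * np.pi))) > 0) * dt
    beats = np.arange(0.0, secs + period, period)
    # Asynchrony of each post-transient tap relative to its nearest beat.
    asyn = [tap - beats[np.argmin(np.abs(beats - tap))] for tap in taps if tap > 10]
    return float(np.mean(asyn))

mean_asynchrony = simulate_delay_coupled()  # expected to be negative (anticipation)
```

In the steady state the oscillator's events lead the beat by roughly the delay, reproducing the sign of the human anticipation tendency under this toy assumption.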
10.
Neuroscience ; 413: 11-21, 2019 08 10.
Article in English | MEDLINE | ID: mdl-31220540

ABSTRACT

People commonly synchronize taps to rhythmic sounds and can continue tapping after the sounds stop, indicating that time intervals between sounds can be internalized. Here, we investigate what happens in the brain after simply listening to auditory beats in order to understand more about the automatic internalization of temporal intervals without tapping. Electroencephalograms were recorded while musicians attended to accelerating, decelerating, or steady click sequences. Evoked responses and induced beta power modulations (13-30 Hz) were examined for one beat following the last physical beat of each sequence (termed the silent beat) and compared to responses obtained during physical beats near the sequence endings. In response to the silent beat, P3a was observed with the largest amplitude occurring after accelerations and the smallest after decelerations. Late beta power modulations were also found after the silent beat, and the magnitude of the beta-power suppressions was significantly correlated with the concurrent P3a amplitudes. In contrast, physical beats elicited P2 responses and early beta suppressions, likely reflecting a combination of stimulus-related processing and temporal prediction. These results suggest that the activities observed after the silent beat were not produced via sustained entrainment after the physical beats, but via automatically-formed expectation for an additional beat. Therefore, beta modulations may be generated endogenously by expectation violation, while P3a amplitudes may relate to strength of expectation, with acceleration endings causing the strongest expectations for sequence continuation.


Subjects
Anticipation, Psychological/physiology, Auditory Perception/physiology, Brain/physiology, Music, Acoustic Stimulation, Adult, Electroencephalography, Female, Humans, Male, Periodicity, Time Perception/physiology, Young Adult
11.
Neuropsychologia ; 129: 223-235, 2019 06.
Article in English | MEDLINE | ID: mdl-30951740

ABSTRACT

Music and language processing share, and sometimes compete for, brain resources. An extreme case of such shared processing occurs in improvised rap music, in which performers, or 'lyricists', combine rhyming, rhythmic, and semantic structures of language with musical rhythm, harmony, and phrasing to create integrally meaningful musical expressions. We used event-related potentials (ERPs) to investigate how auditory rhyme-sequence processing differed between expert lyricists and non-lyricists. Participants listened to rhythmically presented pseudo-word triplets, each of which terminated in a full rhyme (e.g., STEEK, PREEK; FLEEK), half rhyme (e.g., STEEK, PREEK; FREET), or non-rhyme (e.g., STEEK, PREEK; YAME), then judged each sequence on its aesthetic (Do you 'like' the rhyme?) or technical (Is the rhyme 'perfect'?) aspect. The phonological N450 showed rhyming effects between conditions (i.e., non vs. full; half vs. full; non vs. half) similarly across groups at parietal electrodes. However, concurrent activity at frontocentral electrodes was left-lateralized in non-lyricists but not in lyricists. Furthermore, non-lyricists' responses to the three conditions were distinct in morphology and amplitude at left-hemisphere electrodes, with no condition difference at right-hemisphere electrodes, while lyricists' responses to half-rhymes they deemed unsatisfactory were similar to full rhymes at left-hemisphere electrodes and similar to non-rhymes at right-hemisphere electrodes. The CNV response observed while awaiting the second and third pseudo-words in the sequence was larger for aesthetic than for technical rhyme judgment tasks in non-lyricists, suggesting that they invested greater effort in aesthetic rhyme judgments. No task effects were observed in lyricists, suggesting that aesthetic and technical rhyme judgments may engage the same processes in experts. Overall, our findings suggest that extensive practice of improvised lyricism may uniquely encourage neuroplasticity for integrated linguistic and musical feature processing in the brain.


Subjects
Auditory Perception, Evoked Potentials, Judgment, Language, Music, Adult, Contingent Negative Variation, Electroencephalography, Esthetics, Humans, Neuronal Plasticity
12.
Soc Neurosci ; 14(4): 449-461, 2019 08.
Article in English | MEDLINE | ID: mdl-29938589

ABSTRACT

During joint action tasks, expectations for the outcomes of one's own and the other's actions are collectively monitored. Recent evidence suggests that trait empathy levels may also influence performance-monitoring processes. The present study investigated how outcome expectation and empathy interact during a turn-taking piano duet task, using simultaneous electroencephalography (EEG) recording. During the performances, one note in each player's part was altered in pitch to elicit the feedback-related negativity (FRN) and the subsequent P3 complex. Pianists memorized and performed pieces containing either a similar or a dissimilar sequence to their partner's. In additional blocks, pianists also played both sequence types with an audio-only computer partner. The FRN and P3a were larger in response to self than to other, while the P3b occurred only in response to self, suggesting greater online monitoring of self- than of other-produced actions during turn-taking joint action. The P3a was larger when pianists played a similar sequence to their partner's. Finally, as trait empathy increased, the FRN in response to self decreased; this association was absent for the FRN in response to other. This may reflect a strategy by which highly empathic musicians suppress an exclusive focus on self-monitoring during joint performance.


Subjects
Acoustic Stimulation/methods, Cooperative Behavior, Electroencephalography/methods, Event-Related Potentials, P300/physiology, Music/psychology, Psychomotor Performance/physiology, Adult, Female, Humans, Male, Young Adult
13.
Sci Rep ; 8(1): 9480, 2018 06 21.
Article in English | MEDLINE | ID: mdl-29930399

ABSTRACT

Biomarkers that represent the structural and functional integrity of the motor system enable us to better assess motor outcome post-stroke. The degree of overlap between the stroke lesion and the corticospinal tract (CST Injury) is a measure of the structural integrity of the motor system, whereas left-to-right motor cortex resting-state connectivity (LM1-RM1 rs-connectivity) is a measure of its functional integrity. CST Injury and LM1-RM1 rs-connectivity each individually correlate with motor outcome post-stroke, but less is understood about the relationship between these biomarkers. Thus, this study investigates the relationship of CST Injury and LM1-RM1 rs-connectivity, individually and together, with motor outcome. Twenty-seven participants with upper limb motor deficits post-stroke completed motor assessments and underwent MRI at one time point. CST Injury and LM1-RM1 rs-connectivity were derived from T1-weighted and resting-state functional MRI scans, respectively. We performed hierarchical multiple regression analyses to determine the contribution of each biomarker to explaining motor outcome. The interaction between CST Injury and LM1-RM1 rs-connectivity did not significantly contribute to the variability in motor outcome. However, including both CST Injury and LM1-RM1 rs-connectivity explains more variability in motor outcome than either alone. We suggest that the two biomarkers provide distinct information about an individual's motor outcome.


Subjects
Motor Cortex/physiopathology, Motor Skills, Pyramidal Tracts/physiopathology, Stroke/physiopathology, Adult, Aged, Connectome, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Motor Cortex/diagnostic imaging, Pyramidal Tracts/diagnostic imaging, Stroke/diagnostic imaging, Upper Extremity/innervation, Upper Extremity/physiopathology
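Hierarchical regression of the kind this abstract describes — entering one biomarker first, then adding the second and checking the change in explained variance — can be sketched as follows. The data are synthetic stand-ins; the variable names and effect sizes are invented for illustration only.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary-least-squares fit with an intercept term."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Synthetic stand-in data: motor score depends on both predictors.
rng = np.random.default_rng(3)
n = 27
cst_injury = rng.normal(size=n)
connectivity = rng.normal(size=n)
motor = -0.6 * cst_injury + 0.5 * connectivity + rng.normal(scale=0.5, size=n)

r2_step1 = r_squared(cst_injury[:, None], motor)                          # CST Injury alone
r2_step2 = r_squared(np.column_stack([cst_injury, connectivity]), motor)  # both predictors
delta_r2 = r2_step2 - r2_step1  # variance uniquely added by connectivity
```

A meaningful `delta_r2` at step 2 is what supports the paper's conclusion that the two biomarkers carry distinct information.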
14.
Ann N Y Acad Sci ; 2018 May 24.
Article in English | MEDLINE | ID: mdl-29797585

ABSTRACT

Neuroplasticity accompanying learning is a key mediator of stroke rehabilitation. Training in playing music requires resources within motor, sensory, cognitive, and affective systems, and coordination among these systems, in both healthy populations and patients with movement disorders. We investigated the effects of music-supported therapy (MST) in chronic stroke on motor, cognitive, and psychosocial functions compared to conventional physical training (GRASP). Twenty-eight adults with unilateral arm and hand impairment were randomly assigned to MST (n = 14) or GRASP (n = 14) and received 30 h of training over a 10-week period. Assessment was conducted at four time points: before the intervention, after 5 weeks, after 10 weeks, and 3 months after training completion. For two of our three primary outcome measures, which concerned motor function, all patients improved slightly in the Chedoke-McMaster Stroke Assessment hand score, while the time to complete the Action Research Arm Test became shorter in the MST group. The third primary outcome measure, the Stroke Impact Scale of well-being, improved for emotion and social communication earlier in MST, coinciding with improved executive function for task switching and music rhythm perception. The results confirm previous findings and expand the potential usage of MST for enhancing quality of life in community-dwelling chronic-stage survivors.

15.
J Neurol Sci ; 384: 21-29, 2018 Jan 15.
Article in English | MEDLINE | ID: mdl-29249372

ABSTRACT

Movement is traditionally viewed as a process that involves motor brain regions. However, movement also implicates non-motor regions such as prefrontal and parietal cortex, regions whose integrity may thus be important for motor recovery after stroke. Importantly, focal brain damage can affect neural functioning within and between distinct brain networks implicated in the damage. The aim of this study is to investigate how resting state connectivity (rs-connectivity) within and between motor and frontoparietal networks are affected post-stroke in correlation with motor outcome. Twenty-seven participants with chronic stroke with unilateral upper limb deficits underwent motor assessments and magnetic resonance imaging. Participants completed the Chedoke-McMaster Stroke Assessment as a measure of arm (CMSA-Arm) and hand (CMSA-Hand) impairment and the Action Research Arm Test (ARAT) as a measure of motor function. We used a seed-based rs-connectivity approach defining the motor (seed=contralesional primary motor cortex (M1)) and frontoparietal (seed=contralesional dorsolateral prefrontal cortex (DLPFC)) networks. We analyzed the rs-connectivity within each network (intra-network connectivity) and between both networks (inter-network connectivity), and performed correlations between: a) intra-network connectivity and motor assessment scores; b) inter-network connectivity and motor assessment scores. We found: a) Participants with high rs-connectivity within the motor network (between M1 and supplementary motor area) have higher CMSA-Hand stage (z=3.62, p=0.003) and higher ARAT score (z=3.41, p=0.02). Rs-connectivity within the motor network was not significantly correlated with CMSA-Arm stage (z=1.83, p>0.05); b) Participants with high rs-connectivity within the frontoparietal network (between DLPFC and mid-ventrolateral prefrontal cortex) have higher CMSA-Hand stage (z=3.64, p=0.01). Rs-connectivity within the frontoparietal network was not significantly correlated with CMSA-Arm stage (z=0.93, p=0.03) or ARAT score (z=2.53, p=0.05); and c) Participants with high rs-connectivity between motor and frontoparietal networks have higher CMSA-Hand stage (rs=0.54, p=0.01) and higher ARAT score (rs=0.54, p=0.009). Rs-connectivity between the motor and frontoparietal networks was not significantly correlated with CMSA-Arm stage (rs=0.34, p=0.13). Taken together, the connectivity within and between the motor and frontoparietal networks correlate with motor outcome post-stroke. The integrity of these regions may be important for an individual's motor outcome. Motor-frontoparietal connectivity may be a potential biomarker of motor recovery post-stroke.


Subjects
Frontal Lobe/physiopathology, Hand/physiopathology, Motor Cortex/physiopathology, Motor Skills, Parietal Lobe/physiopathology, Stroke/physiopathology, Adult, Aged, Brain Mapping, Chronic Disease, Cross-Sectional Studies, Disability Evaluation, Female, Frontal Lobe/diagnostic imaging, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Motor Cortex/diagnostic imaging, Motor Skills/physiology, Neural Pathways/diagnostic imaging, Neural Pathways/physiopathology, Parietal Lobe/diagnostic imaging, Proof of Concept Study, Recovery of Function/physiology, Rest, Stroke/diagnostic imaging
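Seed-based rs-connectivity of the kind this study uses reduces, at its core, to correlating a seed region's time series with every other region and Fisher-transforming the correlations for group statistics. A toy sketch with synthetic time series follows; the ROI layout and all numbers are invented, and this is not the study's pipeline.

```python
import numpy as np

def seed_connectivity(seed_ts, roi_ts):
    """Seed-based rs-connectivity: Pearson r between the seed time series
    and each ROI column, Fisher z-transformed for group statistics."""
    seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    rois = (roi_ts - roi_ts.mean(axis=0)) / roi_ts.std(axis=0)
    r = (rois * seed[:, None]).mean(axis=0)  # Pearson r per ROI
    return np.arctanh(r)                     # Fisher z

# Demo with synthetic series: ROI 0 tracks the seed, ROI 1 does not.
rng = np.random.default_rng(4)
seed = rng.normal(size=200)
roi = np.column_stack([seed + 0.5 * rng.normal(size=200),
                       rng.normal(size=200)])
z = seed_connectivity(seed, roi)
```

With a contralesional M1 or DLPFC seed, maps of such z values are what get correlated against motor scores like CMSA and ARAT.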
16.
Front Psychol ; 9: 2528, 2018.
Article in English | MEDLINE | ID: mdl-30618951

ABSTRACT

The early right anterior negativity (ERAN) in event-related potentials (ERPs) is typically elicited by syntactically unexpected events in Western tonal music. We examined how visual predictive information influences syntactic processing, how musical or non-musical cues have different effects, and how they interact with sequential effects between trials, which could modulate with the strength of the sense of established tonality. The EEG was recorded from musicians who listened to chord sequences paired with one of four types of visual stimuli; two provided predictive information about the syntactic validity of the last chord through either musical notation of the whole sequence, or the word "regular" or "irregular," while the other two, empty musical staves or a blank screen, provided no information. Half of the sequences ended with the syntactically invalid Neapolitan sixth chord, while the other half ended with the Tonic chord. Clear ERAN was observed in frontocentral electrodes in all conditions. A principal component analysis (PCA) was performed on the grand average response in the audio-only condition, to separate spatio-temporal dynamics of different scalp areas as principal components (PCs) and use them to extract auditory-related neural activities in the other visual-cue conditions. The first principal component (PC1) showed a symmetrical frontocentral topography, while the second (PC2) showed a right-lateralized frontal concentration. A source analysis confirmed the relative contribution of temporal sources to the former and a right frontal source to the latter. Cue predictability affected only the ERAN projected onto PC1, especially when the previous trial ended with the Tonic chord. The ERAN in PC2 was reduced in the trials following Neapolitan endings in general. However, the extent of this reduction differed between cue-styles, whereby it was nearly absent when musical notation was used, regardless of whether the staves were filled with notes or empty. 
The results suggest that right frontal areas play the primary role in musical syntactic analysis and integration of the ongoing context, producing schematic expectations that, together with the veridical expectations incorporated by the temporal areas, inform musical syntactic processing in musicians.
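The PCA-projection approach described above can be sketched as follows: decompose a grand-average ERP (channels × time) from one condition into spatial components, then project another condition's data onto those components to obtain one time course per component. This is a minimal illustration with synthetic data; the array shapes, channel count, and variable names are assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_times = 32, 200

# Stand-ins for grand-average ERPs (channels x time) in two conditions
audio_only = rng.standard_normal((n_channels, n_times))
visual_cue = rng.standard_normal((n_channels, n_times))

# PCA via SVD on the audio-only condition: columns of U are spatial topographies
centered = audio_only - audio_only.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
pc1_topo, pc2_topo = U[:, 0], U[:, 1]   # e.g. frontocentral vs right-frontal

# Project the visual-cue condition onto the audio-only components,
# yielding one time course per component for later comparison
pc_timecourses = U[:, :2].T @ visual_cue   # shape: (2, n_times)
print(pc_timecourses.shape)
```

Fixing the components from a single reference condition, as above, keeps the spatial filters identical across conditions, so differences in the projected time courses can be attributed to condition effects rather than to the decomposition itself.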

17.
Eur J Neurosci; 46(8): 2339-2354, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28887898

ABSTRACT

Sub-second time intervals in musical rhythms provide performers and listeners with predictive cues about future events through an internalized representation of timing. While the acuity of both automatic, sub-second timing and cognitively controlled, supra-second timing declines with ageing, musical experts are less affected. This study investigated the influence of piano training on temporal processing abilities in older adults using behavioural and neural measures. We hypothesized that neuroplastic changes in beta networks, induced by training that combines sensorimotor coordination with timing processing, can be assessed even in the absence of movement. The stability of internal timing was assessed behaviourally with synchronization-continuation finger-tapping paradigms. Magnetoencephalography (MEG) was recorded from older adults before and after one month of one-on-one training. As a neural measure of automatic timing processing, we focused on beta oscillations (13-30 Hz) during passive listening to metronome beats. Periodic beta-band modulations in older adults before training were similar to previous findings in young listeners at a beat interval of 800 ms. After training, behavioural performance in continuation tapping improved and was accompanied by an increased range of beat-induced beta modulation, compared with participants who did not receive training. Beta changes were observed in the caudate, auditory, sensorimotor and premotor cortices, parietal lobe, cerebellum and medial prefrontal cortex, suggesting that increased resources are involved in timing processing and goal-oriented monitoring as well as reward-based sensorimotor learning.
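A common way to quantify beat-induced beta-band modulation of the kind measured above is to band-pass a sensor time series at 13-30 Hz, take the Hilbert envelope, and epoch the envelope on each beat; the peak-to-trough range of the averaged envelope indexes modulation depth. The sketch below uses synthetic data, and the sampling rate and beat count are assumptions (only the 800-ms beat interval comes from the abstract).

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 600                       # sampling rate (Hz), illustrative
beat_interval = 0.8            # 800-ms beat interval, as in the study
n_beats = 20

rng = np.random.default_rng(1)
n_samples = int(fs * beat_interval * n_beats)
meg = rng.standard_normal(n_samples)   # stand-in for one MEG channel

# Beta band-pass (13-30 Hz) and amplitude envelope via the Hilbert transform
b, a = butter(4, [13, 30], btype="bandpass", fs=fs)
beta_env = np.abs(hilbert(filtfilt(b, a, meg)))

# Epoch the envelope on each beat and average across beats; the peak-to-trough
# range of this average is one index of beat-induced modulation depth
epoch_len = int(fs * beat_interval)
epochs = beta_env[: n_beats * epoch_len].reshape(n_beats, epoch_len)
modulation_depth = float(np.ptp(epochs.mean(axis=0)))
print(modulation_depth)
```

With real data, this modulation depth could then be compared before and after training, as in the pre/post design described above.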


Subjects
Auditory Perception, Beta Rhythm, Learning, Music, Aged, Case-Control Studies, Cerebellum/physiology, Cerebral Cortex/physiology, Female, Humans, Male, Middle Aged, Reaction Time
18.
J Neurosci; 37(24): 5948-5959, 2017 06 14.
Article in English | MEDLINE | ID: mdl-28539421

ABSTRACT

Auditory and sensorimotor brain areas interact during the action-perception cycle of sound making. Neurophysiological evidence of a feedforward model of the action and its outcome has been associated with attenuation of the N1 wave of auditory evoked responses elicited by self-generated sounds, such as talking and singing or playing a musical instrument. Moreover, neural oscillations at β-band frequencies have been related to predicting the sound outcome after action initiation. We hypothesized that a newly learned action-perception association would immediately modify interpretation of the sound during subsequent listening. Nineteen healthy young adults (7 female, 12 male) participated in three magnetoencephalographic recordings while first passively listening to recorded sounds of a bell ringing, then actively striking the bell with a mallet, and then again listening to recorded sounds. Auditory cortex activity showed characteristic P1-N1-P2 waves. The N1 was attenuated during sound making, while P2 responses were unchanged. In contrast, P2 became larger when listening after sound making compared with the initial naive listening. The P2 increase occurred immediately, while in previous learning-by-listening studies P2 increases occurred on a later day. Also, reactivity of β-band oscillations, as well as θ coherence between auditory and sensorimotor cortices, was stronger in the second listening block. These changes were significantly larger than those observed in control participants (eight female, five male), who triggered recorded sounds by a key press.
We propose that P2 characterizes familiarity with sound objects, whereas β-band oscillation signifies involvement of the action-perception cycle, and both measures objectively indicate functional neuroplasticity in auditory perceptual learning. SIGNIFICANCE STATEMENT: While suppression of auditory responses to self-generated sounds is well known, it is not clear whether the learned action-sound association modifies subsequent perception. Our study demonstrated the immediate effects of sound-making experience on perception using magnetoencephalographic recordings, as reflected in the increased auditory evoked P2 wave, increased responsiveness of β oscillations, and enhanced connectivity between auditory and sensorimotor cortices. The importance of motor learning was underscored as the changes were much smaller in a control group using a key press to generate the sounds instead of learning to play the musical instrument. The results support the rapid integration of a feedforward model during perception and provide a neurophysiological basis for the application of music making in motor rehabilitation training.
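The θ coherence between auditory and sensorimotor cortices reported above is a spectral measure of coupling between two time series. A minimal sketch, using synthetic signals that share a 6-Hz (theta-band) component plus independent noise; the sampling rate, duration, and signal construction are all illustrative assumptions:

```python
import numpy as np
from scipy.signal import coherence

fs = 250
t = np.arange(fs * 60) / fs
rng = np.random.default_rng(2)

# Two source time series sharing a theta-band (6 Hz) component
shared = np.sin(2 * np.pi * 6 * t)
auditory = shared + rng.standard_normal(t.size)
sensorimotor = shared + rng.standard_normal(t.size)

# Welch-based magnitude-squared coherence in 2-s segments
f, coh = coherence(auditory, sensorimotor, fs=fs, nperseg=fs * 2)

# Coherence is high in the theta band (4-8 Hz), low elsewhere
theta = (f >= 4) & (f <= 8)
print(float(coh[theta].max()))
```

Coherence is bounded between 0 and 1, so an increase in the theta band across listening blocks, as described above, can be interpreted as stronger frequency-specific coupling between the two areas.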


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Beta Rhythm/physiology, Evoked Potentials, Auditory/physiology, Learning/physiology, Neuronal Plasticity/physiology, Sound, Adult, Biological Clocks/physiology, Female, Humans, Magnetoencephalography/methods, Male
19.
Brain Cogn; 111: 144-155, 2017 02.
Article in English | MEDLINE | ID: mdl-27940303

ABSTRACT

In music, a melodic motif is often played repeatedly in different pitch ranges and at different times. Event-related potential (ERP) studies have shown that the mismatch negativity (MMN) reflects memory trace processing that encodes two separate melodic lines ("voices") with different motifs. Here we investigated whether a single motif presented in two voices is encoded as a single entity or two separate entities, and whether motifs overlapping in time impede or enhance encoding strength. Electroencephalogram (EEG) from 11 musically-trained participants was recorded while they passively listened to sequences of 5-note motifs where the 5th note either descended (standard) or ascended (deviant) relative to the previous note (20% deviant rate). Motifs were presented either in one pitch range, or alternated between two pitch ranges, creating an "upper" and a "lower" voice. Further, motifs were either temporally isolated (silence in between), or temporally concurrent with two tones overlapping. When motifs were temporally isolated, MMN amplitude in the one-pitch-range condition was similar to that in the two-pitch-range upper voice. In contrast, no MMN, but P3a, was observed in the two-pitch-range lower voice. When motifs were temporally concurrent and presented in two pitch ranges, MMN exhibited a more posterior distribution in the upper voice, but again, was absent in the lower voice. These results suggest that motifs presented in two separate voices are not encoded entirely independently, but hierarchically, causing asymmetry between the upper and lower voice encoding even when no simultaneous pitches are presented.
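The stimulus stream described above can be sketched as a generator of 5-note motifs whose fifth note descends (standard, 80%) or ascends (deviant, 20%), with motifs alternating between an upper and a lower pitch range. The specific pitches, interval sizes, and MIDI note numbers below are illustrative assumptions, not the study's actual stimulus values.

```python
import random

random.seed(0)

def make_motif(base_pitch, deviant):
    """Four ascending notes; the 5th descends (standard) or ascends (deviant)."""
    notes = [base_pitch + step for step in (0, 2, 4, 5)]
    notes.append(notes[-1] + (2 if deviant else -2))
    return notes

upper_base, lower_base = 72, 60      # assumed pitch ranges (MIDI C5 / C4)
n_motifs, deviant_rate = 100, 0.2

sequence = []
for i in range(n_motifs):
    base = upper_base if i % 2 == 0 else lower_base   # alternate voices
    deviant = random.random() < deviant_rate
    sequence.append(make_motif(base, deviant))

# Empirical deviant proportion (fluctuates around the nominal 0.2)
n_deviants = sum(m[4] > m[3] for m in sequence)
print(n_deviants / n_motifs)
```

Keeping the deviant defined only by the direction of the final interval, as here, is what allows the MMN to index the memory trace for the motif rather than a simple frequency difference.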


Subjects
Auditory Perception/physiology, Evoked Potentials, Auditory/physiology, Music, Adult, Electroencephalography, Female, Humans, Male, Pitch Perception/physiology, Young Adult
20.
Psychophysiology; 53(7): 974-90, 2016 07.
Article in English | MEDLINE | ID: mdl-27080577

ABSTRACT

Auditory object perception requires binding of elementary features of complex stimuli. Synchronization of high-frequency oscillation in neural networks has been proposed as an effective alternative to binding via hard-wired connections because binding in an oscillatory network can be dynamically adjusted to the ever-changing sensory environment. Previously, we demonstrated in young adults that gamma oscillations are critical for sensory integration and found that they were affected by concurrent noise. Here, we aimed to support the hypothesis that stimulus-evoked auditory 40-Hz responses are a component of thalamocortical gamma oscillations and examined whether this oscillatory system may become less effective in aging. In young and older adults, we recorded neuromagnetic 40-Hz oscillations, elicited by monaural amplitude-modulated sound. Comparing responses in quiet and under contralateral masking with multitalker babble noise revealed two functionally distinct components of auditory 40-Hz responses. The first component followed changes in the auditory input with high fidelity and was of similar amplitude in young and older adults. The second, significantly smaller in older adults, showed a 200-ms interval of amplitude and phase rebound and was strongly attenuated by contralateral noise. The amplitude of the second component was correlated with behavioral speech-in-noise performance. Concurrent noise also reduced the P2 wave of auditory evoked responses at 200-ms latency, but not the earlier N1 wave. P2 modulation was reduced in older adults. The results support the model of sensory binding through thalamocortical gamma oscillations. Limitation of neural resources for this process in older adults may contribute to their speech-in-noise understanding deficits.
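The monaural amplitude-modulated stimulus used to drive 40-Hz steady-state responses is simply a carrier tone multiplied by a 40-Hz envelope. A minimal sketch follows; the carrier frequency, duration, modulation depth, and scaling are assumptions for illustration, not the study's stimulus parameters.

```python
import numpy as np

fs = 44100                     # audio sampling rate (Hz), assumed
dur = 1.0                      # stimulus duration (s), assumed
t = np.arange(int(fs * dur)) / fs

carrier_hz, mod_hz, depth = 500.0, 40.0, 1.0

# 40-Hz amplitude envelope; this is the periodicity the auditory
# steady-state response follows
envelope = 1 + depth * np.sin(2 * np.pi * mod_hz * t)
stimulus = 0.5 * envelope * np.sin(2 * np.pi * carrier_hz * t)

print(stimulus.shape)
```

Because the envelope repeats every 25 ms, averaging the evoked field over many modulation cycles isolates the 40-Hz response, and contralateral masking noise can then be added to probe the second, noise-sensitive component described above.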


Subjects
Aging, Evoked Potentials, Auditory, Gamma Rhythm, Speech Perception/physiology, Acoustic Stimulation, Adult, Aged, Auditory Perception/physiology, Female, Humans, Magnetoencephalography, Male, Middle Aged, Perceptual Masking/physiology, Psychoacoustics, Signal Processing, Computer-Assisted, Young Adult