ABSTRACT
Humans have a remarkable capacity for perceiving and producing rhythm. Rhythmic competence is often treated as a single ability, with individuals assumed to perform more or less accurately across rhythm tasks as a whole. However, research is revealing numerous sub-processes and competencies involved in rhythm perception and production, which can be selectively impaired or enhanced. To investigate whether different patterns of performance emerge across tasks and individuals, we measured performance across a range of rhythm tasks from different test batteries. Distinct performance patterns could potentially reveal separable rhythmic competencies that may draw on distinct neural mechanisms. Participants completed nine rhythm perception and production tasks selected from the Battery for the Assessment of Auditory Sensorimotor and Timing Abilities (BAASTA), the Beat Alignment Test (BAT), the Beat-Based Advantage task (BBA), and two tasks from the Burgundy best Musical Aptitude Test (BbMAT). Principal component analyses revealed clear separation of task performance along three main dimensions: production, beat-based rhythm perception, and sequence memory-based rhythm perception. Hierarchical cluster analyses supported these results, revealing clusters of participants who performed selectively more or less accurately along different dimensions. The current results support the hypothesis of divergence of rhythmic skills. Based on these results, we provide guidelines towards comprehensive testing of rhythm abilities, including at least three short tasks measuring: (1) rhythm production (e.g., tapping to a metronome or music), (2) beat-based rhythm perception (e.g., BAT), and (3) sequence memory-based rhythm processing (e.g., BBA). Implications for underlying neural mechanisms, future research, and potential directions for rehabilitation and training programs are discussed.
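As an illustration of the analytic approach described above, the following Python sketch runs a principal component analysis over a participants-by-tasks score matrix and then clusters participants hierarchically in component space. The data, component count, and cluster count are hypothetical placeholders, not the study's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical data: 40 participants x 9 rhythm-task scores.
rng = np.random.default_rng(0)
scores = rng.normal(size=(40, 9))

# Standardize tasks, then extract principal components.
z = StandardScaler().fit_transform(scores)
pca = PCA(n_components=3)
components = pca.fit_transform(z)        # participant loadings on 3 dimensions
print(pca.explained_variance_ratio_)

# Hierarchical clustering of participants in component space (Ward linkage),
# cut into a small number of clusters.
tree = linkage(components, method="ward")
clusters = fcluster(tree, t=3, criterion="maxclust")
print(clusters)
```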
Subject(s)
Auditory Perception, Music, Humans, Memory, Task Performance and Analysis
ABSTRACT
Gait dysfunctions in Parkinson's disease can be partly relieved by rhythmic auditory cueing, which consists of asking patients to walk with a rhythmic auditory stimulus such as a metronome or music. The effect on gait is visible immediately in terms of increased speed and stride length. Moreover, training programs based on rhythmic cueing can have long-term benefits. The effect of rhythmic cueing, however, varies from one patient to another. Patients' response to the stimulation may depend on rhythmic abilities, which often deteriorate with the disease. Relatively spared abilities to track the beat favor a positive response to rhythmic cueing. On the other hand, most patients with poor rhythmic abilities either do not respond to the cues or experience gait worsening when walking with cues. An individualized approach to rhythmic auditory cueing with music is proposed to cope with this variability in patients' response. This approach calls for assistive mobile technologies capable of delivering cues that adapt in real time to patients' gait kinematics, thus affording step synchronization to the beat. Individualized rhythmic cueing can provide a safe and cost-effective alternative to standard cueing that patients may want to use in their everyday lives.
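A minimal sketch of the adaptive principle behind such individualized cueing, assuming a simple proportional controller that nudges the cue's inter-beat interval toward the patient's measured inter-step interval; the gain value and the step-timing input are illustrative assumptions, not a specification of any particular device:

```python
def adapt_cue_period(cue_period, step_period, gain=0.3):
    """Move the cue's inter-beat interval (s) a fraction of the way
    toward the patient's current inter-step interval (s)."""
    return cue_period + gain * (step_period - cue_period)

# Toy run: cue starts at 1.0 s per beat, patient steps every ~1.1 s.
period = 1.0
for step_interval in [1.10, 1.09, 1.06, 1.05]:
    period = adapt_cue_period(period, step_interval)
    print(round(period, 3))  # cue period drifts toward the gait cadence
```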
ABSTRACT
Humans talk, sing and play music. Some species of birds and whales sing long and complex songs. All these behaviours and sounds exhibit hierarchical structure: syllables and notes are positioned within words and musical phrases, words and motives in sentences and musical phrases, and so on. We developed a new method to measure and compare hierarchical temporal structures in speech, song and music. The method identifies temporal events as peaks in the sound amplitude envelope, and quantifies event clustering across a range of timescales using Allan factor (AF) variance. AF variances were analysed and compared for over 200 different recordings from more than 16 different categories of signals, including recordings of speech in different contexts and languages, and musical compositions and performances from different genres. Non-human vocalizations from two bird species and two types of marine mammals were also analysed for comparison. The resulting patterns of AF variance across timescales were distinct for each of four natural categories of complex sound: speech, popular music, classical music and complex animal vocalizations. Comparisons within and across categories indicated that nested clustering at longer timescales was more prominent when prosodic variation was greater, and when sounds came from interactions among individuals, including interactions between speakers, musicians, and even killer whales. Nested clustering also was more prominent for music compared with speech, and reflected beat structure for popular music and self-similarity across timescales for classical music. In summary, hierarchical temporal structures reflect the behavioural and social processes underlying complex vocalizations and musical performances.
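The Allan factor statistic used here has a compact definition: for contiguous counting windows of duration T with event counts N_i, AF(T) = <(N_{i+1} - N_i)^2> / (2<N>). A minimal Python sketch, under the assumption that event times have already been extracted as amplitude-envelope peaks:

```python
import numpy as np

def allan_factor(event_times, window):
    """Allan factor AF(T): variance of successive count differences
    divided by twice the mean count per window of duration T.
    AF is ~1 for a Poisson process; event clustering pushes it above 1."""
    event_times = np.asarray(event_times)
    edges = np.arange(event_times.min(), event_times.max() + window, window)
    counts, _ = np.histogram(event_times, bins=edges)
    return np.mean(np.diff(counts) ** 2) / (2 * np.mean(counts))

# Toy events over 60 s, evaluated across several timescales.
events = np.sort(np.random.default_rng(1).uniform(0, 60, 500))
for T in [0.1, 0.5, 1.0, 5.0]:
    print(T, allan_factor(events, T))
```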
Subject(s)
Birds/physiology, Humpback Whale/physiology, Music, Speech, Animal Vocalization/physiology, Killer Whale/physiology, Animals, Humans
ABSTRACT
Training based on rhythmic auditory stimulation (RAS) can improve gait in patients with idiopathic Parkinson's disease (IPD). Patients typically walk faster and exhibit greater stride length after RAS. However, this effect is highly variable among patients, with some exhibiting little or no response to the intervention. These individual differences may depend on patients' ability to synchronize their movements to a beat. To test this possibility, 14 IPD patients underwent RAS for four weeks, in which they walked to music with an embedded metronome. Before and after the training, patients' synchronization was assessed with auditory-paced hand tapping and walking to auditory cues. Patients increased gait speed and stride length in non-cued gait after training. However, individual differences were apparent: some patients showed a positive response to RAS, while others showed either no response or a negative response. A positive response to RAS was predicted by synchronization performance in the hand tapping and gait tasks. More severe gait impairment, low synchronization variability, and a prompt response to a stimulation change fostered a positive response to RAS training. Thus, sensorimotor timing skills underpinning the synchronization of steps to an auditory cue may help predict the success of RAS in IPD.
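Synchronization measures of the kind used here as predictors are commonly summarized with circular statistics: each tap (or step) is converted to a phase relative to the nearest beat, and the length of the mean resultant vector indexes synchronization consistency (its inverse reflects variability). A hedged Python sketch with invented tap and beat times:

```python
import numpy as np

def sync_consistency(taps, beats):
    """Mean resultant vector length of tap phases relative to the beat:
    1 = perfectly consistent timing, 0 = taps uniformly spread."""
    taps, beats = np.asarray(taps), np.asarray(beats)
    ibi = np.mean(np.diff(beats))                     # inter-beat interval
    nearest = beats[np.argmin(np.abs(taps[:, None] - beats[None, :]), axis=1)]
    phases = 2 * np.pi * (taps - nearest) / ibi       # phase of each tap
    return float(np.abs(np.mean(np.exp(1j * phases))))

beats = np.arange(0, 20, 0.6)                         # cue every 600 ms
taps = beats + np.random.default_rng(2).normal(-0.03, 0.02, beats.size)
print(sync_consistency(taps, beats))                  # close to 1 here
```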
Subject(s)
Acoustic Stimulation/methods, Exercise Therapy/methods, Gait, Motor Skills, Music, Parkinson Disease/rehabilitation, Periodicity, Aged, Female, Humans, Male, Middle Aged, Neurological Rehabilitation/methods, Parkinson Disease/physiopathology, Random Allocation
ABSTRACT
Auditory stimulation via rhythmic cues can be used successfully in the rehabilitation of motor function in patients with motor disorders. A prototypical example is provided by dysfunctional gait in patients with idiopathic Parkinson's disease (PD). Coupling steps to external rhythmic cues (the beat of music or the sounds of a metronome) leads to long-term motor improvements, such as increased walking speed and greater stride length. These effects are likely to be underpinned by compensatory brain mechanisms involving cerebellar-thalamocortical networks. Because these areas are also involved in perceptual and motor timing, parallel improvement in timing tasks is expected in PD beyond purely motor benefits. In keeping with this idea, we report here recent behavioral data showing beneficial effects of musically cued gait training (MCGT) on gait performance (i.e., increased stride length and speed), perceptual timing (e.g., discriminating stimulus durations), and sensorimotor timing abilities (i.e., in paced tapping tasks) in PD patients. Particular attention is paid to individual differences in timing abilities in PD, thus paving the way for an individualized MCGT-based therapy.
Subject(s)
Gait, Motor Skills/physiology, Music Therapy/methods, Music, Parkinson Disease/physiopathology, Parkinson Disease/rehabilitation, Acoustic Stimulation, Auditory Perception, Behavior, Biomechanical Phenomena, Brain/physiology, Case-Control Studies, Cues (Psychology), Female, Gait/physiology, Hearing, Humans, Male, Time Factors
ABSTRACT
Singing is as natural as speaking for the majority of people. Yet some individuals (roughly 10-15%) are poor singers, typically performing or imitating pitches and melodies inaccurately. This condition, commonly referred to as "tone deafness," has been observed both in the presence and in the absence of deficient pitch perception. In this article, we review the existing literature concerning normal singing, poor-pitch singing, and, briefly, the sources of this condition. Considering that pitch plays a prominent role in the structure of both music and speech, we also focus on the possibility that speech production (or imitation) is similarly impaired in poor-pitch singers. Preliminary evidence from our laboratory suggests that pitch imitation may be selectively inaccurate in the music domain while remaining unaffected in speech. This finding points to the separability of mechanisms subserving pitch production in music and language.
ABSTRACT
We examined the effect of rate on finger kinematics in goal-directed actions of pianists. In addition, we evaluated whether movement kinematics can be treated as an indicator of personal identity. Pianists' finger movements were recorded with a motion capture system while they performed melodies from memory at different rates. Pianists' peak finger heights above the keys preceding keystrokes increased as tempo increased, and were attained about one tone before keypress. These rate effects were not simply due to a strategy of increasing the key velocity (associated with tone intensity) of the corresponding keystroke. Greater finger heights may compensate, via greater tactile feedback, for the speed-accuracy tradeoff that underlies the tendency toward larger temporal variability at faster tempi. This would allow pianists to maintain high temporal accuracy when playing at fast rates. In addition, finger velocities and accelerations as pianists' fingers approached the keys were sufficiently unique to allow identification of individual pianists with a neural-network classifier. Classification success was higher for pianists with more extensive musical training. Pianists' movement "signatures" may reflect unique goal-directed movement kinematic patterns, leading to an individualistic sound.
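As a rough sketch of this kind of classification analysis, the following Python code trains a small feed-forward network on kinematic feature vectors and estimates identification accuracy by cross-validation. The synthetic features, network size, and scikit-learn tooling are assumptions for illustration; the study's actual classifier and features may differ.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 6 pianists x 50 keystrokes x 8 kinematic features
# (e.g., finger velocity and acceleration samples approaching the key).
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(50, 8)) for i in range(6)])
y = np.repeat(np.arange(6), 50)

# Small feed-forward network; accuracy estimated by 5-fold cross-validation.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
print(cross_val_score(clf, X, y, cv=5).mean())
```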
Subject(s)
Fingers/physiology, Motor Skills/physiology, Music, Adult, Biomechanical Phenomena/physiology, Female, Humans, Male, Movement/physiology, Neural Networks (Computer), Time Factors, Young Adult
ABSTRACT
When people synchronize taps with isochronously presented stimuli, taps usually precede the pacing stimuli [negative mean asynchrony (NMA)]. One explanation of NMA [the sensory accumulation model (SAM); Aschersleben, Brain Cogn 48:66-79, 2002] is that more time is needed to generate a central code for kinesthetic-tactile information than for auditory or visual stimuli. The SAM predicts that raising the intensity of the pacing stimuli shortens the time for their sensory accumulation, thereby increasing NMA. This prediction was tested by asking participants to synchronize finger force pulses with isochronous target stimuli of various intensities. In addition, participants performed a simple reaction-time task for comparison. Higher intensity led to shorter reaction times. However, the intensity manipulation did not affect NMA in the synchronization task. This finding is not consistent with the predictions of the SAM. Discrepancies in sensitivity to stimulus intensity between sensorimotor synchronization and reaction-time tasks point to the involvement of different timing mechanisms in these two tasks.
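Mean asynchrony itself is straightforward to compute: the average signed difference between response onsets and pacing-stimulus onsets, which is negative when responses lead the stimuli. A minimal sketch with invented onset times:

```python
import numpy as np

def mean_asynchrony(responses, stimuli):
    """Mean of (response onset - stimulus onset), in seconds;
    a negative value is the classic NMA."""
    return float(np.mean(np.asarray(responses) - np.asarray(stimuli)))

stimuli = np.arange(0, 12, 0.8)             # isochronous pacing sequence
responses = stimuli - 0.04                  # responses leading by ~40 ms
print(mean_asynchrony(responses, stimuli))  # -> -0.04, i.e., an NMA
```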
Subject(s)
Acoustic Stimulation/methods, Photic Stimulation/methods, Psychomotor Performance/physiology, Reaction Time/physiology, Adult, Female, Humans, Male, Students/psychology, Young Adult
ABSTRACT
Eight adults with a music-specific learning disability (tone deafness, which we refer to here as "congenital amusia") were asked to tap along with music (e.g., Ravel's Bolero) and with nonmusical isochronous sequences (i.e., noise bursts). The amusic participants' tapping was poorly synchronized with the music compared to that of nine matched control participants. By contrast, synchronization with the noise bursts was normal, suggesting that amusic individuals' timing difficulty is limited to music.
Subject(s)
Auditory Perception/physiology, Learning Disabilities/physiopathology, Music, Perceptual Disorders/physiopathology, Time Perception/physiology, Adult, Aged, Humans, Psychomotor Performance
ABSTRACT
The goal of the present study was to determine whether relaxing music (as compared to silence) might facilitate recovery from a psychologically stressful task. To this end, changes in salivary cortisol levels were regularly monitored in 24 students before and after the Trier Social Stress Test. The data show that in the presence of music, the salivary cortisol level ceased to increase after the stressor, whereas in silence it continued to increase for 30 minutes.
Subject(s)
Music Therapy, Music/psychology, Relaxation Therapy, Psychological Stress/therapy, Adult, Anxiety/psychology, Anxiety/therapy, Emotions, Humans, Hydrocortisone/metabolism, Male, Salivary Glands/metabolism, Psychological Stress/psychology
ABSTRACT
Recognizing a well-known melody (e.g., one's national anthem) is not an all-or-none process. Instead, recognition develops progressively while the melody unfolds over time. To examine which factors govern the time course of this recognition process, the gating paradigm, initially designed to study auditory word recognition, was adapted to music. Musicians and nonmusicians were presented with segments of increasing duration of familiar and unfamiliar melodies (i.e., the first note, then the first two notes, then the first three notes, and so forth). Recognition was assessed after each segment either by requiring participants to provide a familiarity judgment (Experiment 1) or by asking them to sing the melody that they thought had been presented (Experiment 2). In general, the more familiar the melody, the fewer the notes required for recognition. Musicians judged music's familiarity within fewer notes than did nonmusicians, whereas the reverse situation (i.e., musicians were slower than nonmusicians) occurred when a sung response was requested. However, both musicians and nonmusicians appeared to segment melodies into the same perceptual units (i.e., motives) in order to access the correct representation in memory. These results are interpreted in light of the cohort model (Marslen-Wilson, 1987), as applied to the music domain.