1.
Adv Exp Med Biol; 1455: 227-256, 2024.
Article in English | MEDLINE | ID: mdl-38918355

ABSTRACT

The aim of this chapter is to give an overview of how the perception of rhythmic temporal regularity, such as a regular beat in music, can be studied in human adults, human newborns, and nonhuman primates using event-related brain potentials (ERPs). First, we discuss different aspects of temporal structure in general, and of musical rhythm in particular, as well as the possible mechanisms underlying the perception of regularity (e.g., a beat) in rhythm. Additionally, we highlight the importance of dissociating beat perception from the perception of other types of structure in rhythm, such as predictable sequences of temporal intervals, ordinal structure, and rhythmic grouping. In the second section of the chapter, we start with a discussion of auditory ERPs elicited by infrequent and frequent sounds: ERP responses to regularity violations, such as mismatch negativity (MMN), N2b, and P3, as well as early sensory responses to sounds, such as P1 and N1, have been shown to be instrumental in probing beat perception. Subsequently, we discuss how beat perception can be probed by comparing ERP responses to sounds in regular and irregular sequences, and by comparing ERP responses to sounds in different metrical positions in a rhythm, such as on and off the beat or on strong and weak beats. Finally, we discuss previous research that has used the aforementioned ERPs and paradigms to study beat perception in human adults, human newborns, and nonhuman primates. In doing so, we consider the possible pitfalls and prospects of the technique, as well as future perspectives.
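
As an illustration of the kind of analysis described here (not the authors' actual pipeline), the sketch below shows how an MMN-like difference wave can be obtained by averaging epochs time-locked to standard and deviant sounds; the array shapes, sampling assumptions, and simulated data are invented for the example.

```python
import numpy as np

def difference_wave(standard_epochs, deviant_epochs):
    """Average epochs per condition and subtract (deviant - standard).

    Both inputs are assumed to be arrays of shape (n_trials, n_samples),
    baseline-corrected and time-locked to sound onset.
    """
    erp_standard = standard_epochs.mean(axis=0)
    erp_deviant = deviant_epochs.mean(axis=0)
    return erp_deviant - erp_standard

# Toy example: 100 simulated trials per condition, 600 samples each (hypothetical data).
rng = np.random.default_rng(0)
standard = rng.normal(0.0, 1.0, size=(100, 600))
deviant = rng.normal(0.0, 1.0, size=(100, 600))
deviant[:, 150:250] -= 1.0  # inject a negative deflection in a post-onset window

mmn = difference_wave(standard, deviant)
print("peak MMN-like amplitude:", mmn.min())
```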


Subjects
Auditory Perception, Music, Primates, Humans, Animals, Auditory Perception/physiology, Newborn Infant, Adult, Primates/physiology, Auditory Evoked Potentials/physiology, Acoustic Stimulation/methods, Evoked Potentials/physiology, Electroencephalography
2.
Cognition; 243: 105670, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38016227

ABSTRACT

Newborn infants have been shown to extract temporal regularities from sound sequences, both by learning regular sequential properties and by extracting periodicity in the input, commonly referred to as a regular pulse or the 'beat'. However, these two types of regularities are often indistinguishable in isochronous sequences, as both statistical learning and beat perception can be elicited by the regular alternation of accented and unaccented sounds. Here, we manipulated the isochrony of sound sequences in order to disentangle statistical learning from beat perception in sleeping newborn infants in an EEG experiment, as previously done in adults and macaque monkeys. We used a binary accented sequence that induces a beat when presented with isochronous timing, but not when presented with randomly jittered timing. We compared mismatch responses to infrequent deviants falling on either accented or unaccented (i.e., odd and even) positions. Results showed a clear difference between metrical positions in the isochronous sequence, but not in the equivalent jittered sequence. This suggests that beat processing is present in newborns. Despite previous evidence for statistical learning in newborns, the effects of this ability were not detected in the jittered condition. These results show that statistical learning by itself does not fully explain beat processing in newborn infants.
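
A minimal sketch (our reconstruction, not the published stimulus code) of how a binary accented sequence can be presented either isochronously or with randomly jittered onsets; the base interval and jitter range are assumed values.

```python
import numpy as np

def onset_times(n_events, base_ioi=0.25, jitter=0.0, seed=None):
    """Return onset times (s) for a sequence of n_events sounds.

    With jitter=0 the sequence is isochronous; otherwise each inter-onset
    interval is perturbed uniformly by up to +/- jitter seconds.
    """
    rng = np.random.default_rng(seed)
    iois = np.full(n_events - 1, base_ioi)
    if jitter > 0:
        iois += rng.uniform(-jitter, jitter, size=n_events - 1)
    return np.concatenate(([0.0], np.cumsum(iois)))

accents = np.array([1, 0] * 8)  # binary accent pattern: accented (odd) vs unaccented (even) positions
iso = onset_times(16, base_ioi=0.25)                        # beat-inducing, isochronous timing
jit = onset_times(16, base_ioi=0.25, jitter=0.08, seed=1)   # same accent pattern, jittered timing
print(accents[:6], np.diff(iso)[:5], np.diff(jit)[:5])
```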


Subjects
Auditory Perception, Music, Humans, Newborn Infant, Acoustic Stimulation/methods, Auditory Perception/physiology, Periodicity
3.
Front Psychol; 14: 1218394, 2023.
Article in English | MEDLINE | ID: mdl-38022909

ABSTRACT

Music is a cultural activity universally present in all human societies. Several hypotheses have been formulated to understand the possible origins of music and the reasons for its emergence. Here, we test two hypotheses: (1) the coalition signaling hypothesis, which posits that music could have emerged as a tool to signal cooperative intent and the strength of alliances, and (2) the predation deterrence hypothesis, which treats music as a strategy to deter potential predators. In addition, we further explore the link between tactile cues and the propensity of mothers to sing toward infants. For this, we investigated the singing behaviors of hunter-gatherer mothers during daily foraging trips among the Mbendjele BaYaka in the Republic of the Congo. Although singing is a significant component of their daily activities, such as walking in the forest or collecting food, studies of human music production in hunter-gatherer societies have mostly been conducted during ritual ceremonies. In this study, we collected foraging and singing behavioral data from mothers using focal follows of five BaYaka women during their foraging trips in the forest. In accordance with our predictions for the coalition signaling hypothesis, women were more likely to sing when present in large groups, especially when group members were less familiar. However, predictions of the predation deterrence hypothesis were not supported, as the interaction between group size and distance from the village did not have a significant effect on the likelihood of singing. The latter may be due to limited variation in predation risk in the foraging areas, because of the intense bush meat trade, and hence, future studies should include foraging areas with higher densities of wild animals. Lastly, we found that mothers were more likely to sing when they were carrying infants compared to when infants were close by but carried by others, supporting the prediction that touch plays an important prerequisite role in musical interaction between mother and child. Our study provides insight into the role of music as a tool for signaling intent within and between groups and for strengthening potentially conflict-free alliances during joint foraging activities.
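
The key test described above asks whether the likelihood of singing depends on group size and on its interaction with distance from the village. A hedged sketch of one conventional way to fit such a model (a logistic regression with an interaction term) is given below; the variable names and data are invented for illustration and are not taken from the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical focal-follow records: one row per observation interval.
rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "group_size": rng.integers(2, 12, size=n),
    "distance_km": rng.uniform(0.5, 6.0, size=n),
})
# Simulate singing that becomes more likely in larger groups (toy data only).
p = 1 / (1 + np.exp(-(-2.0 + 0.3 * df["group_size"])))
df["sang"] = rng.binomial(1, p)

# Logistic regression with a group size x distance interaction.
model = smf.logit("sang ~ group_size * distance_km", data=df).fit(disp=0)
print(model.params)
```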

4.
Anim Cogn; 26(4): 1161-1175, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36934374

ABSTRACT

Zebra finches rely mainly on syllable phonology rather than on syllable sequence when they discriminate between two songs. However, they can also learn to discriminate two strings containing the same set of syllables by their sequence. How learning about the phonological characteristics of syllables and learning about their sequence relate to each other, and to the composition of the stimuli, is still an open question. We examined whether and how zebra finches' relative sensitivity to syllable phonology and syllable sequence depends on the differences between syllable strings. Two groups of zebra finches were trained in a Go-Left/Go-Right task to discriminate either between two strings in which each string contained a unique set of song syllables ('Different-syllables group') or two strings in which both strings contained the same set of syllables, but in a different sequential order ('Same-syllables group'). We assessed to what extent the birds in the two experimental groups attend to the spectral characteristics and the sequence of the syllables by measuring the responses to test strings consisting of spectral modifications or sequence changes. Our results showed no difference in the number of trials needed to discriminate strings consisting of either different or identical sets of syllables. Both experimental groups attended to changes in spectral features in a similar way, but the group for which both training strings consisted of the same set of syllables responded more strongly to changes in sequence than the group for which the training strings consisted of different sets of syllables. This outcome suggests the presence of an additional learning process to learn about syllable sequence when learning about syllable phonology is not sufficient to discriminate two strings. Our study thus demonstrates that the relative importance of syllable phonology and sequence depends on how these features vary among stimuli. This indicates cognitive flexibility in the acoustic features that songbirds might use in their song recognition.


Assuntos
Tentilhões , Animais , Tentilhões/fisiologia , Vocalização Animal/fisiologia , Aprendizagem , Percepção Auditiva/fisiologia , Cognição
6.
Behav Brain Sci; 44: e78, 2021 Sep 30.
Article in English | MEDLINE | ID: mdl-34588038

ABSTRACT

The two target articles address the origins of music in complementary ways. However, both proposals focus on overt musical behaviour, largely ignoring the role of perception and cognition, and they blur the boundaries between the potential origins of language and music. To resolve this, an alternative research strategy is proposed that focuses on the core cognitive components of musicality.


Subjects
Music, Cognition, Humans, Language
7.
Philos Trans R Soc Lond B Biol Sci; 376(1835): 20200324, 2021 Oct 11.
Article in English | MEDLINE | ID: mdl-34420379

ABSTRACT

This theme issue assembles current studies that ask how and why precise synchronization and related forms of rhythm interaction are expressed in a wide range of behaviour. The studies cover human activity, with an emphasis on music, and social behaviour, reproduction and communication in non-human animals. In most cases, the temporally aligned rhythms have short periods, from several seconds down to a fraction of a second, and are regulated by central nervous system pacemakers, but interactions involving rhythms that are 24 h or longer and originate in biological clocks also occur. Across this spectrum of activities, species and time scales, empirical work and modelling suggest that synchrony arises from a limited number of coupled-oscillator mechanisms with which individuals mutually entrain. Phylogenetic distribution of these common mechanisms points towards convergent evolution. Studies of animal communication indicate that many synchronous interactions between the signals of neighbouring individuals are specifically favoured by selection. However, synchronous displays are often emergent properties of entrainment between signalling individuals, and in some situations, the very signallers who produce a display might not gain any benefit from the collective timing of their production. This article is part of the theme issue 'Synchrony and rhythm interaction: from the brain to behavioural ecology'.
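
The review attributes synchrony to coupled-oscillator mechanisms through which individuals mutually entrain. The sketch below is a generic Kuramoto-style simulation of two mutually coupled phase oscillators, offered only to illustrate that idea; the frequencies and coupling strength are arbitrary assumptions.

```python
import numpy as np

def simulate_two_oscillators(freqs_hz, coupling, duration=10.0, dt=0.001):
    """Euler-integrate two phase oscillators with symmetric sine coupling."""
    n_steps = int(duration / dt)
    phases = np.zeros((n_steps, 2))
    omega = 2 * np.pi * np.asarray(freqs_hz)
    for t in range(1, n_steps):
        p = phases[t - 1]
        dp0 = omega[0] + coupling * np.sin(p[1] - p[0])
        dp1 = omega[1] + coupling * np.sin(p[0] - p[1])
        phases[t] = p + dt * np.array([dp0, dp1])
    return phases

phases = simulate_two_oscillators(freqs_hz=[1.0, 1.2], coupling=2.0)
# With sufficiently strong coupling the phase difference settles to a constant (phase locking).
print("final phase difference (rad):", (phases[-1, 1] - phases[-1, 0]) % (2 * np.pi))
```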


Assuntos
Comunicação Animal , Encéfalo/fisiologia , Atividades Humanas , Música , Periodicidade , Reprodução , Comportamento Social , Animais , Humanos
8.
Philos Trans R Soc Lond B Biol Sci; 376(1835): 20200325, 2021 Oct 11.
Article in English | MEDLINE | ID: mdl-34420381

ABSTRACT

Humans perceive and spontaneously move to one or several levels of periodic pulses (a meter, for short) when listening to musical rhythm, even when the sensory input does not provide prominent periodic cues to their temporal location. Here, we review a multi-levelled framework for understanding how external rhythmic inputs are mapped onto internally represented metric pulses. This mapping is studied using an approach to quantify and directly compare representations of metric pulses in signals corresponding to sensory inputs, neural activity and behaviour (typically body movement). Based on this approach, recent empirical evidence can be drawn together into a conceptual framework that unpacks the phenomenon of meter into four levels. Each level highlights specific functional processes that critically enable and shape the mapping from sensory input to internal meter. We discuss the nature, constraints and neural substrates of these processes, starting with fundamental mechanisms investigated in macaque monkeys that enable basic forms of mapping between simple rhythmic stimuli and internally represented metric pulses. We propose that human evolution has gradually built a robust and flexible system upon these fundamental processes, allowing more complex levels of mapping to emerge in musical behaviours. This approach opens promising avenues to understand the many facets of rhythmic behaviours across individuals and species. This article is part of the theme issue 'Synchrony and rhythm interaction: from the brain to behavioural ecology'.
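
One widely used way to quantify and compare representations of metric pulses across stimulus, neural and movement signals is to inspect spectral amplitude at pulse-related frequencies. The sketch below is a generic illustration of that idea under assumed parameters, not the specific analysis used in the reviewed work.

```python
import numpy as np

def amplitude_at_frequencies(signal, fs, target_freqs_hz):
    """Return FFT amplitudes of `signal` at the requested frequencies (Hz)."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return [spectrum[np.argmin(np.abs(freqs - f))] for f in target_freqs_hz]

# Toy stimulus envelope: a rhythm with events every 0.4 s (2.5 Hz) over 20 s, sampled at 200 Hz.
fs = 200
t = np.arange(0, 20, 1 / fs)
envelope = (np.mod(t, 0.4) < 0.05).astype(float)

# Compare amplitude at an assumed pulse frequency (1.25 Hz) and at the event rate (2.5 Hz):
# a pulse frequency can be weak or absent in the input yet prominent in neural or movement signals.
print(amplitude_at_frequencies(envelope, fs, [1.25, 2.5]))
```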


Assuntos
Percepção Auditiva/fisiologia , Encéfalo/fisiologia , Periodicidade , Primatas/fisiologia , Estimulação Acústica , Animais , Sinais (Psicologia) , Humanos , Macaca/fisiologia
9.
PLoS One; 15(3): e0229109, 2020.
Article in English | MEDLINE | ID: mdl-32130244

ABSTRACT

Music and language have long been considered two distinct cognitive faculties governed by domain-specific cognitive and neural mechanisms. Recent work on the domain-specificity of pitch processing suggests, however, that pitch processing is governed by shared neural mechanisms. The current study aimed to explore the domain-specificity of pitch processing by simultaneously presenting pitch contours in speech and music to speakers of a tonal language, and measuring behavioral responses and event-related potentials (ERPs). Native speakers of Mandarin were exposed to concurrent pitch contours in melody and speech. Contours in the melody emulated those in speech and were either congruent or incongruent with the pitch contour of the lexical tone (i.e., rising or falling). Component magnitudes of the N2b and N400 were used as indices of lexical processing. We found that the N2b was modulated by melodic pitch: incongruent items evoked significantly larger amplitudes. There was a trend for the N400 to be modulated in the same way. Interestingly, these effects were present only on rising tones. The amplitude and time course of the N2b and N400 suggest that melodic pitch contours interfere with both early and late stages of phonological and semantic processing.


Assuntos
Idioma , Música/psicologia , Percepção da Altura Sonora/fisiologia , Semântica , Percepção da Fala/fisiologia , Fala/fisiologia , Estimulação Acústica , Adulto , Povo Asiático/psicologia , Percepção Auditiva/fisiologia , Eletroencefalografia , Potenciais Evocados , Feminino , Humanos , Masculino , Vias Neurais/fisiologia , Tempo de Reação , Adulto Jovem
10.
J Cogn Neurosci; 32(7): 1221-1241, 2020 Jul.
Article in English | MEDLINE | ID: mdl-31933432

ABSTRACT

Predicting the timing of incoming information allows the brain to optimize information processing in dynamic environments. Behaviorally, temporal expectations have been shown to facilitate processing of events at expected time points, such as sounds that coincide with the beat in musical rhythm. Yet, temporal expectations can develop based on different forms of structure in the environment, not just the regularity afforded by a musical beat. Little is known, however, about how different types of temporal expectations are neurally implemented and how they affect performance. Here, we orthogonally manipulated the periodicity and predictability of rhythmic sequences to examine the mechanisms underlying beat-based and memory-based temporal expectations, respectively. Using behavioral measures and EEG, we examined the effects of beat-based and memory-based expectations on auditory processing when rhythms were task-relevant or task-irrelevant. At expected time points, both beat-based and memory-based expectations facilitated target detection and led to attenuation of P1 and N1 responses, even when expectations were task-irrelevant (unattended). For beat-based expectations, we additionally found reduced target detection and enhanced N1 responses for events at unexpected time points (e.g., off-beat), regardless of the presence of memory-based expectations or task relevance. This latter finding supports the notion that periodicity selectively induces rhythmic fluctuations in neural excitability and furthermore indicates that, although beat-based and memory-based expectations may similarly affect auditory processing of expected events, their underlying neural mechanisms may be different.
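
As a rough illustration (with invented interval values, not the study's actual stimuli) of how periodicity and predictability can be crossed orthogonally, the sketch below builds four sequence types from small groups of inter-onset intervals: in the periodic set every group spans the same assumed beat period, so a beat survives reordering, while predictability is manipulated by keeping the group order fixed or randomizing it.

```python
import numpy as np

# Interval groups (s). In the periodic set every group sums to the same assumed
# beat period (0.6 s), so a regular beat remains even when groups are shuffled;
# in the aperiodic set the group durations differ, so no common period is implied.
PERIODIC_CELLS = [np.array([0.6]), np.array([0.3, 0.3]), np.array([0.45, 0.15])]
APERIODIC_CELLS = [np.array([0.52]), np.array([0.33, 0.33]), np.array([0.41, 0.23])]

def make_iois(periodic, predictable, n_cells=12, seed=0):
    """Inter-onset intervals for one of four (periodicity x predictability) conditions."""
    rng = np.random.default_rng(seed)
    cells = PERIODIC_CELLS if periodic else APERIODIC_CELLS
    if predictable:
        order = [cells[i % len(cells)] for i in range(n_cells)]            # fixed, repeating order
    else:
        order = [cells[i] for i in rng.integers(0, len(cells), n_cells)]   # random order
    return np.concatenate(order)

for periodic in (True, False):
    for predictable in (True, False):
        print(periodic, predictable, np.round(make_iois(periodic, predictable)[:6], 2))
```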


Assuntos
Motivação , Música , Atenção , Percepção Auditiva , Encéfalo , Humanos , Periodicidade
11.
Music Percept; 37(3): 185-195, 2020 Feb.
Article in English | MEDLINE | ID: mdl-36936548

ABSTRACT

Many foundational questions in the psychology of music require cross-cultural approaches, yet the vast majority of work in the field to date has been conducted with Western participants and Western music. For cross-cultural research to thrive, it will require collaboration between people from different disciplinary backgrounds, as well as strategies for overcoming differences in assumptions, methods, and terminology. This position paper surveys the current state of the field and offers a number of concrete recommendations focused on issues involving ethics, empirical methods, and definitions of "music" and "culture."

12.
PLoS One; 13(11): e0207265, 2018.
Article in English | MEDLINE | ID: mdl-30419066

ABSTRACT

BACKGROUND: Previous literature has suggested a relationship between playing a musical instrument and benefits in various cognitive domains. However, it remains unknown whether exposure to a musically enriched environment, rather than playing an instrument oneself, might also enhance performance in cognitive domains such as language and mathematics, or in executive sub-functions such as planning and working memory, in primary school children. DESIGN: Cross-sectional. METHOD: Exposure to a musically enriched environment (e.g., listening to music at home, during play, or at concerts) was assessed using a comprehensive intake questionnaire administered to a sample of 176 primary school children. Participants also completed the verbal intelligence section of the Wechsler Intelligence Scale for Children (WISC-III), executive sub-function tasks assessing planning (Tower of London), working memory (Klingberg matrix backward span) and inhibition (Go/No-Go task), and a short-term memory task (Klingberg matrix forward span). RESULTS: Linear and multiple regression analyses showed no significant relationship between exposure to a musically enriched environment and executive sub-functions (planning, inhibition and working memory) or short-term memory. The relationship between an enriched musical environment and verbal IQ showed only a trend. DISCUSSION: Experiencing a musically enriched environment does not predict higher performance on executive sub-functions, but it may influence verbal IQ.
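
For readers unfamiliar with the analysis named in the results, a minimal multiple-regression sketch is given below; the predictor and outcome names mirror the abstract, but the data, covariate, and coefficients are entirely invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per child.
rng = np.random.default_rng(3)
n = 176
df = pd.DataFrame({
    "music_exposure": rng.normal(0, 1, n),   # questionnaire score (standardized, assumed)
    "age_months": rng.normal(100, 10, n),    # assumed covariate
})
df["verbal_iq"] = 100 + 0.5 * df["music_exposure"] + rng.normal(0, 15, n)

# Multiple regression of verbal IQ on exposure, controlling for age.
fit = smf.ols("verbal_iq ~ music_exposure + age_months", data=df).fit()
print(fit.params)
print(fit.pvalues)
```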


Assuntos
Função Executiva , Inteligência , Idioma , Memória de Curto Prazo , Música/psicologia , Criança , Estudos Transversais , Feminino , Humanos , Masculino , Meio Social
13.
Front Neurosci; 12: 475, 2018.
Article in English | MEDLINE | ID: mdl-30061809

ABSTRACT

Charles Darwin suggested that the perception of rhythm is common to all animals. While experimental research has only recently begun to find support for this claim, there are also aspects of rhythm cognition that appear to be species-specific, such as the capability to perceive a regular pulse (or beat) in a varying rhythm. In the current study, using EEG, we adapted an auditory oddball paradigm that allows for disentangling the contributions of beat perception and isochrony to the temporal predictability of the stimulus. We presented two rhesus monkeys (Macaca mulatta) with a rhythmic sequence in two versions: an isochronous version that was acoustically accented so that it could induce a duple meter (like a march), and a jittered version that used the same acoustically accented sequence but was presented with randomized timing, thereby disabling beat induction. The results reveal that monkeys are sensitive to the isochrony of the stimulus, but not its metrical structure. The MMN was influenced by the isochrony of the stimulus, resulting in a larger MMN in the isochronous as opposed to the jittered condition. However, the MMN for both monkeys showed no interaction between metrical position and isochrony. So, while the monkey brain appears to be sensitive to the isochrony of the stimulus, we find no evidence in support of beat perception. We discuss these results in the context of the gradual audiomotor evolution (GAE) hypothesis (Merchant and Honing, 2014), which suggests that beat-based timing is omnipresent in humans but only weakly present or absent in non-human primates.

14.
Front Neurosci; 12: 103, 2018.
Article in English | MEDLINE | ID: mdl-29541017

ABSTRACT

Background: Research on the effects of music education on cognitive abilities has generated increasing interest across the scientific community. Nonetheless, longitudinal studies investigating the effects of structured music education on cognitive sub-functions are still rare. Prime candidates for investigating a relationship between academic achievement and music education appear to be executive functions such as planning, working memory, and inhibition. Methods: One hundred and forty-seven primary school children (mean age 6.4 years, SD = 0.65) were followed for 2.5 years. Participants were randomized into four groups: two music intervention groups, one active visual arts group, and a no arts control group. Neuropsychological tests assessed verbal intelligence and executive functions. Additionally, a national pupil monitor provided data on academic performance. Results: Children in the visual arts group performed better on visuospatial memory tasks than the three other groups. However, the test scores on inhibition, planning and verbal intelligence increased significantly in the two music groups over time as compared to the visual arts and no arts controls. Mediation analysis with executive functions and verbal IQ as mediators of academic performance showed a possible far-transfer effect from executive sub-functions to academic performance scores. Discussion: The present results indicate a positive influence of long-term music education on cognitive abilities such as inhibition and planning. Of note, following a two-and-a-half-year visual arts program significantly improved scores on a visuospatial memory task. Taken together, the results support a far-transfer effect from music education to academic achievement mediated by executive sub-functions.

15.
Ann N Y Acad Sci; 2018 Mar 15.
Article in English | MEDLINE | ID: mdl-29542134

ABSTRACT

In recent years, music and musicality have been the focus of an increasing amount of research effort. This has led to a growing role and visibility of the contribution of (bio)musicology to the field of neuroscience and cognitive sciences at large. While it has been widely acknowledged that there are commonalities between speech, language, and musicality, several researchers explain this by considering musicality as an epiphenomenon of language. However, an alternative hypothesis is that musicality is an innate and widely shared capacity for music that can be seen as a natural, spontaneously developing set of traits based on and constrained by our cognitive abilities and their underlying biology. A comparative study of musicality in humans and well-known animal models (monkeys, birds, pinnipeds) will further our insights on which features of musicality are exclusive to humans and which are shared between humans and nonhuman animals, contribute to an understanding of the musical phenotype, and further constrain existing evolutionary theories of music and musicality.

16.
Front Psychol; 9: 38, 2018.
Article in English | MEDLINE | ID: mdl-29441035

ABSTRACT

Despite differences in their function and domain-specific elements, syntactic processing in music and language is believed to share cognitive resources. This study investigates whether the simultaneous processing of language and music shares a common syntactic processor or relies on more general attentional resources. To this end, we tested musicians and non-musicians using visually presented sentences and aurally presented melodies containing local and long-distance syntactic dependencies. Accuracy rates and reaction times were collected. In both sentences and melodies, unexpected syntactic anomalies were introduced. This is the first study to address the processing of local and long-distance dependencies in language and music combined while reducing the effect of sensory memory. Participants were instructed to focus on language (language session), music (music session), or both (dual session). In the language session, musicians and non-musicians performed comparably in terms of accuracy rates and reaction times. As expected, group differences appeared in the music session, with musicians responding more accurately than non-musicians and only the latter showing an interaction between the accuracy rates for music and language syntax. In the dual session, musicians were overall more accurate than non-musicians; however, both groups showed comparable behavior, displaying an interaction between the accuracy rates for language and music syntax responses. In our study, accuracy rates better capture the interaction between language and music syntax, and this interaction seems to indicate the use of distinct but interacting mechanisms as part of a decision-making strategy. The interaction appears to be modulated by attentional load and domain proficiency. Our study contributes to the long-lasting debate about the commonalities between language and music by providing evidence for their interaction at a more domain-general level.

17.
PLoS One; 13(1): e0190322, 2018.
Article in English | MEDLINE | ID: mdl-29320533

ABSTRACT

Perception of a regular beat in music is inferred from different types of accents. For example, increases in loudness cause intensity accents, and the grouping of time intervals in a rhythm creates temporal accents. Accents are expected to occur on the beat: when accents are "missing" on the beat, the beat is more difficult to find. However, it is unclear whether accents occurring off the beat alter beat perception similarly to missing accents on the beat. Moreover, no one has examined whether intensity accents influence beat perception more or less strongly than temporal accents, nor how musical expertise affects sensitivity to each type of accent. In two experiments, we obtained ratings of difficulty in finding the beat in rhythms with either temporal or intensity accents, and which varied in the number of accents on the beat as well as the number of accents off the beat. In both experiments, the occurrence of accents on the beat facilitated beat detection more in musical experts than in musical novices. In addition, the number of accents on the beat affected beat finding more in rhythms with temporal accents than in rhythms with intensity accents. The effect of accents off the beat was much weaker than the effect of accents on the beat and appeared to depend on musical expertise, as well as on the number of accents on the beat: when many accents on the beat are missing, beat perception is quite difficult, and adding accents off the beat may not reduce beat perception further. Overall, the different types of accents were processed qualitatively differently, depending on musical expertise. Therefore, these findings indicate the importance of designing ecologically valid stimuli when testing beat perception in musical novices, who may need different types of accent information than musical experts to be able to find a beat. Furthermore, our findings stress the importance of carefully designing rhythms for social and clinical applications of beat perception, as not all listeners treat all rhythms alike.


Subjects
Auditory Perception, Music, Adolescent, Adult, Aged, Female, Humans, Male, Middle Aged
18.
Front Psychol; 8: 824, 2017.
Article in English | MEDLINE | ID: mdl-28588533

ABSTRACT

Enculturation is known to shape the perception of meter in music but this is not explicitly accounted for by current cognitive models of meter perception. We hypothesize that the induction of meter is a result of predictive coding: interpreting onsets in a rhythm relative to a periodic meter facilitates prediction of future onsets. Such prediction, we hypothesize, is based on previous exposure to rhythms. As such, predictive coding provides a possible explanation for the way meter perception is shaped by the cultural environment. Based on this hypothesis, we present a probabilistic model of meter perception that uses statistical properties of the relation between rhythm and meter to infer meter from quantized rhythms. We show that our model can successfully predict annotated time signatures from quantized rhythmic patterns derived from folk melodies. Furthermore, we show that by inferring meter, our model improves prediction of the onsets of future events compared to a similar probabilistic model that does not infer meter. Finally, as a proof of concept, we demonstrate how our model can be used in a simulation of enculturation. From the results of this simulation, we derive a class of rhythms that are likely to be interpreted differently by enculturated listeners with different histories of exposure to rhythms.
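
A highly simplified sketch of the kind of inference the abstract describes: choosing the meter that maximizes P(meter | rhythm), proportional to P(rhythm | meter) P(meter). The per-position onset probabilities below are invented for illustration; in the published model the corresponding statistics are learned from a rhythm corpus.

```python
import numpy as np

# Assumed onset probabilities per eighth-note position within one bar (toy values).
ONSET_PROBS = {
    "4/4": np.array([0.95, 0.15, 0.6, 0.15, 0.8, 0.15, 0.6, 0.15]),
    "3/4": np.array([0.95, 0.15, 0.6, 0.15, 0.6, 0.15]),
}
PRIORS = {"4/4": 0.5, "3/4": 0.5}

def log_posterior(onsets, meter):
    """Log P(meter | rhythm) up to a constant, for a binary onset grid."""
    bar = ONSET_PROBS[meter]
    p = np.tile(bar, int(np.ceil(len(onsets) / len(bar))))[: len(onsets)]
    onsets = np.asarray(onsets)
    loglik = np.sum(onsets * np.log(p) + (1 - onsets) * np.log(1 - p))
    return loglik + np.log(PRIORS[meter])

# A 24-position quantized rhythm (three 4/4 bars or four 3/4 bars) with onsets every 4th position.
rhythm = [1, 0, 0, 0] * 6
scores = {m: round(log_posterior(rhythm, m), 2) for m in ONSET_PROBS}
print(max(scores, key=scores.get), scores)   # this duple-friendly rhythm favors 4/4
```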

20.
Front Psychol; 8: 621, 2017.
Article in English | MEDLINE | ID: mdl-28487668

ABSTRACT

We present a hypothesis-driven study on the variation of melody phrases in a collection of Dutch folk songs. We investigate the variation of phrases within the folk songs through a pattern matching method which detects occurrences of these phrases within folk song variants, and ask: do phrases that show less variation have different properties than phrases that vary more? We hypothesize that theories of melody recall may predict variation, and accordingly investigate phrase length, the position and number of repetitions of a given phrase in the melody in which it occurs, as well as expectancy and motif repetitivity. We show that all of these predictors account for the observed variation to a moderate degree, and that, as hypothesized, phrases vary less when they are relatively short, contain highly expected melodic material, occur relatively early in the melody, and contain small pitch intervals. A large portion of the variance is left unexplained by the current model, however, which leads us to a discussion of future approaches to studying the memorability of melodies.
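
The pattern matching step could, for instance, be approximated by aligning a phrase against every window of a variant melody and scoring similarity with an edit distance over pitch sequences. The sketch below is one such generic approach, not the authors' actual method, and the MIDI pitch lists are invented.

```python
def edit_distance(a, b):
    """Classic Levenshtein distance between two sequences."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (x != y))  # substitution or match
    return dp[-1]

def best_match(phrase, melody):
    """Slide the phrase over the melody and return (best distance, start index)."""
    best = (len(phrase), 0)
    for start in range(len(melody) - len(phrase) + 1):
        window = melody[start:start + len(phrase)]
        best = min(best, (edit_distance(phrase, window), start))
    return best

# Hypothetical pitch sequences (MIDI note numbers).
phrase = [60, 62, 64, 65, 67]
variant = [55, 57, 60, 62, 64, 66, 67, 65, 64]
print(best_match(phrase, variant))   # a small distance means the phrase occurs with little variation
```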
