Results 1 - 20 of 48
1.
Proc Natl Acad Sci U S A ; 121(36): e2319459121, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39186645

ABSTRACT

The perception of musical phrase boundaries is a critical aspect of human musical experience: It allows us to organize, understand, derive pleasure from, and remember music. Identifying boundaries is a prerequisite for segmenting music into meaningful chunks, facilitating efficient processing and storage while providing an enjoyable, fulfilling listening experience through the anticipation of upcoming musical events. Expanding on Sridharan et al.'s [Neuron 55, 521-532 (2007)] work on coarse musical boundaries between symphonic movements, we examined finer-grained boundaries. We measured the fMRI responses of 18 musicians and 18 nonmusicians during music listening. Using general linear modeling, independent component analysis, and Granger causality, we observed heightened auditory integration in anticipation of musical boundaries, and an extensive decrease within the fronto-temporal-parietal network during and immediately following boundaries. Notably, responses were modulated by musicianship. These findings uncover the intricate interplay between musical structure, expertise, and cognitive processing, advancing our knowledge of how the brain makes sense of music.


Subject(s)
Auditory Perception, Brain, Magnetic Resonance Imaging, Music, Humans, Music/psychology, Auditory Perception/physiology, Male, Adult, Female, Brain/physiology, Brain/diagnostic imaging, Brain Mapping/methods, Young Adult, Acoustic Stimulation
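The Granger-causality step named in the abstract above can be illustrated with a toy bivariate test. This is a minimal sketch under simplifying assumptions, not the authors' pipeline: it fits restricted and full autoregressive models by least squares and compares residual variances (the `granger_stat` helper and its lag setting are illustrative).

```python
import numpy as np

def granger_stat(x, y, lags=2):
    """Log ratio of residual variances for predicting x without vs. with y's past.
    Values well above 0 suggest y Granger-causes x."""
    n = len(x)
    target = x[lags:]
    def lagged(s):
        # columns: s[t-1], s[t-2], ..., s[t-lags] for t = lags..n-1
        return np.column_stack([s[lags - k : n - k] for k in range(1, lags + 1)])
    X_r = np.column_stack([np.ones(n - lags), lagged(x)])   # restricted: x's own past
    X_f = np.column_stack([X_r, lagged(y)])                 # full: plus y's past
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        r = target - X @ beta
        return r @ r
    return float(np.log(rss(X_r) / rss(X_f)))
```

On synthetic data where one series drives the other with a one-sample delay, the statistic is clearly asymmetric between the two directions.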
2.
Sci Rep ; 14(1): 15741, 2024 07 08.
Article in English | MEDLINE | ID: mdl-38977822

ABSTRACT

Rhythmic entrainment is a fundamental aspect of musical behavior, but the skills required to accurately synchronize movement to the beat seem to develop over many years. Motion capture studies of corporeal synchronization have shown immature abilities to lock in to the beat in children before age 5, and reliable synchronization ability in adults without musical training; yet there is a lack of data on full-body synchronization skills between early childhood and adulthood. To document typical rhythmic synchronization during middle childhood, we used a wireless motion capture device to measure period- and phase-locking of full-body movement to rhythm and metronome stimuli in 6- to 11-year-old children in comparison with adult data. Results show a gradual improvement with age; however, children's performance did not reach adult levels by age 12, suggesting that these skills continue to develop during adolescence. Our results suggest that in the absence of specific music training, full-body rhythmic entrainment skills improve gradually during middle childhood, and provide metrics for examining the continued maturation of these skills during adolescence.


Subject(s)
Music, Humans, Child, Male, Female, Child Development/physiology, Periodicity, Adult, Movement/physiology, Adolescent
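Phase-locking to a beat, as measured in the study above, is commonly quantified with circular statistics. A minimal sketch (the function name and interface are illustrative assumptions, not the study's actual pipeline):

```python
import numpy as np

def phase_locking(event_times, beat_period):
    """Mean resultant length of event phases relative to an isochronous beat.
    Returns a value in [0, 1]: 1 = perfect phase locking, near 0 = no locking."""
    phases = 2.0 * np.pi * (np.asarray(event_times) % beat_period) / beat_period
    return float(np.abs(np.mean(np.exp(1j * phases))))
```

Movement events that land exactly on the beat give a value of 1; events at random times give a value near 0.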
3.
Psychol Music ; 52(3): 305-321, 2024 May.
Article in English | MEDLINE | ID: mdl-38708378

ABSTRACT

Music that evokes strong emotional responses is often experienced as autobiographically salient. Through emotional experience, the musical features of songs could also contribute to their subjective autobiographical saliency. Songs that were popular during adolescence or young adulthood (ages 10-30) are more likely to evoke stronger memories, a phenomenon known as the reminiscence bump. In the present study, we sought to determine how song-specific age, emotional responsiveness to music, musical features, and subjective memory functioning contribute to the subjective autobiographical saliency of music in older adults. In a music listening study, 112 participants rated excerpts of popular songs from the 1950s to the 1980s for autobiographical saliency. Additionally, they filled out questionnaires about emotional responsiveness to music and subjective memory functioning. The song excerpts' musical features were extracted computationally using MIRtoolbox. Results showed that autobiographical saliency was best predicted by song-specific age, emotional responsiveness to music, and musical features. Newer songs that were more similar in rhythm to older songs were also rated higher in autobiographical saliency. Overall, this study contributes to autobiographical memory research by uncovering a set of factors affecting the subjective autobiographical saliency of music.
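MIRtoolbox itself runs in MATLAB; as a rough illustration of what a computationally extracted rhythmic descriptor looks like, here is a crude pulse-clarity-style measure in plain NumPy. The function and its lag thresholds are assumptions for illustration, not MIRtoolbox's algorithm:

```python
import numpy as np

def pulse_clarity(signal, sr, min_lag_s=0.1, max_lag_s=2.0):
    """Crude rhythmic-clarity descriptor: height of the largest autocorrelation
    peak of the mean-removed amplitude envelope within a plausible beat range."""
    env = np.abs(signal)
    env = env - env.mean()
    ac = np.correlate(env, env, mode="full")[len(env) - 1:]
    ac = ac / ac[0]                       # normalize so lag 0 equals 1
    lo, hi = int(min_lag_s * sr), int(max_lag_s * sr)
    return float(ac[lo:hi].max())
```

A strongly periodic envelope (a click train) scores high; unstructured noise scores low.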

4.
Ann N Y Acad Sci ; 1530(1): 18-22, 2023 12.
Article in English | MEDLINE | ID: mdl-37847675

ABSTRACT

Music listening is a dynamic process that entails complex interactions between sensory, cognitive, and emotional processes. The naturalistic paradigm provides a means to investigate these processes in an ecologically valid manner by allowing experimental settings that mimic real-life musical experiences. In this paper, we highlight the importance of the naturalistic paradigm in studying dynamic music processing and discuss how it allows for investigating both the segregation and integration of brain processes using model-based and model-free methods. We further suggest that studying individual difference-modulated music processing in this paradigm can provide insights into the mechanisms of brain plasticity, which can have implications for the development of interventions and therapies in a personalized way. Finally, despite the challenges that the naturalistic paradigm poses, we end with a discussion on future prospects of music and neuroscience research, especially with the continued development and refinement of naturalistic paradigms and the adoption of open science practices.


Subject(s)
Brain Mapping, Music, Humans, Brain Mapping/methods, Auditory Perception, Magnetic Resonance Imaging, Brain
5.
Cogn Sci ; 47(4): e13281, 2023 04.
Article in English | MEDLINE | ID: mdl-37096347

ABSTRACT

Body movement is a primary nonverbal communication channel in humans. Coordinated social behaviors, such as dancing together, encourage multifarious rhythmic and interpersonally coupled movements from which observers can extract socially and contextually relevant information. The investigation of relations between visual social perception and kinematic motor coupling is important for social cognition. Perceived coupling of dyads spontaneously dancing to pop music has been shown to be highly driven by the degree of frontal orientation between dancers. The perceptual salience of other aspects, including postural congruence, movement frequencies, time-delayed relations, and horizontal mirroring, remains, however, uncertain. In a motion capture study, 90 participant dyads moved freely to 16 musical excerpts from eight musical genres, while their movements were recorded using optical motion capture. A total of 128 recordings from 8 dyads maximally facing each other were selected to generate silent 8-s animations. Three kinematic features describing simultaneous and sequential full-body coupling were extracted from the dyads. In an online experiment, the animations were presented to 432 observers, who were asked to rate perceived similarity and interaction between dancers. We found dyadic kinematic coupling estimates to be higher than those obtained from surrogate estimates, providing evidence for a social dimension of entrainment in dance. Further, we observed links between perceived similarity and coupling of both slower simultaneous horizontal gestures and posture bounding volumes. Perceived interaction, on the other hand, was more related to coupling of faster simultaneous gestures and to sequential coupling. Also, dyads who were perceived as more coupled tended to mirror their pair's movements.


Subject(s)
Gestures, Music, Humans, Imitative Behavior, Movement, Posture, Visual Perception
6.
PLoS One ; 17(9): e0275228, 2022.
Article in English | MEDLINE | ID: mdl-36174020

ABSTRACT

Previous literature has shown that music preferences (and thus preferred musical features) differ depending on the listening context and reasons for listening (RL). Yet, to our knowledge no research has investigated how features of music that people dance or move to relate to particular RL. Consequently, in two online surveys, participants (N = 173) were asked to name songs they move to ("dance music"). Additionally, participants (N = 105) from Survey 1 provided RL for their selected songs. To investigate relationships between the two, we first extracted audio features from dance music using the Spotify API and compared those features with a baseline dataset that is considered to represent music in general. Analyses revealed that, compared to the baseline, the dance music dataset had significantly higher levels of energy, danceability, valence, and loudness, and lower speechiness, instrumentalness and acousticness. Second, to identify potential subgroups of dance music, a cluster analysis was performed on its Spotify audio features. Results of this cluster analysis suggested five subgroups of dance music with varying combinations of Spotify audio features: "fast-lyrical", "sad-instrumental", "soft-acoustic", "sad-energy", and "happy-energy". Third, a factor analysis revealed three main RL categories: "achieving self-awareness", "regulation of arousal and mood", and "expression of social relatedness". Finally, we identified variations in people's RL ratings for each subgroup of dance music. This suggests that certain characteristics of dance music are more suitable for listeners' particular RL, which shape their music preferences. Importantly, the highest-rated RL items for dance music belonged to the "regulation of mood and arousal" category. This might be interpreted as the main function of dance music. We hope that future research will elaborate on connections between musical qualities of dance music and particular music listening functions.


Subject(s)
Communications Media, Music, Acoustics, Auditory Perception, Auscultation, Humans
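The clustering step described above can be sketched with a plain Lloyd's k-means on a feature matrix (rows = songs, columns = features such as energy or valence). This is an illustrative re-implementation, not the study's code:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Lloyd's algorithm: returns (labels, centers) for k clusters of rows of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # distance of every row to every center, then nearest-center assignment
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

On well-separated groups of songs in feature space, the algorithm recovers the groups regardless of initialization.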
7.
Sci Rep ; 12(1): 2672, 2022 02 17.
Article in English | MEDLINE | ID: mdl-35177683

ABSTRACT

Movement is a universal response to music, with dance often taking place in social settings. Although previous work has suggested that socially relevant information, such as personality and gender, are encoded in dance movement, the generalizability of previous work is limited. The current study aims to decode dancers' gender, personality traits, and music preference from music-induced movements. We propose a method that predicts such individual differences from free dance movements, and demonstrate the robustness of the proposed method by using two data sets collected using different musical stimuli. In addition, we introduce a novel measure to explore the relative importance of different joints in predicting individual differences. Results demonstrated near-perfect classification of gender, and notably high prediction of personality and music preferences. Furthermore, learned models demonstrated generalizability across datasets, highlighting the importance of certain joints in intrinsic movement patterns specific to individual differences. Results further support theories of embodied music cognition and the role of bodily movement in musical experiences by demonstrating the influence of gender, personality, and music preferences on embodied responses to heard music.

8.
Hum Mov Sci ; 81: 102894, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34798445

ABSTRACT

Humans are able to synchronize with musical events whilst coordinating their movements with others. Interpersonal entrainment phenomena, such as dance, involve multiple body parts and movement directions. Along with being multidimensional, dance movement interaction is plurifrequential, since it can occur at different frequencies simultaneously. Moreover, it is prone to nonstationarity, due to, for instance, displacements around the dance floor. Various methodological approaches have been adopted for the study of human entrainment, but only spectrogram-based techniques allow for an integral analysis thereof. This article proposes an alternative approach based upon the cross-wavelet transform, a state-of-the-art technique for nonstationary and plurifrequential analysis of univariate interaction. The presented approach generalizes the cross-wavelet transform to multidimensional signals. It makes it possible to identify, for different frequencies of movement, estimates of interaction and leader-follower dynamics across body parts and movement directions. Further, the generalized cross-wavelet transform can be used to quantify the frequency-wise contribution of individual body parts and movement directions to overall movement synchrony. Since both in- and anti-phase relationships are dominant modes of coordination, the proposed implementation ignores whether movements are identical or opposite in phase. The article provides a thorough mathematical description of the method and includes proofs of its invariance under translation, rotation, and reflection. Finally, its properties and performance are illustrated via four examples using simulated data and behavioral data collected through a mirror game task and a free dance movement task.


Subject(s)
Movement, Wavelet Analysis, Humans
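The core of a cross-wavelet analysis (before the article's multidimensional generalization) can be sketched with a complex Morlet transform. Function names and parameter choices here are illustrative assumptions, not the published method:

```python
import numpy as np

def morlet_cwt(x, sr, freqs, w0=6.0):
    """Continuous wavelet transform: one complex row per analysis frequency."""
    n = len(x)
    t = (np.arange(n) - n // 2) / sr
    out = np.empty((len(freqs), n), dtype=complex)
    for i, f in enumerate(freqs):
        s = w0 / (2.0 * np.pi * f)                      # scale for this frequency
        wav = np.exp(1j * w0 * t / s) * np.exp(-t**2 / (2.0 * s**2)) / np.sqrt(s)
        out[i] = np.convolve(x, np.conj(wav)[::-1], mode="same")
    return out

def cross_wavelet_power(x, y, sr, freqs):
    """|Wx . conj(Wy)|: time-frequency map of power shared by two signals."""
    return np.abs(morlet_cwt(x, sr, freqs) * np.conj(morlet_cwt(y, sr, freqs)))
```

Two movement signals sharing an oscillation produce a cross-wavelet power map that peaks at the shared frequency.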
9.
Front Psychol ; 12: 647756, 2021.
Article in English | MEDLINE | ID: mdl-34017286

ABSTRACT

Although music is known to be a part of everyday life and a resource for mood and emotion management, everyday life has changed significantly for many due to the global coronavirus pandemic, making the role of music in everyday life less certain. An online survey, in which participants responded to Likert scale questions as well as providing free text responses, was used to explore how participants were engaging with music during the first wave of the pandemic, whether and how they were using music for mood regulation, and how their engagement with music related to their experiences of worry and anxiety resulting from the pandemic. Results indicated that, while many participants felt their use of music had changed since the beginning of the pandemic, for the majority the amount of their music listening was either unaffected by the pandemic or increased. This was especially true of listening to self-selected music and watching live streamed concerts. Analysis revealed correlations between participants' use of music for mood regulation, their musical engagement, and their levels of anxiety and worry. A small number of participants described having negative emotional responses to music, the majority of whom also reported severe levels of anxiety.

10.
PLoS One ; 16(5): e0251692, 2021.
Article in English | MEDLINE | ID: mdl-33989366

ABSTRACT

BACKGROUND AND OBJECTIVES: Music has a unique capacity to evoke both strong emotions and vivid autobiographical memories. Previous music information retrieval (MIR) studies have shown that the emotional experience of music is influenced by a combination of musical features, including tonal, rhythmic, and loudness features. Here, our aim was to explore the relationship between music-evoked emotions and music-evoked memories and how musical features (derived with MIR) can predict them both. METHODS: Healthy older adults (N = 113, age ≥ 60 years) participated in a listening task in which they rated a total of 140 song excerpts, comprising folk songs and popular songs from the 1950s to the 1980s, on five domains measuring the emotional (valence, arousal, emotional intensity) and memory (familiarity, autobiographical salience) experience of the songs. A set of 24 musical features were extracted from the songs using computational MIR methods. Principal component analyses were applied to reduce multicollinearity, resulting in six core musical components, which were then used to predict the behavioural ratings in multiple regression analyses. RESULTS: All correlations between behavioural ratings were positive and ranged from moderate to very high (r = 0.46-0.92). Emotional intensity showed the highest correlation to both autobiographical salience and familiarity. In the MIR data, three musical components measuring salience of the musical pulse (Pulse strength), relative strength of high harmonics (Brightness), and fluctuation in the frequencies between 200 and 800 Hz (Low-mid) predicted both music-evoked emotions and memories. Emotional intensity (and valence to a lesser extent) mediated the predictive effect of the musical components on music-evoked memories. CONCLUSIONS: The results suggest that music-evoked emotions are strongly related to music-evoked memories in healthy older adults and that both music-evoked emotions and memories are predicted by the same core musical features.


Subject(s)
Acoustic Stimulation, Emotions/physiology, Memory, Episodic, Mental Recall/physiology, Music, Aged, Aged, 80 and over, Female, Humans, Male, Middle Aged
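The dimensionality-reduction-then-regression pipeline described in the methods above can be sketched as follows, on synthetic data; the helper names are illustrative, not the study's code:

```python
import numpy as np

def pca_scores(X, n_comp):
    """Scores of standardized features on their top principal components."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[:n_comp].T

def ols_r2(X, y):
    """Multiple regression with intercept; returns the coefficient of determination."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
```

When many correlated features are driven by a few latent components, regressing ratings on the component scores avoids the multicollinearity that plagues regression on the raw features.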
12.
Int J Neural Syst ; 31(3): 2150001, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33353528

ABSTRACT

To examine the electrophysiological underpinnings of the functional networks involved in music listening, approaches based on spatial independent component analysis (ICA) have recently been applied to ongoing electroencephalography (EEG) and magnetoencephalography (MEG) recordings. However, those studies focused on healthy subjects and did not perform group-level comparisons during music listening. Here, we combined group-level spatial Fourier ICA with acoustic feature extraction, to enable group comparisons in frequency-specific brain networks of musical feature processing. It was then applied to healthy subjects and subjects with major depressive disorder (MDD). The music-induced oscillatory brain patterns were determined by permutation correlation analysis between individual time courses of Fourier-ICA components and musical features. We found that (1) three components, including a beta sensorimotor network, a beta auditory network and an alpha medial visual network, were involved in music processing among most healthy subjects; and that (2) one alpha lateral component located in the left angular gyrus was engaged in music perception in most individuals with MDD. The proposed method allowed statistical group comparison, and we found that: (1) the alpha lateral component was activated more strongly in healthy subjects than in the MDD individuals, and that (2) the derived frequency-dependent networks of musical feature processing seemed to be altered in MDD participants compared to healthy subjects. The proposed pipeline appears to be valuable for studying disrupted brain oscillations in psychiatric disorders during naturalistic paradigms.


Subject(s)
Depressive Disorder, Major, Music, Auditory Perception, Brain, Brain Mapping, Depression, Electroencephalography, Humans
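The permutation correlation analysis named above can be sketched like this. A real EEG analysis would typically permute with circular shifts to respect autocorrelation; this plain-shuffle version is an illustrative simplification:

```python
import numpy as np

def perm_corr(a, b, n_perm=1000, seed=0):
    """Pearson r between two time courses plus a two-sided permutation p-value."""
    rng = np.random.default_rng(seed)
    r_obs = np.corrcoef(a, b)[0, 1]
    null = np.array([np.corrcoef(rng.permutation(a), b)[0, 1]
                     for _ in range(n_perm)])
    p = (np.sum(np.abs(null) >= abs(r_obs)) + 1) / (n_perm + 1)
    return float(r_obs), float(p)
```

A component time course that tracks a musical feature yields a large observed correlation that almost no shuffled surrogate matches, hence a small p-value.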
13.
Brain Topogr ; 33(3): 289-302, 2020 05.
Article in English | MEDLINE | ID: mdl-32124110

ABSTRACT

Recently, exploring brain activity based on functional networks during naturalistic stimuli, especially music and video, represents an attractive challenge because of the low signal-to-noise ratio in collected brain data. Although most efforts focusing on exploring the listening brain have been made through functional magnetic resonance imaging (fMRI) and sensor-level electro- or magnetoencephalography (EEG/MEG) techniques, little is known about how neural rhythms are involved in brain network activity under naturalistic stimuli. This study exploited cortical oscillations through joint analysis of ongoing EEG and musical features during free music listening. We used a data-driven method that combined music information retrieval with spatial Fourier Independent Component Analysis (spatial Fourier-ICA) to probe the interplay between the spatial profiles and the spectral patterns of the brain networks emerging from music listening. Correlation analysis was performed between time courses of brain networks extracted from EEG data and musical feature time series extracted from the music stimuli to derive the musical-feature-related oscillatory patterns in the listening brain. We found that brain networks of musical feature processing were frequency-dependent. Musical feature time series, especially fluctuation centroid and key feature, were associated with an increased beta activation in the bilateral superior temporal gyrus. An increased alpha oscillation in the bilateral occipital cortex emerged during music listening, which was consistent with the alpha functional suppression hypothesis in task-irrelevant regions. We also observed an increased delta-beta oscillatory activity in the prefrontal cortex associated with musical feature processing. In addition to these findings, the proposed method seems valuable for characterizing the large-scale frequency-dependent brain activity engaged in musical feature processing.


Subject(s)
Auditory Perception, Brain Mapping, Music, Brain/diagnostic imaging, Electroencephalography, Humans
14.
Neuroimage ; 216: 116191, 2020 08 01.
Article in English | MEDLINE | ID: mdl-31525500

ABSTRACT

Keeping time is fundamental for our everyday existence. Various isochronous activities, such as locomotion, require us to use internal timekeeping. This phenomenon also comes into play in other human pursuits, such as dance and music. When listening to music, we spontaneously perceive and predict its beat. The process of beat perception comprises both beat inference and beat maintenance, their relative importance depending on the salience of the beat in the music. To study functional connectivity associated with these processes in a naturalistic situation, we used functional magnetic resonance imaging to measure brain responses of participants while they were listening to a piece of music containing strong contrasts in beat salience. Subsequently, we utilized dynamic graph analysis and psychophysiological interactions (PPI) analysis in connection with computational modelling of beat salience to investigate how functional connectivity manifests these processes. As the main effect, correlation analyses between the obtained dynamic graph measures and the beat salience measure revealed increased centrality in auditory-motor cortices, cerebellum, and extrastriate visual areas during low beat salience, whereas regions of the default mode and central executive networks displayed high centrality during high beat salience. PPI analyses revealed partial dissociation of functional networks belonging to this pathway, indicating complementary neural mechanisms crucial in beat inference and maintenance, processes pivotal for extracting and predicting temporal regularities in our environment.


Subject(s)
Auditory Cortex/physiology, Auditory Perception/physiology, Cerebellum/physiology, Connectome/psychology, Motor Cortex/physiology, Music/psychology, Acoustic Stimulation/methods, Adult, Auditory Cortex/diagnostic imaging, Cerebellum/diagnostic imaging, Connectome/methods, Female, Humans, Magnetic Resonance Imaging/methods, Male, Motor Cortex/diagnostic imaging, Periodicity, Young Adult
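A minimal version of the dynamic graph step described above (sliding-window correlation, thresholding, degree centrality) might look like this; the window length and threshold are illustrative assumptions:

```python
import numpy as np

def degree_centrality_timecourse(ts, win=30, thr=0.3):
    """ts: (time, regions) array. For each sliding window, build a binary graph
    from thresholded absolute correlations and return each region's degree."""
    T, R = ts.shape
    out = np.zeros((T - win + 1, R))
    for t in range(T - win + 1):
        C = np.corrcoef(ts[t:t + win].T)
        A = (np.abs(C) > thr).astype(float)
        np.fill_diagonal(A, 0.0)
        out[t] = A.sum(axis=1)
    return out
```

Regions that share a signal maintain a persistent edge and therefore a higher mean degree than an uncoupled region.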
15.
J Neurosci Methods ; 330: 108502, 2020 01 15.
Article in English | MEDLINE | ID: mdl-31730873

ABSTRACT

BACKGROUND: Ongoing EEG data are recorded as mixtures of stimulus-elicited EEG, spontaneous EEG and noises, which require advanced signal processing techniques for separation and analysis. Existing methods cannot simultaneously consider common and individual characteristics among/within subjects when extracting stimulus-elicited brain activities from ongoing EEG elicited by a 512-s-long modern tango piece. NEW METHOD: Aiming to discover the commonly music-elicited brain activities among subjects, we provide a comprehensive framework based on a fast double-coupled nonnegative tensor decomposition (FDC-NTD) algorithm. The proposed algorithm with a generalized model is capable of simultaneously decomposing EEG tensors into common and individual components. RESULTS: With the proposed framework, the brain activities can be effectively extracted and sorted into the clusters of interest. The proposed algorithm based on the generalized model achieved higher fittings and stronger robustness. In addition to the distribution of centro-parietal and occipito-parietal regions with theta and alpha oscillations, the music-elicited brain activities were also located in the frontal region and distributed in the 4-11 Hz band. COMPARISON WITH EXISTING METHOD(S): The present study, by providing a solution for separating common stimulus-elicited brain activities using coupled tensor decomposition, has shed new light on the processing and analysis of ongoing EEG data at the multi-subject level. It can also reveal more links between brain responses and the continuous musical stimulus. CONCLUSIONS: The proposed framework based on coupled tensor decomposition can be successfully applied to group analysis of ongoing EEG data, as it can be reliably inferred that the brain activities we obtained are associated with the musical stimulus.


Subject(s)
Algorithms, Brain/physiology, Electroencephalography/methods, Functional Neuroimaging/methods, Signal Processing, Computer-Assisted, Adult, Auditory Perception/physiology, Humans, Middle Aged, Music, Young Adult
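The full coupled tensor decomposition is beyond a short sketch, but its nonnegative-factorization core is the classic Lee-Seung multiplicative update. This illustrative matrix version (plain NMF, not the FDC-NTD algorithm itself) shows the idea:

```python
import numpy as np

def nmf(V, rank, iters=500, seed=0, eps=1e-9):
    """Nonnegative matrix factorization V ~ W @ H via multiplicative updates."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 0.1
    H = rng.random((rank, m)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)    # update H; stays nonnegative
        W *= (V @ H.T) / (W @ H @ H.T + eps)    # update W likewise
    return W, H
```

Because the updates are multiplicative, nonnegativity of the factors is preserved automatically, which is what makes the extracted components interpretable as additive brain-activity patterns.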
16.
Sci Rep ; 9(1): 15594, 2019 10 30.
Article in English | MEDLINE | ID: mdl-31666586

ABSTRACT

We investigated the relationships between perceptions of similarity and interaction in spontaneously dancing dyads, and movement features extracted using novel computational methods. We hypothesized that dancers' movements would be perceived as more similar when they exhibited spatially and temporally comparable movement patterns, and as more interactive when they spatially oriented more towards each other. Pairs of dancers were asked to move freely to two musical excerpts while their movements were recorded using optical motion capture. Subsequently, in two separate perceptual experiments we presented stick figure animations of the dyads to observers, who rated degree of interaction and similarity between dancers. Mean perceptual ratings were compared with three different approaches for quantifying coordination: torso orientation, temporal coupling, and spatial coupling. Correlations and partial correlations across dyads were computed between each estimate and the perceptual measures. A systematic exploration showed that torso orientation (dancers facing more towards each other) is a strong predictor of perceived interaction even after controlling for other features, whereas temporal and spatial coupling (dancers moving similarly in space and in time) are better predictors for perceived similarity. Further, our results suggest that similarity is a necessary but not sufficient condition for interaction.
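The torso-orientation estimate described above can be sketched from shoulder markers: derive each dancer's horizontal front vector and take its cosine with the direction to the partner. The names and marker conventions here are illustrative assumptions, not the study's exact feature:

```python
import numpy as np

UP = np.array([0.0, 0.0, 1.0])

def torso_front(l_shoulder, r_shoulder):
    """Horizontal unit vector pointing out of the chest, from the shoulder line."""
    v = np.cross(UP, r_shoulder - l_shoulder)   # z component is always 0
    return v / np.linalg.norm(v)

def facing_score(l_sho, r_sho, own_centroid, partner_centroid):
    """Cosine between the front vector and the horizontal direction to the partner:
    1 = facing the partner directly, -1 = facing directly away."""
    d = partner_centroid - own_centroid
    d = d * np.array([1.0, 1.0, 0.0])           # ignore height difference
    d = d / np.linalg.norm(d)
    return float(torso_front(l_sho, r_sho) @ d)
```

Averaging this score over time (and over both dancers) gives a single frontal-orientation estimate per dyad, of the kind correlated with perceived interaction above.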

17.
PLoS One ; 14(5): e0216499, 2019.
Article in English | MEDLINE | ID: mdl-31051008

ABSTRACT

Learning, attention and action play a crucial role in determining how stimulus predictions are formed, stored, and updated. Years-long experience with the specific repertoires of sounds of one or more musical styles is what characterizes professional musicians. Here we contrasted active experience with sounds, namely long-lasting motor practice, theoretical study and engaged listening to the acoustic features characterizing a musical style of choice in professional musicians, with the mainly passive experience of sounds in laypersons. We hypothesized that long-term active experience of sounds would influence the neural predictions of the stylistic features in professional musicians in a distinct way from the mainly passive experience of sounds in laypersons. Participants with different musical backgrounds were recruited: professional jazz and classical musicians, amateur musicians and non-musicians. They were presented with a musical multi-feature paradigm eliciting mismatch negativity (MMN), a prediction error signal elicited by changes in six sound features, during only 12 minutes of electroencephalography (EEG) and magnetoencephalography (MEG) recording. We observed generally larger MMN amplitudes (indicative of stronger automatic neural signals to violated priors) in jazz musicians (but not in classical musicians) as compared to non-musicians and amateurs. The specific MMN enhancements were found for spectral features (timbre, pitch, slide) and sound intensity. In participants who were not musicians, a higher preference for jazz music was associated with reduced MMN to pitch slide (a feature common in the jazz style). Our results suggest that long-lasting, active experience of a musical style is associated with accurate neural priors for the sound features of the preferred style, in contrast to passive listening.


Subject(s)
Acoustic Stimulation/methods, Sound Perception/physiology, Pitch Perception/physiology, Adult, Electroencephalography, Female, Humans, Magnetoencephalography, Male, Music, Young Adult
18.
Atten Percept Psychophys ; 81(7): 2461-2472, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31062302

ABSTRACT

For both musicians and music psychologists, beat rate (BPM) has often been regarded as a transparent measure of musical speed or tempo, yet recent research has shown that tempo is more than just BPM. In a previous study, London, Burger, Thompson, and Toiviainen (Acta Psychologica, 164, 70-80, 2016) presented participants with original as well as "time-stretched" versions of classic R&B songs; time stretching slows down or speeds up a recording without changing its pitch or timbre. In that study we discovered a tempo anchoring effect (TAE): Although relative tempo judgments (original vs. time-stretched versions of the same song) were correct, they were at odds with BPM rates of each stimulus. As previous studies have shown that synchronous movement enhances rhythm perception, we hypothesized that tapping along to the beat of these songs would reduce or eliminate the TAE and increase the salience of the beat rate of each stimulus. In the current study participants were presented with the London et al. (Acta Psychologica, 164, 70-80, 2016) stimuli in nonmovement and movement conditions. We found that although participants were able to make BPM-based tempo judgments of generic drumming patterns, and were able to tap along to the R&B stimuli at the correct beat rates, the TAE persisted in both movement and nonmovement conditions. Thus, contrary to our hypothesis that movement would reduce or eliminate the TAE, we found a disjunction between correctly synchronized motor behavior and tempo judgment. The implications of the tapping-TAE dissociation in the broader context of tempo and rhythm perception are discussed, and further approaches to studying the TAE-tapping dissociation are suggested.


Subject(s)
Auditory Perception/physiology, Fingers/physiology, Judgment/physiology, Movement/physiology, Music/psychology, Adolescent, Adult, Female, Humans, Male, Motion, Time Factors, Young Adult
19.
PLoS One ; 13(4): e0196065, 2018.
Article in English | MEDLINE | ID: mdl-29672597

ABSTRACT

Expertise in music has been investigated for decades, and the results have been applied not only in composition, performance and music education, but also in understanding brain plasticity in a larger context. Several studies have revealed strong connections between auditory and motor processes and listening to, performing, and imagining music. Recently, as a logical next step in music and movement research, the cognitive and affective neurosciences have been directed towards expertise in dance. To understand the versatile and overlapping processes during artistic stimuli, such as music and dance, it is necessary to study them with continuous naturalistic stimuli. Thus, we used long excerpts from the contemporary dance piece Carmen, presented with and without music, to professional dancers, musicians, and laymen in an EEG laboratory. We were interested in the cortical phase synchrony within each participant group over several frequency bands during uni- and multimodal processing. Dancers had strengthened theta and gamma synchrony during music relative to silence and silent dance, whereas the presence of music systematically decreased alpha and beta synchrony in musicians. Laymen were the only group of participants with significant results related to dance. Future studies are required to understand whether these results are related to some other factor (such as familiarity with the stimuli), or whether our results reveal a new point of view on dance observation and expertise.


Subject(s)
Cerebral Cortex/physiology, Dancing, Music, Acoustic Stimulation, Adult, Brain Waves, Electrophysiological Phenomena, Female, Humans, Male, Young Adult
20.
J Neurosci Methods ; 303: 1-6, 2018 06 01.
Article in English | MEDLINE | ID: mdl-29596859

ABSTRACT

BACKGROUND: There has been growing interest towards naturalistic neuroimaging experiments, which deepen our understanding of how the human brain processes and integrates incoming streams of multifaceted sensory information, as commonly occurs in the real world. Music is a good example of such a complex continuous phenomenon. In a few recent fMRI studies examining neural correlates of music in continuous listening settings, multiple perceptual attributes of the music stimulus were represented by a set of high-level features, produced as linear combinations of acoustic descriptors computationally extracted from the stimulus audio. NEW METHOD: fMRI data from a naturalistic music listening experiment were employed here. Kernel principal component analysis (KPCA) was applied to acoustic descriptors extracted from the stimulus audio to generate a set of nonlinear stimulus features. Subsequently, perceptual and neural correlates of the generated high-level features were examined. RESULTS: The generated features captured musical percepts that were hidden from the linear PCA features, namely Rhythmic Complexity and Event Synchronicity. Neural correlates of the new features revealed activations associated with the processing of complex rhythms, including auditory, motor, and frontal areas. COMPARISON WITH EXISTING METHOD: Results were compared with the findings of a previously published study, which analyzed the same fMRI data but applied linear PCA for generating stimulus features. To enable comparison of the results, the methodology for finding stimulus-driven functional maps was adopted from the previous study. CONCLUSIONS: Exploiting nonlinear relationships among acoustic descriptors can lead to novel high-level stimulus features, which can in turn reveal new brain structures involved in music processing.


Subject(s)
Auditory Perception/physiology, Brain Mapping/methods, Brain/physiology, Cognitive Neuroscience/methods, Music, Adult, Brain/diagnostic imaging, Female, Humans, Magnetic Resonance Imaging, Male, Principal Component Analysis, Young Adult