1.
Front Psychol ; 11: 1480, 2020.
Article in English | MEDLINE | ID: mdl-32733333

ABSTRACT

To this day, the substratum of thought and its underlying mechanisms are rarely studied directly. Nowadays, systemic approaches based on introspective methodologies are no longer fashionable and are often overlooked or ignored; most frequently, reductionist approaches are followed to decipher the neuronal circuits functionally associated with cognitive processes. However, we argue that systemic studies of individual thought may still contribute a useful and complementary description of the multimodal nature of perception, because they can take into account individual diversity while still identifying the common features of perceptual processes. We address this question by looking at one possible task for recognition of a "signifying sound", as an example of conceptual grasping of a perceptual response. By adopting a mixed approach that combines qualitative analyses of introspection-based interviews with quantitative statistical analyses of the resulting categorization, this study describes a variety of mental strategies used by musicians to identify the pitch of notes. Sixty-seven musicians (music students and professionals) were interviewed, revealing that musicians take intermediate steps during note identification by selecting or activating cognitive bricks that help them construct and reach the correct decision. We named these elements "mental anchorpoints" (MAs). Although the anchorpoints are not universal and differ between individuals, they can be grouped into categories related to three main sensory modalities: auditory, visual, and kinesthetic. This categorization enabled us to characterize the mental representations (MRs) that allow musicians to name notes in relation to eleven basic typologies of anchorpoints. We propose a conceptual framework that summarizes the process of note identification in five steps, starting from sensory detection and ending with the verbalization of the note pitch, with MAs and MRs playing a pivotal role in between. We found that musicians use multiple strategies, selecting individual MAs from these three sensory modalities both in isolation and in combination.
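As a loose illustration of the categorization step described above (grouping the anchorpoints reported in interviews into the three sensory modalities), the sketch below uses invented typology names and interview codings; it is not the study's analysis and the labels are purely hypothetical.

```python
# Illustrative sketch with made-up data (not the study's analysis): tallying
# the mental anchorpoints (MAs) reported in interviews by sensory modality.
from collections import Counter

# Hypothetical coding of interview excerpts: each typology name is an
# assumption, mapped to one of the three modalities named in the abstract.
TYPOLOGY_TO_MODALITY = {
    "inner_singing": "auditory",
    "reference_note": "auditory",
    "score_image": "visual",
    "keyboard_image": "visual",
    "finger_position": "kinesthetic",
}

# One list of reported anchorpoint typologies per interviewed musician.
interviews = [
    ["inner_singing", "score_image"],
    ["reference_note"],
    ["finger_position", "inner_singing", "keyboard_image"],
]

modality_counts = Counter(
    TYPOLOGY_TO_MODALITY[t] for musician in interviews for t in musician
)
print(modality_counts)  # e.g., Counter({'auditory': 3, 'visual': 2, 'kinesthetic': 1})
```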

2.
Front Psychol ; 10: 1024, 2019.
Article in English | MEDLINE | ID: mdl-31231262

ABSTRACT

This article deals with the question of how the perception of "immanent accents" can be predicted and modeled. By immanent accent we mean any musical event in the score that is related to important points in the musical structure (e.g., tactus positions, melodic peaks) and is therefore able to capture the attention of a listener. Our aim was to investigate the underlying principles of these accented notes by combining quantitative modeling, music analysis, and experimental methods. A listening experiment was conducted in which 30 participants indicated perceived accented notes for 60 melodies, vocal and instrumental, selected from Baroque, Romantic, and Post-tonal styles. This produced a large and unique collection of perceptual data about the perceived immanent accents, organized by style and covering vocal and instrumental melodies within Western art music. Music analysis of the indicated accents provided a preliminary list of musical features that could be identified as possible reasons for the raters' perception of the immanent accents. These features related to the score in different ways, e.g., repeated fragments, single notes, or overall structure. A modeling approach was used to quantify the influence of feature groups related to pitch contour, tempo, timing, simple phrasing, and meter. A set of 43 computational features was defined from the music analysis and previous studies and extracted from the score representation. The mean ratings of the participants were predicted using multiple linear regression and support vector regression. The latter method (with cross-validation) obtained the best result of about 66% explained variance (r = 0.81) across all melodies and for a selected group of raters. The independent contributions of pitch contour and timing were relatively high (9.6% and 7.0%), with significant contributions also from tempo (4.5%), simple phrasing (4.4%), and meter (3.9%). Interestingly, the independent contribution varied greatly across participants, implying different listener strategies, and also varied somewhat across styles. The large differences among listeners emphasize the importance of considering the individual listener's perception in future research in music perception.
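The prediction step described in this abstract (mean listener ratings regressed on score-derived features with support vector regression under cross-validation) can be sketched as follows. This is only an illustrative outline under stated assumptions, not the authors' code: the feature matrix, the ratings, and the hyperparameters are placeholders.

```python
# Illustrative sketch (not the authors' implementation): predicting mean
# per-note accent ratings from score-derived features with support vector
# regression evaluated by cross-validation, as described in the abstract.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 43))   # placeholder: 43 computational features per note
y = rng.normal(size=500)         # placeholder: mean accent rating per note

# Standardize features, then fit an RBF-kernel SVR; hyperparameters are guesses.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
y_pred = cross_val_predict(model, X, y, cv=10)

r = np.corrcoef(y, y_pred)[0, 1]
print(f"cross-validated r = {r:.2f}, explained variance r^2 = {r**2:.2f}")
```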

3.
Front Psychol ; 10: 317, 2019.
Article in English | MEDLINE | ID: mdl-30984047

ABSTRACT

Accents are local musical events that attract the attention of the listener; they can be either immanent (evident from the score) or performed (added by the performer). Immanent accents involve temporal grouping (phrasing), meter, melody, and harmony; performed accents involve changes in timing, dynamics, articulation, and timbre. In the past, grouping, metrical, and melodic accents were investigated in the context of expressive music performance. We present a novel computational model of immanent accent salience in tonal music that automatically predicts the positions and saliences of metrical, melodic, and harmonic accents. The model extends previous research by improving on preliminary formulations of metrical and melodic accents and by introducing a new model for harmonic accents that combines harmonic dissonance and harmonic surprise. In an analysis-by-synthesis approach, model predictions were compared with data from two experiments involving 239 and 638 sonorities, rated by 16 musicians and 5 experts in music theory, respectively. Average pair-wise correlations between raters were lower for metrical (0.27) and melodic accents (0.37) than for harmonic accents (0.49). In both experiments, when all the raters were combined into a single consensus measure, correlations between ratings and model predictions ranged from 0.43 to 0.62. When the different accent categories were combined, correlations were higher than for the separate categories (r = 0.66). This suggests that raters might use strategies different from the individual metrical, melodic, or harmonic accent models when marking the musical events.
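As a rough illustration of the two evaluation measures mentioned above (average pair-wise correlation between raters, and correlation of the raters' consensus with model predictions), the following sketch uses placeholder data; it is not the study's analysis code and the array shapes are assumptions taken from the figures quoted in the abstract.

```python
# Illustrative sketch with placeholder data: average pair-wise correlation
# between raters, and correlation of the raters' consensus (mean rating)
# with model-predicted accent saliences.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
ratings = rng.normal(size=(16, 239))   # placeholder: 16 raters x 239 sonorities
predictions = rng.normal(size=239)     # placeholder: model-predicted saliences

# Average pair-wise inter-rater correlation over all rater pairs.
pairwise = [np.corrcoef(ratings[i], ratings[j])[0, 1]
            for i, j in combinations(range(len(ratings)), 2)]
print(f"mean pair-wise inter-rater correlation: {np.mean(pairwise):.2f}")

# Combine all raters into a single consensus measure and compare to the model.
consensus = ratings.mean(axis=0)
r = np.corrcoef(consensus, predictions)[0, 1]
print(f"correlation between consensus ratings and model predictions: {r:.2f}")
```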
