1.
JASA Express Lett ; 3(8)2023 08 01.
Article in English | MEDLINE | ID: mdl-37566904

ABSTRACT

Temporal and frequency auditory streaming capacities were assessed for non-musician (NM), expert musician (EM), and amateur musician (AM) listeners using a local-global task and an interleaved melody recognition task, respectively. The data replicate differences previously observed between NM and EM listeners, and reveal that while AM listeners exhibit a local-over-global processing change comparable to that of EM listeners, their performance in segregating a melody embedded in a stream remains as poor as that of NM listeners. The observed group partitioning along the temporal-frequency auditory streaming capacity map suggests a sequential, two-step developmental model of musical learning, whose contributing factors are discussed.


Subject(s)
Music , Recognition, Psychology
2.
Sci Rep ; 13(1): 5180, 2023 03 30.
Article in English | MEDLINE | ID: mdl-36997613

ABSTRACT

Communication between sound and music experts is based on the shared understanding of a metaphorical vocabulary derived from other sensory modalities. Yet, the impact of sound expertise on the mental representation of these sound concepts remains unclear. To address this issue, we investigated the acoustic portraits of four metaphorical sound concepts (brightness, warmth, roundness, and roughness) in three groups of participants (sound engineers, conductors, and non-experts). Participants (N = 24) rated a corpus of orchestral instrument sounds (N = 520) using Best-Worst Scaling. With this data-driven method, we sorted the sound corpus for each concept and population. We compared the population ratings and ran machine learning algorithms to unveil the acoustic portraits of each concept. Overall, the results revealed that sound engineers were the most consistent group. We found that roughness is widely shared, while brightness is expertise-dependent. The frequent use of brightness by expert populations suggests that its meaning has been refined through sound expertise. As for roundness and warmth, the relative importance of pitch and noise in their acoustic definitions appears to be the key to distinguishing them. These results provide crucial information on the mental representations of a metaphorical vocabulary of sound and on whether it is shared or refined by sound expertise.
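For readers unfamiliar with Best-Worst Scaling, the sketch below illustrates one standard way to turn best/worst choices into per-sound scores (best count minus worst count, normalized by how often each sound appeared). It is a minimal illustration, not the authors' analysis pipeline; the trial data and sound names are hypothetical.

```python
# Minimal Best-Worst Scaling (BWS) counting analysis on hypothetical trials.
# Each trial shows a tuple of sounds; the participant picks the "best"
# (e.g., brightest) and "worst" (least bright) item.
from collections import defaultdict

trials = [
    # (items shown, chosen best, chosen worst) -- illustrative only
    (("flute_C6", "tuba_C2", "violin_A5", "bassoon_D3"), "flute_C6", "tuba_C2"),
    (("violin_A5", "tuba_C2", "oboe_G4", "cello_C3"), "violin_A5", "cello_C3"),
    (("oboe_G4", "bassoon_D3", "flute_C6", "cello_C3"), "flute_C6", "bassoon_D3"),
]

best = defaultdict(int)
worst = defaultdict(int)
shown = defaultdict(int)

for items, chosen_best, chosen_worst in trials:
    for item in items:
        shown[item] += 1
    best[chosen_best] += 1
    worst[chosen_worst] += 1

# Best-minus-worst score per sound, normalized by number of appearances.
scores = {item: (best[item] - worst[item]) / shown[item] for item in shown}
for item, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{item:12s} {score:+.2f}")
```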


Subject(s)
Music , Sound , Humans , Acoustic Stimulation , Noise , Acoustics , Vocabulary
3.
JASA Express Lett ; 2(6): 064404, 2022 06.
Article in English | MEDLINE | ID: mdl-36154161

ABSTRACT

When designing sound evaluation experiments, researchers rely on listening test methods such as rating scales (RS). This work investigates the suitability of best-worst scaling (BWS) for the perceptual evaluation of sound qualities. To do so, 20 participants rated the "brightness" of a corpus of instrumental sounds (N = 100) with both RS and BWS methods. The results show that the BWS procedure is faster and that RS and BWS are equivalent in terms of performance. Interestingly, participants preferred BWS over RS. Therefore, BWS is an alternative method that reliably measures perceptual sound qualities and could be used in many-sound paradigms.
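The reported equivalence between RS and BWS can be checked, for instance, with a rank correlation between mean rating-scale scores and best-minus-worst scores per sound. The sketch below illustrates such a check with made-up scores; it is not the article's own analysis.

```python
# Illustrative check of RS/BWS agreement via Spearman rank correlation.
# The scores below are made up; the article's analysis may differ.
from scipy.stats import spearmanr

sounds = ["s01", "s02", "s03", "s04", "s05", "s06"]
rs_mean_rating = [2.1, 4.5, 3.2, 1.8, 4.9, 3.7]      # mean "brightness" ratings (RS)
bws_score = [-0.40, 0.55, 0.10, -0.60, 0.80, 0.25]   # best-minus-worst scores (BWS)

rho, p_value = spearmanr(rs_mean_rating, bws_score)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```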

4.
Psychol Res ; 86(2): 421-442, 2022 Mar.
Article in English | MEDLINE | ID: mdl-33881610

ABSTRACT

Short-term memory has mostly been investigated with verbal or visuospatial stimuli and less so with other categories of stimuli. Moreover, the influence of sensory modality has been explored almost solely in the verbal domain. The present study compared visual and auditory short-term memory for different types of materials, aiming to understand whether sensory modality and material type influence short-term memory performance. Furthermore, we aimed to assess whether musical expertise can modulate memory performance, as previous research has reported better auditory memory (and, to some extent, visual memory) and better auditory contour recognition for musicians than for non-musicians. To do so, we adapted the same recognition paradigm (delayed matching-to-sample) across different types of stimuli. In each trial, participants (musicians and non-musicians) were presented with two sequences of events, separated by a silent delay, and had to indicate whether the two sequences were identical or different. Performance was compared for auditory and visual materials belonging to three categories: (1) verbal (i.e., syllables); (2) nonverbal (i.e., not easily nameable) with contour (based on loudness or luminance variations); and (3) nonverbal without contour (pink-noise sequences or kanji character sequences). The contour and no-contour conditions referred to whether or not the sequence could convey a contour (i.e., a pattern of up and down changes) based on non-pitch features. Results revealed a selective advantage of musicians for auditory no-contour stimuli and for contour stimuli (both visual and auditory), suggesting that musical expertise is associated with specific short-term memory advantages in domains close to the trained domain, extending cross-modally when stimuli carry contour information. Moreover, our results suggest a role of encoding strategies (i.e., how the material is represented mentally during the task) in short-term memory performance.
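One common way to score such a same/different (delayed matching-to-sample) task is the sensitivity index d', computed from hit and false-alarm rates. The sketch below shows that computation on hypothetical counts; the article does not necessarily report d'.

```python
# Sensitivity (d') for a same/different recognition task, on hypothetical counts.
# "Hit" = correctly responding "different" on different trials; "false alarm" =
# responding "different" on identical trials. Rates are clipped to avoid infinities.
from statistics import NormalDist

def d_prime(hits, n_different, false_alarms, n_same):
    z = NormalDist().inv_cdf
    hit_rate = min(max(hits / n_different, 0.01), 0.99)
    fa_rate = min(max(false_alarms / n_same, 0.01), 0.99)
    return z(hit_rate) - z(fa_rate)

print(d_prime(hits=42, n_different=48, false_alarms=10, n_same=48))  # ~1.96
```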


Subject(s)
Music , Acoustic Stimulation/methods , Auditory Perception , Cognition , Humans , Memory, Short-Term , Recognition, Psychology
5.
Sci Rep ; 10(1): 16390, 2020 10 02.
Article in English | MEDLINE | ID: mdl-33009439

ABSTRACT

The way the visual system processes different scales of spatial information has been widely studied, highlighting the dominant role of global over local processing. Recent studies addressing how the auditory system deals with local-global temporal information suggest a comparable processing scheme, but little is known about how this organization is modulated by long-term musical training, in particular for musical sequences. Here, we investigate how non-musicians and expert musicians detect local and global pitch changes in short hierarchical tone sequences structured across temporally segregated triplets made of musical intervals (local scale) forming a melodic contour (global scale) varying either in one direction (monotonic) or in both (non-monotonic). Our data reveal a clearly distinct organization between the two groups. Non-musicians show a global advantage (enhanced performance in detecting global over local modifications) and global-to-local interference effects (interference of global over local processing) only for monotonic sequences, whereas musicians exhibit the reversed pattern for non-monotonic sequences. These results suggest that the local-global processing scheme depends on the complexity of the melodic contour, and that long-term musical training induces a prominent perceptual reorganization that reshapes the initial global dominance to favor local information processing. This latter result supports the theory of "analytic" processing acquisition in musicians.


Subject(s)
Auditory Perception/physiology , Pitch Discrimination/physiology , Acoustic Stimulation/methods , Adult , Cognition/physiology , Evoked Potentials, Auditory/physiology , Female , Humans , Male , Music , Pitch Perception/physiology , Reaction Time/physiology , Time Perception/physiology , Young Adult
6.
J Acoust Soc Am ; 146(2): EL172, 2019 08.
Article in English | MEDLINE | ID: mdl-31472560

ABSTRACT

The influence of loudness on sound recognition was investigated in an explicit memory experiment based on the conscious recollection (test phase) of previously encoded information (study phase). Three encoding conditions were compared: semantic (sounds were sorted into three categories), sensory (sounds were rated for loudness), and control (participants were simply asked to listen to the sounds). Results revealed a significant study-to-test change effect: a loudness change between the study and test phases affects recognition. The effect was not specific to the encoding condition (semantic vs. sensory), suggesting that loudness is an important cue for everyday sound recognition.


Subject(s)
Speech Acoustics , Speech Perception , Adult , Female , Humans , Male , Semantics
7.
J Acoust Soc Am ; 142(2): 878, 2017 08.
Article in English | MEDLINE | ID: mdl-28863587

ABSTRACT

Sounds involving liquid sources are part of everyday life. They form a category of sounds that human listeners easily identify in different experimental studies. Unlike acoustic models that focus on bubble vibrations, real-life instances of liquid sounds, such as sounds produced by liquids with or without other materials, are very diverse and include water drop sounds, noisy flows, and even solid vibrations. The process that allows listeners to group these different sounds in the same category remains unclear. This article presents a perceptual experiment based on a sorting task of liquid sounds from a household environment, which seeks to reveal the cognitive subcategories of this set of sounds. The clarification of this perceptual process revealed similarities between the perception of liquid sounds and that of other categories of environmental sounds. Furthermore, the results provide a taxonomy of liquid sounds on which an acoustic analysis was performed, highlighting the acoustical properties of the categories, including different rates of air bubble vibration.
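As background to the "rates of air bubble vibration" mentioned above, the sketch below uses the standard Minnaert resonance formula, which relates a bubble's radius to its resonance frequency. This is a textbook relation offered for context, not necessarily the acoustic model analyzed in the article.

```python
# Minnaert resonance of an air bubble in water:
# f0 = (1 / (2*pi*a)) * sqrt(3 * gamma * p0 / rho)
# Parameter values are standard approximations for air bubbles in water.
import math

def minnaert_frequency(radius_m, gamma=1.4, p0=101_325.0, rho=998.0):
    """Resonance frequency (Hz) of an air bubble of given radius (m) in water."""
    return math.sqrt(3.0 * gamma * p0 / rho) / (2.0 * math.pi * radius_m)

for r_mm in (0.5, 1.0, 2.0, 5.0):
    print(f"radius {r_mm:4.1f} mm -> f0 ~ {minnaert_frequency(r_mm / 1000.0):6.0f} Hz")
```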

8.
PLoS One ; 12(7): e0181786, 2017.
Article in English | MEDLINE | ID: mdl-28750071

ABSTRACT

Communicating an auditory experience with words is a difficult task and, in consequence, people often rely on imitative non-verbal vocalizations and gestures. This work explored the combination of such vocalizations and gestures to communicate auditory sensations and representations elicited by non-vocal everyday sounds. Whereas our previous studies analyzed vocal imitations, the present research focused on gestural depictions of sounds. To this end, two studies investigated the combination of gestures and non-verbal vocalizations. A first, observational study examined, with manual annotations, a set of vocal and gestural imitations of recordings of sounds representative of a typical everyday environment (ecological sounds). A second, experimental study used non-ecological sounds whose parameters had been specifically designed to elicit the behaviors highlighted in the observational study, and used quantitative measures and inferential statistics. The results showed that these depicting gestures are based on systematic analogies between a referent sound, as interpreted by a receiver, and the visual aspects of the gestures: auditory-visual metaphors. The results also suggested different roles for vocalizations and gestures. Whereas the vocalizations reproduce all features of the referent sounds as faithfully as vocally possible, the gestures focus on one salient feature with metaphors based on auditory-visual correspondences. Both studies highlighted two metaphors consistently shared across participants: the spatial metaphor of pitch (mapping different pitches to different positions on the vertical dimension), and the rustling metaphor of random fluctuations (rapid shaking of the hands and fingers). We interpret these metaphors as the result of two kinds of representations elicited by sounds: auditory sensations (pitch and loudness) mapped to spatial position, and causal representations of the sound sources (e.g., raindrops, rustling leaves) pantomimed and embodied by the participants' gestures.


Subject(s)
Gestures , Metaphor , Sound , Adolescent , Adult , Female , Humans , Male , Middle Aged , Pitch Perception , Sound Spectrography , Young Adult
9.
PLoS One ; 11(12): e0168167, 2016.
Article in English | MEDLINE | ID: mdl-27992480

ABSTRACT

Imitative behaviors are widespread in humans, in particular whenever two persons communicate and interact. Several tokens of spoken languages (onomatopoeias, ideophones, and phonesthemes) also display different degrees of iconicity between the sound of a word and what it refers to. Thus, it probably comes as no surprise that human speakers use many imitative vocalizations and gestures when they communicate about sounds, as sounds are notably difficult to describe. What is more surprising is that vocal imitations of non-vocal everyday sounds (e.g., the sound of a car passing by) are in practice very effective: listeners identify sounds better with vocal imitations than with verbal descriptions, despite the fact that vocal imitations are inaccurate reproductions of a sound created by a particular mechanical system (e.g., a car driving by) through a different system (the vocal apparatus). The present study investigated the semantic representations evoked by vocal imitations of sounds by experimentally quantifying how well listeners could match sounds to category labels. The experiment used three different types of sounds: recordings of easily identifiable sounds (sounds of human actions and manufactured products), human vocal imitations, and computational "auditory sketches" (created by algorithmic computations). The results show that performance with the best vocal imitations was similar to that with the best auditory sketches for most categories of sounds, and even to that with the referent sounds themselves in some cases. More detailed analyses showed that the acoustic distance between a vocal imitation and a referent sound is not sufficient to account for such performance. The analyses suggested that instead of trying to reproduce the referent sound as accurately as vocally possible, vocal imitations focus on a few important features, which depend on each particular sound category. These results offer perspectives for understanding how human listeners store and access long-term sound representations, and set the stage for the development of human-computer interfaces based on vocalizations.


Subject(s)
Imitative Behavior , Sound , Voice/physiology , Acoustic Stimulation , Acoustics , Adult , Animals , Auditory Perception/physiology , Female , Hearing/physiology , Humans , Male , Video Recording , Young Adult
10.
Front Neurosci ; 10: 385, 2016.
Article in English | MEDLINE | ID: mdl-27610071

ABSTRACT

This article reports on an interdisciplinary research project on movement sonification for sensori-motor learning. First, we describe the different research fields that have contributed to movement sonification, from music technology, including gesture-controlled sound synthesis and sonic interaction design, to research on sensori-motor learning with auditory feedback. In particular, we propose to distinguish between sound-oriented tasks and movement-oriented tasks in experiments involving interactive sound feedback. We then describe several research questions and recently published results on movement control, learning, and perception. Specifically, we studied the effect of auditory feedback on movement in several cases: from experiments on pointing and visuo-motor tracking to more complex tasks in which interactive sound feedback can guide movements, and cases of sensory substitution in which auditory feedback can convey information about object shapes. We also developed specific methodologies and technologies for designing sonic feedback and movement sonification. We conclude with a discussion of key future research challenges in sensori-motor learning with movement sonification, and point toward promising applications such as rehabilitation, sports training, and product design.
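As a concrete illustration of movement sonification (not one of the project's actual systems), the sketch below maps the instantaneous speed of a synthetic 2-D hand trajectory onto the pitch of a sine oscillator and renders the result as an audio signal.

```python
# Toy movement sonification: map hand speed to pitch. Illustrative only; the
# trajectory is synthetic and the mapping is not the one used in the project.
import numpy as np

sample_rate = 44_100           # audio sample rate (Hz)
motion_rate = 100              # motion capture rate (Hz)
t_motion = np.arange(0, 2.0, 1.0 / motion_rate)

# Synthetic 2-D hand trajectory (a slowing circular movement).
x = np.cos(2 * np.pi * 0.5 * t_motion) * np.exp(-0.5 * t_motion)
y = np.sin(2 * np.pi * 0.5 * t_motion) * np.exp(-0.5 * t_motion)
speed = np.hypot(np.gradient(x, t_motion), np.gradient(y, t_motion))

# Map speed to frequency (200-1000 Hz), then upsample to the audio rate.
speed_range = speed.max() - speed.min() + 1e-12
freq_motion = 200.0 + 800.0 * (speed - speed.min()) / speed_range
t_audio = np.arange(0, 2.0, 1.0 / sample_rate)
freq_audio = np.interp(t_audio, t_motion, freq_motion)

# Integrate frequency to get phase, then synthesize the sonification signal.
phase = 2 * np.pi * np.cumsum(freq_audio) / sample_rate
signal = 0.2 * np.sin(phase)
print(signal.shape, freq_audio.min(), freq_audio.max())
```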

11.
J Acoust Soc Am ; 139(1): 290-300, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26827025

ABSTRACT

Describing complex sounds with words is a difficult task. In fact, previous studies have shown that vocal imitations of sounds are more effective than verbal descriptions [Lemaitre and Rocchesso (2014). J. Acoust. Soc. Am. 135, 862-873]. The current study investigated how vocal imitations of sounds enable their recognition by studying how two expert and two lay participants reproduced four basic auditory features: pitch, tempo, sharpness, and onset. It used 4 sets of 16 referent sounds (modulated narrowband noises and pure tones), based on 1 feature or crossing 2 of the 4 features. Dissimilarity rating experiments and multidimensional scaling analyses confirmed that listeners could accurately perceive the four features composing the four sets of referent sounds. The four participants recorded vocal imitations of the four sets of sounds. Analyses identified three strategies: (1) Vocal imitations of pitch and tempo reproduced faithfully the absolute value of the feature; (2) Vocal imitations of sharpness transposed the feature into the participants' registers; (3) Vocal imitations of onsets categorized the continuum of onset values into two discrete morphological profiles. Overall, these results highlight that vocal imitations do not simply mimic the referent sounds, but seek to emphasize the characteristic features of the referent sounds within the constraints of human vocal production.
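The multidimensional scaling of dissimilarity ratings described above can be reproduced in outline with scikit-learn. The sketch below embeds a small made-up dissimilarity matrix into two dimensions; it is not the article's data or exact analysis.

```python
# Metric MDS on a small, made-up dissimilarity matrix (symmetric, zero diagonal).
# Illustrative only; the article's stimuli and analysis details differ.
import numpy as np
from sklearn.manifold import MDS

dissimilarity = np.array([
    [0.0, 0.2, 0.7, 0.8],
    [0.2, 0.0, 0.6, 0.9],
    [0.7, 0.6, 0.0, 0.3],
    [0.8, 0.9, 0.3, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)  # one 2-D point per referent sound
print(np.round(coords, 2))
```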


Subject(s)
Pitch Discrimination/physiology , Recognition, Psychology/physiology , Voice/physiology , Acoustics , Adolescent , Adult , Analysis of Variance , Female , Humans , Loudness Perception/physiology , Male , Middle Aged , Sound Spectrography , Time Perception/physiology , Young Adult
12.
J Exp Psychol Appl ; 18(1): 52-80, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22122114

ABSTRACT

In this article we report on listener categorization of meaningful environmental sounds. A starting point for this study was the phenomenological taxonomy proposed by Gaver (1993b). In the first experimental study, 15 participants classified 60 environmental sounds and indicated the properties shared by the sounds in each class. In a second experimental study, 30 participants classified and described 56 sounds exclusively made by solid objects. The participants were required to concentrate on the actions causing the sounds independent of the sound source. The classifications were analyzed with a specific hierarchical cluster technique that accounted for possible cross-classifications, and the verbalizations were submitted to statistical lexical analyses. The results of the first study highlighted 4 main categories of sounds: solids, liquids, gases, and machines. The results of the second study indicated a distinction between discrete interactions (e.g., impacts) and continuous interactions (e.g., tearing) and suggested that actions and objects were not independent organizational principles. We propose a general structure of environmental sound categorization based on the sounds' temporal patterning, which has practical implications for the automatic classification of environmental sounds.
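Sorting data of this kind are often summarized as a sound-by-sound co-occurrence matrix (how many participants grouped two sounds together) and then clustered hierarchically. The sketch below shows that generic pipeline on hypothetical data; it is not the specific cross-classification technique used in the article.

```python
# Generic sorting-task analysis: co-occurrence matrix + hierarchical clustering.
# Not the article's cross-classification method; data are hypothetical.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

sounds = ["impact", "scrape", "drip", "pour", "hiss", "engine"]
# Each participant's sorting: one group label per sound (hypothetical).
sortings = [
    [0, 0, 1, 1, 2, 2],
    [0, 0, 1, 1, 2, 3],
    [0, 1, 2, 2, 3, 3],
]

n = len(sounds)
co = np.zeros((n, n))
for labels in sortings:
    for i in range(n):
        for j in range(n):
            co[i, j] += labels[i] == labels[j]

# Convert co-occurrence to a distance and cluster hierarchically.
dist = 1.0 - co / len(sortings)
np.fill_diagonal(dist, 0.0)
tree = linkage(squareform(dist, checks=False), method="average")
clusters = fcluster(tree, t=4, criterion="maxclust")
print(dict(zip(sounds, clusters)))
```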


Subject(s)
Acoustic Stimulation/classification , Auditory Perception , Environment , Sound , Acoustic Stimulation/methods , Adult , Female , Humans , Male , Middle Aged
13.
Psicológica (Valencia, Ed. impr.) ; 32(1): 31-48, 2011.
Article in English | IBECS | ID: ibc-84596

ABSTRACT

This study examined the within-subject stability of conceptual representations in 150 participants who performed both a sorting task and a property-generation task over multiple sessions, focusing on three concrete concept categories (food, animals, and bathroom products). We hypothesized that (1) within-subject stability would be higher in the sorting task than in the property-generation task and (2) the nature of the category would influence both the within-subject stability of the classification groups in the sorting task and the properties generated to define these groups. The results show that the within-subject stability of conceptual representations depends both on the task and on the nature of the category. The stability of the representations was greater in the sorting task than in the property-generation task, and greater for the food category. These results are discussed from a longitudinal perspective.
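Within-subject stability of a sorting task across sessions can be quantified, for instance, with the Adjusted Rand Index between the two partitions a participant produced. The sketch below illustrates this measure on made-up data; the article may use a different stability index.

```python
# Adjusted Rand Index between two sorting sessions of the same participant.
# Hypothetical labels; the article may quantify stability differently.
from sklearn.metrics import adjusted_rand_score

items = ["apple", "bread", "cat", "dog", "soap", "shampoo"]
session_1 = [0, 0, 1, 1, 2, 2]   # groups produced in session 1
session_2 = [0, 0, 1, 1, 1, 2]   # groups produced in session 2

stability = adjusted_rand_score(session_1, session_2)
print(f"within-subject stability (ARI) = {stability:.2f}")  # 1.0 = identical sortings
```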


Subject(s)
Humans , Male , Female , Adult , Psychology, Social/methods , Psychology, Social/trends , Food Hygiene , Neuropsychology/methods , Neuropsychology/trends , Students/psychology , Students, Health Occupations/psychology , Feeding Behavior/psychology , Feeding Methods/psychology , Data Analysis/methods , Data Analysis/statistics & numerical data , Analysis of Variance , Psychology, Social/instrumentation , Hygiene , Mental Health , Longitudinal Studies , Habits
14.
J Exp Psychol Appl ; 16(1): 16-32, 2010 Mar.
Article in English | MEDLINE | ID: mdl-20350041

ABSTRACT

The influence of listeners' expertise and sound identification on the categorization of environmental sounds is reported in three studies. In Study 1, the causal uncertainty of 96 sounds was measured by counting the different causes described by 29 participants. In Study 2, 15 experts and 15 nonexperts classified a selection of 60 sounds and indicated the similarities they used. In Study 3, 38 participants indicated their confidence in identifying the sounds. Participants reported using either acoustical similarities or similarities between the causes of the sounds. Experts used acoustical similarity more often than nonexperts, who relied more on the similarity of the causes of the sounds. Sounds with low causal uncertainty were more often grouped together because of the similarity of their causes, whereas sounds with high causal uncertainty were more often grouped together because of acoustical similarities. The same conclusions were reached for identification confidence. This measure allowed the sound classification to be predicted and is a straightforward way to determine the appropriate description of a sound.
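Causal uncertainty measured "by counting the different causes described" can be illustrated as follows: for each sound, count the distinct causes reported across participants (a Shannon-entropy variant is also common in this literature). The responses below are made up.

```python
# Causal uncertainty of a sound: number of distinct causes reported across
# participants (made-up responses). An entropy-based variant is also shown.
import math
from collections import Counter

responses = {
    "sound_A": ["door slam", "door slam", "hammer", "door slam", "book drop"],
    "sound_B": ["rain", "rain", "rain", "rain", "rain"],
}

for sound, causes in responses.items():
    counts = Counter(causes)
    n_causes = len(counts)                       # simple count of distinct causes
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    print(f"{sound}: {n_causes} distinct causes, entropy = {entropy:.2f} bits")
```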


Subject(s)
Auditory Perception , Environment , Professional Competence , Signal Detection, Psychological , Sound , Adult , Female , Humans , Male , Middle Aged , Young Adult