1.
Front Psychol; 9: 1277, 2018.
Article in English | MEDLINE | ID: mdl-30104989

ABSTRACT

Five evidence-based taxonomies of everyday sounds frequently reported in the soundscape literature were generated using an online sorting and category-labeling method that elicits, rather than prescribes, descriptive words. A total of N = 242 participants took part. The main categories of the soundscape taxonomy were people, nature, and manmade, each of which divided into further categories. Sounds within the nature and manmade categories, and two further individual sound sources, dogs and engines, were explored further by repeating the procedure with multiple exemplars. By generating multidimensional spaces containing both the sounds and the spontaneously generated descriptive words, the procedure allows interpretation of the psychological dimensions along which sounds are organized. This reveals how category formation is based upon different cues in different contexts: sound source-event identification, subjective states, and explicit assessment of the acoustic signal. At higher levels of the taxonomy the majority of words described sound source-events. In contrast, when categorizing dog sounds a greater proportion of the words described subjective states, and the valence and arousal scores of these words correlated with their coordinates along the first two dimensions of the data, consistent with valence and arousal judgments being the primary categorization strategy for dog sounds. When categorizing engine sounds, a greater proportion of the words explicitly described the acoustic signal, and the coordinates of the sounds along the first two dimensions correlated with fluctuation strength and sharpness, consistent with explicit assessment of acoustic signal features underlying category formation for engine sounds. By eliciting descriptive words, the method makes explicit the subjective meaning of judgments based upon valence, arousal, and acoustic properties, and the results demonstrate that distinct strategies are spontaneously used to categorize different types of sounds.
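The correlational analysis described above can be illustrated with a short sketch. The valence scores and dimension coordinates below are hypothetical toy data, not values from the study:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical valence ratings for six descriptive words, and the words'
# coordinates along the first dimension of a sorting-derived space.
valence = [1.2, 2.5, 3.1, 4.0, 4.8, 6.2]
dim1 = [-0.9, -0.4, -0.1, 0.2, 0.6, 1.1]
print(round(pearson_r(valence, dim1), 3))
```

A strong correlation between affect scores and a dimension's coordinates is the pattern the abstract reports for dog sounds.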

2.
Neuropsychologia; 104: 48-53, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28782544

ABSTRACT

Amusia is a pitch perception disorder associated with deficits in the processing and production of both musical and lexical tones; previous reports have suggested that these deficits may be confined to fine-grained pitch judgements. In the present study, speakers of tone languages, in which lexical tones are used to convey meaning, identified words in chimera stimuli that contained conflicting pitch cues in the temporal fine structure and temporal envelope, and which therefore conveyed two distinct utterances. Amusics were more likely than controls to judge the word according to the envelope pitch cues. This demonstrates that amusia is not restricted to fine-grained pitch judgements, and is consistent with there being two distinct pitch mechanisms, with amusics relying atypically on a secondary mechanism based upon envelope cues.
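The chimera idea, conflicting cues carried separately by the temporal envelope and the temporal fine structure, can be sketched with a toy signal. This uses pure tones rather than speech, and every parameter below is illustrative rather than taken from the stimuli:

```python
import math

def chimera(env_hz, tfs_hz, sr=16000, dur=0.5):
    """Toy auditory chimera: a slow periodic envelope at env_hz imposed on
    the temporal fine structure (carrier) of a tone at tfs_hz."""
    n = int(sr * dur)
    return [abs(math.sin(math.pi * env_hz * t / sr))   # envelope cue
            * math.sin(2 * math.pi * tfs_hz * t / sr)  # fine-structure cue
            for t in range(n)]

sig = chimera(env_hz=4, tfs_hz=440)
# The zero crossings follow the fine structure (about 2 * 440 per second),
# while the slow amplitude modulation follows the envelope rate.
crossings = sum(1 for a, b in zip(sig, sig[1:]) if a * b < 0)
print(crossings)
```

A listener weighting the fine structure would report a cue near 440 Hz; one weighting the envelope would report the 4 Hz modulation, which is the dissociation the study exploits.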


Subject(s)
Auditory Perceptual Disorders/physiopathology; Phonetics; Pitch Discrimination/physiology; Speech Perception/physiology; Acoustic Stimulation; Adolescent; Adult; Analysis of Variance; Auditory Perceptual Disorders/genetics; Cues; Female; Humans; Judgment/physiology; Male; Music; Speech; Young Adult
3.
J Neurosci; 35(9): 4071-80, 2015 Mar 04.
Article in English | MEDLINE | ID: mdl-25740534

ABSTRACT

When two musical notes with simple frequency ratios are played simultaneously, the resulting musical chord is pleasing and evokes a sense of resolution or "consonance". Complex frequency ratios, on the other hand, evoke feelings of tension or "dissonance". Consonance and dissonance form the basis of harmony, a central component of Western music. In earlier work, we provided evidence that consonance perception is based on neural temporal coding in the brainstem (Bones et al., 2014). Here, we show that for listeners with clinically normal hearing, aging is associated with a decline both in the perceptual distinction between, and in the distinctiveness of the neural representations of, different categories of two-note chords. Compared with younger listeners, older listeners rated consonant chords as less pleasant and dissonant chords as more pleasant. Older listeners also had less distinct neural representations of consonant and dissonant chords, as measured using a Neural Consonance Index derived from the electrophysiological "frequency-following response". The results withstood a control for the effect of age on general affect, suggesting that different mechanisms are responsible for the perceived pleasantness of musical chords and of affective voices, and that, for listeners with clinically normal hearing, age-related differences in consonance perception are likely to be related to differences in neural temporal coding.
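The link between ratio simplicity and consonance can be made concrete with standard just-intonation ratios. The ratios are textbook values, and the complexity measure is one common proxy; neither is taken from the article:

```python
from fractions import Fraction

# Just-intonation frequency ratios for four dyads (standard textbook values).
dyads = {
    "perfect fifth": Fraction(3, 2),
    "major third": Fraction(5, 4),
    "minor second": Fraction(16, 15),
    "tritone": Fraction(45, 32),
}

def complexity(r):
    """One simple proxy for ratio complexity: numerator plus denominator."""
    return r.numerator + r.denominator

# Simpler ratios (lower complexity) correspond to more consonant dyads.
for name, r in sorted(dyads.items(), key=lambda kv: complexity(kv[1])):
    print(f"{name}: {r.numerator}:{r.denominator} (complexity {complexity(r)})")
```

Sorting by this proxy places the perfect fifth first and the tritone last, matching the consonant-to-dissonant ordering the abstract assumes.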


Subject(s)
Aging/psychology; Auditory Perception/physiology; Brain Stem/growth & development; Brain Stem/physiology; Music/psychology; Acoustic Stimulation; Adolescent; Adult; Aged; Aged, 80 and over; Evoked Potentials, Auditory/physiology; Female; Happiness; Humans; Male; Middle Aged; Young Adult
4.
Hear Res; 323: 9-21, 2015 May.
Article in English | MEDLINE | ID: mdl-25636498

ABSTRACT

When two notes are played simultaneously they form a musical dyad. The sensation of pleasantness, or "consonance", of a dyad is likely driven by the harmonic relation of the frequency components of the combined spectrum of the two notes. Previous work has demonstrated a relation between individual preference for consonant over dissonant dyads and the strength of neural temporal coding of the harmonicity of consonant relative to dissonant dyads, as measured using the electrophysiological "frequency-following response" (FFR). However, this work also demonstrated that both of these variables correlate strongly with musical experience. The current study was designed to determine whether the relation between consonance preference and neural temporal coding is maintained when controlling for musical experience. The results demonstrate that the strength of neural coding of harmonicity is predictive of individual preference for consonance even for non-musicians. An additional purpose of the current study was to assess the cochlear generation site of the FFR to low-frequency dyads. By comparing the reduction in FFR strength produced by high-pass masking noise with the reduction predicted by a model of the auditory periphery, the results provide evidence that the FFR to low-frequency dyads arises in part from basal cochlear generators.
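Harmonicity, the closeness of fit of a dyad's combined spectrum to a single harmonic series, can be sketched as follows. The partials, candidate range, and 2% tolerance are illustrative choices, not the article's analysis:

```python
def harmonicity(partials, f0_candidates, tol=0.02):
    """Best fraction of partials lying within a proportional tolerance of an
    integer multiple of some candidate fundamental frequency."""
    def fit(f0):
        hits = sum(1 for f in partials
                   if abs(f - max(1, round(f / f0)) * f0) <= tol * f)
        return hits / len(partials)
    return max(fit(f0) for f0 in f0_candidates)

# First three harmonics of each note in two hypothetical dyads.
fifth = [220 * k for k in (1, 2, 3)] + [330 * k for k in (1, 2, 3)]    # 3:2
tritone = [220 * k for k in (1, 2, 3)] + [311 * k for k in (1, 2, 3)]  # ~45:32
candidates = range(50, 251)

print(harmonicity(fifth, candidates), harmonicity(tritone, candidates))
```

The 3:2 dyad's partials all fall on the harmonic series of a 110 Hz fundamental, so its score is maximal, while no single series within the candidate range fits all of the tritone-like dyad's partials.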


Subject(s)
Auditory Cortex/physiology; Cochlea/innervation; Music; Pitch Perception; Acoustic Stimulation; Acoustics; Adult; Audiometry; Auditory Pathways/physiology; Auditory Threshold; Female; Humans; Male; Noise/adverse effects; Pattern Recognition, Physiological; Perceptual Masking; Pleasure; Psychoacoustics; Sound Spectrography; Time Factors; Young Adult
5.
Neuropsychologia; 58: 23-32, 2014 May.
Article in English | MEDLINE | ID: mdl-24690415

ABSTRACT

When musical notes are combined to make a chord, the closeness of fit of the combined spectrum to a single harmonic series (the 'harmonicity' of the chord) predicts the perceived consonance (how pleasant and stable the chord sounds; McDermott, Lehr, & Oxenham, 2010). The distinction between consonance and dissonance is central to Western musical form. Harmonicity is represented in the temporal firing patterns of populations of brainstem neurons. The current study investigates the role of brainstem temporal coding of harmonicity in the perception of consonance. Individual preference for consonant over dissonant chords was measured using a rating scale for pairs of simultaneous notes. In order to investigate the effects of cochlear interactions, the notes were presented in two ways: both notes to both ears, or each note to a different ear. The electrophysiological frequency-following response (FFR), reflecting sustained neural activity in the brainstem synchronised to the stimulus, was also measured. When both notes were presented to both ears, the perceptual distinction between consonant and dissonant chords was stronger than when the notes were presented to different ears. In this condition, additional low-frequency components corresponding to difference tones produced by nonlinear cochlear processing were observable in the FFR, effectively enhancing the neural harmonicity of consonant chords but not of dissonant chords. Suppressing the cochlear envelope component of the FFR also suppressed these additional frequency components. This suggests that, in the case of consonant chords, difference tones generated by interactions between the notes in the cochlea enhance the perception of consonance. Furthermore, individuals with a greater distinction between consonant and dissonant chords in the FFR to individual harmonics had a stronger preference for consonant over dissonant chords. Overall, the results provide compelling evidence for the role of neural temporal coding in the perception of consonance, and suggest that the representation of harmonicity in phase-locked neural firing drives the perception of consonance.
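The difference-tone argument can be checked with simple arithmetic. A quadratic cochlear nonlinearity generates components at |f1 - f2|; the partials below are hypothetical, and this calculation is an illustration of the idea, not the study's method:

```python
def difference_tones(partials):
    """All pairwise difference frequencies |f1 - f2| between partials."""
    return sorted({abs(a - b) for a in partials for b in partials if a != b})

# First three harmonics of each note in two hypothetical dyads.
fifth = [200, 400, 600, 300, 600, 900]    # 3:2 dyad: fundamentals 200 and 300 Hz
tritone = [200, 400, 600, 283, 566, 849]  # ~45:32 dyad: 200 and ~283 Hz

# For the 3:2 dyad, every difference tone is a multiple of 100 Hz, the implied
# fundamental, so the distortion products reinforce the chord's harmonicity.
print(difference_tones(fifth))
print(all(d % 100 == 0 for d in difference_tones(fifth)))
print(all(d % 100 == 0 for d in difference_tones(tritone)))
```

For the tritone-like dyad the difference tones (83 Hz, 166 Hz, ...) fall off the 100 Hz grid, so no corresponding enhancement of neural harmonicity is expected, matching the abstract's consonant-only effect.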


Subject(s)
Auditory Perception/physiology , Brain Stem/physiology , Music , Neurons/physiology , Acoustic Stimulation , Adolescent , Adult , Evoked Potentials , Female , Humans , Male , Young Adult