Results 1 - 13 of 13
1.
Front Psychol ; 14: 1167003, 2023.
Article in English | MEDLINE | ID: mdl-37303916

ABSTRACT

Rhythm is a key feature of music and language, but the way rhythm unfolds within each domain differs. Music induces perception of a beat, a regular repeating pulse spaced by roughly equal durations, whereas speech does not have the same isochronous framework. Although rhythmic regularity is a defining feature of music and language, it is difficult to derive acoustic indices of the differences in rhythmic regularity between domains. The current study examined whether participants could provide subjective ratings of rhythmic regularity for acoustically matched (syllable-, tempo-, and contour-matched) and acoustically unmatched (varying in tempo, syllable number, semantics, and contour) exemplars of speech and song. We used subjective ratings to index the presence or absence of an underlying beat and correlated ratings with stimulus features to identify acoustic metrics of regularity. Experiment 1 highlighted that ratings based on the term "rhythmic regularity" did not result in consistent definitions of regularity across participants, with opposite ratings for participants who adopted a beat-based definition (song greater than speech), a normal-prosody definition (speech greater than song), or an unclear definition (no difference). Experiment 2 defined rhythmic regularity as how easy it would be to tap or clap to the utterances. Participants rated song as easier to clap or tap to than speech for both acoustically matched and unmatched datasets. Subjective regularity ratings from Experiment 2 illustrated that stimuli with longer syllable durations and with less spectral flux were rated as more rhythmically regular across domains. Our findings demonstrate that rhythmic regularity distinguishes speech from song and that several key acoustic features can be used to predict listeners' perception of rhythmic regularity both within and across domains.
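The abstract identifies syllable duration and spectral flux as acoustic correlates of perceived regularity but does not spell out how such metrics are computed. As a rough, hedged illustration only (not the authors' pipeline), spectral flux is commonly estimated as the frame-to-frame change in the short-time magnitude spectrum; the function below is a minimal sketch, and all names and window settings are assumptions.

```python
import numpy as np
from scipy.signal import stft

def mean_spectral_flux(y, sr, win_s=0.025, hop_s=0.010):
    """Rough spectral-flux estimate: mean L2 distance between successive
    short-time magnitude spectra (one common definition; not necessarily
    the metric used in the study)."""
    nper = int(win_s * sr)                      # ~25 ms analysis window
    nhop = int(hop_s * sr)                      # ~10 ms hop
    _, _, Z = stft(y, fs=sr, nperseg=nper, noverlap=nper - nhop)
    mag = np.abs(Z)                             # frequency x frames
    flux = np.sqrt((np.diff(mag, axis=1) ** 2).sum(axis=0))
    return float(flux.mean())
```

Under this definition, lower values correspond to spectra that change less from frame to frame, in line with the finding that lower-flux stimuli were rated as more rhythmically regular.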

2.
J Autism Dev Disord ; 2023 May 04.
Article in English | MEDLINE | ID: mdl-37140745

ABSTRACT

PURPOSE: Processing real-world sounds requires acoustic and higher-order semantic information. We tested the theory that individuals with autism spectrum disorder (ASD) show enhanced processing of acoustic features and impaired processing of semantic information. METHODS: We used a change deafness task that required detecting when speech and non-speech auditory objects were replaced, and a speech-in-noise task that required comprehending spoken sentences in the presence of background speech, to examine the extent to which 7- to 15-year-old children with ASD (n = 27) rely on acoustic and semantic information, compared to age-matched (n = 27) and IQ-matched (n = 27) groups of typically developing (TD) children. Within a larger group of 7- to 15-year-old TD children (n = 105), we correlated IQ, ASD symptoms, and the use of acoustic and semantic information. RESULTS: Children with ASD performed worse overall at the change deafness task relative to the age-matched TD controls, but they did not differ from IQ-matched controls. All groups utilized acoustic and semantic information similarly and displayed an attentional bias towards changes that involved the human voice. Similarly, for the speech-in-noise task, age-matched (but not IQ-matched) TD controls performed better overall than the ASD group. However, all groups used semantic context to a similar degree. Among TD children, neither IQ nor the presence of ASD symptoms predicted the use of acoustic or semantic information. CONCLUSION: Children with and without ASD used acoustic and semantic information similarly during auditory change deafness and speech-in-noise tasks.

3.
Dev Sci ; 26(5): e13346, 2023 09.
Article in English | MEDLINE | ID: mdl-36419407

ABSTRACT

Music and language are two fundamental forms of human communication. Many studies examine the development of music- and language-specific knowledge, but few studies compare how listeners know they are listening to music or language. Although we readily differentiate these domains, how we distinguish music and language, and especially speech and song, is not obvious. In two studies, we asked how listeners categorize speech and song. Study 1 used online survey data to illustrate that 4- to 17-year-olds and adults have verbalizable distinctions for speech and song. At all ages, listeners described speech and song differences based on acoustic features, but compared with older children, 4- to 7-year-olds more often used volume to describe differences, suggesting that they are still learning to identify the features most useful for differentiating speech from song. Study 2 used a perceptual categorization task to demonstrate that 4- to 8-year-olds and adults readily categorize speech and song, but this ability improves with age, especially for identifying song. Despite generally rating song as more speech-like, 4- and 6-year-olds rated ambiguous speech-song stimuli as more song-like than 8-year-olds and adults. Four acoustic features predicted song ratings: F0 instability, utterance duration, harmonicity, and spectral flux. However, 4- and 6-year-olds' song ratings were better predicted by F0 instability than by harmonicity and utterance duration. These studies characterize how children develop conceptual and perceptual understandings of speech and song and suggest that children under age 8 are still learning what features are important for categorizing utterances as speech or song. RESEARCH HIGHLIGHTS: Children and adults conceptually and perceptually categorize speech and song from age 4. Listeners use F0 instability, harmonicity, spectral flux, and utterance duration to determine whether vocal stimuli sound like song. Acoustic cue weighting changes with age, becoming adult-like at age 8 for perceptual categorization and at age 12 for conceptual differentiation. Young children are still learning to categorize speech and song, which leaves open the possibility that music- and language-specific skills are not so domain-specific.
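The four predictors named above (F0 instability, utterance duration, harmonicity, and spectral flux) are standard acoustic descriptors, though the abstract does not define them precisely. As one hedged example, F0 instability can be approximated as the spread of log F0 over voiced frames; the sketch below uses librosa's pYIN pitch tracker, and the definition, thresholds, and names are assumptions rather than the study's method.

```python
import numpy as np
import librosa

def f0_instability(path, fmin=65.0, fmax=600.0):
    """Approximate 'F0 instability' as the standard deviation of log2 F0
    over voiced frames, expressed in semitones (assumed definition, not
    necessarily the one used in the study)."""
    y, sr = librosa.load(path, sr=None, mono=True)
    f0, voiced_flag, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
    log_f0 = np.log2(f0[voiced_flag])        # octaves above 1 Hz
    return float(np.nanstd(log_f0) * 12.0)   # spread in semitones
```

A lower value indicates a steadier pitch contour; utterance duration and spectral flux can be computed analogously from the waveform and the short-time spectrum.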


Subject(s)
Music; Speech Perception; Voice; Adult; Child; Humans; Adolescent; Child, Preschool; Speech; Auditory Perception; Learning
4.
Neuroimage ; 252: 119049, 2022 05 15.
Article in English | MEDLINE | ID: mdl-35248707

ABSTRACT

Music is often described in the laboratory and in the classroom as a beneficial tool for memory encoding and retention, with a particularly strong effect when words are sung to familiar compared to unfamiliar melodies. However, the neural mechanisms underlying this memory benefit, especially those related to familiar music, are not well understood. The current study examined whether neural tracking of the slow syllable rhythms of speech and song is modulated by melody familiarity. Participants became familiar with twelve novel melodies over four days prior to MEG testing. Neural tracking of the same utterances spoken and sung revealed greater cerebro-acoustic phase coherence for sung compared to spoken utterances, but did not show an effect of familiar melody when stimuli were grouped by their assigned (trained) familiarity. However, when participants' subjective ratings of perceived familiarity were used to group stimuli, a large effect of familiarity was observed. This effect was not specific to song, as it was observed in both sung and spoken utterances. Exploratory analyses revealed some in-session learning of unfamiliar and spoken utterances, with increased neural tracking for untrained stimuli by the end of the MEG testing session. Our results indicate that top-down factors like familiarity are strong modulators of neural tracking for music and language. Participants' neural tracking was related to their perception of familiarity, which was likely driven by a combination of effects from repeated listening, stimulus-specific melodic simplicity, and individual differences. Beyond simply the acoustic features of music, top-down factors built into the music listening experience, like repetition and familiarity, play a large role in the way we attend to and encode information presented in a musical context.


Subject(s)
Music; Singing; Auditory Perception; Humans; Recognition, Psychology; Speech
5.
Dev Psychol ; 57(9): 1411-1422, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34929087

ABSTRACT

How do infants learn the sounds of their native language when there are many simultaneous sounds competing for their attention? Adults and children detect changes to speech sounds in complex scenes better than changes to other sounds. We examined whether infants show a similar bias, detecting changes to human speech better than changes to nonspeech sounds, including musical instruments, water, and animal calls, in complex auditory scenes. We used a change deafness paradigm to examine whether 5-month-olds' change detection is biased toward certain sounds within high-level categories (e.g., biological or human-generated sounds) or whether change detection depends on low-level salient physical features, such that detection is better for sounds with more distinct acoustic properties, such as water. In Experiment 1, 5-month-olds showed some evidence for detecting speech and music changes relative to no-change trials. In Experiment 2, when speech and music were compared separately with animal and water sounds, infants detected when speech and water changed across scenes, but not when music changed. Infants' change detection is both biased toward certain sound categories, as they detected small speech changes better than changes to other sounds, and affected by the size of the acoustic change, similar to young infants' attentional priorities in complex visual scenes. By 5 months, infants show some preferential processing of speech changes in complex auditory environments, which could help bootstrap the language learning process.


Subject(s)
Phonetics; Speech; Attention; Bias; Humans; Language Development
6.
Neuroimage ; 214: 116767, 2020 07 01.
Article in English | MEDLINE | ID: mdl-32217165

ABSTRACT

Neural activity synchronizes with the rhythmic input of many environmental signals, but the capacity of neural activity to entrain to the slow rhythms of speech is particularly important for successful communication. Compared to speech, song has greater rhythmic regularity, a more stable fundamental frequency, discrete pitch movements, and a metrical structure; these properties may provide a temporal framework that helps listeners neurally track information better than the less regular rhythms of speech. The current study used EEG to examine whether entrainment to the syllable rate of linguistic utterances, as indexed by cerebro-acoustic phase coherence, was greater when listeners heard sung than spoken sentences. We assessed listeners' phase-locking in both easy (no time compression) and hard (50% time-compression) utterance conditions. Adults phase-locked equally well to speech and song in the easy listening condition. However, in the time-compressed condition, phase-locking was greater for sung than spoken utterances in the theta band (3.67-5 Hz). Thus, the musical temporal and spectral characteristics of song were related to better phase-locking to the slow phrasal and syllable information (4-7 Hz) in the speech stream. These results highlight the possibility of using song as a tool for improving speech processing in individuals with language processing deficits, such as dyslexia.
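Cerebro-acoustic phase coherence, the entrainment index used here, is typically computed as the consistency of the phase difference between the band-limited speech amplitude envelope and the band-limited neural signal. The sketch below is a minimal illustration under that assumption (not the study's analysis code); signal names, filter order, and sampling-rate handling are all hypothetical.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_coherence(envelope, eeg, fs, band=(3.67, 5.0)):
    """Band-limited cerebro-acoustic phase coherence: band-pass both
    signals, extract instantaneous phase with the Hilbert transform,
    and take the resultant vector length of the phase difference
    (0 = no consistent phase relation, 1 = perfect phase-locking)."""
    nyq = fs / 2.0
    b, a = butter(3, [band[0] / nyq, band[1] / nyq], btype="band")
    phi_env = np.angle(hilbert(filtfilt(b, a, envelope)))
    phi_eeg = np.angle(hilbert(filtfilt(b, a, eeg)))
    return float(np.abs(np.mean(np.exp(1j * (phi_env - phi_eeg)))))
```

With the theta band reported above (3.67-5 Hz), higher values mean the neural signal follows the syllable-rate envelope more consistently.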


Subject(s)
Auditory Perception/physiology; Brain/physiology; Electroencephalography Phase Synchronization/physiology; Music; Singing; Speech Perception/physiology; Acoustic Stimulation/methods; Adolescent; Adult; Attention/physiology; Electroencephalography/methods; Female; Humans; Male; Periodicity; Young Adult
7.
Psychol Res ; 84(3): 585-601, 2020 Apr.
Article in English | MEDLINE | ID: mdl-30120544

ABSTRACT

Our world is a sonically busy place, and we use both acoustic information and experience-based knowledge to make sense of the sounds arriving at our ears. The knowledge we gain through experience has the potential to shape which sounds are prioritized in a complex scene. There are many examples of how visual expertise influences how we perceive objects in visual scenes, but few studies examine how auditory expertise is associated with attentional biases toward familiar real-world sounds in complex scenes. In the current study, we investigated whether musical expertise is associated with the ability to detect changes to real-world sounds in complex auditory scenes, and whether any such benefit is specific to musical instrument sounds. We also examined whether change detection is better for human-generated sounds in general or only for communicative human sounds. We found that musicians had less change deafness overall. All listeners were better at detecting human communicative sounds than human non-communicative sounds, but this benefit was driven by speech sounds and sounds that were vocally generated. Musical listening skill, speech-in-noise perception, and executive function abilities were used to predict rates of change deafness. Auditory memory, musical training, fine-grained pitch processing, and an interaction between training and pitch processing together accounted for 45.8% of the variance in change deafness. To better understand perceptual and cognitive expertise, it may be more important to measure various auditory skills and relate them to one another than to compare experts to non-experts.
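The regression reported above (auditory memory, musical training, fine-grained pitch processing, and a training x pitch interaction explaining 45.8% of the variance in change deafness) has the general form of an ordinary least squares model with an interaction term. A hedged sketch of that kind of model is shown below; the data are synthetic and the variable names are illustrative, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; columns are illustrative, not the study's measures.
rng = np.random.default_rng(0)
n = 60
df = pd.DataFrame({
    "auditory_memory": rng.normal(size=n),
    "musical_training": rng.normal(size=n),   # e.g., z-scored years of training
    "pitch_threshold": rng.normal(size=n),    # fine-grained pitch discrimination
})
df["change_deafness"] = (
    -0.3 * df["auditory_memory"]
    - 0.2 * df["musical_training"] * df["pitch_threshold"]
    + rng.normal(scale=0.5, size=n)
)

# Main effects of training and pitch plus their interaction, as described above.
model = smf.ols(
    "change_deafness ~ auditory_memory + musical_training * pitch_threshold",
    data=df,
).fit()
print(model.rsquared)   # proportion of variance explained (cf. the 45.8% reported)
```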


Subject(s)
Auditory Perception; Memory; Music; Phonetics; Pitch Perception; Signal Detection, Psychological; Acoustic Stimulation; Adult; Female; Humans; Male
8.
Dev Psychol ; 52(11): 1867-1877, 2016 11.
Article in English | MEDLINE | ID: mdl-27786530

ABSTRACT

Children interact with and learn about all types of sound sources, including dogs, bells, trains, and human beings. Although it is clear that knowledge of semantic categories for everyday sights and sounds develops during childhood, there are very few studies examining how children use this knowledge to make sense of auditory scenes. We used a change deafness paradigm and an object-encoding task to investigate how children (6, 8, and 10 years of age) and adults process auditory scenes composed of everyday sounds (e.g., human voices, animal calls, environmental sounds, and musical instruments). Results indicated that although change deafness was present and robust at all ages, listeners improved at detecting changes with age. All listeners were less sensitive to changes within the same semantic category than to small acoustic changes, suggesting that, regardless of age, listeners relied heavily on semantic category knowledge to detect changes. Furthermore, all listeners showed less change deafness when they correctly encoded change-relevant objects (i.e., when they remembered hearing the changing object during the task). Finally, we found that all listeners were better at encoding human voices and were more sensitive to detecting changes involving the human voice. Despite poorer overall performance compared with adults, children detect changes in complex auditory scenes much as adults do, using high-level knowledge about auditory objects to guide processing, with special attention to the human voice.


Subject(s)
Auditory Perception/physiology; Child Development/physiology; Knowledge; Semantics; Signal Detection, Psychological/physiology; Acoustic Stimulation; Age Factors; Analysis of Variance; Child; Female; Humans; Male; Psychoacoustics; Statistics as Topic
9.
J Exp Psychol Hum Percept Perform ; 42(11): 1806-1817, 2016 11.
Article in English | MEDLINE | ID: mdl-27399831

ABSTRACT

Attention and other processing constraints limit the perception of objects in complex scenes, a limit that has been studied extensively in vision. We used a change deafness paradigm to examine how attention to particular objects helps and hurts the ability to notice changes within complex auditory scenes. In a counterbalanced design, we examined how cueing attention to particular objects affected performance in an auditory change-detection task through the use of valid or invalid cues and trials without cues (Experiment 1). We further examined how successful encoding predicted change-detection performance using an object-encoding task, and we addressed whether performing the object-encoding task along with the change-detection task affected performance overall (Experiment 2). Participants made more errors on invalid than on valid and uncued trials, but this effect was smaller in Experiment 2 than in Experiment 1. When the object-encoding task was present, listeners who completed the uncued condition first had less overall error than those who completed the cued condition first. All participants showed less change deafness when they successfully encoded change-relevant compared to change-irrelevant objects during valid and uncued trials. However, only participants who completed the uncued condition first also showed this effect during invalid-cue trials, suggesting a broader scope of attention. These findings provide converging evidence that attention to change-relevant objects is crucial for successful detection of acoustic changes and that encouraging broad attention to multiple objects is the best way to reduce change deafness.


Subject(s)
Attention/physiology; Auditory Perception/physiology; Cues; Signal Detection, Psychological/physiology; Adult; Female; Humans; Male; Young Adult
10.
Cognition ; 143: 135-40, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26151370

ABSTRACT

Few studies comparing music and language processing have adequately controlled for low-level acoustical differences, making it unclear whether differences in music and language processing arise from domain-specific knowledge, acoustic characteristics, or both. We controlled acoustic characteristics by using the speech-to-song illusion, which often results in a perceptual transformation to song after several repetitions of an utterance. Participants performed a same-different pitch discrimination task for the initial repetition (heard as speech) and the final repetition (heard as song). Better detection was observed for pitch changes that violated rather than conformed to Western musical scale structure, but only when utterances transformed to song, indicating that music-specific pitch representations were activated and influenced perception. This shows that music-specific processes can be activated when an utterance is heard as song, suggesting that the high-level status of a stimulus as either language or music can be behaviorally dissociated from low-level acoustic factors.


Subject(s)
Music; Pitch Discrimination/physiology; Speech Perception/physiology; Speech/physiology; Adolescent; Adult; Female; Humans; Knowledge; Male; Middle Aged; Young Adult
11.
J Exp Psychol Gen ; 144(2): e43-9, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25688906

ABSTRACT

Speech and song are readily differentiated from each other in everyday communication, yet sometimes listeners who have formal music training will hear a spoken utterance transform from speech to song when it is repeated (Deutsch, Henthorn, & Lapidis, 2011). It remains unclear whether music training is required to perceive this illusory transformation or whether implicit knowledge of musical structure is sufficient. The current study replicates Deutsch et al.'s findings with musicians and demonstrates the generalizability of this auditory illusion to casual music listeners with no formal training. We confirm that the illusory transformation is disrupted when the pitch height of each repetition of the utterance is transposed, and we find that raising the pitch height has a different effect on listeners' ratings than does lowering it. Auditory illusions such as this may offer unique opportunities to compare domain-specific and domain-general processing in the brain while holding acoustic characteristics constant.


Subject(s)
Auditory Perception/physiology; Illusions/psychology; Music/psychology; Speech; Adolescent; Adult; Female; Humans; Male; Middle Aged; Pitch Perception/physiology; Young Adult
12.
Front Syst Neurosci ; 7: 48, 2013 Sep 03.
Article in English | MEDLINE | ID: mdl-24027502

ABSTRACT

The musical brain is built over time through experience with a multitude of sounds in the auditory environment. However, learning the melodies, timbres, and rhythms unique to the music and language of one's culture already begins in the mother's womb, during the third trimester of gestation. We review evidence that the intrauterine auditory environment plays a key role in shaping later auditory development and musical preferences. We describe evidence that externally and internally generated sounds influence the developing fetus, and argue that such prenatal auditory experience may set the trajectory for the development of the musical mind.

13.
Ann N Y Acad Sci ; 1252: 92-9, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22524345

ABSTRACT

Rhythm and meter are fundamental components of music that are universal yet also culture-specific. Although simple, isochronous meters are preferred and more readily discriminated than highly complex, nonisochronous meters, moderately complex nonisochronous meters do not pose a problem for listeners who are exposed to them from a young age. The present work uses a behavioral task to examine the ease with which listeners of various ages acquire knowledge of unfamiliar metrical structures from passive exposure. We examined perception of familiar (Western) rhythms with an isochronous meter and unfamiliar (Balkan) rhythms with a nonisochronous meter. We compared discrimination by American children (5 to 11 years) and adults before and after a 2-week period of at-home listening to nonisochronous-meter music from Bulgaria. During the first session, listeners of all ages exhibited superior discrimination for isochronous compared with nonisochronous melodies. Across sessions, this asymmetry declined for young children but not for older children and adults.


Subject(s)
Auditory Perception/physiology; Music; Adolescent; Adult; Age Factors; Child; Child, Preschool; Cultural Characteristics; Female; Humans; Learning; Male; Neurosciences; Young Adult