Results 1 - 20 of 43
1.
Front Sociol ; 8: 1030115, 2023.
Article in English | MEDLINE | ID: mdl-37404338

ABSTRACT

In this paper, we outline the key ideas of a theoretical framework for neuroscience research that reflects critically on the neoliberal capitalist context. We argue that neuroscience can and should illuminate the effects of neoliberal capitalism on the brains and minds of populations living under such socioeconomic systems. First, we review the available empirical research indicating that this socioeconomic environment is harmful to minds and brains. We then describe the effects of the capitalist context on neuroscience itself by presenting how the field has been shaped historically. To set out a theoretical framework that can generate neuroscientific hypotheses regarding the effects of the capitalist context on brains and minds, we suggest a categorization of these effects into deprivation, isolation, and intersectional effects. We also argue for a neurodiversity perspective [as opposed to the dominant model of conceptualizing neural (mal-)functioning] and for a perspective that takes into account brain plasticity and the potential for change and adaptation. Lastly, we discuss specific needs for future research as well as a frame for post-capitalist research.

2.
Brain Lang ; 236: 105219, 2023 01.
Article in English | MEDLINE | ID: mdl-36577315

ABSTRACT

Rhythm perception deficits have been linked to neurodevelopmental disorders affecting speech and language. Children who stutter have shown poorer rhythm discrimination and attenuated functional connectivity in rhythm-related brain areas, which may negatively impact timing control required for speech. It is unclear whether adults who stutter (AWS), who are likely to have acquired compensatory adaptations in response to rhythm processing/timing deficits, are similarly affected. We compared rhythm discrimination in AWS and controls (total n = 36) during fMRI in two matched conditions: simple rhythms that consistently reinforced a periodic beat, and complex rhythms that did not (requiring greater reliance on internal timing). Consistent with an internal beat deficit hypothesis, behavioral results showed poorer complex rhythm discrimination for AWS than controls. In AWS, greater stuttering severity was associated with poorer rhythm discrimination. AWS showed increased activity within beat-based timing regions and increased functional connectivity between putamen and cerebellum (supporting interval-based timing) for simple rhythms.


Subject(s)
Stuttering , Child , Humans , Adult , Stuttering/diagnostic imaging , Magnetic Resonance Imaging , Auditory Perception/physiology , Speech/physiology , Brain/diagnostic imaging
3.
J Speech Lang Hear Res ; 65(11): 4025-4046, 2022 11 17.
Article in English | MEDLINE | ID: mdl-36260352

ABSTRACT

PURPOSE: This study used a cross-sequential design to identify developmental changes in narrative speech rhythm and intonation. The aim was to provide a robust, clinically relevant characterization of normative changes in speech prosody across the early school-age years. METHOD: Structured spontaneous narratives were elicited annually from 60 children over a 3-year period. Children were aged 5-7 years at study outset and 7-9 years at its conclusion. Articulation rate, prominence spacing, and intonational phrase length and duration were calculated for each narrative to index speech rhythm; measures of pitch variability and pitch range indexed intonation. Linear mixed-effects (LME) models tested for cohort-based and within-subject longitudinal change on the prosodic measures; linear regression was used to test for the simple effect of age-in-months within each year. RESULTS: The LME analyses indicated systematic longitudinal changes in speech rhythm across all measures except phrase duration; there were no longitudinal changes in pitch variability or pitch range across the school-age years. Linear regression results showed an increase in articulation rate with age; there were no systematic differences between age cohorts across the years of the study. CONCLUSIONS: The results indicate that speech rhythm continues to develop during the school-age years. They also underscore the very strong relationship between the rate and rhythm characteristics of speech, suggesting an important influence of speech motor skills on rhythm production. Finally, the results on pitch variability and pitch range suggest that these are inadequate measures of typical intonation development during the school-age years.


Subject(s)
Speech Perception , Speech , Child , Humans
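
The study above combines linear mixed-effects models (for cohort-based and within-subject longitudinal change) with simple linear regressions on age-in-months. A minimal sketch of that analysis step is given below; the data file and column names (child_id, cohort, year, age_months, articulation_rate) are hypothetical placeholders, not the study's actual variables.

```python
# Sketch of the mixed-effects / regression analysis described above.
# Column names (child_id, cohort, year, age_months, articulation_rate) are
# hypothetical placeholders, not the study's actual variable names.
import pandas as pd
import statsmodels.formula.api as smf

narratives = pd.read_csv("narrative_prosody_measures.csv")  # hypothetical file

# Linear mixed-effects model: longitudinal change in articulation rate,
# with a random intercept per child to handle repeated annual measurements.
lme = smf.mixedlm("articulation_rate ~ year + cohort",
                  data=narratives,
                  groups=narratives["child_id"]).fit()
print(lme.summary())

# Simple linear regression: effect of age-in-months within a single study year.
year1 = narratives[narratives["year"] == 1]
ols = smf.ols("articulation_rate ~ age_months", data=year1).fit()
print(ols.params, ols.pvalues)
```

The random intercept per child is what distinguishes the longitudinal LME step from the within-year regression on age-in-months.
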
4.
Ear Hear ; 43(2): 685-698, 2022.
Article in English | MEDLINE | ID: mdl-34611118

ABSTRACT

OBJECTIVES: Understanding how quantity and quality of language input vary across children with cochlear implants (CIs) is important for explaining sources of large individual differences in language outcomes of this at-risk pediatric population. Studies have mostly focused on intervention-related, device-related, and/or patient-related factors, or relied on data from parental reports and laboratory-based speech corpora, to unravel factors explaining individual differences in language outcomes among children with CIs. However, little is known about the extent to which children with CIs differ in the quantity and quality of language input they experience in their natural linguistic environments. To address this knowledge gap, the present study analyzed the quantity and quality of language input to early-implanted children (age of implantation <23 mo) during the first year after implantation. DESIGN: Day-long Language ENvironment Analysis (LENA) recordings, derived from the home environments of 14 early-implanted children, were analyzed to estimate the number of words per day, type-token ratio (TTR), and mean length of utterance in morphemes (MLUm) in adults' speech. Properties of language input were analyzed across these three dimensions to examine how input in home environments varied across children with CIs in quantity, defined as number of words, and quality, defined as whether speech was child-directed or overheard. RESULTS: Our per-day estimates demonstrated that children with CIs were highly variable in the number of total words (mean ± SD = 25,134 ± 9,267 words) and high-quality child-directed words (mean ± SD = 10,817 ± 7,187 words) they experienced in a day in their home environments during the first year after implantation. The results also showed that the pattern of variability across children in quantity and quality of language input changed depending on whether the speech was child-directed or overheard. Children also experienced highly different environments in terms of the lexical diversity (as measured by TTR) and morphosyntactic complexity (as measured by MLUm) of language input. The results demonstrated that children with CIs varied substantially in the quantity and quality of language input experienced in their home environments. More importantly, individual children experienced highly variable amounts of high-quality, child-directed speech, which may drive variability in language outcomes across children with CIs. CONCLUSIONS: Analyzing early language input in the natural linguistic environments of children with CIs showed that the quantity and quality of early linguistic input vary substantially across individual children with CIs. This substantial individual variability suggests that the quantity and quality of early linguistic input are potential sources of individual differences in outcomes of children with CIs and warrant further investigation to determine the effects of this variability on outcomes.


Subject(s)
Cochlear Implantation , Cochlear Implants , Deafness , Speech Perception , Adult , Child , Humans , Language Development , Linguistics , Speech
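
Two of the input-quality measures above, type-token ratio (TTR) and mean length of utterance in morphemes (MLUm), are straightforward to compute from transcribed utterances. A minimal sketch follows; the whitespace tokenization and pre-segmented morphemes are stand-ins for the transcription and morphological coding a real analysis would require.

```python
# Hedged sketch: TTR and MLUm over a list of adult utterances.
# Assumes utterances are already transcribed; the whitespace "morpheme"
# lists below stand in for real morphological segmentation.
from typing import List

def type_token_ratio(utterances: List[str]) -> float:
    """Lexical diversity: unique word types divided by total word tokens."""
    tokens = [w.lower() for u in utterances for w in u.split()]
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def mean_length_of_utterance(utterances: List[List[str]]) -> float:
    """MLUm: mean number of morphemes per utterance (morphemes pre-segmented)."""
    if not utterances:
        return 0.0
    return sum(len(u) for u in utterances) / len(utterances)

adult_speech = ["the doggie is running", "look at the ball"]
morphemes = [["the", "doggie", "is", "run", "-ing"],
             ["look", "at", "the", "ball"]]
print(type_token_ratio(adult_speech))       # lexical diversity, e.g. 0.875
print(mean_length_of_utterance(morphemes))  # morphosyntactic complexity, e.g. 4.5
```
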
5.
Ear Hear ; 43(2): 592-604, 2022.
Article in English | MEDLINE | ID: mdl-34582393

ABSTRACT

OBJECTIVES: Early home auditory environment plays an important role in children's spoken language development and overall well-being. This study explored differences in the home auditory environment experienced by children with cochlear implants (CIs) relative to children with normal hearing (NH). DESIGN: Measures of the child's home auditory environment, including adult word count (AWC), conversational turns (CTs), child vocalizations (CVs), television and media (TVN), overlapping sound (OLN), and noise (NON), were gathered using the Language Environment Analysis System. The study included 16 children with CIs (M = 22.06 mo) and 25 children with NH (M = 18.71 mo). Families contributed 1 to 3 daylong recordings quarterly over the course of approximately 1 year. Additional parent and infant characteristics including maternal education, amount of residual hearing, and age at activation were also collected. RESULTS: The results showed that whereas CTs and CVs increased with child age for children with NH, they did not change as a function of age for children with CIs; NON was significantly higher for the NH group. No significant group differences were found for the measures of AWC, TVN, or OLN. Moreover, measures of CTs, CVs, TVN, and NON from children with CIs were associated with demographic and child factors, including maternal education, age at CI activation, and amount of residual hearing. CONCLUSIONS: These findings suggest that there are similarities and differences in the home auditory environment experienced by children with CIs and children with NH. These findings have implications for early intervention programs to promote spoken language development for children with CIs.


Subject(s)
Cochlear Implantation , Cochlear Implants , Deafness , Adult , Child , Hearing , Hearing Tests , Humans , Infant , Language Development
6.
Top Cogn Sci ; 13(2): 351-398, 2021 04.
Article in English | MEDLINE | ID: mdl-33780156

ABSTRACT

A classic problem in spoken language comprehension is how listeners perceive speech as being composed of discrete words, given the variable time-course of information in continuous signals. We propose a syllable inference account of spoken word recognition and segmentation, according to which alternative hierarchical models of syllables, words, and phonemes are dynamically posited, which are expected to maximally predict incoming sensory input. Generative models are combined with current estimates of context speech rate drawn from neural oscillatory dynamics, which are sensitive to amplitude rises. Over time, models which result in local minima in error between predicted and recently experienced signals give rise to perceptions of hearing words. Three experiments using the visual world eye-tracking paradigm with a picture-selection task tested hypotheses motivated by this framework. Materials were sentences that were acoustically ambiguous in numbers of syllables, words, and phonemes they contained (cf. English plural constructions, such as "saw (a) raccoon(s) swimming," which have two loci of grammatical information). Time-compressing, or expanding, speech materials permitted determination of how temporal information at, or in the context of, each locus affected looks to, and selection of, pictures with a singular or plural referent (e.g., one or more than one raccoon). Supporting our account, listeners probabilistically interpreted identical chunks of speech as consistent with a singular or plural referent to a degree that was based on the chunk's gradient rate in relation to its context. We interpret these results as evidence that arriving temporal information, judged in relation to language model predictions generated from context speech rate evaluated on a continuous scale, informs inferences about syllables, thereby giving rise to perceptual experiences of understanding spoken language as words separated in time.


Subject(s)
Comprehension , Speech , Humans , Memory , Speech Perception , Time Factors
7.
Neurosci Res ; 171: 49-61, 2021 Oct.
Article in English | MEDLINE | ID: mdl-33484749

ABSTRACT

Caregivers modify their speech when talking to infants, a specific speaking style known as infant-directed speech (IDS). This style facilitates language learning compared to adult-directed speech (ADS) in infants with normal hearing (NH). While infants with NH and those with cochlear implants (CIs) prefer listening to IDS over ADS, it is not yet known how CI processing may affect the acoustic distinctiveness between ADS and IDS, or their intelligibility. This study analyzed the speech of seven female adult talkers to model the effects of simulated CI processing on (1) the acoustic distinctiveness between ADS and IDS, (2) estimates of the intelligibility of caregivers' speech in ADS and IDS, and (3) individual differences in caregivers' ADS-to-IDS modification and estimated speech intelligibility. Results suggest that CI processing is substantially detrimental to the acoustic distinctiveness between ADS and IDS, as well as to the intelligibility benefit derived from ADS-to-IDS modifications. Moreover, the variability observed across individual talkers in the acoustic implementation of ADS-to-IDS modification and in estimated speech intelligibility was significantly reduced by CI processing. The findings are discussed in the context of the link between IDS and language learning in infants with CIs.


Subject(s)
Cochlear Implants , Speech Perception , Adult , Auditory Perception , Female , Humans , Infant , Language Development , Speech
8.
Behav Res Methods ; 53(1): 113-138, 2021 02.
Article in English | MEDLINE | ID: mdl-32583366

ABSTRACT

Automatic speech processing devices have become popular for quantifying amounts of ambient language input to children in their home environments. We assessed error rates in language input estimates from the Language ENvironment Analysis (LENA) audio processing system, asking whether error rates differed as a function of adult talkers' gender and whether talkers were speaking to children or adults. Audio was sampled from within LENA recordings from 23 families with children aged 4-34 months. Human coders identified vocalizations by adults and children, counted intelligible words, and determined whether adults' speech was addressed to children or adults. LENA's classification accuracy was assessed by parceling audio into 100-ms frames and comparing, for each frame, human and LENA classifications. LENA correctly classified adult speech 67% of the time across families (average false negative rate: 33%). LENA's adult word count showed a mean +47% error relative to human counts. Classification and adult word count error rates were significantly affected by talkers' gender and by whether speech was addressed to a child or an adult. The largest systematic errors occurred when adult females addressed children. Results show that LENA's classifications and adult word counts entailed random, and sometimes large, errors across recordings, as well as systematic errors as a function of talker gender and addressee. Because estimates of the amount of adult language input carry systematic and sometimes high error, relying on this metric alone may lead to invalid clinical and/or research conclusions. Further validation studies and circumspect usage of LENA are warranted.


Subject(s)
Language , Speech , Adolescent , Adult , Child , Child, Preschool , Female , Humans , Young Adult
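
The validation procedure above parcels audio into 100-ms frames, compares human and LENA speaker labels frame by frame, and expresses adult word count (AWC) error as a signed percentage relative to the human count. Below is a minimal sketch of both computations; the label strings and counts are illustrative only and do not reflect LENA's actual export format.

```python
# Sketch: frame-level agreement and adult word count (AWC) percent error.
# Labels and counts are illustrative; real LENA exports differ.

def frame_agreement(human_labels, lena_labels, target="ADULT"):
    """Share of frames that humans labeled `target` on which LENA agreed."""
    target_frames = [(h, l) for h, l in zip(human_labels, lena_labels) if h == target]
    if not target_frames:
        return float("nan")
    hits = sum(1 for h, l in target_frames if l == target)
    return hits / len(target_frames)

def awc_percent_error(lena_awc: int, human_awc: int) -> float:
    """Signed error of LENA's adult word count relative to the human count."""
    return 100.0 * (lena_awc - human_awc) / human_awc

human = ["ADULT", "ADULT", "CHILD", "SILENCE", "ADULT"]
lena  = ["ADULT", "CHILD", "CHILD", "SILENCE", "ADULT"]
print(frame_agreement(human, lena))    # ~0.67 of adult frames correctly classified
print(awc_percent_error(1470, 1000))   # +47.0 percent overcount
```
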
9.
Dev Rev ; 572020 Sep.
Article in English | MEDLINE | ID: mdl-32632339

ABSTRACT

Early language environment plays a critical role in child language development. The Language ENvironment Analysis (LENA™) system allows researchers and clinicians to collect daylong recordings and obtain automated measures that characterize a child's language environment. This meta-analysis evaluates how well LENA's automated measures predict language skills in young children. We systematically searched reports for associations between LENA's automated measures, specifically adult word count (AWC), conversational turn count (CTC), and child vocalization count (CVC), and language skills in children younger than 48 months. Using robust variance estimation, we calculated weighted mean effect sizes and conducted moderator analyses exploring the factors that might affect this relationship. The results revealed an overall medium effect size for the correlation between LENA's automated measures and language skills. This relationship was largely consistent regardless of child developmental status, publication status, language assessment modality and method, or the age at which the LENA recording was taken; however, the effect was weakly moderated by the time gap between the LENA recording and the language measure. Among the three measures, CTC and CVC showed medium associations with language, whereas AWC showed a small-to-medium association. These findings extend beyond the validation work conducted by the LENA Research Foundation and suggest that LENA's automated measures have predictive value for child language. We discuss possible mechanisms underlying the observed associations, as well as the theoretical, methodological, and clinical implications of these findings.
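
The meta-analysis above aggregates study-level correlations into weighted mean effect sizes using robust variance estimation. As a simplified stand-in for that step, the sketch below computes an inverse-variance weighted mean correlation via Fisher's z-transform; the (r, n) pairs are invented.

```python
# Sketch: inverse-variance weighted mean correlation via Fisher's z.
# This is a simplified fixed-effect aggregation, not the robust variance
# estimation used in the meta-analysis above; r and n values are made up.
import math

def weighted_mean_correlation(effects):
    """effects: list of (r, n) pairs; returns the back-transformed weighted mean r."""
    num, den = 0.0, 0.0
    for r, n in effects:
        z = math.atanh(r)   # Fisher z-transform of the correlation
        w = n - 3            # inverse of var(z) = 1 / (n - 3)
        num += w * z
        den += w
    return math.tanh(num / den)

studies = [(0.42, 60), (0.35, 120), (0.51, 45)]   # hypothetical (r, n) pairs
print(round(weighted_mean_correlation(studies), 3))
```
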

10.
J Speech Lang Hear Res ; 63(7): 2453-2467, 2020 07 20.
Article in English | MEDLINE | ID: mdl-32603621

ABSTRACT

PURPOSE: Differences across language environments of prelingually deaf children who receive cochlear implants (CIs) may affect language acquisition; yet, whether mothers show individual differences in how they modify infant-directed (ID) compared with adult-directed (AD) speech has seldom been studied. This study assessed individual differences in how mothers realized speech modifications in ID register and whether these predicted differences in language outcomes for children with CIs. METHOD: Participants were 36 dyads of mothers and their children aged 0;8-2;5 (years;months) at the time of CI implantation. Mothers' spontaneous speech was recorded in a lab setting in ID or AD conditions before ~15 months postimplantation. Mothers' speech samples were characterized for acoustic-phonetic and lexical properties established as canonical indices of ID speech to typically hearing infants, such as vowel space area differences, fundamental frequency variability, and speech rate. Children with CIs completed longitudinal administrations of one or more standardized language assessment instruments at variable intervals from 6 months to 9.5 years postimplantation. Standardized scores on assessments administered longitudinally were used to calculate linear regressions, which gave rise to predicted language scores for children at 2 years postimplantation and language growth over 2-year intervals. RESULTS: Mothers showed individual differences in how they modified speech in ID versus AD registers. Crucially, these individual differences significantly predicted differences in estimated language outcomes at 2 years postimplantation in children with CIs. Maternal speech variation in lexical quantity and vowel space area differences across ID and AD registers most frequently predicted estimates of language attainment in children with CIs, whereas prosodic differences played a minor role. CONCLUSIONS: Results support that caregiver language behaviors play a substantial role in explaining variability in language attainment in children receiving CIs. Supplemental Material: https://doi.org/10.23641/asha.12560147


Subject(s)
Cochlear Implantation , Cochlear Implants , Deafness , Speech Perception , Adult , Child , Deafness/surgery , Female , Humans , Individuality , Infant , Language , Language Development , Mothers , Speech
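
The outcome estimates above come from fitting a linear regression to each child's longitudinal standardized scores and reading off a predicted score at 2 years postimplantation plus growth over 2-year intervals. Below is a minimal per-child sketch; the assessment times and scores are invented illustrations.

```python
# Sketch: per-child regression of standardized language scores on time since
# implantation, then a predicted score at 2 years and 2-year growth.
# All numbers below are invented illustrations.
import numpy as np

years_post_ci = np.array([0.5, 1.5, 3.0, 5.0])    # assessment times (years post-CI)
std_scores    = np.array([78.0, 84.0, 90.0, 97.0])  # standardized language scores

slope, intercept = np.polyfit(years_post_ci, std_scores, deg=1)

predicted_at_2y = intercept + slope * 2.0   # estimated outcome at 2 years post-CI
growth_per_2y   = slope * 2.0               # estimated growth over a 2-year interval
print(round(predicted_at_2y, 1), round(growth_per_2y, 1))
```
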
11.
J Phon ; 75: 73-87, 2019 Jul.
Article in English | MEDLINE | ID: mdl-32884162

ABSTRACT

Statistical distributions of phonetic variants in spoken language influence speech perception for both language learners and mature users. We theorized that patterns of phonetic variant processing of consonants demonstrated by adults might stem in part from patterns of early exposure to statistics of phonetic variants in infant-directed (ID) speech. In particular, we hypothesized that ID speech might involve greater proportions of canonical /t/ pronunciations compared to adult-directed (AD) speech in at least some phonological contexts. This possibility was tested using a corpus of spontaneous speech of mothers speaking to other adults, or to their typically-developing infant. Tokens of word-final alveolar stops - including /t/, /d/, and the nasal stop /n/ - were examined in assimilable contexts (i.e., those followed by a word-initial labial and/or velar); these were classified as canonical, assimilated, deleted, or glottalized. Results confirmed that there were significantly more canonical pronunciations in assimilable contexts in ID compared with AD speech, an effect which was driven by the phoneme /t/. These findings suggest that at least in phonological contexts involving possible assimilation, children are exposed to more canonical /t/ variant pronunciations than adults are. This raises the possibility that perceptual processing of canonical /t/ may be partly attributable to exposure to canonical /t/ variants in ID speech. Results support the need for further research into how statistics of variant pronunciations in early language input may shape speech processing across the lifespan.

12.
Atten Percept Psychophys ; 81(2): 571-589, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30488190

ABSTRACT

Listeners resolve ambiguities in speech perception using multiple sources, including non-local or distal speech rate (i.e., the speech rate of material surrounding a particular region). The ability to resolve ambiguities is particularly important for the perception of casual, everyday productions, which are often produced using phonetically reduced forms. Here, we examine whether the distal speech rate effect is specific to a lexical class of words and/or to particular lexical or phonological contexts. In Experiment 1, we examined whether distal speech rate influenced perception of phonologically similar content words differing in number of syllables (e.g., form/forum). In Experiment 2, we used both transcription and word-monitoring tasks to examine whether distal speech rate influenced perception of a reduced vowel, causing lexical reorganization (e.g., cease, see us). Distal speech rate influenced perception of lexical content in both experiments. This demonstrates that distal rate influences perception of a lexical class other than function words and affects perception in a variety of phonological and lexical contexts. These results support a view that distal speech rate is a pervasive source of information with far-reaching consequences for perception of lexical content and word segmentation.


Subject(s)
Phonetics , Recognition, Psychology , Speech Perception/physiology , Verbal Behavior , Adolescent , Adult , Female , Humans , Male , Young Adult
13.
PLoS One ; 11(9): e0155975, 2016.
Article in English | MEDLINE | ID: mdl-27603209

ABSTRACT

Neil Armstrong insisted that his quote upon landing on the moon was misheard, and that he had said "one small step for a man" rather than "one small step for man." What he said is unclear in part because function words like "a" can be reduced and become spectrally indistinguishable from the preceding context. Therefore, their presence can be ambiguous, and they may disappear perceptually depending on the rate of the surrounding speech. Two experiments are presented examining the production and perception of reduced tokens of "for" and "for a" in spontaneous speech. Experiment 1 investigates the distributions of several acoustic features of "for" and "for a". The results suggest that the distributions of "for" and "for a" overlap substantially, in both their temporal and spectral characteristics. Experiment 2 examines perception of these same tokens when the context speaking rate differs. The perceptibility of the function word "a" varies as a function of this context speaking rate. These results demonstrate that substantial ambiguity exists in the original quote from Armstrong, and that this ambiguity may be understood through context speaking rate.


Subject(s)
Phonetics , Speech Acoustics , Speech Perception/physiology , Speech/physiology , Acoustic Stimulation , Astronauts , Comprehension , Extraterrestrial Environment , Humans , Moon , Semantics , Sound Spectrography
14.
Atten Percept Psychophys ; 78(1): 334-45, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26392395

ABSTRACT

The perception of reduced syllables, including function words, produced in casual speech can be made to disappear by slowing the rate at which surrounding words are spoken (Dilley & Pitt, 2010, Psychological Science, 21(11), 1664-1670, doi:10.1177/0956797610384743). The current study explored the domain generality of this speech-rate effect, asking whether it is induced by temporal information found only in speech. Stimuli were short word sequences (e.g., minor or child) appended to precursors that were clear speech, degraded speech (low-pass filtered or sinewave), or tone sequences, presented at a spoken rate and a slowed rate. Across three experiments, only precursors heard as intelligible speech generated a speech-rate effect (fewer reports of function words with a slowed context), suggesting that rate-dependent speech processing can be domain specific.


Subject(s)
Language , Speech Acoustics , Speech Perception , Adult , Female , Humans , Male , Time Factors
15.
J Psycholinguist Res ; 45(4): 813-31, 2016 Aug.
Article in English | MEDLINE | ID: mdl-25980971

ABSTRACT

The importance of secondary-stressed (SS) and unstressed-unreduced (UU) syllable accuracy for spoken word recognition in English is as yet unclear. An acoustic study first investigated the production of SS and UU syllables by Russian learners of English. Significant vowel quality and duration reductions in Russian-spoken SS and UU vowels were found, likely due to transfer of native phonological features. Next, a cross-modal phonological priming technique combined with a lexical decision task assessed the effect of inaccurate SS and UU syllable productions on native American English listeners' speech processing. Inaccurate UU vowels led to significant inhibition of lexical access, while reduced SS vowels caused less interference. The results have implications for understanding the role of SS and UU syllables in word recognition and for English pronunciation instruction.


Subject(s)
Multilingualism , Psycholinguistics , Speech Intelligibility/physiology , Speech Perception/physiology , Adult , Female , Humans , Male , Ohio , Russia , Young Adult
16.
J Exp Psychol Gen ; 144(4): 730-6, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26214165

ABSTRACT

Do the same mechanisms underlie processing of music and language? Recent investigations of this question have yielded inconsistent results. Likely factors contributing to discrepant findings are use of small samples and failure to control for individual differences in cognitive ability. We investigated the relationship between music and speech prosody processing, while controlling for cognitive ability. Participants (n = 179) completed a battery of cognitive ability tests, the Montreal Battery of Evaluation of Amusia (MBEA) to assess music perception, and a prosody test of pitch peak timing discrimination (early, as in insight vs. late, incite). Structural equation modeling revealed that only music perception was a significant predictor of prosody test performance. Music perception accounted for 34.5% of variance on prosody test performance; cognitive abilities and music training added only about 8%. These results indicate musical pitch and temporal processing are highly predictive of pitch discrimination in speech processing, even after controlling for other possible predictors of this aspect of language processing.


Subject(s)
Individuality , Music , Pitch Perception/physiology , Speech Perception/physiology , Speech/physiology , Adolescent , Adult , Cognition/physiology , Female , Humans , Male , Middle Aged , Neuropsychological Tests , Young Adult
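
The study above used structural equation modeling to partition variance in prosody test performance between music perception (MBEA) and cognitive ability. As a rough stand-in for that analysis, the sketch below compares incremental R² from nested ordinary least squares models; the data file and column names are hypothetical.

```python
# Sketch: incremental variance explained (delta R^2) for prosody performance.
# Ordinary least squares is used here as a simplified stand-in for the
# structural equation model in the study above; column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("prosody_music_cognition.csv")  # hypothetical file

base = smf.ols("prosody_score ~ mbea_score", data=data).fit()
full = smf.ols("prosody_score ~ mbea_score + cognitive_ability + music_training",
               data=data).fit()

print(base.rsquared)                  # variance explained by music perception alone
print(full.rsquared - base.rsquared)  # added by cognitive ability and music training
```
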
17.
J Speech Lang Hear Res ; 58(4): 1341-9, 2015 Aug 01.
Article in English | MEDLINE | ID: mdl-25860652

ABSTRACT

PURPOSE: A growing body of literature suggests that speech rate can strongly influence the parsing of words in speech. The purpose of this study was to investigate differences between younger and older adults in the use of context speech rate in word segmentation, given that older adults perceive timing information differently from younger adults. METHOD: Younger (18-25 years) and older (55-65 years) adults performed a sentence transcription task for sentences that varied in speech rate context (i.e., distal speech rate) and in a syntactic cue to the presence of a word boundary. RESULTS: There were no differences between younger and older adults in their use of the distal speech rate cue for word segmentation. CONCLUSIONS: The differences previously documented between younger and older adults in their perception of speech rate cues do not necessarily translate to differences in their use of those cues. Older adults' difficulties with compressed speech may arise from problems broader than speech rate alone.


Subject(s)
Aging/psychology , Speech Perception , Speech , Adult , Aged , Aging/physiology , Cues , Female , Humans , Language Tests , Male , Middle Aged , Random Allocation , Sound Spectrography , Speech/physiology , Speech Perception/physiology , Speech Production Measurement , Young Adult
18.
Brain Lang ; 144: 26-34, 2015 May.
Article in English | MEDLINE | ID: mdl-25880903

ABSTRACT

Stuttering is a neurodevelopmental disorder that affects the timing and rhythmic flow of speech production. When speech is synchronized with an external rhythmic pacing signal (e.g., a metronome), even severe stuttering can be markedly alleviated, suggesting that people who stutter may have difficulty generating an internal rhythm to pace their speech. To investigate this possibility, children who stutter and typically-developing children (n=17 per group, aged 6-11 years) were compared in terms of their auditory rhythm discrimination abilities of simple and complex rhythms. Children who stutter showed worse rhythm discrimination than typically-developing children. These findings provide the first evidence of impaired rhythm perception in children who stutter, supporting the conclusion that developmental stuttering may be associated with a deficit in rhythm processing.


Subject(s)
Auditory Perception , Periodicity , Stuttering/physiopathology , Stuttering/psychology , Acoustic Stimulation , Child , Female , Humans , Male , Speech
19.
Psychon Bull Rev ; 22(5): 1451-7, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25794478

ABSTRACT

During lexical access, listeners use both signal-based and knowledge-based cues, and information from the linguistic context can affect the perception of acoustic speech information. Recent findings suggest that the various cues used in lexical access are implemented with flexibility and may be affected by information from the larger speech context. We conducted 2 experiments to examine effects of a signal-based cue (distal speech rate) and a knowledge-based cue (linguistic structure) on lexical perception. In Experiment 1, we manipulated distal speech rate in utterances where an acoustically ambiguous critical word was either obligatory for the utterance to be syntactically well formed (e.g., Conner knew that bread and butter (are) both in the pantry) or optional (e.g., Don must see the harbor (or) boats). In Experiment 2, we examined identical target utterances as in Experiment 1 but changed the distribution of linguistic structures in the fillers. The results of the 2 experiments demonstrate that speech rate and linguistic knowledge about critical word obligatoriness can both influence speech perception. In addition, it is possible to alter the strength of a signal-based cue by changing information in the speech environment. These results provide support for models of word segmentation that include flexible weighting of signal-based and knowledge-based cues.


Subject(s)
Comprehension , Linguistics , Recognition, Psychology , Semantics , Speech Acoustics , Speech Perception , Cues , Female , Humans , Male , Phonetics , Reaction Time , Young Adult
20.
J Speech Lang Hear Res ; 58(2): 241-53, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25659121

ABSTRACT

PURPOSE: A large body of literature has indicated vowel space area expansion in infant-directed (ID) speech compared with adult-directed (AD) speech, which may promote language acquisition. The current study tested whether this expansion occurs in storybook speech read to infants at various points during their first 2 years of life. METHOD: In 2 studies, mothers read a storybook containing target vowels in ID and AD speech conditions. Study 1 was longitudinal, with 11 mothers recorded when their infants were 3, 6, and 9 months old. Study 2 was cross-sectional, with 48 mothers recorded when their infants were 3, 9, 13, or 20 months old (n=12 per group). The 1st and 2nd formants of vowels /i/, /ɑ/, and /u/ were measured, and vowel space area and dispersion were calculated. RESULTS: Across both studies, 1st and/or 2nd formant frequencies shifted systematically for /i/ and /u/ vowels in ID compared with AD speech. No difference in vowel space area or dispersion was found. CONCLUSIONS: The results suggest that a variety of communication and situational factors may affect phonetic modifications in ID speech, but that vowel space characteristics in speech to infants stay consistent across the first 2 years of life.


Subject(s)
Child Language , Phonetics , Reading , Speech Acoustics , Speech Perception , Adult , Child, Preschool , Cross-Sectional Studies , Female , Humans , Infant , Longitudinal Studies , Male , Mothers
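
The central measure above, vowel space area, is commonly computed as the area of the polygon formed by the mean F1/F2 values of the point vowels, which for /i/, /ɑ/, /u/ is a triangle. Below is a minimal sketch using the shoelace formula; the formant values (in Hz) are invented.

```python
# Sketch: vowel space area from mean F1/F2 of the point vowels /i/, /ɑ/, /u/,
# via the shoelace formula. Formant values below are illustrative only.

def vowel_space_area(points):
    """points: list of (F1, F2) tuples for the corner vowels; returns area in Hz^2."""
    area = 0.0
    n = len(points)
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

id_speech = [(350, 2800), (850, 1200), (380, 900)]   # /i/, /ɑ/, /u/ in ID register
ad_speech = [(320, 2500), (780, 1250), (360, 1000)]  # /i/, /ɑ/, /u/ in AD register
print(vowel_space_area(id_speech), vowel_space_area(ad_speech))
```
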