1.
Percept Psychophys ; 62(3): 615-25, 2000 Apr.
Article in English | MEDLINE | ID: mdl-10909252

ABSTRACT

Perceptual identification of spoken words in noise is less accurate when the target words are preceded by spoken phonetically related primes (Goldinger, Luce, & Pisoni, 1989). The present investigation replicated and extended this finding. Subjects shadowed target words presented in the clear that were preceded by phonetically related or unrelated primes. In addition, primes were either higher or lower in frequency than the target words. Shadowing latencies were significantly longer for target words preceded by phonetically related primes, but only when the prime-target interstimulus interval was short (50 vs. 500 msec). These results demonstrate that phonetic priming does not depend on target degradation and that it affects processing time. We further demonstrated that PARSYN--a connectionist instantiation of the neighborhood activation model--accurately simulates the observed pattern of priming.


Subject(s)
Attention , Phonetics , Speech Perception , Adult , Cues , Humans , Paired-Associate Learning , Speech Acoustics
2.
Brain Lang ; 68(1-2): 306-11, 1999.
Article in English | MEDLINE | ID: mdl-10433774

ABSTRACT

Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed.
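Both measures named above are computed over a phonemically transcribed lexicon. A minimal sketch of the standard computations follows, assuming the usual one-phoneme substitution/addition/deletion neighbor definition and position-specific segment counts; the toy lexicon, transcriptions, and function names are illustrative, not taken from the paper:

```python
from collections import defaultdict

# Toy phonemic lexicon: word -> (phoneme transcription, frequency).
# Illustrative entries only; real studies use dictionaries of ~20,000 words.
LEXICON = {
    "cat":  (("k", "ae", "t"), 120),
    "bat":  (("b", "ae", "t"), 40),
    "cap":  (("k", "ae", "p"), 35),
    "scat": (("s", "k", "ae", "t"), 5),
    "at":   (("ae", "t"), 900),
}

def is_neighbor(a, b):
    """True if b differs from a by one substitution, addition, or deletion."""
    if a == b:
        return False
    if len(a) == len(b):  # one substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) == 1:  # one addition or deletion
        longer, shorter = (a, b) if len(a) > len(b) else (b, a)
        return any(longer[:i] + longer[i + 1:] == shorter
                   for i in range(len(longer)))
    return False

def neighborhood_density(word):
    """Neighborhood density: number of words phonologically similar to `word`."""
    target = LEXICON[word][0]
    return sum(is_neighbor(target, trans) for trans, _ in LEXICON.values())

def positional_segment_probs():
    """Phonotactic probability ingredient: how often each segment occurs in
    each within-word position, relative to all segments in that position."""
    counts, totals = defaultdict(int), defaultdict(int)
    for trans, _ in LEXICON.values():
        for pos, seg in enumerate(trans):
            counts[(pos, seg)] += 1
            totals[pos] += 1
    return {key: n / totals[key[0]] for key, n in counts.items()}

print(neighborhood_density("cat"))  # bat, cap, scat, at -> 4
```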


Subject(s)
Cognition/physiology , Speech , Vocabulary , Humans , Phonetics
3.
Cogn Psychol ; 38(4): 465-94, 1999 Jun.
Article in English | MEDLINE | ID: mdl-10334878

ABSTRACT

This research examines the issue of speech segmentation in 9-month-old infants. Two cues known to carry probabilistic information about word boundaries were investigated: Phonotactic regularity and prosodic pattern. The stimuli used in four head turn preference experiments were bisyllabic CVC.CVC nonwords bearing primary stress in either the first or the second syllable (strong/weak vs. weak/strong). Stimuli also differed with respect to the phonotactic nature of their cross-syllabic C.C cluster. Clusters had either a low probability of occurring at a word juncture in fluent speech and a high probability of occurring inside of words ("within-word" clusters) or a high probability of occurring at a word juncture and a low probability of occurring inside of words ("between-word" clusters). Our results show that (1) 9-month-olds are sensitive to how phonotactic sequences typically align with word boundaries, (2) altering the stress pattern of the stimuli reverses infants' preference for phonotactic cluster types, (3) the prosodic cue to segmentation is more strongly relied upon than the phonotactic cue, and (4) a preference for high-probability between-word phonotactic sequences can be obtained either by placing stress on the second syllable of the stimuli or by inserting a pause between syllables. The implications of these results are discussed in light of an integrated multiple-cue approach to speech segmentation in infancy.


Subject(s)
Speech Perception/physiology , Female , Humans , Infant , Infant, Newborn , Male , Phonetics , Speech Discrimination Tests
4.
J Exp Psychol Hum Percept Perform ; 25(1): 174-83, 1999 Feb.
Article in English | MEDLINE | ID: mdl-10069031

ABSTRACT

A large number of multisyllabic words contain syllables that are themselves words. Previous research using cross-modal priming and word-spotting tasks suggests that embedded words may be activated when the carrier word is heard. To determine the effects of an embedded word on processing of the larger word, processing times for matched pairs of bisyllabic words were examined to contrast the effects of the presence or absence of embedded words in both 1st- and 2nd-syllable positions. Results from auditory lexical decision and single-word shadowing demonstrate that the presence of an embedded word in the 1st-syllable position speeds processing times for the carrier word. The presence of an embedded word in the 2nd syllable has no demonstrable effect.


Subject(s)
Attention , Field Dependence-Independence , Semantics , Speech Perception , Adult , Female , Humans , Male , Reaction Time
5.
Mem Cognit ; 26(4): 708-15, 1998 Jul.
Article in English | MEDLINE | ID: mdl-9701963

ABSTRACT

Many theories of spoken word recognition assume that lexical items are stored in memory as abstract representations. However, recent research (e.g., Goldinger, 1996) has suggested that representations of spoken words in memory are veridical exemplars that encode specific information, such as characteristics of the talker's voice. If representations are exemplar based, stimulus variation, such as changes in the identity of the talker, may affect identification of and memory for spoken words. This prediction was examined with an implicit and an explicit task (lexical decision and recognition, respectively). Comparable amounts of repetition priming in lexical decision were found for repeated words, regardless of whether the repetitions were in the same or in different voices. However, reaction times in the recognition task were faster if the repetition was in the same voice. These results suggest a role for both abstract and specific representations in models of spoken word recognition.


Subject(s)
Cues , Memory/physiology , Verbal Learning/physiology , Voice , Adult , Analysis of Variance , Humans , Speech Perception/physiology
6.
Percept Psychophys ; 60(3): 465-83, 1998 Apr.
Article in English | MEDLINE | ID: mdl-9599996

ABSTRACT

Previous research (Garber & Pisoni, 1991; Pisoni & Garber, 1990) has demonstrated that subjective familiarity judgments for words are not differentially affected by the modality (visual or auditory) in which the words are presented, suggesting that participants base their judgments on fairly abstract, modality-independent representations in memory. However, in a recent large-scale study in Japanese (Amano, Kondo, & Kakehi, 1995), marked modality effects on familiarity ratings were observed. The present research further examined possible modality differences in subjective ratings and their implications for word recognition. Specially selected words were presented to participants for frequency judgments. In particular, participants were asked how frequently they read, wrote, heard, or said a given spoken or printed word. These ratings were then regressed against processing times in auditory and visual lexical decision and naming tasks. Our results suggest modality dependence for some lexical representations.


Subject(s)
Semantics , Speech Perception/physiology , Visual Perception/physiology , Vocabulary , Humans
7.
Percept Psychophys ; 60(3): 484-90, 1998 Apr.
Article in English | MEDLINE | ID: mdl-9599997

ABSTRACT

Using the cross-modal priming paradigm, we attempted to determine whether semantic representations for word-final morphemes embedded in multisyllabic words (e.g., /lak/ in /hemlak/) are independently activated in memory. That is, we attempted to determine whether the auditory prime, /hemlak/, would facilitate lexical decision times to the visual target, KEY, even when the recognition point for /hemlak/ occurred prior to the end of the word, which should ensure deactivation of all competing lexical candidates. In the first experiment, a gating task was used in order to ensure that the multisyllabic words could be identified prior to their offsets. In the second experiment, lexical decision times for visually presented targets following spoken monosyllabic primes (e.g., /lak/-KEY) were compared with reaction times for the same visual targets following multisyllabic pairs (/hemlak/-KEY). Significant priming was found for both the monosyllabic and the multisyllabic conditions. The results support a recognition strategy that initiates lexical access at strong syllables (Cutler & Norris, 1988) and operates according to a principle of delayed commitment (Marr, 1982).


Subject(s)
Semantics , Speech Perception/physiology , Humans , Language
8.
Ear Hear ; 19(1): 1-36, 1998 Feb.
Article in English | MEDLINE | ID: mdl-9504270

ABSTRACT

OBJECTIVE: A fundamental problem in the study of human spoken word recognition concerns the structural relations among the sound patterns of words in memory and the effects these relations have on spoken word recognition. In the present investigation, computational and experimental methods were employed to address a number of fundamental issues related to the representation and structural organization of spoken words in the mental lexicon and to lay the groundwork for a model of spoken word recognition.

DESIGN: Using a computerized lexicon consisting of transcriptions of 20,000 words, similarity neighborhoods for each of the transcriptions were computed. Among the variables of interest in the computation of the similarity neighborhoods were: 1) the number of words occurring in a neighborhood, 2) the degree of phonetic similarity among the words, and 3) the frequencies of occurrence of the words in the language. The effects of these variables on auditory word recognition were examined in a series of behavioral experiments employing three experimental paradigms: perceptual identification of words in noise, auditory lexical decision, and auditory word naming.

RESULTS: The results of each of these experiments demonstrated that the number and nature of words in a similarity neighborhood affect the speed and accuracy of word recognition. A neighborhood probability rule was developed that adequately predicted identification performance. This rule, based on Luce's (1959) choice rule, combines stimulus word intelligibility, neighborhood confusability, and frequency into a single expression. Based on this rule, a model of auditory word recognition, the neighborhood activation model, was proposed. This model describes the effects of similarity neighborhood structure on the process of discriminating among the acoustic-phonetic representations of words in memory. The results of these experiments have important implications for current conceptions of auditory word recognition in normal and hearing-impaired populations of children and adults.
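The neighborhood probability rule described above combines stimulus intelligibility, neighborhood confusability, and word frequency in a Luce (1959) choice ratio: the stimulus word's intelligibility-times-frequency term competes against the summed confusability-times-frequency of its neighbors. A minimal sketch of that combination; the function name, argument structure, and toy values are illustrative assumptions, not the paper's own parameters:

```python
def neighborhood_probability(stim_prob, stim_freq, neighbors):
    """Luce-choice combination of intelligibility and frequency.

    stim_prob -- probability of identifying the stimulus word itself
                 (its intelligibility)
    stim_freq -- frequency weight of the stimulus word
    neighbors -- (confusion_probability, frequency_weight) pairs,
                 one per similarity neighbor
    """
    target = stim_prob * stim_freq
    competition = sum(p * f for p, f in neighbors)
    return target / (target + competition)

# A word in a sparse, low-frequency neighborhood is predicted to be
# identified more often...
print(neighborhood_probability(0.8, 1.0, [(0.10, 0.2), (0.05, 0.1)]))  # ~0.97
# ...than an equally intelligible word in a dense, high-frequency one.
print(neighborhood_probability(0.8, 1.0, [(0.30, 0.9)] * 10))          # ~0.23
```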


Subject(s)
Memory/physiology , Models, Psychological , Speech Perception/physiology , Vocabulary , Adult , Decision Making , Humans , Language , Phonetics , Reaction Time
9.
J Exp Psychol Hum Percept Perform ; 23(3): 873-89, 1997 Jun.
Article in English | MEDLINE | ID: mdl-9180048

ABSTRACT

Previous research on spoken word recognition has demonstrated that identification of a phonetic segment is affected by the lexical status of the item in which the segment occurs. W. F. Ganong (1980) demonstrated that a category boundary shift occurs when the voiced end of one voice-onset time continuum is a word but the voiceless end of another series is a word; this is known as the "lexical effect." A series of studies was undertaken to examine how lexical neighborhood, in contrast to lexical status, might influence word perception. Pairs of nonword series were created in which the voiced end of one series had a higher frequency-weighted neighborhood density, whereas the reverse was true for the other series. Lexical neighborhood was found to affect word recognition in much the same way as lexical status.
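A hedged sketch of the kind of frequency-weighted neighborhood density measure contrasted across the two nonword series: each lexical neighbor contributes its log frequency, so a continuum endpoint with many high-frequency neighbors scores high. The log weighting and names are assumptions, not the paper's exact metric:

```python
import math

def fw_neighborhood_density(neighbor_freqs):
    """Sum of log frequencies of a (non)word's lexical neighbors.
    Log weighting is an assumption; the study may weight differently."""
    return sum(math.log(freq + 1.0) for freq in neighbor_freqs)

# Endpoint with many high-frequency neighbors (high density)...
print(fw_neighborhood_density([120, 300, 80, 450]))  # ~21.0
# ...versus an endpoint with few low-frequency neighbors (low density).
print(fw_neighborhood_density([2, 5]))               # ~2.9
```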


Subject(s)
Phonetics , Vocabulary , Humans , Speech Perception
10.
Lang Speech ; 40 ( Pt 1): 47-62, 1997.
Article in English | MEDLINE | ID: mdl-9230698

ABSTRACT

Two experiments using bisyllabic CVCCVC nonsense words that varied in phonotactic probability and stress placement were conducted to examine the influences of phonotactic and metrical information on spoken word recognition. Experiment 1 examined participants' intuitions about the phonological "goodness" of nonsense words. Experiment 2 examined processing times for the same stimuli in a speeded auditory repetition task. The results of both studies provide further evidence that the phonotactic configuration and stress placement of spoken stimuli have important implications for the representation and processing of spoken words.


Subject(s)
Phonetics , Speech Perception , Speech , Humans , Speech Discrimination Tests
11.
J Child Lang ; 22(3): 727-35, 1995 Oct.
Article in English | MEDLINE | ID: mdl-8789521

ABSTRACT

Based on an analysis of similarity neighbourhoods of words in children's lexicons, Dollaghan (1994) argues that, because of the degree of phonological overlap among lexical items in memory, children must perform detailed acoustic-phonetic analyses in order to recognize spoken words. This contradicts Charles-Luce & Luce (1990), who reported that the similarity neighbourhoods in younger children's expressive lexicons are sparse relative to those of older children and adults and that young children may therefore be able to use more global word recognition strategies. The current investigation re-examined these issues. Similarity neighbourhoods of young children's receptive vocabularies were analysed for three-phoneme, four-phoneme and five-phoneme words. The pattern of the original results from Charles-Luce & Luce (1990) was replicated.


Subject(s)
Cognition , Speech Perception , Vocabulary , Child , Child, Preschool , Humans , Infant , Phonetics
13.
J Exp Psychol Learn Mem Cogn ; 18(6): 1211-38, 1992 Nov.
Article in English | MEDLINE | ID: mdl-1447548

ABSTRACT

Phonological priming of spoken words refers to improved recognition of targets preceded by primes that share at least one of their constituent phonemes (e.g., BULL-BEER). Phonetic priming refers to reduced recognition of targets preceded by primes that share no phonemes with targets but are phonetically similar to targets (e.g., BULL-VEER). Five experiments were conducted to investigate the role of bias in phonological priming. Performance was compared across conditions of phonological and phonetic priming under a variety of procedural manipulations. Subjects in phonological priming conditions systematically modified their responses on unrelated priming trials in perceptual identification, and they were slower and more error prone on unrelated trials in lexical decision than were subjects in phonetic priming conditions. Phonetic and phonological priming effects display different time courses and different interactions with changes in the proportion of related priming trials. Phonological priming involves bias; phonetic priming appears to reflect basic properties of activation and competition in spoken word recognition.


Subject(s)
Speech Perception , Speech , Vocabulary , Adult , Female , Humans , Language , Male , Memory , Noise , Perceptual Masking , Phonetics , Research Design , Semantics , Speech Production Measurement
14.
J Exp Psychol Hum Percept Perform ; 16(3): 551-63, 1990 Aug.
Article in English | MEDLINE | ID: mdl-2144570

ABSTRACT

This research examines the recognition of two-syllable spoken words and the means by which the auditory word recognition process deals with ambiguous stimulus information. The experiments reported here investigate the influence of individual syllables within two-syllable words on the recognition of each other. Specifically, perceptual identification of two-syllable words composed of two monosyllabic words (spondees) was examined. Individual syllables within a spondee were characterized as either "easy" or "hard" depending on the syllable's neighborhood characteristics; an easy syllable was defined as a high-frequency word in a sparse neighborhood of low-frequency words, and a hard syllable as a low-frequency word in a high-density, high-frequency neighborhood. In Experiment 1, stimuli were created by splicing together recordings of the component syllables of the spondee, thus equating for syllable stress. Additional experiments tested the perceptual identification of naturally produced spondees, spliced nonwords, and monosyllables alone. Neighborhood structure had a strong effect on identification in all experiments. In addition, identification performance for spondees with a hard-easy syllable pattern was higher than for spondees with an easy-hard syllable pattern, indicating a primarily retroactive pattern of influence in spoken word recognition. The results strongly suggest that word recognition involves multiple activation and delayed commitment, thus ensuring accurate and efficient recognition.
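The easy/hard distinction above is an operational classification over a syllable's own frequency and its neighborhood statistics. A minimal sketch, with hypothetical cutoffs (the study splits on its own lexicon's norms):

```python
def classify_syllable(word_freq, n_neighbors, mean_neighbor_freq,
                      freq_cutoff=100, density_cutoff=15):
    """Label a monosyllable per the abstract's definitions:
    easy = high-frequency word in a sparse, low-frequency neighborhood;
    hard = low-frequency word in a dense, high-frequency neighborhood.
    All cutoffs here are hypothetical."""
    high_freq = word_freq >= freq_cutoff
    dense = n_neighbors >= density_cutoff
    high_freq_neighbors = mean_neighbor_freq >= freq_cutoff
    if high_freq and not dense and not high_freq_neighbors:
        return "easy"
    if not high_freq and dense and high_freq_neighbors:
        return "hard"
    return "mixed"  # would be excluded from the easy/hard stimulus sets

print(classify_syllable(250, 6, 30))   # easy
print(classify_syllable(12, 24, 310))  # hard
```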


Subject(s)
Attention , Memory , Mental Recall , Phonetics , Speech Perception , Adult , Female , Humans , Male , Paired-Associate Learning , Psychoacoustics
15.
J Child Lang ; 17(1): 205-15, 1990 Feb.
Article in English | MEDLINE | ID: mdl-2312642

ABSTRACT

Similarity neighbourhoods for words in young children's lexicons were investigated using three computerized databases representative of three groups of native English speakers: 5-year-olds, 7-year-olds, and adults. Computations on the similarity neighbourhoods of words in the children's and adults' lexicons revealed that words in the 5- and 7-year-olds' lexicons have many fewer similar neighbours than the same words analysed in the adult lexicon. Thus, young children may employ more global recognition strategies because words are more discriminable in memory. The neighbourhood analyses provide a number of insights into the processes of auditory word recognition in children and the possible structural organization of words in the young child's mental lexicon.


Subject(s)
Child Language , Language Development , Phonetics , Speech Perception , Vocabulary , Child , Child, Preschool , Humans , Memory
16.
Cognition ; 25(1-2): 21-52, 1987 Mar.
Article in English | MEDLINE | ID: mdl-3581727
18.
J Acoust Soc Am ; 78(6): 1949-57, 1985 Dec.
Article in English | MEDLINE | ID: mdl-4078171

ABSTRACT

Acoustic measurements were conducted to determine the degree to which vowel duration, closure duration, and their ratio distinguish voicing of word-final stop consonants across variations in sentential and phonetic environments. Subjects read CVC test words containing three different vowels and ending in stops at three different places of articulation. The test words were produced either in nonphrase-final or phrase-final position and in several local phonetic environments within each of these sentence positions. Our measurements revealed that vowel duration most consistently distinguished voicing categories for the test words. Closure duration failed to distinguish voicing categories consistently across the contextual variables manipulated, as did the ratio of closure duration to vowel duration. Our results suggest that vowel duration is the most reliable correlate of voicing for word-final stops in connected speech.
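The three measures compared in this study are straightforward to compute once vowel onset, vowel offset, and closure offset have been labeled in each token. A minimal sketch under that assumption (the labeling scheme and time values are hypothetical):

```python
def voicing_cues(vowel_onset, vowel_offset, closure_offset):
    """Vowel duration, closure duration, and their ratio, from hand-labeled
    time points in seconds. Per the results, vowel duration is the most
    reliable word-final voicing cue; the other two are less consistent."""
    vowel_dur = vowel_offset - vowel_onset
    closure_dur = closure_offset - vowel_offset
    return vowel_dur, closure_dur, closure_dur / vowel_dur

# Hypothetical tokens: longer vowel before a voiced final stop ("bad")...
print(voicing_cues(0.10, 0.32, 0.39))
# ...shorter vowel before a voiceless final stop ("bat").
print(voicing_cues(0.10, 0.25, 0.34))
```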


Subject(s)
Speech/physiology , Female , Humans , Male , Phonetics , Speech Acoustics