1.
Psychon Bull Rev; 19(2): 309-16, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22194272

ABSTRACT

It is now well established that people in conversation repeat each other's words and structures. Does doing so reflect dialogue participants' expectations that their own choices of words or structures will be repeated back to them? In two experiments, subjects and confederates (purportedly) took turns describing pictures to each other. On critical trials, we measured response latencies to choose pictures when labels (e.g., stroller) or syntactic structures (a prepositional dative) that subjects had just produced were repeated back to them, versus when they heard reasonable alternatives (baby carriage or a double-object structure). Experiment 1 showed that repeated words and repeated syntactic structures both elicited faster responses. Experiment 2 showed that the effect occurred even when subjects heard the descriptions from computers rather than from their addressees, and that the repeated-word effect was not simply due to a preference for one label over the other. These observations suggest that dialogue participants expect their own word and structure choices to be repeated back to them, and that this expectation is general to the task situation rather than specific to their communicative partners.


Subject(s)
Repetition Priming; Verbal Behavior; Anticipation, Psychological; Comprehension; Humans; Photic Stimulation; Psycholinguistics; Reaction Time
2.
Cognition; 121(3): 459-65, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21939965

ABSTRACT

Listeners rapidly adjust to talkers' pronunciations, accommodating those pronunciations into the relevant phonemic category to improve subsequent perception. Previous work has suggested that such learning is restricted to pronunciations that are representative of how the speaker talks (Kraljic, Samuel, & Brennan, 2008). If an ambiguous pronunciation, for example, can be attributed to an external source (such as a pen in the speaker's mouth), or if it is preceded by normal pronunciations of the same sound, learning is blocked. In three experiments, we explore this blocking effect in more detail. Our aim is to better understand the nature of the representations underlying the perceptual learning process. Experiment 1 replicates the blocking effect. Experiments 2 and 3 demonstrate that it can be eliminated when certain visual information occurs simultaneously with the auditory signal. The pattern of learning and non-learning is best accounted for by the view that speech perception is mediated by episodic representations that include potentially relevant visual information.


Subject(s)
Learning/physiology; Speech Perception/physiology; Adolescent; Adult; Decision Making/physiology; Female; Humans; Language; Male; Speech
3.
Atten Percept Psychophys; 71(6): 1207-18, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19633336

ABSTRACT

Adult language users have an enormous amount of experience with speech in their native language. As a result, they have very well-developed processes for categorizing the speech sounds they hear. Despite this extensive experience, recent research has shown that listeners can redevelop their speech categorization to bring it into alignment with new variation in their speech input. This reorganization of phonetic space is a type of perceptual learning, or recalibration, of speech processes. In this article, we review several recent lines of research on perceptual learning for speech.


Subject(s)
Language Development; Speech Perception; Adult; Humans; Multilingualism; Phonetics; Reading; Retention, Psychology; Social Environment; Speech Production Measurement; Verbal Behavior
4.
J Mem Lang; 61(3): 398-411, 2009 Oct 01.
Article in English | MEDLINE | ID: mdl-20161058

ABSTRACT

The perceptual loop theory of self-monitoring posits that auditory speech output is parsed by the comprehension system. For sign language, however, visual input from one's own signing is distinct from visual input received from another's signing. Two experiments investigated the role of visual feedback in the production of American Sign Language (ASL). Experiment 1 revealed that signers were poor at recognizing ASL signs when viewed as they would appear during self-produced signing. Experiment 2 showed that the absence or blurring of visual feedback did not affect production performance when deaf signers learned to reproduce signs from Russian Sign Language, and production performance of hearing non-signers was slightly worse with visual feedback. Signers may rely primarily on somatosensory feedback when monitoring language output, and if the perceptual loop theory is to be maintained, the comprehension system must be able to parse a somatosensory signal as well as an external perceptual signal for both sign and speech.

5.
Psychol Sci; 19(4): 332-8, 2008 Apr.
Article in English | MEDLINE | ID: mdl-18399885

ABSTRACT

Perceptual theories must explain how perceivers extract meaningful information from a continuously variable physical signal. In the case of speech, the puzzle is that little reliable acoustic invariance seems to exist. We tested the hypothesis that speech-perception processes recover invariants not about the signal, but rather about the source that produced the signal. Findings from two manipulations suggest that the system learns those properties of speech that result from idiosyncratic characteristics of the speaker; the same properties are not learned when they can be attributed to incidental factors. We also found evidence for how the system determines what is characteristic: In the absence of other information about the speaker, the system relies on episodic order, representing those properties present during early experience as characteristic of the speaker. This "first-impressions" bias can be overridden, however, when variation is an incidental consequence of a temporary state (a pen in the speaker's mouth), rather than characteristic of the speaker.


Subject(s)
Adaptation, Psychological; Attention; Attitude; Social Perception; Speech Perception; Adolescent; Adult; Humans; Phonetics; Reaction Time
6.
Cognition; 107(1): 54-81, 2008 Apr.
Article in English | MEDLINE | ID: mdl-17803986

ABSTRACT

Listeners are faced with enormous variation in pronunciation, yet they rarely have difficulty understanding speech. Although much research has been devoted to figuring out how listeners deal with variability, virtually none (outside of sociolinguistics) has focused on the source of the variation itself. The current experiments explore whether different kinds of variation lead to different cognitive and behavioral adjustments. Specifically, we compare adjustments to the same acoustic consequence when it is due to context-independent variation (resulting from articulatory properties unique to a speaker) versus context-conditioned variation (resulting from common articulatory properties of speakers who share a dialect). The contrasting results for these two cases show that the source of a particular acoustic-phonetic variation affects how that variation is handled by the perceptual system. We also show that changes in perceptual representations do not necessarily lead to changes in production.


Subject(s)
Culture; Language; Phonetics; Speech Perception; Adult; Female; Humans; Learning; Male; Speech Production Measurement
7.
Psychon Bull Rev; 13(2): 262-8, 2006 Apr.
Article in English | MEDLINE | ID: mdl-16892992

ABSTRACT

Lexical context strongly influences listeners' identification of ambiguous sounds. For example, a sound midway between /f/ and /s/ is reported as /f/ in "sheri_," but as /s/ in "Pari_." Norris, McQueen, and Cutler (2003) have demonstrated that after hearing such lexically determined phonemes, listeners expand their phonemic categories to include more ambiguous tokens than before. We tested whether listeners adjust their phonemic categories for a specific speaker. Do listeners learn a particular speaker's "accent"? Similarly, we examined whether perceptual learning is specific to the particular ambiguous phonemes that listeners hear, or whether the adjustments generalize to related sounds. Participants heard ambiguous /d/ or /t/ phonemes during a lexical decision task. They then categorized sounds on /d/-/t/ and /b/-/p/ continua, either in the same voice that they had heard for lexical decision, or in a different voice. Perceptual learning generalized across both speaker and test continua: Changes in perceptual representations are robust and broadly tuned.


Subject(s)
Generalization, Psychological; Learning; Speech Perception; Decision Making; Humans; Vocabulary
8.
Cogn Psychol; 51(2): 141-78, 2005 Sep.
Article in English | MEDLINE | ID: mdl-16095588

ABSTRACT

Recent work on perceptual learning shows that listeners' phonemic representations dynamically adjust to reflect the speech they hear (Norris, McQueen, & Cutler, 2003). We investigate how the perceptual system makes such adjustments, and what (if anything) causes the representations to return to the settings they had before perceptual learning. Listeners are exposed to a speaker whose pronunciation of a particular sound (either /s/ or /ʃ/) is ambiguous (e.g., halfway between /s/ and /ʃ/). After exposure, participants are tested for perceptual learning on two continua that range from /s/ to /ʃ/, one in the Same voice they heard during exposure, and one in a Different voice. To assess how representations revert to their prior settings, half of Experiment 1's participants were tested immediately after exposure; the other half performed a 25-min silent intervening task. The perceptual learning effect was actually larger after such a delay, indicating that simply allowing time to pass does not cause learning to fade. The remaining experiments investigate different ways that the system might unlearn a speaker's pronunciations: listeners hear the Same or a Different speaker for 25 min with either no relevant (i.e., 'good') /s/ or /ʃ/ input (Experiment 2), one of the relevant inputs (Experiment 3), or both relevant inputs (Experiment 4). The results support a view of phonemic representations as dynamic and flexible, and suggest that they interact in important ways with both higher-level (e.g., lexical) and lower-level (e.g., acoustic) information.


Subject(s)
Learning; Speech Perception; Adaptation, Psychological; Adult; Female; Humans; Male; Psycholinguistics; Retention, Psychology; Speech Acoustics; Time Factors; Transfer, Psychology; Voice Quality
9.
Cogn Psychol; 50(2): 194-231, 2005 Mar.
Article in English | MEDLINE | ID: mdl-15680144

ABSTRACT

Evidence has been mixed on whether speakers spontaneously and reliably produce prosodic cues that resolve syntactic ambiguities. And when speakers do produce such cues, it is unclear whether they do so "for" their addressees (the audience design hypothesis) or "for" themselves, as a by-product of planning and articulating utterances. Three experiments addressed these issues. In Experiments 1 and 3, speakers followed pictorial guides to spontaneously instruct addressees to move objects. Critical instructions (e.g., "Put the dog in the basket on the star") were syntactically ambiguous, and the referential situation supported either one or both interpretations. Speakers reliably produced disambiguating cues to syntactic ambiguity whether the situation was ambiguous or not. However, Experiment 2 suggested that most speakers were not yet aware of whether the situation was ambiguous by the time they began to speak, and so adapting to addressees' particular needs may not have been feasible in Experiment 1. Experiment 3 examined individual speakers' awareness of situational ambiguity and the extent to which they signaled structure, with or without addressees present. Speakers tended to produce prosodic cues to syntactic boundaries regardless of their addressees' needs in particular situations. Such cues did prove helpful to addressees, who correctly interpreted speakers' instructions virtually all the time. In fact, even when speakers produced syntactically ambiguous utterances in situations that supported both interpretations, eye-tracking data showed that 40% of the time addressees did not even consider the non-intended objects. We discuss the standards needed for a convincing test of the audience design hypothesis.


Subject(s)
Cues; Verbal Behavior; Adult; Analysis of Variance; Female; Fixation, Ocular; Humans; Male; New York; Psycholinguistics; Reaction Time; Speech