Results 1 - 13 of 13
1.
Cognition ; 215: 104809, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34274558

ABSTRACT

Common ground can be mutually established between conversational partners in several ways. We examined whether the modality (visual or linguistic) with which speakers share information with their conversational partners results in memory traces that affect subsequent references addressed to a particular partner. In 32 triads, directors arranged a set of tangram cards with one matcher and then with another, but in different modalities, sharing some cards only linguistically (by describing cards the matcher couldn't see), some only visually (by silently showing them), some both linguistically and visually, and others not at all. Then directors arranged the cards again in separate rounds with each matcher. The modality with which they previously established common ground about a particular card with a particular matcher (e.g., linguistically with one partner and visually with the other) affected subsequent referring: References to cards previously shared only visually included more idea units, words, and reconceptualizations than those shared only linguistically, which in turn included more idea units, words, and reconceptualizations than those shared both linguistically and visually. Moreover, speakers were able to tailor references to the same card appropriately to the distinct modality shared with each addressee. Such gradient, partner-specific adaptation during re-referring suggests that memory encodes rich-enough representations of multimodal shared experiences to effectively cue relevant constraints about the perceptual conditions under which speakers and addressees establish common ground.


Subject(s)
Communication , Linguistics , Humans
2.
Soc Cogn Affect Neurosci ; 12(6): 871-880, 2017 Jun 01.
Article in English | MEDLINE | ID: mdl-28338791

ABSTRACT

In dialogue, language processing is adapted to the conversational partner. We hypothesize that the brain facilitates partner-adapted language processing through preparatory neural configurations (task sets) that are tailored to the conversational partner. In this experiment, we measured neural activity with functional magnetic resonance imaging (fMRI) while healthy participants in the scanner (a) engaged in a verbal communication task with a conversational partner outside of the scanner, or (b) spoke outside of a conversational context (to test the microphone). Using multivariate searchlight analysis, we identified cortical regions that represent whether speakers plan to speak to a conversational partner or without having a partner. Most notably, the ventromedial prefrontal cortex, a region that has been associated with processing social-affective information and perspective taking, as well as the bilateral ventral prefrontal cortex, regions that have been associated with prospective task representation, were involved in encoding the speaking condition. Our results suggest that speakers prepare, in advance of speaking, for the social context in which they will speak.


Subject(s)
Brain/physiology , Communication , Speech/physiology , Adult , Affect/physiology , Brain Mapping , Cerebral Cortex/anatomy & histology , Cerebral Cortex/physiology , Female , Humans , Image Processing, Computer-Assisted , Language , Magnetic Resonance Imaging , Male , Prefrontal Cortex/anatomy & histology , Prefrontal Cortex/physiology , Prospective Studies , Social Behavior , Theory of Mind , Young Adult
3.
Front Psychol ; 7: 1111, 2016.
Article in English | MEDLINE | ID: mdl-27512379

ABSTRACT

In this paper we consider the potential role of metarepresentation-the representation of another representation, or as commonly considered within cognitive science, the mental representation of another individual's knowledge and beliefs-in mediating definite reference and common ground in conversation. Using dialogues from a referential communication study in which speakers conversed in succession with two different addressees, we highlight ways in which interlocutors work together to successfully refer to objects, and achieve shared conceptualizations. We briefly review accounts of how such shared conceptualizations could be represented in memory, from simple associations between label and referent, to "triple co-presence" representations that track interlocutors in an episode of referring, to more elaborate metarepresentations that invoke theory of mind, mutual knowledge, or a model of a conversational partner. We consider how some forms of metarepresentation, once created and activated, could account for definite reference in conversation by appealing to ordinary processes in memory. We conclude that any representations that capture information about others' perspectives are likely to be relatively simple and subject to the same kinds of constraints on attention and memory that influence other kinds of cognitive representations.

4.
Psychon Bull Rev ; 20(1): 54-72, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23188738

ABSTRACT

Experiments that aim to model language processing in spoken dialogue contexts often use confederates as speakers or addressees. However, the decision of whether to use a confederate, and of precisely how to deploy one, is shaped by researchers' explicit theories and implicit assumptions about the nature of dialogue. When can a confederate fulfill the role of conversational partner without changing the nature of the dialogue itself? We survey the benefits and risks of using confederates in studies of language in dialogue contexts, identifying four concerns that appear to guide how confederates are deployed. We then discuss several studies that have addressed these concerns differently-and, in some cases, have found different results. We conclude with recommendations for how to weigh the benefits and risks of using experimental confederates in dialogue studies: Confederates are best used when an experimental hypothesis concerns responses to unusual behaviors or low-frequency linguistic forms and when the experimental task calls for the confederate partner to take the initiative as speaker. Confederates can be especially risky in the addressee role, especially if their nonverbal behavior is uncontrolled and if they know more than is warranted by the experimental task.


Subject(s)
Psycholinguistics/methods , Research Personnel , Speech Perception , Speech , Humans , Language
5.
Psychon Bull Rev ; 17(5): 718-24, 2010 Oct.
Article in English | MEDLINE | ID: mdl-21037172

ABSTRACT

To better understand the problem of referencing a location in space under time pressure, we had two remotely located partners (A, B) attempt to locate and reach consensus on a sniper target, which appeared randomly in the windows of buildings in a pseudorealistic city scene. The partners were able to communicate using speech alone (shared voice), gaze cursors alone (shared gaze), or both. In the shared-gaze conditions, a gaze cursor representing Partner A's eye position was superimposed over Partner B's search display and vice versa. Spatial referencing times (for both partners to find and agree on targets) were faster with shared gaze than with speech, with this benefit due primarily to faster consensus (less time needed for one partner to locate the target after it was located by the other partner). These results suggest that sharing gaze can be more efficient than speaking when people collaborate on tasks requiring the rapid communication of spatial information. Supplemental materials for this article may be downloaded from http://pbr.psychonomic-journals.org/content/supplemental.


Subject(s)
Fixation, Ocular , Space Perception , Fixation, Ocular/physiology , Humans , Interpersonal Relations , Photic Stimulation , Psychomotor Performance/physiology , Reaction Time/physiology , Space Perception/physiology , Speech
6.
Top Cogn Sci ; 1(2): 274-91, 2009 Apr.
Article in English | MEDLINE | ID: mdl-25164933

ABSTRACT

No one denies that people adapt what they say and how they interpret what is said to them, depending on their interactive partners. What is controversial is when and how they do so. Several psycholinguistics research programs have found what appear to be failures to adapt to partners in the early moments of processing and have used this evidence to argue for modularity in the language processing architecture, claiming that the system cannot take into account a partner's distinct needs or knowledge early in processing. We review the evidence for both early and delayed partner-specific adaptations, and we identify some challenges and difficulties with interpreting this evidence. We then discuss new analyses from a previously published referential communication experiment (Metzing & Brennan, 2003) demonstrating that partner-specific effects need not occur late in processing. In contrast to Pickering and Garrod (2004) and Keysar, Barr, and Horton (1998b), we conclude that there is no good evidence that early processing has to be "egocentric," "dumb," or encapsulated from social knowledge or common ground, but that under some circumstances, such as when one partner has made an attribution about another's knowledge or needs, processing can be nimble enough to adapt quite early to a perspective different from one's own.


Subject(s)
Communication , Interpersonal Relations , Language , Humans , Knowledge , Reaction Time
7.
Psychol Sci ; 19(4): 332-8, 2008 Apr.
Article in English | MEDLINE | ID: mdl-18399885

ABSTRACT

Perceptual theories must explain how perceivers extract meaningful information from a continuously variable physical signal. In the case of speech, the puzzle is that little reliable acoustic invariance seems to exist. We tested the hypothesis that speech-perception processes recover invariants not about the signal, but rather about the source that produced the signal. Findings from two manipulations suggest that the system learns those properties of speech that result from idiosyncratic characteristics of the speaker; the same properties are not learned when they can be attributed to incidental factors. We also found evidence for how the system determines what is characteristic: In the absence of other information about the speaker, the system relies on episodic order, representing those properties present during early experience as characteristic of the speaker. This "first-impressions" bias can be overridden, however, when variation is an incidental consequence of a temporary state (a pen in the speaker's mouth), rather than characteristic of the speaker.


Subject(s)
Adaptation, Psychological , Attention , Attitude , Social Perception , Speech Perception , Adolescent , Adult , Humans , Phonetics , Reaction Time
8.
Memory ; 16(3): 245-61, 2008 Apr.
Article in English | MEDLINE | ID: mdl-18324550

ABSTRACT

When people remember shared experiences, the amount they recall as a collaborating group is less than the amount obtained by pooling their individual memories. We tested the hypothesis that reduced group productivity can be attributed, at least in part, to content filtering, where information is omitted from group products either because individuals fail to retrieve it or choose to withhold it (self-filtering), or because groups reject or fail to incorporate it (group-filtering). Three-person groups viewed a movie clip together and recalled it, first individually, then in face-to-face or electronic groups, and finally individually again. Although both kinds of groups recalled equal amounts, group-filtering occurred more often face-to-face, while self-filtering occurred more often electronically. This suggests that reduced group productivity is due not only to intrapersonal factors stemming from cognitive interference, but also to interpersonal costs of coordinating the group product. Finally, face-to-face group interaction facilitated subsequent individual recall.


Subject(s)
Concept Formation/physiology , Mental Recall/physiology , Paired-Associate Learning/physiology , Pattern Recognition, Visual/physiology , Adolescent , Adult , Female , Group Processes , Humans , Interpersonal Relations , Male , Psychological Theory , Television
9.
Cognition ; 106(3): 1465-77, 2008 Mar.
Article in English | MEDLINE | ID: mdl-17617394

ABSTRACT

Collaboration has its benefits, but coordination has its costs. We explored the potential for remotely located pairs of people to collaborate during visual search, using shared gaze and speech. Pairs of searchers wearing eyetrackers jointly performed an O-in-Qs search task alone, or in one of three collaboration conditions: shared gaze (with one searcher seeing a gaze-cursor indicating where the other was looking, and vice versa), shared-voice (by speaking to each other), and shared-gaze-plus-voice (by using both gaze-cursors and speech). Although collaborating pairs performed better than solitary searchers, search in the shared gaze condition was best of all: twice as fast and efficient as solitary search. People can successfully communicate and coordinate their searching labor using shared gaze alone. Strikingly, shared gaze search was even faster than shared-gaze-plus-voice search; speaking incurred substantial coordination costs. We conclude that shared gaze affords a highly efficient method of coordinating parallel activity in a time-critical spatial task.


Subject(s)
Cognition , Cooperative Behavior , Fixation, Ocular , Visual Perception , Attention , Communication , Humans
10.
Cognition ; 107(1): 54-81, 2008 Apr.
Article in English | MEDLINE | ID: mdl-17803986

ABSTRACT

Listeners are faced with enormous variation in pronunciation, yet they rarely have difficulty understanding speech. Although much research has been devoted to figuring out how listeners deal with variability, virtually none (outside of sociolinguistics) has focused on the source of the variation itself. The current experiments explore whether different kinds of variation lead to different cognitive and behavioral adjustments. Specifically, we compare adjustments to the same acoustic consequence when it is due to context-independent variation (resulting from articulatory properties unique to a speaker) versus context-conditioned variation (resulting from common articulatory properties of speakers who share a dialect). The contrasting results for these two cases show that the source of a particular acoustic-phonetic variation affects how that variation is handled by the perceptual system. We also show that changes in perceptual representations do not necessarily lead to changes in production.


Subject(s)
Culture , Language , Phonetics , Speech Perception , Adult , Female , Humans , Learning , Male , Speech Production Measurement
11.
Cogn Psychol ; 50(2): 194-231, 2005 Mar.
Article in English | MEDLINE | ID: mdl-15680144

ABSTRACT

Evidence has been mixed on whether speakers spontaneously and reliably produce prosodic cues that resolve syntactic ambiguities. And when speakers do produce such cues, it is unclear whether they do so "for" their addressees (the audience design hypothesis) or "for" themselves, as a by-product of planning and articulating utterances. Three experiments addressed these issues. In Experiments 1 and 3, speakers followed pictorial guides to spontaneously instruct addressees to move objects. Critical instructions (e.g., "Put the dog in the basket on the star") were syntactically ambiguous, and the referential situation supported either one or both interpretations. Speakers reliably produced disambiguating cues to syntactic ambiguity whether the situation was ambiguous or not. However, Experiment 2 suggested that most speakers were not yet aware of whether the situation was ambiguous by the time they began to speak, and so adapting to addressees' particular needs may not have been feasible in Experiment 1. Experiment 3 examined individual speakers' awareness of situational ambiguity and the extent to which they signaled structure, with or without addressees present. Speakers tended to produce prosodic cues to syntactic boundaries regardless of their addressees' needs in particular situations. Such cues did prove helpful to addressees, who correctly interpreted speakers' instructions virtually all the time. In fact, even when speakers produced syntactically ambiguous utterances in situations that supported both interpretations, eye-tracking data showed that 40% of the time addressees did not even consider the non-intended objects. We discuss the standards needed for a convincing test of the audience design hypothesis.


Subject(s)
Cues , Verbal Behavior , Adult , Analysis of Variance , Female , Fixation, Ocular , Humans , Male , New York , Psycholinguistics , Reaction Time , Speech
12.
Behav Brain Sci ; 27(2): 192-193, 2004 Apr.
Article in English | MEDLINE | ID: mdl-18241469

ABSTRACT

Pickering & Garrod's (P&G's) call to study language processing in dialogue context is an appealing one. Their interactive alignment model is ambitious, aiming to explain the converging behavior of dialogue partners via both intra- and interpersonal priming. However, they ignore the flexible, partner-specific processing demonstrated by some recent dialogue studies. We discuss implications of these data.

13.
Psychon Bull Rev ; 9(3): 550-7, 2002 Sep.
Article in English | MEDLINE | ID: mdl-12412896

ABSTRACT

A current debate in psycholinguistics concerns how speakers take addressees' knowledge or needs into account during the packaging of utterances. In retelling stories, speakers are more likely to mention atypical instruments than easily inferrable, typical instruments; in a seminal study, Brown and Dell (1987) suggested that this is not an adjustment to addressees but is simply easiest for speakers. They concluded that manipulating addressees' knowledge did not affect speakers' mention of instruments. However, their addressees were confederates who heard the same stories repeatedly. We had speakers retell stories to naive addressees who either saw or did not see a picture illustrating the main action and instrument. When addressees lacked pictures, speakers were more likely to mention atypical instruments, to mention them early (within the same clause as the action verb), and to mark atypical instruments as indefinite. This suggests that with visual copresence, speakers can take addressees' knowledge into account in early syntactic choices.


Subject(s)
Choice Behavior , Linguistics , Speech , Humans , Mental Recall , Random Allocation