1.
J Exp Psychol Learn Mem Cogn ; 23(1): 164-80, 1997 Jan.
Article in English | MEDLINE | ID: mdl-9028026

ABSTRACT

It has been proposed that auditory stimuli are more temporally discriminable in memory than visual stimuli. Studies using the continual-distractor paradigm have provided both supporting and contradictory evidence for this hypothesis. The conflicting reports differed, however, in the modality of the interleaved distractor tasks. The present experiments manipulated both word and distractor-task modality. Results showed that aurally presented word lists were more sensitive to temporal schedules of presentation when the distractor task was auditory than when it was visual. The same effect was not consistently found for visually presented word lists. Such an interaction may help explain the previously reported disparate findings and suggests that the auditory modality, in the presence of silent distraction, can reduce participants' use of temporal-distinctiveness information at retrieval.


Subject(s)
Discrimination, Psychological/physiology; Memory/physiology; Mental Recall/physiology; Acoustic Stimulation; Adult; Female; Humans; Male; Photic Stimulation
2.
Memory ; 4(3): 225-42, 1996 May.
Article in English | MEDLINE | ID: mdl-8735609

ABSTRACT

The serial position function reflects better memory for the first and last few items in a list than for the middle items. Four experiments examined the effects of temporal spacing on the serial position function for five-item lists that took between 0.5 seconds and 1.1 seconds to present. As with recall of far longer-lasting lists, recency and other robust serial position effects were observed with both free and serial recall. We demonstrate that temporal schedules of presentation control recall probability in predictable ways, and conclude that very fleeting lists obey principles similar to those governing longer-lasting lists. We compare both sets of findings with predictions from the dimensional distinctiveness framework.


Subject(s)
Memory, Short-Term; Mental Recall; Neuropsychological Tests; Adult; Analysis of Variance; Humans; Time Factors; Word Association Tests
3.
Q J Exp Psychol A ; 46(2): 193-223, 1993 May.
Article in English | MEDLINE | ID: mdl-8316636

ABSTRACT

Six experiments investigated the locus of the recency effect in immediate serial recall. Previous research has shown much larger recency for speech than for non-speech sounds. We compared two hypotheses: (1) speech sounds are processed differently from non-speech sounds (e.g. Liberman & Mattingly, 1985); and (2) speech sounds are more familiar and more discriminable than non-speech sounds (e.g. Nairne, 1988, 1990). In Experiments 1 and 2 we determined that merely varying the label given to the sets of stimuli (speech or non-speech) had no effect on recency or overall recall. We varied the familiarity of the stimuli by using highly trained musicians as subjects (Experiments 3 and 4) and by instructing subjects to attend to an unpracticed dimension of speech (Experiment 6). Discriminability was manipulated by varying the acoustic complexity of the stimuli (Experiments 3, 5, and 6) or the pitch distance between the stimuli (Experiment 4). Although manipulations of discriminability and familiarity greatly affected the overall level of recall, in no case did discriminability or familiarity alone significantly enhance recency. What seems to make a difference in the occurrence of convincing recency is whether the items being remembered are undegraded speech sounds.


Subject(s)
Attention; Memory, Short-Term; Serial Learning; Speech Perception; Adult; Auditory Perception; Female; Humans; Male; Mental Recall; Phonetics
4.
Mem Cognit ; 21(2): 142-5, 1993 Mar.
Article in English | MEDLINE | ID: mdl-8469121

ABSTRACT

Two empirical challenges to the traditional "modal model" of short-term memory are that neither the Brown-Peterson distractor technique nor the recency effect in recall is well accommodated by that position. Additionally, the status of memory stores as such has declined in response to proceduralist thinking. At the same time, the concept of coding, on which the modal model is silent, is increasingly central to memory theory. People need to remember things in the short term, but a dedicated store need not be the agency that accomplishes this.


Subject(s)
Attention; Memory, Short-Term; Humans; Retention, Psychology
5.
J Exp Psychol Hum Percept Perform ; 18(3): 728-38, 1992 Aug.
Article in English | MEDLINE | ID: mdl-1500872

ABSTRACT

The musical quality of timbre is based on both spectral and dynamic acoustic cues. Four 2-part experiments examined whether these properties are represented in the mental image of a musical timbre. Experiment 1 established that imagery occurs for timbre variations within a single musical instrument, using plucked and bowed tones from a cello. Experiments 2 and 3 used synthetic stimuli that varied in either spectral or dynamic properties only, to investigate imagery with strict acoustic control over the stimuli. Experiment 4 explored whether the dimension of loudness is stored in an auditory image. Spectral properties appear to play a much larger role than dynamic properties in imagery for musical timbre.


Subject(s)
Imagination; Loudness Perception; Music; Pitch Discrimination; Sound Spectrography; Adult; Attention; Cues; Humans; Reaction Time
6.
J Acoust Soc Am ; 88(5): 2080-90, 1990 Nov.
Article in English | MEDLINE | ID: mdl-2269724

ABSTRACT

In same-different discrimination tasks employing isolated vowel sounds, subjects often give significantly more "different" responses to one order of two stimuli than to the other order. Cowan and Morse [J. Acoust. Soc. Am. 79, 500-507 (1986)] proposed a neutralization hypothesis to account for such effects: The first vowel in a pair is assumed to change its quality in memory in the direction of the neutral vowel, schwa. Three experiments were conducted using a variety of vowels; some initial support for the hypothesis was obtained with a large stimulus set, but conflicting evidence was found with smaller stimulus sets. Rather than becoming more similar to schwa, the first vowel in a pair seems to drift toward the interior of the stimulus range employed in a given test. Several possible explanations for this tendency are discussed, and its relation to presentation-order effects obtained in other psychophysical paradigms is noted.


Subject(s)
Attention; Mental Recall; Phonetics; Speech Perception; Adult; Female; Humans; Male; Paired-Associate Learning
7.
Mem Cognit ; 18(5): 469-76, 1990 Sep.
Article in English | MEDLINE | ID: mdl-2233260

ABSTRACT

Three experiments were designed to investigate two explanations for the integration effect in memory for songs (Serafine, Crowder, & Repp, 1984; Serafine, Davidson, Crowder, & Repp, 1986). The integration effect is the finding that recognition of the melody (or text) of a song is better in the presence of the text (or melody) with which it had been heard originally than in the presence of a different text (or melody). One explanation for this finding is the physical interaction hypothesis, which holds that one component of a song imposes subtle but memorable physical changes on the other component, making the latter different from what it would be with a different companion. In Experiments 1 and 2, we investigated the influence that words could exert on the subtle musical character of a melody. A second explanation for the integration effect is the association-by-contiguity hypothesis, which holds that any two events experienced in close temporal proximity may become connected in memory such that each acts as a recall cue for the other. In Experiment 3, we investigated the degree to which simultaneous presentations of spoken text with a hummed melody would induce an association between the two components. The results gave encouragement to both explanations and are discussed in terms of the distinction between encoding specificity and independent associative bonding.


Subject(s)
Mental Recall; Music; Paired-Associate Learning; Pitch Discrimination; Verbal Learning; Adult; Attention; Humans; Phonetics; Time Perception
8.
J Exp Psychol Learn Mem Cogn ; 16(2): 316-27, 1990 Mar.
Article in English | MEDLINE | ID: mdl-2137870

ABSTRACT

Recency, in remembering a series of events, reflects the simple fact that memory is vivid for what has just happened but deteriorates over time. Theories based on distinctiveness, an alternative to the multistore model, assert that the last few events in a series are well remembered because their times of occurrence are more highly distinctive than those of earlier items. Three experiments examined the role of temporal and ordinal factors in auditorily and visually presented lists that were temporally organized by distractor materials interpolated between memory items. With uniform distractor periods, the results were consistent with Glenberg's (1987) temporal distinctiveness theory. When the procedure was altered so that distractor periods became progressively shorter from the beginning to the end of the list, the results were consistent with the theory only for the visual modality; the auditory modality produced a different pattern of results, unpredicted by the theory, thus falsifying the claim that the auditory modality derives more benefit from temporal information than the visual modality. We distinguish serial order information from specifically temporal information, arguing that the former may be enhanced by auditory presentation but that the two modalities are more nearly equal with respect to the latter.
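
As a rough illustration of what temporal-distinctiveness theories of this kind predict, such accounts are often summarized by a ratio rule: an item's retrievability grows with the temporal spacing around it relative to the time elapsed before recall. The notation below is a generic sketch, not taken from Glenberg (1987) or from the abstract above:

\[ d_i \propto \frac{\mathrm{IPI}_i}{\mathrm{RI}_i} \]

where \(\mathrm{IPI}_i\) is the interval separating item \(i\) from its neighbours and \(\mathrm{RI}_i\) is the interval between item \(i\)'s presentation and the recall test; larger ratios predict better recall, which is why the final items in a list (small \(\mathrm{RI}_i\)) show recency.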


Subject(s)
Auditory Perception; Memory; Visual Perception; Humans; Mental Recall; Probability; Psychological Theory; Time Factors
9.
Mem Cognit ; 17(4): 384-97, 1989 Jul.
Article in English | MEDLINE | ID: mdl-2761399

ABSTRACT

Three experiments were designed to decide whether temporal information is coded more accurately for intervals defined by auditory events or for those defined by visual events. In the first experiment, the irregular-list technique was used, in which a short list of items was presented, the items all separated by different interstimulus intervals. Following presentation, the subject was given three items from the list, in their correct serial order, and was asked to judge the relative interstimulus intervals. Performance was indistinguishable whether the items were presented auditorily or visually. In the second experiment, two unfilled intervals were defined by three nonverbal signals in either the auditory or the visual modality. After delays of 0, 9, or 18 sec (the latter two filled with distractor activity), the subjects were directed to make a verbal estimate of the length of one of the two intervals, which ranged from 1 to 4 sec and from 10 to 13 sec. Again, performance was not dependent on the modality of the time markers. The results of Experiment 3, which was procedurally similar to Experiment 2 but with filled rather than empty intervals, showed significant modality differences in one measure only. Within the range of intervals employed in the present study, our results provide, at best, only modest support for theories that predict more accurate temporal coding in memory for auditory, rather than visual, stimulus presentation.


Subject(s)
Attention; Auditory Perception; Form Perception; Memory; Mental Recall; Pattern Recognition, Visual; Visual Perception; Adult; Arousal; Humans; Speech Perception
12.
Mem Cognit ; 14(4): 355-60, 1986 Jul.
Article in English | MEDLINE | ID: mdl-3762390
13.
J Exp Psychol Learn Mem Cogn ; 12(2): 268-78, 1986 Apr.
Article in English | MEDLINE | ID: mdl-2939183

ABSTRACT

Subjects in five experiments read nine-digit memory lists from a cathode ray tube for immediate recall. Reading aloud always produced a localized and reliable advantage for the last item, compared to reading silently. Two experiments on whispered and mouthed lists, with or without simultaneous broadband noise, falsified expectations derived from the theory of precategorical acoustic storage. Three additional experiments showed no enhancement of recency in the silent conditions when the digits were drawn or spelled gradually on the screen, a result that is inconsistent with the changing-state hypothesis. The classic auditory-visual modality effect is large and reliable, but still poorly understood.


Subject(s)
Memory, Short-Term; Reading; Speech Perception; Visual Perception; Humans; Models, Psychological; Noise; Perceptual Masking; Psychophysics; Speech; Time Factors
14.
Cognition ; 16(3): 285-303, 1984 Apr.
Article in English | MEDLINE | ID: mdl-6541107
15.
Percept Psychophys ; 35(4): 372-8, 1984 Apr.
Article in English | MEDLINE | ID: mdl-6739272
16.
Philos Trans R Soc Lond B Biol Sci ; 302(1110): 251-65, 1983 Aug 11.
Article in English | MEDLINE | ID: mdl-6137845

ABSTRACT

Recent evidence from experiments on immediate memory indicates unambiguously that silent speech perception can produce typically 'auditory' effects while there is either active or passive mouthing of the relevant articulatory gestures. This result falsifies previous theories of auditory sensory memory (pre-categorical acoustic store) that insisted on external auditory stimulation as indispensable for access to the system. A resolution is proposed that leaves the properties of pre-categorical acoustic store much as they were assumed to be before, but adds the possibility that visual information can affect the selection of auditory features in a pre-categorical stage of speech perception. In common terms, a speaker's facial gestures (or one's own) can influence auditory experience independently of determining what was said. Some results in word perception that encourage this view are discussed.


Subject(s)
Gestures; Kinesics; Memory; Speech Perception; Feedback; Humans; Inhibition, Psychological; Lipreading; Models, Psychological; Perceptual Masking
18.
Acta Psychol (Amst) ; 50(3): 291-323, 1982 Jul.
Article in English | MEDLINE | ID: mdl-7124433