Results 1 - 7 of 7
1.
PLoS One ; 14(10): e0224623, 2019.
Article in English | MEDLINE | ID: mdl-31671134

ABSTRACT

The goal of the present study was to examine whether the effect of visual context on the interpretation of facial expression from an actor's face could be produced using isolated photographic stills, instead of the typical dynamic film sequences used to demonstrate the effect. Two-photograph sequences consisting of a context photograph varying in pleasantness and a photograph of an actor's neutral face were presented. Participants performed a liking rating task for the context photograph (to ensure attention to the stimulus) and they performed three tasks for the face stimulus: labeling the emotion portrayed by the actor, rating valence, and rating arousal. The labeling results confirmed the existence of a visual context effect, with more faces labeled "happy" after viewing a pleasant context and more faces labeled "sad" or "fearful" after viewing an unpleasant context. This effect was demonstrated when no explicit connection between the context stimulus and face stimulus was invoked, with the contextual information exerting its effect on labeling after being held in memory for at least 10 seconds. The results for ratings of valence and arousal were mixed. Overall, the results suggest that isolated photograph sequences produce a Kuleshov-type context effect on attributions of emotion to actors' faces, replicating previous research conducted with dynamic film sequences.


Subject(s)
Facial Expression , Facial Recognition/physiology , Photic Stimulation/methods , Adult , Attention , Emotions , Fear , Female , Happiness , Humans , Male , Motion Pictures , Photography , Social Perception , Visual Perception/physiology , Young Adult
2.
Perception ; 47(4): 359-378, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29310527

ABSTRACT

The effect of art expertise on viewers' processing of titled visual artwork was examined. The study extended the research of Leder, Carbon, and Ripsas by explicitly selecting art novices and art experts. The study was designed to test assumptions about how expertise modulates context in the form of titles for artworks. Viewers rated a set of abstract paintings for liking and understanding. The type of title accompanying the artwork (descriptive or elaborative) was manipulated. Viewers were allotted as much time as they wished to view each artwork. For judgments of liking, novices and experts both liked artworks with elaborative titles better, with overall rated liking similar for both groups. For judgments of understanding, type of title had no effect on ratings for either novices or experts. However, experts' rated understanding was higher than novices', with experts making their decisions faster than novices. An analysis of viewers' art expertise revealed that expertise was correlated with understanding, but not liking. Overall, the results suggest that both novices and experts integrate title with visual image in a similar manner. However, expertise differentially affected liking and understanding. The results differ from those obtained by Leder et al. The differences between studies are discussed.


Subject(s)
Comprehension , Emotions , Judgment , Paintings , Professional Competence , Adolescent , Adult , Female , Humans , Male , Middle Aged , Visual Perception , Young Adult
3.
Exp Psychol ; 54(2): 148-60, 2007.
Article in English | MEDLINE | ID: mdl-17472098

ABSTRACT

Corneille, Huart, Becquart, and Brédart (2004) found that people remember ambiguous race faces as closer to a race prototype than they actually are. In three studies, we examined whether this memory bias generalizes to voice memory. In Studies 1 and 2, participants listened to synthesized male and female speech samples (high, moderate, or low pitch) and were asked to identify a voice target when it was paired against distracters higher or lower in pitch. The results showed that pitch distortions occurred, with the pattern consistent with assimilation toward the low and high ends of the pitch continuum. Study 3 replicated this result with a wider voice pitch range. The results parallel those of Corneille et al. (2004). The implications of this work are discussed.


Subject(s)
Memory , Pitch Perception , Speech Perception , Adult , Female , Humans , Male , Voice Quality
4.
Psychol Rep ; 94(3 Pt 2): 1283-92, 2004 Jun.
Article in English | MEDLINE | ID: mdl-15362406

ABSTRACT

To examine sex differences in persuasiveness, we conducted a meta-analysis of seven studies from our laboratory on reactions to human versus computer-synthesized speech. We tested three hypotheses: (1) people would be more persuaded by human speech than by computer-synthesized speech, (2) women would be slightly more persuaded than men, and (3) the sex difference would be more pronounced for human speech than for synthetic speech. While there was support for the first two hypotheses, there was none for the third. Also, no consistent support was found for a moderating effect of mode of presentation, audio versus video.


Subject(s)
Communication Aids for Disabled , Computers , Gender Identity , Persuasive Communication , Speech , Female , Humans , Male , Speech Acoustics , Speech Perception , Stereotyping
5.
Nicotine Tob Res ; 5(5): 681-94, 2003 Oct.
Article in English | MEDLINE | ID: mdl-14577985

ABSTRACT

A pool of single-word adjectives representing smoking outcome expectancies was derived and tested in two studies. In Study One, smoking-related words were generated and then clustered together to form 39 categories representing smoking expectancy nodes. Analysis of the number of times words in each category were generated indicated that expectancies varied as a function of smoking status (measured at two levels: ever smoked daily vs. never smoked daily), smoking history (current vs. past smoker), and dependence (nondependent vs. dependent). In Study Two, participants rated the words in terms of expectations of smoking outcomes. A principal components analysis of the ratings indicated that three components accounted for 74.10% of the variance in participants' ratings: Component 1 (adverse effects), 30.92%; Component 2 (positive image), 28.08%; and Component 3 (positive mood), 15.09%. Further analyses revealed that ratings of words comprising the three components differed as a function of smoking status (measured at three levels: never smoked daily, daily nondependent smoker, daily dependent smoker), with dependent smokers rating the outcomes associated with all three components as occurring more frequently when they smoked compared with nondependent smokers or those who never smoked daily. The results suggest that the single-word adjectives are appropriate for use in research investigating smoking outcome expectancies.


Subject(s)
Semantics , Smoking/psychology , Tobacco Use Disorder/psychology , Adult , Affect , Endpoint Determination , Female , Humans , Male , Perception , Principal Component Analysis , Smoking/adverse effects , Treatment Outcome
6.
J Appl Psychol ; 87(2): 411-7, 2002 Apr.
Article in English | MEDLINE | ID: mdl-12002967

ABSTRACT

Are perceptions of computer-synthesized speech altered by the belief that the person using this technology is disabled? In a 2 x 2 factorial design, participants completed an attitude pretest and were randomly assigned to watch an actor deliver a persuasive appeal under 1 of the following 4 conditions: disabled or nondisabled using normal speech and disabled or nondisabled using computer-synthesized speech. Participants then completed a posttest survey and a series of questionnaires assessing perceptions of voice, speaker, and message. Natural speech was perceived more favorably and was more persuasive than computer-synthesized speech. When the speaker was perceived to be speech-disabled, however, this difference diminished. This finding suggests that negatively viewed assistive technologies will be perceived more favorably when used by people with disabilities.


Subject(s)
Communication Aids for Disabled , Disabled Persons/psychology , Persuasive Communication , Adult , Attitude to Computers , Female , Humans , Male , Speech Disorders/psychology
7.
Lang Speech ; 45(Pt 3): 255-83, 2002 Sep.
Article in English | MEDLINE | ID: mdl-12693687

ABSTRACT

The effects of variation from stimulus to stimulus in emotional tone of voice on speech perception were examined through a series of perceptual experiments. Stimuli were recorded from human speakers who produced utterances in tones of voice designed to convey affective information. Then, stimuli varying in talker voice and emotional tone were presented to listeners for perceptual matching and classification. The results showed that both intertalker variation in talker voice and intratalker variation in emotional tone had a negative effect on perceptual performance. The results suggest that sources of variation in the speech signal that affect the spectral/temporal properties of speech (i.e., talker voice, speech rate, emotional tone) may be treated differently than sources of variation that do not affect these properties (i.e., vocal amplitude).


Subject(s)
Affect , Speech Perception , Voice Quality , Communication , Female , Humans , Male , Speech Acoustics , Verbal Behavior