1.
Risk Anal ; 2024 May 14.
Article in English | MEDLINE | ID: mdl-38742599

ABSTRACT

People typically use verbal probability phrases when discussing risks ("It is likely that this treatment will work"), in both written and spoken communication. When speakers are uncertain about risks, they can nonverbally signal this uncertainty by using prosodic cues, such as a rising, question-like intonation or a filled pause ("uh"). We experimentally studied the effects of these two prosodic cues on listeners' perceived speaker certainty and numerical interpretation of spoken verbal probability phrases. Participants (N = 115) listened to various verbal probability phrases that were uttered with a rising or falling global intonation, and with or without a filled pause before the probability phrase. For each phrase, they gave a point estimate of their numerical interpretation in percentages and indicated how certain they thought the speaker was about the correctness of the probability phrase. Speakers were perceived as least certain when the verbal probability phrases were spoken with both prosodic uncertainty cues. Interpretation of verbal probability phrases varied widely across participants, especially when the speaker produced a rising intonation. Overall, high probability phrases (e.g., "very likely") were estimated as lower (and low probability phrases, such as "unlikely," as higher) when they were uttered with a rising intonation. The effects of filled pauses were less pronounced, as were the uncertainty effects for medium probability phrases (e.g., "probable"). These results stress the importance of nonverbal communication when verbally communicating risks and probabilities, for example, in doctor-patient communication.
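A minimal sketch of how such condition-level summaries could be computed, assuming a tidy response table; the column names and values below are illustrative, not the study's data:

```python
import pandas as pd

# Toy responses: two listeners per cue combination for one phrase.
df = pd.DataFrame({
    "intonation": ["rising", "rising", "rising", "rising",
                   "falling", "falling", "falling", "falling"],
    "pause":      ["filled", "filled", "none", "none",
                   "filled", "filled", "none", "none"],
    "estimate":   [55, 70, 65, 72, 78, 82, 84, 86],  # interpretation in %
})

# Mean and spread per cue combination; a wider spread under rising intonation
# would mirror the interpretation variability reported in the abstract.
summary = df.groupby(["intonation", "pause"])["estimate"].agg(["mean", "std"])
print(summary)
```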

2.
Lang Speech ; : 238309231217689, 2023 Dec 29.
Article in English | MEDLINE | ID: mdl-38156473

ABSTRACT

The current study investigates the average effect: the tendency for humans to prefer an averaged instance of a category (a face, bird, wristwatch, car, and so on) over individual instances. The effect holds across cultures, despite varying conceptualizations of attractiveness. While much research has been conducted on the average effect in visual perception, much less is known about the extent to which it applies to language and speech. This study investigates the attractiveness of average speech rhythms in Dutch and Mandarin Chinese, two typologically different languages. This was tested in a series of perception experiments in each language, in which native listeners chose the more attractive of a pair of acoustically manipulated rhythms. For each language, two experiments were carried out to control for the potential influence of the acoustic manipulation on the average effect. The results confirm the average effect in both languages, while not excluding individual variation in listeners' perception of attractiveness. The outcomes provide a new crosslinguistic perspective and give rise to alternative explanations of the average effect.
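To make the notion of an averaged stimulus concrete, here is a minimal sketch that averages syllable durations across tokens of the same sentence; the procedure and the numbers are assumptions for exposition, not the study's actual acoustic manipulation:

```python
import numpy as np

# Syllable durations (ms) for three tokens of the same four-syllable sentence.
tokens = np.array([
    [180, 220, 150, 310],   # token 1
    [200, 190, 170, 290],   # token 2
    [160, 250, 140, 330],   # token 3
])

# Element-wise mean gives one duration per syllable slot: an "average" rhythm.
average_rhythm = tokens.mean(axis=0)
print(average_rhythm)  # [180. 220. 153.33 310.]
```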

3.
Front Artif Intell ; 5: 835298, 2022.
Article in English | MEDLINE | ID: mdl-35434608

ABSTRACT

Different applications or contexts may require different settings for a conversational AI system: a child-oriented system, for example, needs a different interaction style than a warning system used in emergency situations. The current article focuses on the extent to which a system's usability may benefit from variation in the personality it displays. To this end, we investigate whether variation in personality is signaled by differences in specific audiovisual feedback behavior, with a specific focus on embodied conversational agents. This article reports on two rating experiments in which participants judged the personalities (i) of human beings and (ii) of embodied conversational agents, where we were specifically interested in the role of variability in audiovisual cues. Our results show that personality perceptions of both humans and artificial communication partners are indeed influenced by the type of feedback behavior used. This knowledge could inform developers of conversational AI on how to include personality in their feedback behavior generation algorithms, which could enhance the perceived personality and, in turn, generate a stronger sense of presence for the human interlocutor.

4.
Lang Speech ; 64(1): 3-23, 2021 Mar.
Article in English | MEDLINE | ID: mdl-31957542

ABSTRACT

This paper presents the results of three perceptual experiments investigating the role of the auditory and visual channels in the identification of statements and echo questions in Brazilian Portuguese. Ten Brazilian speakers (five male) were video-recorded (frontal view of the face) while they produced the sentence "Como você sabe," either as a statement (meaning "As you know.") or as an echo question (meaning "As you know?"). Stimuli covering the two intonation contours were presented in conditions with clear and degraded audio, and with congruent and incongruent information from the two channels. Results show that Brazilian listeners were able to distinguish statements from questions both prosodically and visually, with auditory cues dominating visual ones. In noisy conditions, the visual channel robustly improved the interpretation of prosodic cues, but degraded it when the visual information was incongruent with the auditory information. This study shows that auditory and visual information are integrated during speech perception, also when applied to prosodic patterns.


Subject(s)
Acoustic Stimulation/methods; Facial Expression; Phonetics; Photic Stimulation/methods; Speech Perception/physiology; Adult; Brazil; Cues; Female; Humans; Language; Male
6.
Lang Speech ; 63(4): 856-876, 2020 Dec.
Article in English | MEDLINE | ID: mdl-31888403

ABSTRACT

Speech perception is a multisensory process: what we hear can be affected by what we see. For instance, the McGurk effect occurs when auditory speech is presented in synchrony with discrepant visual information. A large number of studies have targeted the McGurk effect at the segmental level of speech (mainly consonant perception), which tends to be visually salient (lip-reading based). The present study extends the existing body of literature to the suprasegmental level by investigating a McGurk effect for the identification of tones in Mandarin Chinese. Previous studies have shown that visual information does play a role in Chinese tone perception, and that the different tones correlate with variable movements of the head and neck. We constructed various tone combinations of congruent and incongruent auditory-visual materials (10 syllables with 16 tone combinations each; see the sketch below) and presented them to native speakers of Mandarin Chinese and speakers of tone-naïve languages. In line with our previous work, we found that tone identification varies with individual tones, with tone 3 (the low-dipping tone) being the easiest to identify and tone 4 (the high-falling tone) the most difficult. Both groups of participants relied mainly on the auditory input (rather than the visual input), and this auditory reliance was even stronger for the Chinese participants. The results showed no evidence for auditory-visual integration among native participants, whereas visual information was helpful for tone-naïve participants. However, even for this group, visual information only marginally increased accuracy in the tone identification task, and this increase depended on the tone in question.


Subject(s)
Acoustic Stimulation; Language; Photic Stimulation; Speech Perception; Timbre Perception; Adult; Asian People/psychology; Female; Humans; Male; Phonetics
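For concreteness, a small sketch of the stimulus grid implied by the abstract above: crossing 4 auditory tones with 4 visual tones yields the 16 combinations per syllable (4 congruent, 12 incongruent). The syllables below are placeholders:

```python
from itertools import product

TONES = [1, 2, 3, 4]
syllables = ["ma", "yi"]  # placeholder syllables; the study used 10

# Each stimulus pairs an auditory tone with a (possibly different) visual tone.
stimuli = [
    {"syllable": s, "audio_tone": a, "visual_tone": v, "congruent": a == v}
    for s in syllables
    for a, v in product(TONES, TONES)
]
assert sum(1 for st in stimuli if st["syllable"] == "ma") == 16
```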
7.
Cogn Sci ; 43(12): e12804, 2019 12.
Article in English | MEDLINE | ID: mdl-31858627

ABSTRACT

The temporal-focus hypothesis claims that whether people conceptualize the past or the future as in front of them depends on their cultural attitudes toward time; such conceptualizations can be independent of the space-time metaphors expressed through language. In this paper, we study how Chinese people conceptualize time on the sagittal axis in order to tease apart the respective influences of language and culture on mental space-time mappings. An examination of Mandarin speakers' co-speech gestures shows that some Chinese speakers spontaneously perform past-in-front/future-at-back (besides future-in-front/past-at-back) gestures, especially when the gestures accompany past-in-front/future-at-back space-time metaphors (Exp. 1). Using a temporal performance task, the study confirms that Chinese speakers can conceptualize the future as behind and the past as in front of them, and that such space-time mappings are affected by the different expressions of Mandarin space-time metaphors (Exp. 2). Additionally, a survey on cultural attitudes toward time shows that Chinese participants tend to focus slightly more on the future than on the past (Exp. 3). Within the Chinese sample, we did not find evidence for an effect of participants' cultural temporal attitudes on space-time mappings, but a cross-cultural comparison of space-time mappings between Chinese, Moroccans, and Spaniards provides strong support for the temporal-focus hypothesis. Furthermore, the results of Exp. 2 are replicated even after controlling for factors such as cultural temporal attitudes and age (Exp. 3), which implies that linguistic sagittal temporal metaphors can indeed influence Mandarin speakers' space-time mappings. The findings not only contribute to a better understanding of Chinese people's sagittal temporal orientation, but also have implications for theories of mental space-time mappings and the relationship between language and thought.


Subject(s)
Cross-Cultural Comparison; Gestures; Language; Space Perception; Time Perception; Adult; China; Female; Humans; Male; Spain; Young Adult
8.
Phonetica ; 76(4): 263-286, 2019.
Article in English | MEDLINE | ID: mdl-30086551

ABSTRACT

Although the way tones are acquired by second or foreign language learners has attracted some scholarly attention, detailed knowledge of the factors that promote efficient learning is lacking. In this article, we look at the effect of visual cues (comparing audio-only with audio-visual presentations) and speaking style (comparing a natural speaking style with a teaching speaking style) on the perception of Mandarin tones by non-native listeners, examining both the relative strength of these two factors and their possible interactions. Listeners' accuracy and reaction times were measured in a tone identification task. Results showed that participants in the audio-visual condition distinguished tones more accurately than participants in the audio-only condition. Interestingly, this varied as a function of speaking style, but only for stimuli from specific speakers. Additionally, some tones (notably tone 3) were recognized more quickly and accurately than others.

9.
Front Psychol ; 9: 2077, 2018.
Article in English | MEDLINE | ID: mdl-30455653

ABSTRACT

We investigate whether smile mimicry and emotional contagion are evident in non-text-based computer-mediated communication (CMC). Via an ostensibly real-time audio-visual CMC platform, participants interacted with a confederate who either smiled radiantly or displayed a neutral expression throughout the interaction. Automatic analyses of the expressions displayed by participants indicated that smile mimicry was at play: a higher level of activation of the facial muscle that characterizes genuine smiles was observed among participants who interacted with the smiling confederate than among participants who interacted with the unexpressive confederate. However, there was no difference in the self-reported level of joviality between participants in the two conditions. Our findings demonstrate that people mimic smiles in audio-visual CMC; however, even though the diffusion of emotions has been documented in text-based CMC in previous studies, we found no convincing support for the phenomenon of emotional contagion in non-text-based CMC.

10.
J Acoust Soc Am ; 141(6): 4727, 2017 06.
Article in English | MEDLINE | ID: mdl-28679274

ABSTRACT

This study examines the influence of the position of prosodic heads (accented syllables) and prosodic edges (prosodic word and intonational phrase boundaries) on the timing of head movements. Gesture movements and prosodic events tend to be temporally aligned in the discourse, the most prominent part of gestures typically being aligned with prosodically prominent syllables in speech. However, little is known about the impact of the position of intonational phrase boundaries on gesture-speech alignment patterns. Twenty-four Catalan speakers produced spontaneous (experiment 1) and semi-spontaneous head gestures with a confirmatory function (experiment 2), along with phrase-final focused words in different prosodic conditions (stress-initial, stress-medial, and stress-final). Results showed (a) that the scope of head movements is the associated focused prosodic word, (b) that the left edge of the focused prosodic word determines where the interval of gesture prominence starts, and (c) that the speech-anchoring site for the gesture peak (or apex) depends both on the location of the accented syllable and the distance to the upcoming intonational phrase boundary. These results demonstrate that prosodic heads and edges have an impact on the timing of head movements, and therefore that prosodic structure plays a central role in the timing of co-speech gestures.


Subject(s)
Cues; Gestures; Head Movements; Language; Speech Acoustics; Speech Perception; Voice Quality; Adult; Female; Humans; Male; Time Factors; Young Adult
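A hedged sketch of the kind of alignment measure the abstract above describes: the lag between a head-gesture apex and its two speech anchors (accented-syllable onset and the upcoming intonational phrase boundary). The function and timestamps are illustrative assumptions, not the study's analysis code:

```python
def alignment_lags(apex_t, accent_onset_t, boundary_t):
    """Return (lag from accented-syllable onset to apex,
    distance from apex to the IP boundary), both in seconds."""
    return apex_t - accent_onset_t, boundary_t - apex_t

# Invented timestamps for one gesture-speech pair.
lag_to_accent, dist_to_boundary = alignment_lags(
    apex_t=1.42, accent_onset_t=1.35, boundary_t=1.80
)
print(lag_to_accent, dist_to_boundary)  # ~0.07 ~0.38
```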
11.
J Nonverbal Behav ; 41(1): 67-82, 2017.
Article in English | MEDLINE | ID: mdl-28203037

ABSTRACT

We examined the effects of social and cultural contexts on smiles displayed by children during gameplay. Eight-year-old Dutch and Chinese children either played a game alone or teamed up to play in pairs. Activation and intensity of the facial muscles corresponding to Action Unit (AU) 6 and AU 12 were coded according to the Facial Action Coding System. Co-occurrence of activation of AU 6 and AU 12, suggesting the presence of a Duchenne smile, was more frequent among children who teamed up than among children who played alone. Analyses of smile intensity revealed an interaction between social and cultural contexts: whereas smiles, both Duchenne and non-Duchenne, displayed by Chinese children who teamed up were more intense than those displayed by Chinese children who played alone, the effect of sociality on smile intensity was not observed for Dutch children. These findings suggest that the production of smiles by children in a competitive context is susceptible to both social and cultural factors.
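The coding rule described above lends itself to a simple illustration: a frame counts as a Duchenne smile when AU 6 (cheek raiser) and AU 12 (lip corner puller) are both active. The frame data and intensity values below are invented:

```python
# Toy per-frame AU intensities (0 = inactive); not the study's coding data.
frames = [
    {"AU6": 3, "AU12": 4},   # Duchenne smile: both AUs active
    {"AU6": 0, "AU12": 2},   # non-Duchenne smile: AU 12 only
    {"AU6": 0, "AU12": 0},   # neutral face
]

duchenne = [f for f in frames if f["AU6"] > 0 and f["AU12"] > 0]
non_duchenne = [f for f in frames if f["AU6"] == 0 and f["AU12"] > 0]
print(len(duchenne), len(non_duchenne))  # 1 1
```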

12.
Front Psychol ; 7: 1900, 2016.
Article in English | MEDLINE | ID: mdl-27994569

ABSTRACT

This paper investigates developmental changes in children's processing of redundant information in definite object descriptions. In two experiments, children of two age groups (6 or 7, and 9 or 10 years old) were presented with pictures of sweets. In the first experiment (pairwise comparison), two identical sweets were shown, and one of these was described with a redundant modifier. After the description, the children had to indicate the sweet they preferred most in a forced-choice task. In the second experiment (graded rating), only one sweet was shown, which was described with a redundant color modifier in half of the cases (e.g., "the blue sweet") and in the other half of the cases simply as "the sweet." This time, the children were asked to indicate on a 5-point rating scale to what extent they liked the sweets. In both experiments, the results showed that the younger children had a preference for the sweets described with redundant information, while redundant information did not have an effect on the preferences for the older children. These results imply that children are learning to distinguish between situations in which redundant information carries an implicature and situations in which this is not the case.

13.
Front Psychol ; 7: 1936, 2016.
Article in English | MEDLINE | ID: mdl-28018271

ABSTRACT

The present study investigates how easily it can be detected whether a child is being truthful or not in a game situation, and it explores the cue validity of bodily movements for this type of classification. To achieve this, we introduce an innovative methodology: the combination of perception studies (using eye-tracking technology) and automated movement analysis. Film fragments of truthful and deceptive children were shown to human judges, who were asked to decide whether the recorded child was being truthful or not. Results reveal that judges were able to accurately distinguish truthful clips from lying clips in both perception studies. Even though the automated movement analysis for overall and specific body regions did not yield significant differences between the experimental conditions, we did find a positive correlation between the amount of movement in a child and the perception of lies: the more movement the children exhibited during a clip, the higher the chance that the clip was perceived as a lie. The eye-tracking study revealed that, even when movement occurs in different body regions, judges focus their attention mainly on the face region. This is the first study to compare a perceptual and an automated method for the detection of deceptive behavior in children whose data were elicited through an ecologically valid paradigm.
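A hedged sketch of one common way to automate such movement analysis (the abstract does not specify the study's exact method): total absolute inter-frame pixel difference per clip, correlated with how often the clip was judged a lie. All data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

def movement_score(frames):
    """Total absolute frame-to-frame change for a (n_frames, h, w) array."""
    return float(np.abs(np.diff(frames, axis=0)).sum())

# Synthetic stand-ins for 10 clips and their judged-as-lie proportions.
clips = [rng.random((30, 48, 48)) for _ in range(10)]
movement = np.array([movement_score(c) for c in clips])
perceived_lie = rng.random(10)

r = np.corrcoef(movement, perceived_lie)[0, 1]  # Pearson correlation
print(round(r, 2))
```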

14.
Cogn Sci ; 40(7): 1617-1647, 2016 09.
Article in English | MEDLINE | ID: mdl-26432277

ABSTRACT

In two experiments, we investigate to what extent various visual saliency cues in realistic visual scenes cause speakers to overspecify their definite object descriptions with a redundant color attribute. The results of the first experiment demonstrate that speakers are more likely to redundantly mention color when visual clutter is present in a scene than when it is not. In the second experiment, we found that distractor type and distractor color affect redundant color use: speakers are most likely to overspecify if at least one distractor object is present that has the same type, but a different color, as the target referent. Reliable effects of distractor distance were not found. Taken together, our results suggest that certain visual saliency cues guide speakers in determining which objects in a visual scene are relevant distractors, and which are not. We argue that this is problematic for algorithms that aim to generate human-like descriptions of objects (such as the Incremental Algorithm, sketched below), since these generally select properties that help to distinguish a target from all objects that are present in a scene.


Subject(s)
Attention/physiology; Models, Theoretical; Pattern Recognition, Visual/physiology; Visual Perception/physiology; Adolescent; Adult; Algorithms; Cues; Female; Humans; Male; Photic Stimulation; Young Adult
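For reference, a compact sketch of the Incremental Algorithm (Dale and Reiter, 1995) that the abstract critiques: attributes are considered in a fixed preference order and included only if they rule out at least one remaining distractor. The toy scene is invented:

```python
PREFERENCE_ORDER = ["type", "color", "size"]

def incremental_algorithm(target, distractors):
    """Simplified Incremental Algorithm for referring expression generation."""
    description = {}
    remaining = list(distractors)
    for attr in PREFERENCE_ORDER:
        ruled_out = [d for d in remaining if d.get(attr) != target[attr]]
        if ruled_out:  # attribute discriminates: include it
            description[attr] = target[attr]
            remaining = [d for d in remaining if d not in ruled_out]
        if not remaining:
            break
    return description

target = {"type": "car", "color": "pink", "size": "small"}
scene = [{"type": "car", "color": "red", "size": "small"},
         {"type": "bus", "color": "pink", "size": "large"}]
print(incremental_algorithm(target, scene))  # {'type': 'car', 'color': 'pink'}
```

Note how the sketch treats every scene object as a distractor; the abstract's point is that human speakers appear to prune that set using visual salience.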
15.
Front Psychol ; 6: 1401, 2015.
Article in English | MEDLINE | ID: mdl-26441776

ABSTRACT

Although current emotion theories emphasize the importance of contextual factors for emotional expressive behavior, developmental studies that examine such factors remain scarce. In this research, we studied the course of the emotional expressions of 8- and 11-year-old children after winning a (large) first prize or a (substantially smaller) consolation prize while playing a game against the computer or against a physically co-present peer. We analyzed their emotional reactions by conducting two perception tests in which participants rated the children's level of happiness. Results showed that co-presence positively affected children's happiness only when they received the first prize. Moreover, for children who were in the presence of a peer, we found that eye contact affected children's expressions of happiness, but that the effect differed between age groups: 8-year-old children were affected negatively, and 11-year-old children positively. Overall, we conclude that as children grow older and their social awareness increases, the presence of a peer affects their non-verbal expressions, regardless of their appreciation of their prize.

16.
Exp Psychol ; 62(3): 181-97, 2015.
Article in English | MEDLINE | ID: mdl-25804243

ABSTRACT

Visual information contributes fundamentally to the process of object categorization. The present study investigated whether the degree of activation of visual information in this process depends on the contextual relevance of this information. We used the release from proactive interference (PI-release) paradigm. In four experiments, we manipulated the information by which objects could be categorized and subsequently retrieved from memory. The pattern of PI-release showed that if objects could be stored and retrieved both by (non-perceptual) semantic information and by (perceptual) shape information, then shape information was overruled by semantic information. If, however, semantic information could not be (satisfactorily) used to store and retrieve objects, then objects were stored in memory in terms of their shape. The latter effect was strongest for objects from identical semantic categories.


Subject(s)
Form Perception/physiology; Memory/physiology; Adolescent; Adult; Analysis of Variance; Female; Humans; Male; Middle Aged; Names; Neuropsychological Tests; Photic Stimulation; Repetition Priming/physiology; Semantics; Young Adult
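Under the assumption of the classic release-from-PI design (recall declines across same-category trials and recovers when the category switches), the measure can be sketched as follows; the recall scores are invented:

```python
# Hypothetical mean recall per trial; trials 1-4 share a category,
# trial 5 switches category (the classic release-from-PI design).
recall_same_category = [0.90, 0.72, 0.60, 0.51]
recall_switch_trial = 0.85

pi_buildup = recall_same_category[0] - recall_same_category[-1]
pi_release = recall_switch_trial - recall_same_category[-1]
print(pi_buildup, pi_release)  # ~0.39 ~0.34
```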
17.
Soc Cogn Affect Neurosci ; 10(5): 729-34, 2015 May.
Article in English | MEDLINE | ID: mdl-25140046

ABSTRACT

For successful communication, conversational partners need to estimate each other's current knowledge state. Nonverbal facial and bodily cues can reveal relevant information about how confident a speaker is about what they are saying. Using functional magnetic resonance imaging, we aimed to identify brain regions that encode how confident a speaker is perceived to be. Participants viewed videos of people answering general knowledge questions and judged each respondent's confidence in their answer. Our results suggest a distinct role of two neural networks known to support social inferences, the so-called mentalizing and the mirroring network. While activation in both networks underlies the processing of nonverbal cues, only activity in the mentalizing network, most notably the medial prefrontal cortex and the bilateral temporoparietal junction, is modulated by how confident the respondent is judged to be. Our results support an integrative account of the mirroring and mentalizing network, in which the two systems support each other in aiding pragmatic processing.


Subject(s)
Cues; Knowledge; Nonverbal Communication/physiology; Nonverbal Communication/psychology; Social Perception; Adolescent; Adult; Brain Mapping; Female; Humans; Magnetic Resonance Imaging; Male; Mirror Neurons/physiology; Nerve Net/physiology; Photic Stimulation; Psychomotor Performance/physiology; Social Environment; Theory of Mind; Young Adult
18.
Lang Speech ; 57(Pt 4): 470-86, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25536844

ABSTRACT

A central problem in recent research on speech production concerns the extent to which speakers adapt their linguistic expressions to the needs of their addressees. It has been claimed that speakers sometimes leak information about objects that are visible only to them and not to their listeners. Previous research has taken only the occurrence of adjectives as evidence for the leakage of privileged information. The present study hypothesizes that leaked information is also encoded in the prosody of those adjectives. A production experiment elicited adjectives that leak information and adjectives that do not. An acoustic analysis and a prominence rating task showed that adjectives that leak information were uttered with a higher pitch and perceived as more prominent than adjectives that do not. Furthermore, a guessing task suggested that the adjectives' prosody relates to how listeners infer possible privileged information.


Subject(s)
Intention; Interpersonal Relations; Semantics; Speech Acoustics; Speech Perception; Speech Production Measurement; Verbal Behavior; Adolescent; Adult; Communication; Female; Humans; Male; Netherlands; Pattern Recognition, Visual; Psycholinguistics; Sound Spectrography; Young Adult
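A hedged sketch of the kind of acoustic analysis reported above, here using the Parselmouth interface to Praat to extract a mean F0 for one adjective token; "adjective.wav" is a placeholder path for a recording cropped to the adjective:

```python
import numpy as np
import parselmouth  # Python interface to Praat

snd = parselmouth.Sound("adjective.wav")
pitch = snd.to_pitch()
f0 = pitch.selected_array["frequency"]
f0 = f0[f0 > 0]  # drop unvoiced frames, which Praat codes as 0 Hz
print(f"mean F0: {np.mean(f0):.1f} Hz")
```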
19.
Lang Speech ; 57(Pt 1): 86-107, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24754222

ABSTRACT

We studied the effect of two social settings (collaborative versus competitive) on the visual and auditory expressions of uncertainty by children in two age groups (8 and 11 years old). We conducted an experiment in which children played a quiz game in pairs, either collaborating or competing with each other. We found that the Feeling-of-Knowing of 8-year-old children did not seem to be affected by the social setting, unlike that of 11-year-old children. In addition, we labelled children's expressions in clips taken from the experiment for various visual and auditory features. We found that children used some of these features to signal uncertainty and that older children exhibited clearer cues than younger children. In a subsequent perception test, adults rated children's certainty in the clips used for labelling. Older children and children in competition expressed their confidence level more clearly than younger children and children in collaboration.


Subject(s)
Competitive Behavior; Cooperative Behavior; Psychology, Child; Uncertainty; Verbal Behavior; Age Factors; Child; Child Language; Cues; Female; Humans; Judgment; Male; Psychoacoustics; Social Behavior; Speech Perception
20.
J Acoust Soc Am ; 134(3): 2182-96, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23967948

ABSTRACT

The present research investigates what drives the prosodic marking of contrastive information. For example, a typically developing speaker of a Germanic language like Dutch generally refers to a pink car as a "PINK car" (accented words in capitals) when a previously mentioned car was red. The main question addressed in this paper is whether contrastive intonation is produced with respect to the speaker's or (also) the listener's perspective on the preceding discourse. Furthermore, this research investigates the production of contrastive intonation by typically developing speakers and speakers with autism. The latter group is investigated because people with autism are argued to have difficulty accounting for another person's mental state, as well as difficulties in the production and perception of accentuation and pitch range. To this end, utterances with contrastive intonation were elicited from both groups and analyzed in terms of the function and form of prosody, using production and perception measures. Contrary to expectations, typically developing speakers and speakers with autism produced functionally similar contrastive intonation, as both groups accounted for both their own and their listener's perspective. However, typically developing speakers used a larger pitch range and were perceived as speaking more dynamically than speakers with autism, suggesting differences in their use of prosodic form.


Subject(s)
Child Development Disorders, Pervasive/physiopathology; Phonetics; Speech Acoustics; Speech Intelligibility; Speech Perception; Voice Quality; Acoustics; Adolescent; Adolescent Development; Adult; Case-Control Studies; Child Development Disorders, Pervasive/diagnosis; Child Development Disorders, Pervasive/psychology; Female; Humans; Language Development; Male; Middle Aged; Sound Spectrography; Speech Production Measurement; Time Factors; Young Adult