Results 1 - 5 of 5
1.
Hum Brain Mapp; 43(10): 3257-3269, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35344258

ABSTRACT

Deception detection can be of great value during legal investigations. Although the neural signatures of deception have been widely documented, most prior studies were confounded by difficulty level: deceptive behavior typically requires more effort, so deception detection may in fact be effort detection. Furthermore, no study has examined generalizability across instructed and spontaneous responses or across participants. To explore these issues, we used a dual-task paradigm in which difficulty level was balanced between truth-telling and lying, and instructed and spontaneous truth-telling and lying were collected independently. Using multivoxel pattern analysis (MVPA), we were able to decode truth-telling versus lying at a balanced difficulty level. Results showed that the angular gyrus (AG), inferior frontal gyrus (IFG), and postcentral gyrus could differentiate lying from truth-telling. Critically, linear classifiers trained to distinguish instructed truthful and deceptive responses correctly differentiated spontaneous truthful and deceptive responses in AG and IFG with above-chance accuracy. In addition, in a leave-one-participant-out analysis, multivoxel neural patterns from AG could classify whether the left-out participant was lying on a given trial. These results indicate the commonality of the neural responses subserving instructed and spontaneous deceptive behavior, as well as the feasibility of cross-participant deception validation.
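The leave-one-participant-out scheme described in the abstract can be sketched on synthetic data. Everything below is an illustrative assumption, not the authors' pipeline: the data are random numbers standing in for voxel patterns, the participant/trial/voxel counts and signal strength are invented, and a simple nearest-centroid rule stands in for their linear classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for voxel patterns: 10 "participants", 40 trials each,
# 50 voxels; label 0 = truth-telling, 1 = lying, with a weak mean shift.
n_subj, n_trials, n_vox = 10, 40, 50
X = rng.normal(size=(n_subj * n_trials, n_vox))
y = np.tile(np.repeat([0, 1], n_trials // 2), n_subj)
X[y == 1, :5] += 0.8                      # class signal confined to 5 voxels
groups = np.repeat(np.arange(n_subj), n_trials)

def nearest_centroid_acc(Xtr, ytr, Xte, yte):
    """Fit class centroids on training trials, classify test trials by
    whichever centroid is nearer, and return the test accuracy."""
    c0, c1 = Xtr[ytr == 0].mean(axis=0), Xtr[ytr == 1].mean(axis=0)
    pred = (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return float((pred == yte).mean())

# Leave-one-participant-out: train on 9 participants, test on the held-out one.
scores = [nearest_centroid_acc(X[groups != s], y[groups != s],
                               X[groups == s], y[groups == s])
          for s in range(n_subj)]
mean_acc = float(np.mean(scores))
```

Because every trial from the test participant is excluded from training, above-chance `mean_acc` here reflects a pattern shared across participants, which is the logic behind the cross-participant validation the abstract reports.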


Subject(s)
Brain , Deception , Brain/diagnostic imaging , Brain/physiology , Brain Mapping , Humans , Parietal Lobe/physiology , Prefrontal Cortex/diagnostic imaging , Prefrontal Cortex/physiology
2.
PLoS One; 15(5): e0232431, 2020.
Article in English | MEDLINE | ID: mdl-32365066

ABSTRACT

This study examined how trustworthiness impressions depend on vocal expressive and person characteristics and how their dependence may be explained by acoustical profiles. Sentences spoken in a range of emotional and conversational expressions by 20 speakers differing in age and sex were presented to 80 age- and sex-matched listeners who rated speaker trustworthiness. Positive speaker valence, but not arousal, consistently predicted greater perceived trustworthiness. Additionally, voices from younger as compared with older, and female as compared with male, speakers were judged more trustworthy. Acoustic analysis highlighted several parameters as relevant for being perceived as trustworthy (i.e., accelerated tempo, low harmonic-to-noise ratio, more shimmer, low fundamental frequency, more jitter, and a large intensity range) and showed that these effects partially overlapped with those for perceived speaker affect and age, but not sex. Specifically, a fast speech rate and a lower harmonic-to-noise ratio differentiated trustworthy from untrustworthy, positive from negative, and younger from older voices. Male and female voices differed in other ways. Together, these results show that a speaker's expressive as well as person characteristics shape trustworthiness impressions and that their effect likely results from a combination of low-level perceptual and higher-order conceptual processes.
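Among the acoustic parameters the abstract lists, fundamental frequency (F0) is the simplest to illustrate: it can be estimated by finding the lag that maximises a signal's autocorrelation. The sketch below runs that idea on a synthetic 200 Hz tone; it is a minimal illustration only, and the sample rate, search range, and test signal are assumptions, not the measurement pipeline the study actually used (the abstract does not specify one).

```python
import numpy as np

sr = 16000                                  # assumed sample rate, Hz
t = np.arange(0, 0.5, 1 / sr)
voice = np.sin(2 * np.pi * 200 * t)         # stand-in for a 200 Hz voiced frame

def estimate_f0(signal, sr, fmin=75.0, fmax=500.0):
    """Estimate F0 (Hz) as the inverse of the lag, within the plausible
    voice-pitch range [fmin, fmax], that maximises the autocorrelation."""
    ac = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lag_lo, lag_hi = int(sr / fmax), int(sr / fmin)
    best = lag_lo + int(np.argmax(ac[lag_lo:lag_hi]))
    return sr / best

f0 = estimate_f0(voice, sr)                 # ~200 Hz for this test tone
```

Jitter and shimmer, also named in the abstract, are cycle-to-cycle variations of this period and of amplitude respectively, so a per-cycle version of the same lag search is the usual starting point for computing them.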


Subject(s)
Aging/psychology , Anger , Trust , Voice , Acoustic Stimulation , Adult , Affect , Aged , Aged, 80 and over , Female , Humans , Male , Middle Aged , Psychoacoustics , Sex Factors , Singapore , Speech Acoustics , Speech Perception , Young Adult
4.
PLoS One; 14(1): e0210555, 2019.
Article in English | MEDLINE | ID: mdl-30650135

ABSTRACT

This study examined how trustworthiness impressions depend on vocal expressive and person characteristics and how their dependence may be explained by acoustical profiles. Sentences spoken in a range of emotional and conversational expressions by 20 speakers differing in age and sex were presented to 80 age- and sex-matched listeners who rated speaker trustworthiness. Positive speaker valence, but not arousal, consistently predicted greater perceived trustworthiness. Additionally, voices from younger as compared with older, and female as compared with male, speakers were judged more trustworthy. Acoustic analysis highlighted several parameters as relevant for differentiating trustworthiness ratings and showed that these effects largely overlapped with those for speaker valence and age, but not sex. Specifically, a fast speech rate, a low harmonic-to-noise ratio, and a low fundamental frequency mean and standard deviation differentiated trustworthy from untrustworthy, positive from negative, and younger from older voices. Male and female voices differed in other ways. Together, these results show that a speaker's expressive as well as person characteristics shape trustworthiness impressions and that their effect likely results from a combination of low-level perceptual and higher-order conceptual processes.


Subject(s)
Anger , Auditory Perception , Trust , Voice , Acoustic Stimulation , Acoustics , Adult , Aged , Aged, 80 and over , Arousal , Female , Humans , Male , Middle Aged , Young Adult
5.
Front Psychol; 6: 2055, 2015.
Article in English | MEDLINE | ID: mdl-26793161

ABSTRACT

For dynamic sounds, such as vocal expressions, duration often varies alongside speed. Compared to longer sounds, shorter sounds unfold more quickly. Here, we asked whether listeners implicitly use this confound when representing temporal regularities in their environment. In addition, we explored the role of emotions in this process. Using a mismatch negativity (MMN) paradigm, we asked participants to watch a silent movie while passively listening to a stream of task-irrelevant sounds. In Experiment 1, one surprised and one neutral vocalization were compressed and stretched to create stimuli of 378 and 600 ms duration. Stimuli were presented in four blocks, two of which used surprised and two of which used neutral expressions. In one surprised and one neutral block, short and long stimuli served as standards and deviants, respectively. In the other two blocks, the assignment of standards and deviants was reversed. We observed a climbing MMN-like negativity shortly after deviant onset, which suggests that listeners implicitly track sound speed and detect speed changes. Additionally, this MMN-like effect emerged earlier and was larger for long than short deviants, suggesting greater sensitivity to duration increments or slowing down than to decrements or speeding up. Last, deviance detection was facilitated in surprised relative to neutral blocks, indicating that emotion enhances temporal processing. Experiment 2 was comparable to Experiment 1 with the exception that sounds were spectrally rotated to remove vocal emotional content. This abolished the emotional processing benefit, but preserved the other effects. Together, these results provide insights into listener sensitivity to sound speed and raise the possibility that speed biases duration judgements implicitly in a feed-forward manner. Moreover, this bias may be amplified for duration increments relative to decrements and within an emotional relative to a neutral stimulus context.
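The oddball logic behind an MMN design, and the deviant-minus-standard difference wave, can be sketched on synthetic data. All numbers below are illustrative assumptions, not the study's stimulus or EEG parameters: the trial count, deviant probability, sampling rate, and the injected negativity are invented, and Gaussian noise stands in for real single-trial EEG.

```python
import numpy as np

rng = np.random.default_rng(0)

def oddball_sequence(n_trials=400, p_deviant=0.15):
    """Pseudo-random standard (0) / deviant (1) stream with the usual
    constraint that no two deviants occur in a row."""
    seq = []
    for _ in range(n_trials):
        if seq and seq[-1] == 1:
            seq.append(0)                  # force a standard after each deviant
        else:
            seq.append(int(rng.random() < p_deviant))
    return np.array(seq)

seq = oddball_sequence()

# Synthetic single-trial "epochs" (trials x time samples, ~0-400 ms at 250 Hz);
# deviant trials carry an extra negativity in a ~150-250 ms window.
n_samp = 100
epochs = rng.normal(0.0, 1.0, size=(len(seq), n_samp))
mmn_window = slice(38, 62)                 # ~150-250 ms post-deviance
epochs[seq == 1, mmn_window] -= 2.0

# MMN-like effect: average deviant ERP minus average standard ERP.
diff_wave = epochs[seq == 1].mean(axis=0) - epochs[seq == 0].mean(axis=0)
```

Reversing which duration serves as standard versus deviant across blocks, as the experiment did, controls for the physical difference between the two stimuli: the difference wave then isolates deviance detection rather than the stimuli themselves.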
