1.
Front Psychiatry ; 12: 596557, 2021.
Article in English | MEDLINE | ID: mdl-34163378

ABSTRACT

Disgust has recently been characterized as a low-urgency emotion, particularly compared to fear. The aim of the present study is to clarify whether behavioral inhibition during disgust engagement is characteristic of a low-urgency emotion and thus indicates self-imposed attentional avoidance in comparison to fear. To this end, 54 healthy participants performed an emotional go/no-go task with disgust-relevant, fear-relevant, and neutral pictures. Furthermore, heart rate and facial muscle activity of the fear-specific m. corrugator supercilii and the disgust-specific m. levator labii were assessed. The results partially support the temporal urgency hypothesis of disgust. As expected, the emotion conditions differed significantly in emotional engagement and in the facial muscle activity of the m. levator labii. However, contrary to our expectations, no differences between the emotion conditions were found in behavioral inhibition or heart rate change. Furthermore, individuals with higher trait disgust proneness showed faster reactions and higher m. levator labii activity in response to disgust stimuli. The results show that different trait levels influence attentional engagement and physiological parameters but have only a small effect on behavioral inhibition.
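The go/no-go scoring described in this abstract can be illustrated with a minimal sketch. All numbers below (trial counts, go/no-go ratio, response rates, reaction-time distribution) are invented for illustration and are not the study's data; behavioral inhibition is conventionally indexed by false alarms on no-go trials, and engagement by go-trial reaction times.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate one participant's trial-level data (all parameters are assumptions).
n_trials = 200
is_go = rng.random(n_trials) < 0.7                   # assumed 70% go trials
responded = np.where(is_go,
                     rng.random(n_trials) < 0.95,    # hits on go trials
                     rng.random(n_trials) < 0.10)    # false alarms on no-go trials
rt = np.where(responded, rng.normal(450, 60, n_trials), np.nan)  # RT in ms

hit_rate = responded[is_go].mean()                   # attentional engagement
false_alarm_rate = responded[~is_go].mean()          # failed behavioral inhibition
mean_go_rt = np.nanmean(rt[is_go & responded])       # speed of responding
```

Comparing these indices across emotion conditions (disgust, fear, neutral) is what the abstract's analysis amounts to at the behavioral level.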

2.
Lang Speech ; 64(3): 654-680, 2021 Sep.
Article in English | MEDLINE | ID: mdl-32811294

ABSTRACT

Repeating the movements associated with activities such as drawing or sports typically leads to improvements in kinematic behavior: these movements become faster, smoother, and less variable. Likewise, practice has been shown to yield faster and smoother movement trajectories in speech articulation. However, little is known about its effect on articulatory variability. To address this, we investigate the extent to which repetition and predictability influence the articulation of the frequent German word "sie" [zi] ("they"). We find that articulatory variability is proportional to speaking rate and to the duration of [zi], and that overall variability decreases as [zi] is repeated during the experiment. Lower variability is also observed as the conditional probability of [zi] increases, with the greatest reduction occurring during the execution of the vocalic target [i]. These results indicate that practice can produce observable differences in the articulation of even the most common gestures used in speech.


Subject(s)
Gestures , Speech , Biomechanical Phenomena , Humans , Movement , Speech Production Measurement
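The variability measure at the center of this abstract can be sketched as the mean pointwise standard deviation across time-normalized repetitions of a trajectory. The simulation below is purely illustrative: the idealized trajectory shape and the assumed exponential decay of noise with practice are invented, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate repeated productions of one articulatory gesture.
n_reps, n_samples = 30, 100
t = np.linspace(0, 1, n_samples)
target = np.sin(np.pi * t)                 # idealized articulator trajectory (assumed)

# Assumed practice effect: trajectory noise shrinks over repetitions.
noise_sd = 0.2 * np.exp(-np.arange(n_reps) / 10)
trajectories = target + rng.normal(0, 1, (n_reps, n_samples)) * noise_sd[:, None]

def variability(trajs):
    """Mean pointwise standard deviation across time-normalized repetitions."""
    return trajs.std(axis=0).mean()

early = variability(trajectories[:10])     # first ten repetitions
late = variability(trajectories[-10:])     # last ten repetitions
```

Under this assumed practice effect, `late` comes out smaller than `early`, mirroring the reported decrease in variability with repetition.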
3.
PLoS One ; 12(4): e0174623, 2017.
Article in English | MEDLINE | ID: mdl-28394938

ABSTRACT

Sound units play a pivotal role in cognitive models of auditory comprehension. The general consensus is that during perception listeners break speech down into auditory words and subsequently into phones. Indeed, cognitive speech recognition is typically taken to be computationally intractable without phones. Here we present a computational model, trained on 20 hours of conversational speech, that recognizes word meanings within the range of human performance (model 25%, native speakers 20-44%) without making use of phone or word-form representations. Our model also successfully generates predictions about the speed and accuracy of human auditory comprehension. At the heart of the model is a 'wide' yet sparse two-layer artificial neural network with some hundred thousand input units representing summaries of changes in acoustic frequency bands, and proxies for lexical meanings as output units. We believe that our model holds promise for resolving longstanding theoretical problems surrounding the notion of the phone in linguistic theory.


Subject(s)
Algorithms , Computer Simulation , Speech , Comprehension , Female , Humans , Male , Pattern Recognition, Physiological , Phonetics , Recognition, Psychology , Sound Spectrography , Speech Acoustics , Speech Perception , Speech Recognition Software , Young Adult
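The "wide yet sparse two-layer network" in this abstract can be sketched as a single weight matrix mapping sparse acoustic-cue indicators to lexical-meaning units. Everything below is a toy stand-in: the dimensions are tiny (the paper reports on the order of a hundred thousand input units), the cue patterns are synthetic, and the Widrow-Hoff (delta) rule is a standard choice for such discriminative networks, assumed here rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_features = 500   # sparse acoustic-cue indicators (toy size; paper uses ~100,000)
n_meanings = 20    # output units as proxies for lexical meanings
n_tokens = 1000    # training tokens

# Assume each meaning has a fixed set of characteristic acoustic cues.
cue_sets = [rng.choice(n_features, size=8, replace=False) for _ in range(n_meanings)]

labels = rng.integers(0, n_meanings, size=n_tokens)
X = np.zeros((n_tokens, n_features))
for i, m in enumerate(labels):
    X[i, cue_sets[m]] = 1.0                                    # meaning-specific cues
    X[i, rng.choice(n_features, size=2, replace=False)] = 1.0  # random noise cues

Y = np.eye(n_meanings)[labels]  # one-hot meaning targets

# Two layers (input, output) joined by one weight matrix, trained with the
# Widrow-Hoff (delta) rule: nudge weights toward the prediction error.
W = np.zeros((n_features, n_meanings))
lr = 0.01
for x, y in zip(X, Y):
    W += lr * np.outer(x, y - x @ W)

# Recognition: the meaning unit with the highest activation wins.
accuracy = float(np.mean((X @ W).argmax(axis=1) == labels))
```

With synthetic cues this cleanly separable, the argmax readout recovers most meanings; the point is only to show the shape of the architecture, not its reported performance on conversational speech.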