1.
Acta Psychol (Amst) ; 204: 103026, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32087419

ABSTRACT

We investigated the existence and nature of adaptation aftereffects on the visual perception of basic emotions displayed through walking gait. Stimuli were previously validated gender-ambiguous point-light walker models displaying the basic emotions of happiness, sadness, anger and fear. Results indicated that both facilitative and inhibitive aftereffects influenced the perception of all displayed emotions. Facilitative aftereffects were found between theoretically opposite emotions (i.e. happy/sad and anger/fear). Evidence suggested that low-level and high-level visual processes contributed to both stimulus aftereffect and conceptual aftereffect mechanisms. Significant aftereffects were more frequently evident in the time required to identify the displayed emotion than in emotion identification rates. The perception of basic emotions from walking gait is thus influenced by several distinct perceptual mechanisms that shift the categorical boundaries of each emotion as a result of perceptual experience.


Subject(s)
Adaptation, Physiological/physiology , Emotions/physiology , Gait/physiology , Photic Stimulation/methods , Walking/physiology , Walking/psychology , Adolescent , Adult , Female , Humans , Male , Visual Perception/physiology , Young Adult
2.
Hum Mov Sci ; 57: 461-477, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29107320

ABSTRACT

Previous evidence has shown that males and females display different gait kinematics, which may influence the perception of emotions displayed through the same walking gait. We therefore investigated the influence of walker gender on the perception of happiness, sadness, anger and fear displayed through walking movements. Full-light (FL), point-light (PL) and synthetically modelled point-light walkers (SW) of both genders were shown to perceivers over three experiments. Additionally, gender-ambiguous synthetic walkers were shown to control for the influence of form, gender stereotypes and idiosyncratic gait movements on emotional gait perception. Each emotion was identified above chance level for both walker genders and in all display conditions, though at significantly lower rates in PL and SW than in FL. The gender of the walker did not influence the pattern of identifications in FL walkers (Fear > Sad > Happy > Anger > Neutral), but did influence the identification patterns in PL (Female: [Happy = Sad = Fear = Anger] > Neutral; Male: Fear = Sad = [Happy > Anger] > Neutral) and SWs (Female: Happy = Sad = Anger = Fear = Neutral; Male: [Happy = Sad = Anger] > [Fear = Neutral]; Ambiguous: [[Happy = Sad = Anger] > Fear] = Neutral). Both the gender of the walker and the format in which walkers were displayed therefore influenced the perception of different basic emotions. The constructed SW stimuli also displayed happiness, sadness and anger with equivalent intensity in female, male and gender-ambiguous walkers, thus untangling the perception-expression entanglement that has plagued previous emotion perception research.


Subject(s)
Anger , Emotions/physiology , Fear , Happiness , Perception , Walking/physiology , Adult , Australia , Biomechanical Phenomena , Female , Gait , Humans , Imaging, Three-Dimensional , Male , Psychophysics , Sex Factors , Visual Perception
3.
Hum Mov Sci ; 57: 478-488, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29174557

ABSTRACT

Perceiving emotions from gait can serve numerous socio-environmental functions (e.g. perceiving threat, sexual courting behaviours). Participant perceivers were asked to report their strategies for identifying happiness, sadness, anger and fear in point-light walkers. Perceivers claimed they identified happiness by a bouncing gait with increased arm movement, sadness by a slow slouching gait, anger by a fast stomping gait and fear by both fast and slow gaits. The emotion-specific point-light walker stimuli were kinematically analysed to verify the presence of the gait cues perceivers reported using to identify each emotion. Happy and angry walkers both displayed long strides with increased arm movement, though angry strides had a faster cadence. Fearful walkers walked with fast, short strides reminiscent of a scurrying gait. Sad walkers walked with slow, short strides, consequently creating the slowest walking pace. Both fearful and sad walkers showed reduced arm movement, but in different ways: sad walkers moved their entire arms, whilst fearful walkers primarily moved their lower arms throughout their gait.


Subject(s)
Cues , Emotions/physiology , Gait/physiology , Adult , Anger , Biomechanical Phenomena , Fear , Female , Happiness , Humans , Male , Perception , Walking/psychology , Young Adult
4.
Data Brief ; 15: 1000-1002, 2017 Dec.
Article in English | MEDLINE | ID: mdl-29159240

ABSTRACT

This data set describes the experimental data collected and reported in the research article "Walking my way? Walker gender and display format confounds the perception of specific emotions" (Halovic and Kroos, in press) [1]. The data set represents perceiver identification rates for different emotions (happiness, sadness, anger, fear and neutral), as displayed by full-light, point-light and synthetic point-light walkers. The perceiver identification scores have been transformed into Ht rates, which represent the proportion of correct identifications above what would be expected by chance. This data set also provides Ht rates separately for male, female and ambiguously gendered walkers.
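
The abstract does not give the exact transformation used for the Ht rates; a minimal sketch of one common chance-corrected identification rate, assuming Ht is simply the observed proportion correct rescaled relative to guessing (other corrections, such as Wagner's unbiased hit rate, are also used in this literature), could look like this:

```python
def chance_corrected_rate(n_correct, n_trials, n_options):
    """Proportion of correct identifications above what chance would predict.

    Assumed form: (p_observed - p_chance) / (1 - p_chance), floored at 0.
    The published data set may use a different correction.
    """
    p_observed = n_correct / n_trials
    p_chance = 1.0 / n_options
    return max(0.0, (p_observed - p_chance) / (1.0 - p_chance))

# Example: 42 of 60 trials correct with 5 response options (happy, sad,
# anger, fear, neutral) -> chance = 0.20, rate = (0.70 - 0.20) / 0.80 = 0.625
print(chance_corrected_rate(42, 60, 5))
```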

5.
J Acoust Soc Am ; 140(4): 2794, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27794291

ABSTRACT

Substantial research has established that the places of articulation of stop consonants (labial, alveolar, velar) are reliably differentiated using a number of acoustic measures such as closure duration, voice onset time (VOT), and spectral measures such as centre of gravity and the relative energy distribution in the mid-to-high spectral range of the burst. It is unclear, however, whether such measurable acoustic differences are present in multiple place of articulation contrasts among coronal stops. This article presents evidence from the highly endangered indigenous Australian language Wubuy, which maintains a 4-way coronal stop place contrast series in all word positions. The authors examine the temporal and burst characteristics of /t̪ t ʈ/ in three prosodic positions (utterance-initial, word-initial but phrase-medial, and word-medial). The results indicate that VOT, closure duration, and the spectral quality of the burst may indeed differentiate multiple coronal place contrasts in most positions, although measures that distinguish the apical contrast in absolute initial position remain elusive. The authors also examine measures (spectrum kurtosis, spectral tilt) previously used in other studies of multiple coronals in Australian languages. These results suggest that the authors' measures perform at least as well as those previously applied to multiple coronals in other Australian languages.
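
As a rough illustration of two of the burst-spectrum measures named above (spectral centre of gravity and spectrum kurtosis), a minimal numpy sketch is given below; it assumes the burst has already been excised from the recording, and the windowing and frequency-range choices are illustrative rather than the analysis parameters actually used in the study.

```python
import numpy as np

def burst_spectral_moments(burst, sample_rate):
    """Spectral centre of gravity (Hz) and kurtosis of a stop-burst segment.

    `burst` is a 1-D array containing only the excised burst; window type,
    pre-emphasis and frequency limits are illustrative analysis choices.
    """
    windowed = burst * np.hanning(len(burst))
    spectrum = np.abs(np.fft.rfft(windowed)) ** 2           # power spectrum
    freqs = np.fft.rfftfreq(len(windowed), d=1.0 / sample_rate)
    weights = spectrum / spectrum.sum()                      # treat spectrum as a distribution
    cog = np.sum(freqs * weights)                            # 1st moment: centre of gravity
    var = np.sum((freqs - cog) ** 2 * weights)               # 2nd central moment
    kurt = np.sum((freqs - cog) ** 4 * weights) / var ** 2   # 4th standardised moment
    return cog, kurt
```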

6.
PLoS One ; 10(12): e0142054, 2015.
Article in English | MEDLINE | ID: mdl-26633651

ABSTRACT

Native speech perception is generally assumed to be highly efficient and accurate. Very little research has, however, directly examined the limitations of native perception, especially for contrasts that are only minimally differentiated acoustically and articulatorily. Here, we demonstrate that native speech perception may indeed be more difficult than is often assumed where phonemes are highly similar, and we address the nature and extremes of consonant perception. We present two studies of native and non-native (English) perception of the acoustically and articulatorily similar four-way coronal stop contrast /t ʈ t̪ ȶ/ (apico-alveolar, apico-retroflex, lamino-dental, lamino-alveopalatal) of Wubuy, an indigenous language of Australia. The results show that all listeners find contrasts involving /ȶ/ easy to discriminate, but that, for both groups, contrasts involving /t ʈ t̪/ are much harder. Where the two groups differ, the results largely reflect native language (Wubuy vs English) attunement as predicted by the Perceptual Assimilation Model. We also observe striking perceptual asymmetries in the native listeners' perception of contrasts involving the latter three stops, likely due to differences in input frequency. Such asymmetries have not previously been observed in adults, and we propose a novel Natural Referent Consonant Hypothesis to account for the results.


Subject(s)
Language , Speech Perception/physiology , Speech/physiology , Auditory Perception/physiology , Australia , Humans , Middle Aged
7.
Front Psychol ; 5: 262, 2014.
Article in English | MEDLINE | ID: mdl-24808868

ABSTRACT

Singing involves vocal production accompanied by a dynamic and meaningful use of facial expressions, which may serve as ancillary gestures that complement, disambiguate, or reinforce the acoustic signal. In this investigation, we examined the use of facial movements to communicate emotion, focusing on movements arising in three epochs: before vocalization (pre-production), during vocalization (production), and immediately after vocalization (post-production). The stimuli were recordings of seven vocalists' facial movements as they sang short (14-syllable) melodic phrases with the intention of communicating happiness, sadness, irritation, or no emotion. Facial movements were presented as point-light displays to 16 observers who judged the emotion conveyed. Experiment 1 revealed that the accuracy of emotional judgment varied with singer, emotion, and epoch. Accuracy was highest in the production epoch; however, happiness was well communicated in the pre-production epoch. In Experiment 2, observers judged point-light displays of exaggerated movements. The ratings suggested that the extent of facial and head movements was largely perceived as a gauge of emotional arousal. In Experiment 3, observers rated point-light displays of scrambled movements. Configural information was removed in these stimuli but velocity and acceleration were retained. Exaggerated scrambled movements were likely to be associated with happiness or irritation whereas unexaggerated scrambled movements were more likely to be identified as "neutral." An analysis of singers' facial movements revealed systematic changes as a function of the emotional intentions of singers. The findings confirm the central role of facial expressions in vocal emotional communication, and highlight individual differences between singers in the amount and intelligibility of facial movements made before, during, and after vocalization.

8.
Lang Sci ; 34(5): 583-590, 2012 Sep 01.
Article in English | MEDLINE | ID: mdl-22711975

ABSTRACT

We present a study aimed at investigating how novel signs emerge and spread through a community of interacting individuals. Ten triads of participants played a game in which players created novel signs in order to communicate with each other while constantly rotating between the role of interlocutor and that of observer. The main result of the study was that, for a majority of the triads, communicative success was not shared by the three dyads of players in a triad. This imbalance appears to be due to individual differences in game performance as well as to uncooperative behaviors. We suggest that both of these are magnified by the social dynamics induced by the role rotations in the game.

9.
J Phon ; 39(4): 558-570, 2011 Oct.
Article in English | MEDLINE | ID: mdl-22787285

ABSTRACT

Speech production research has demonstrated that the first language (L1) often interferes with production in bilinguals' second language (L2), but it has been suggested that bilinguals who are L2-dominant are the most likely to suppress this L1-interference. While prolonged contextual changes in bilinguals' language use (e.g., stays overseas) are known to result in L1 and L2 phonetic shifts, code-switching provides the unique opportunity of observing the immediate phonetic effects of L1-L2 interaction. We measured the voice onset times (VOTs) of Greek-English bilinguals' productions of /b, d, p, t/ in initial and medial contexts, first in either a Greek or English unilingual mode, and in a later session when they produced the same target pseudowords as a code-switch from the opposing language. Compared to a unilingual mode, all English stops produced as code-switches from Greek, regardless of context, had more Greek-like VOTs. In contrast, Greek stops showed no shift toward English VOTs, with the exception of medial voiced stops. Under the specifically interlanguage condition of code-switching we have demonstrated a pervasive influence of the L1 even in L2-dominant individuals.

10.
Perception ; 39(3): 407-16, 2010.
Article in English | MEDLINE | ID: mdl-20465175

ABSTRACT

Parsing of information from the world into objects and events occurs in both the visual and auditory modalities. It has been suggested that visual and auditory scene perceptions involve similar principles of perceptual organisation. We investigated here cross-modal scene perception by determining whether an auditory stimulus could facilitate visual object segregation. Specifically, we examined whether the presentation of matched auditory speech would facilitate the detection of a point-light talking face amid point-light distractors. An adaptive staircase procedure (3-up 1-down rule) was used to estimate the 79% correct threshold in a two-alternative forced-choice procedure. To determine if different degrees of speech motion would show auditory influence of different sizes, two speech modes were tested (in quiet and Lombard speech). A facilitatory auditory effect on talking-face detection was found; the size of this effect did not differ between the different speech modes.
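
As an illustration of the transformed up-down staircase described above, the sketch below assumes the common convention in which three consecutive correct responses make the next trial harder and a single error makes it easier, which converges near 79% correct; the step size, starting level, trial count and threshold rule are illustrative assumptions, not the study's actual parameters.

```python
import random

def run_staircase(respond, start_level=1.0, step=0.1, n_trials=60):
    """Transformed up-down staircase (3 correct -> harder, 1 error -> easier).

    `respond(level)` returns True for a correct 2AFC response at the given
    stimulus level. All parameter values here are illustrative.
    """
    level, correct_run, direction, reversals = start_level, 0, 0, []
    for _ in range(n_trials):
        if respond(level):
            correct_run += 1
            if correct_run == 3:                  # three correct: make it harder
                correct_run = 0
                if direction == +1:
                    reversals.append(level)       # track direction changes
                direction = -1
                level -= step
        else:                                     # one error: make it easier
            correct_run = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    last = reversals[-6:]                         # threshold from final reversals
    return sum(last) / max(1, len(last))

# Toy observer whose accuracy increases with stimulus level
print(run_staircase(lambda lvl: random.random() < min(0.99, 0.5 + lvl)))
```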


Subject(s)
Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Analysis of Variance , Auditory Perception/physiology , Auditory Threshold/physiology , Face , Humans , Movement
11.
J Phon ; 38(4): 640-653, 2010 Oct.
Article in English | MEDLINE | ID: mdl-21743759

ABSTRACT

The way that bilinguals produce phones in each of their languages provides a window into the nature of the bilingual phonological space. For stop consonants, if early sequential bilinguals, whose languages differ in voice onset time (VOT) distinctions, produce native-like VOTs in each of their languages, it would imply that they have developed separate first and second language phones, that is, language-specific phonetic realisations for stop-voicing distinctions. Given the ambiguous phonological status of Greek voiced stops, which has been debated but not investigated experimentally, Greek-English bilinguals can offer a unique perspective on this issue. We first recorded the speech of Greek and Australian-English monolinguals to observe native VOTs in each language for /p, t, b, d/ in word-initial and word-medial (post-vocalic and post-nasal) positions. We then recorded fluent, early Greek-Australian-English bilinguals in either a Greek or English language context; all communication occurred in only one language. The bilinguals in the Greek context were indistinguishable from the Greek monolinguals, whereas the bilinguals in the English context matched the VOTs of the Australian-English monolinguals in initial position, but showed some modest differences from them in the phonetically more complex medial positions. We interpret these results as evidence that bilingual speakers possess phonetic categories for voiced versus voiceless stops that are specific to each language, but are influenced by positional context differently in their second than in their first language.

12.
J Cogn Neurosci ; 16(5): 805-16, 2004 Jun.
Article in English | MEDLINE | ID: mdl-15200708

ABSTRACT

Perception of speech is improved when presentation of the audio signal is accompanied by concordant visual speech gesture information. This enhancement is most prevalent when the audio signal is degraded. One potential means by which the brain affords perceptual enhancement is thought to be through the integration of concordant information from multiple sensory channels in a common site of convergence, multisensory integration (MSI) sites. Some studies have identified potential sites in the superior temporal gyrus/sulcus (STG/S) that are responsive to multisensory information from the auditory speech signal and visual speech movement. One limitation of these studies is that they do not control for activity resulting from attentional modulation cued by such things as visual information signaling the onsets and offsets of the acoustic speech signal, as well as activity resulting from MSI of properties of the auditory speech signal with aspects of gross visual motion that are not specific to place of articulation information. This fMRI experiment uses spatial wavelet bandpass filtered Japanese sentences presented with background multispeaker audio noise to discern brain activity reflecting MSI induced by auditory and visual correspondence of place of articulation information that controls for activity resulting from the above-mentioned factors. The experiment consists of a low-frequency (LF) filtered condition containing gross visual motion of the lips, jaw, and head without specific place of articulation information, a midfrequency (MF) filtered condition containing place of articulation information, and an unfiltered (UF) condition. Sites of MSI selectively induced by auditory and visual correspondence of place of articulation information were determined by the presence of activity for both the MF and UF conditions relative to the LF condition. Based on these criteria, sites of MSI were found predominantly in the left middle temporal gyrus (MTG), and the left STG/S (including the auditory cortex). By controlling for additional factors that could also induce greater activity resulting from visual motion information, this study identifies potential MSI sites that we believe are involved with improved speech perception intelligibility.
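
As an illustration of spatial wavelet band-pass filtering of video frames of the kind described above, the sketch below uses the PyWavelets library; the wavelet family, decomposition depth and which detail levels count as "low" versus "mid" spatial frequency are assumptions for illustration, not the parameters used in the study.

```python
import numpy as np
import pywt

def wavelet_bandpass(frame, keep_levels, keep_approx=False,
                     wavelet="db4", n_levels=5):
    """Reconstruct a greyscale frame from selected wavelet detail levels.

    `keep_levels` are 1-based detail levels counted from the coarsest;
    coarse levels carry gross motion of lips, jaw and head, finer levels
    carry detailed articulatory information. Values are illustrative.
    """
    coeffs = pywt.wavedec2(frame, wavelet, level=n_levels)
    out = [coeffs[0] if keep_approx else np.zeros_like(coeffs[0])]
    for i, details in enumerate(coeffs[1:], start=1):
        out.append(details if i in keep_levels
                   else tuple(np.zeros_like(d) for d in details))
    return pywt.waverec2(out, wavelet)

frame = np.random.rand(256, 256)                  # stand-in for one video frame
low_freq = wavelet_bandpass(frame, keep_levels={1, 2}, keep_approx=True)
mid_freq = wavelet_bandpass(frame, keep_levels={3, 4})
```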


Subject(s)
Auditory Perception/physiology , Cerebral Cortex/physiology , Gestures , Speech Perception/physiology , Visual Perception/physiology , Adult , Brain Mapping , Cerebral Cortex/anatomy & histology , Female , Functional Laterality/physiology , Humans , Magnetic Resonance Imaging/methods , Male , Physical Stimulation/methods , Psychomotor Performance/physiology
13.
Neuroreport ; 14(17): 2213-8, 2003 Dec 02.
Article in English | MEDLINE | ID: mdl-14625450

ABSTRACT

This fMRI study explores brain regions involved with perceptual enhancement afforded by observation of visual speech gesture information. Subjects passively identified words presented in the following conditions: audio-only, audiovisual, audio-only with noise, audiovisual with noise, and visual only. The brain may use concordant audio and visual information to enhance perception by integrating the information in a converging multisensory site. Consistent with response properties of multisensory integration sites, enhanced activity in middle and superior temporal gyrus/sulcus was greatest when concordant audiovisual stimuli were presented with acoustic noise. Activity found in brain regions involved with planning and execution of speech production, in response to visual speech presented with degraded or absent auditory stimulation, is consistent with the use of an additional pathway through which speech perception is facilitated by a process of internally simulating the intended speech act of the observed speaker.


Subject(s)
Acoustic Stimulation/methods , Brain/physiology , Photic Stimulation/methods , Speech Perception/physiology , Visual Perception/physiology , Adult , Humans , Magnetic Resonance Imaging/methods , Male , Middle Aged , Neural Pathways/physiology , Psychomotor Performance/physiology