1.
Front Psychol ; 14: 1124385, 2023.
Article in English | MEDLINE | ID: mdl-37179870

ABSTRACT

Human social performance has been a focus of theory and investigation for more than a century. Attempts to quantify social performance have focused on self-report and non-social performance measures grounded in intelligence-based theories. An expertise framework, when applied to individual differences in social interaction performance, offers novel insights and methods of quantification that could address limitations of prior approaches. The purposes of this review are threefold: first, to define the central concepts related to individual differences in social performance, with a particular focus on the intelligence-based framework that has dominated the field; second, to argue for a revised conceptualization of individual differences in social-emotional performance as a social expertise, outlining the putative components of social-emotional expertise and the potential means for their assessment; and third, to discuss the implications of an expertise-based conceptual framework for the application of computational modeling approaches in this area. Taken together, expertise theory and computational modeling methods have the potential to advance quantitative assessment of social interaction performance.

2.
Front Psychol ; 11: 277, 2020.
Article in English | MEDLINE | ID: mdl-32158414

ABSTRACT

Social interactions have long been a source of lay beliefs about the ways in which psychological constructs operate. Some of the most enduring psychological constructs to become common lay beliefs originated from research focused on social-emotional processes. "Emotional intelligence" and "social intelligence" are now mainstream notions, stemming from their appealing nature and their depiction in popular media. However, empirical attempts at quantifying the quality of social interactions have not been nearly as successful as measures of individual differences such as social skills, theory of mind, or social/emotional intelligence. Subjective lay ratings of interaction quality by naïve observers are nonetheless consistent both within and between observers. The goal of this paper is to describe recent empirical work on lay beliefs about social interaction quality and the ways in which those beliefs can be quantified. We then argue that the lay impressions formed about the quality of an interaction, perhaps via affect induction, are consistent with an expertise framework. Affect induction, beginning in infancy and occurring over time, creates instances in memory that accumulate and are ultimately measurable as social-emotional expertise (SEE). We discuss the ways in which lay beliefs about social interaction quality fit the definition of expertise, that is, the automatic, holistic processing of relevant stimuli. We then describe the promise of future work in this area, with a focus on (a) continued delineation of the thoughts, behaviors, and timing of behaviors that lead to high-quality social interactions; and (b) the viability of expertise as the conceptual model for individual differences in social-emotional ability.

3.
Assessment ; 27(8): 1718-1730, 2020 12.
Article in English | MEDLINE | ID: mdl-30132335

ABSTRACT

Social-emotional expertise (SEE) represents a synthesis of specific cognitive abilities related to social interactions, and emphasizes the timing and synchrony of behaviors that contribute to overall social-emotional ability. As a step toward SEE construct validation, we conducted three experiments to develop a self-report measure that captured key elements of our conceptualization of SEE. In Experiment 1, we generated and tested 76 items for a measure of SEE. The resulting 25-item scale showed good test-retest reliability, r(80) = .82, p < .001, and internal consistency (Cronbach's α = .90). Experiments 2 and 3 examined the relationships between the SEE Scale and related constructs. Convergent constructs, such as emotional intelligence, r(885) = .62, p < .01, and social anxiety, r(885) = -.59, p < .01, and discriminant constructs, such as social desirability, r(885) = .19, p < .01, and self-monitoring, r(885) = .28, p < .01, were related in the expected directions. Additionally, two factors were statistically identified: Adaptability and Expressivity. The items contributing to each factor describe the ability to successfully navigate social environments and the ability to successfully convey affect and ideas to other people, respectively. These factors correlate with related constructs in distinct and theoretically relevant ways.


Subject(s)
Emotions, Fear, Humans, Psychometrics, Reproducibility of Results, Social Desirability, Surveys and Questionnaires
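To make the reported psychometrics concrete, here is a minimal sketch of how the two headline statistics, internal consistency (Cronbach's α) and test-retest reliability, are conventionally computed. The data below are simulated placeholders, not the SEE Scale responses; only the 25-item layout is borrowed from the abstract.

    # Conventional computation of Cronbach's alpha and test-retest reliability.
    # All data here are simulated stand-ins, not the SEE Scale responses.
    import numpy as np
    from scipy import stats

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: (n_respondents, n_items) matrix of item responses."""
        k = items.shape[1]
        item_var_sum = items.var(axis=0, ddof=1).sum()   # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
        return (k / (k - 1)) * (1 - item_var_sum / total_var)

    rng = np.random.default_rng(0)
    trait = rng.normal(size=(200, 1))                        # shared latent trait
    time1 = trait + rng.normal(scale=0.8, size=(200, 25))    # 25-item scale, time 1
    time2 = time1 + rng.normal(scale=0.5, size=time1.shape)  # retest with noise

    alpha = cronbach_alpha(time1)
    r, p = stats.pearsonr(time1.sum(axis=1), time2.sum(axis=1))
    print(f"alpha = {alpha:.2f}; test-retest r = {r:.2f}, p = {p:.3g}")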
4.
Lang Learn Dev ; 10(1): 51-67, 2014.
Article in English | MEDLINE | ID: mdl-24489521

ABSTRACT

F0-based acoustic measures were extracted from a brief, sentence-final target word spoken during structured play interactions between mothers and their 3- to 14-month-old infants, and were analyzed in relation to demographic variables, DSM-IV Axis-I clinical diagnoses, and their common modifiers. F0 range (ΔF0) was negatively correlated with infant age and number of children. ΔF0 was significantly smaller in clinically depressed mothers and mothers diagnosed with depression in partial remission, relative to non-depressed mothers, mothers diagnosed with depression in full remission, and those diagnosed with depressive disorder not otherwise specified. ΔF0 was also significantly lower in mothers experiencing their first major depressive episode relative to mothers with recurrent depression. Deficits in ΔF0 were specific to diagnosed clinical depression, and were not well predicted by elevated self-report scores alone, or by diagnosed anxiety disorders. Mothers with higher ΔF0 had infants with reportedly larger productive vocabularies, but depression was unrelated to vocabulary development. Implications for cognitive-linguistic development are discussed.
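For readers unfamiliar with the measure, a minimal sketch of computing an F0 range (ΔF0) from a recording follows, assuming librosa's pYIN pitch tracker; the file name and the 75-600 Hz pitch bounds are illustrative assumptions, not the study's actual extraction pipeline.

    # Sketch of delta-F0 (F0 range) extraction from one recording, assuming
    # librosa's pYIN tracker; file name and pitch bounds are placeholders.
    import numpy as np
    import librosa

    y, sr = librosa.load("mother_speech.wav", sr=None)   # hypothetical recording
    f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=75.0, fmax=600.0, sr=sr)

    voiced_f0 = f0[voiced_flag & ~np.isnan(f0)]          # voiced, tracked frames only
    delta_f0 = voiced_f0.max() - voiced_f0.min()         # F0 range in Hz
    print(f"delta-F0 = {delta_f0:.1f} Hz over {voiced_f0.size} voiced frames")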

5.
Cogn Emot ; 24(8): 1421-1430, 2010 Oct 22.
Article in English | MEDLINE | ID: mdl-25125772

ABSTRACT

The current study evaluated the quality of facial and vocal emotional expressions in abusive and non-abusive mothers, and assessed whether mothers' emotional expression quality was related to their children's cognitive processing of emotion and behavioural problems. Relative to non-abusive mothers, abusive mothers produced less prototypical angry facial expressions, and less prototypical angry, happy, and sad vocal expressions. The intensity of mothers' facial and vocal expressions of anger was related to their children's externalising and internalising symptoms. Additionally, children's cognitive processing of their mothers' angry faces was related to the quality of mothers' facial expressions. Results are discussed with respect to the impact of early emotional learning environments on children's socioemotional development and risk for psychopathology.

6.
J Autism Dev Disord ; 39(10): 1392-400, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19449097

ABSTRACT

Few studies have examined vocal expressions of emotion in children with autism. We tested the hypothesis that during social interactions, children diagnosed with autism would exhibit less extreme laugh acoustics than their nonautistic peers. Laughter was recorded during a series of playful interactions with an examiner. Results showed that children with autism exhibited only one type of laughter, whereas comparison participants exhibited two types. No group differences were found for laugh duration, mean fundamental frequency (F0) values, change in F0, or number of laughs per bout. Findings are interpreted to suggest that children with autism express laughter primarily in response to positive internal states, rather than using laughter to negotiate social interactions.


Subject(s)
Autistic Disorder/psychology, Laughter/physiology, Analysis of Variance, Child, Expressed Emotion, Female, Humans, Laughter/psychology, Male, Speech Acoustics
7.
Percept Psychophys ; 69(6): 930-41, 2007 Aug.
Article in English | MEDLINE | ID: mdl-18018974

ABSTRACT

Speech routinely provides cues as to the sex of the talker; in voiced sounds, these cues mainly reflect dimorphism in vocal anatomy. This dimorphism is not symmetrical, however, since during adolescent development, males specifically diverge from a trajectory previously shared with females. We therefore predicted that listeners would show a corresponding perceptual advantage for male sounds in talker-sex discrimination, a hypothesis tested using very brief, one- to eight-cycle vowel segments. The expected performance asymmetry was observed in threshold-like tests of multiple different vowels in Experiments 1-3, and a signal detection design in Experiment 4 helped rule out possible response bias effects. In confirming our counterintuitive prediction, the present study illustrates that a biological and evolutionary perspective can be helpful in understanding indexical cuing in speech.


Subject(s)
Attention, Judgment, Phonetics, Speech Perception, Verbal Behavior, Cues, Female, Humans, Male, Sex Factors
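The signal detection design mentioned for Experiment 4 separates sensitivity from response bias. A minimal sketch of the standard indices (d′ and criterion c) appears below; the hit and false-alarm counts are invented for illustration and do not come from the study.

    # Standard signal-detection indices: d-prime (sensitivity) and criterion c
    # (response bias). The trial counts below are invented for illustration.
    from scipy.stats import norm

    hits, misses = 78, 22                 # "male" responses on male-talker trials
    false_alarms, correct_rej = 30, 70    # "male" responses on female-talker trials

    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rej)

    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")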
8.
Annu Rev Psychol ; 54: 329-49, 2003.
Article in English | MEDLINE | ID: mdl-12415074

ABSTRACT

A flurry of theoretical and empirical work concerning the production of and response to facial and vocal expressions has occurred in the past decade. That emotional expressions express emotions is a tautology but may not be a fact. Debates have centered on universality, the nature of emotion, and the link between emotions and expressions. Modern evolutionary theory is informing more models, emphasizing that expressions are directed at a receiver, that the interests of sender and receiver can conflict, that there are many determinants of sending an expression in addition to emotion, that expressions influence the receiver in a variety of ways, and that the receiver's response is more than simply decoding a message.


Subject(s)
Emotions, Facial Expression, Speech Acoustics, Expressed Emotion, Humans, Interpersonal Relations, Nonverbal Communication, Personal Construct Theory, Social Perception
9.
Cogn Emot ; 17(2): 327-340, 2003 Mar.
Article in English | MEDLINE | ID: mdl-29715722

ABSTRACT

Drawing from an affect-induction model of laughter (Bachorowski & Owren, 2001; Owren & Bachorowski, 2002), we propose that "antiphonal" laughter, that is, laughter that occurs during or immediately after a social partner's laugh, is a behavioural manifestation of a conditioned positive emotional response to another individual's laugh acoustics. To test hypotheses concerning the occurrence of antiphonal laughter, participants (n = 148) were tested as part of either same- or mixed-sex friend or stranger dyads, and were audio-recorded while they played brief games intended to facilitate laugh production. An index of antiphonal laughter for each dyad was derived using Yule's Q. Significantly more antiphonal laughter was produced in friend than in stranger dyads, and females in mixed-sex dyads produced more antiphonal laughter than did their male partners. Antiphonal laughter may therefore reflect a mutually positive stance between social partners, and function to reinforce shared positive affective experiences.
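Yule's Q, the index used above, is a 2x2 association measure: Q = (ad - bc) / (ad + bc) for cell counts a through d. A minimal sketch follows; the cell counts are invented, and the window-tallying scheme is an assumption rather than the paper's exact coding procedure.

    # Yule's Q for a 2x2 contingency table of laugh co-occurrence in a dyad.
    # Cell counts are invented placeholders.
    def yules_q(a: int, b: int, c: int, d: int) -> float:
        """a: both partners laugh; b: only A laughs; c: only B laughs; d: neither."""
        return (a * d - b * c) / (a * d + b * c)

    q = yules_q(a=18, b=7, c=9, d=66)     # hypothetical tallies of time windows
    print(f"Yule's Q = {q:.2f}")          # +1: perfect co-occurrence; -1: never together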

10.
Ann N Y Acad Sci ; 1000: 244-65, 2003 Dec.
Article in English | MEDLINE | ID: mdl-14766635

ABSTRACT

In his writing, Darwin emphasized direct, veridical links between vocal acoustics and vocalizer emotional state. Yet he also recognized that acoustics influence the emotional state of listeners. This duality, that particular vocal expressions are likely linked to particular internal states yet may specifically function to influence others, lies at the heart of contemporary efforts aimed at understanding affect-related vocal acoustics. That work has focused most on speech acoustics and laughter, where the most common approach has been to argue that these signals reflect the occurrence of discrete emotional states in the vocalizer. An alternative view is that the underlying states can be better characterized using a small number of continuous dimensions, such as arousal (or activation) and a valenced dimension such as pleasantness. A brief review of the evidence suggests, however, that neither approach is correct. Data from speech-related research provide little support for a discrete-emotions view, with emotion-related aspects of the acoustics seeming more to reflect vocalizer arousal. However, links to a corresponding emotional valence dimension have also been difficult to demonstrate, suggesting a need for interpretations outside this traditional dichotomy. We therefore suggest a different perspective in which the primary function of signaling is not to express signaler emotion, but rather to impact listener affect and thereby influence the behavior of these individuals. In this view, it is not expected that nuances of signaler states will be highly correlated with particular features of the sounds produced, but rather that vocalizers will use acoustics that readily affect listener arousal and emotion. Attributions concerning signaler states thus become a secondary outcome, reflecting inferences that listeners base on their own affective responses to the sounds, their past experience with such signals, and the context in which signaling occurs. This approach has found recent support in laughter research, with the bigger picture being that the sounds of emotion, whether carried in speech, laughter, or other species-typical signals, are not informative, veridical beacons on vocalizer states so much as tools of social influence used to capitalize on listener sensitivities.


Subject(s)
Affect, Social Perception, Speech Acoustics, Voice, Biological Evolution, Cues, Humans, Speech
11.
12.
Psychol Sci ; 13(3): 268-71, 2002 May.
Article in English | MEDLINE | ID: mdl-12009049

ABSTRACT

Depressed mothers use less of the exaggerated prosody that is typical of infant-directed (ID) speech than do nondepressed mothers. We investigated the consequences of this reduced perceptual salience in ID speech for infant learning. Infants of nondepressed mothers readily learned that their mothers' speech signaled a face, whereas infants of depressed mothers failed to learn that their mothers' speech signaled the face. Infants of depressed mothers did, however, show strong learning in response to speech produced by an unfamiliar nondepressed mother. These outcomes indicate that the reduced perceptual salience of depressed mothers' ID speech could lead to deficient learning in otherwise competent learners.


Subject(s)
Association Learning/physiology, Depressive Disorder/complications, Learning/physiology, Mother-Child Relations, Speech/physiology, Female, Humans, Infant
13.
Infancy ; 2(4): 537-548, 2001 Oct.
Article in English | MEDLINE | ID: mdl-33451190

ABSTRACT

Infant-directed (ID) speech was recorded from mothers as they interacted with their 4- to 12-month-old infants. Hierarchical regression analyses revealed that two variables, age of the mother and mother's diagnosed depression, independently accounted for significant proportions of the variance in the extent of change in fundamental frequency (ΔF0). Specifically, depressed mothers produced ID speech with smaller ΔF0 than did nondepressed mothers, and older mothers produced ID speech with larger ΔF0 than did younger mothers. Mothers who were taking antidepressant medication and who were diagnosed as being in at least partial remission produced ID speech with mean ΔF0 values that were comparable to those of nondepressed mothers. These results demonstrate explicit associations between major depressive disorder and an acoustic attribute of ID speech that is highly salient to young infants.
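As a final illustration, the hierarchical regression strategy described above, entering predictor blocks in steps and testing the R² increment, can be sketched with statsmodels; the simulated variables below (maternal age, diagnosed depression, ΔF0) are placeholders for the study's actual data.

    # Sketch of a two-step hierarchical regression with an R-squared increment
    # test; all variables are simulated stand-ins for the study's data.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 120
    age = rng.normal(28, 6, n)                    # maternal age in years
    depressed = rng.integers(0, 2, n)             # diagnosed depression (0/1)
    delta_f0 = 40 + 0.8 * age - 12 * depressed + rng.normal(0, 10, n)

    step1 = sm.OLS(delta_f0, sm.add_constant(age)).fit()
    step2 = sm.OLS(delta_f0, sm.add_constant(np.column_stack([age, depressed]))).fit()

    f_stat, p_value, df_diff = step2.compare_f_test(step1)   # increment F test
    print(f"R^2 step 1 = {step1.rsquared:.3f}, step 2 = {step2.rsquared:.3f}")
    print(f"block increment: F = {f_stat:.2f}, p = {p_value:.4f}")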
