Results 1 - 3 of 3
1.
J Speech Lang Hear Res; 66(11): 4280-4314, 2023 Nov 9.
Article in English | MEDLINE | ID: mdl-37850877

ABSTRACT

PURPOSE: This study aims to further our understanding of prosodic entrainment and its different subtypes by analyzing a single corpus of conversations with 12 different methods and comparing the resulting measures.

METHOD: Entrainment on three fundamental frequency features was analyzed in a subset of recordings from the LUCID corpus (Baker & Hazan, 2011) using the following methods: global proximity, global convergence, local proximity, local convergence, and local synchrony (Levitan & Hirschberg, 2011); prediction using linear mixed-effects models (Schweitzer & Lewandowski, 2013); a geometric approach (Lehnert-LeHouillier, Terrazas, & Sandoval, 2020); time-aligned moving average (Kousidis et al., 2008); the HYBRID method (De Looze et al., 2014); cross-recurrence quantification analysis (e.g., Fusaroli & Tylén, 2016); and windowed, lagged cross-correlation (Boker et al., 2002). We employed entrainment measures on a local timescale (i.e., on adjacent utterances), on a global timescale (i.e., over larger time frames), and on a time series-based timescale larger than adjacent utterances but smaller than entire conversations.

RESULTS: Results varied across methods.

CONCLUSIONS: The results suggest that each method may measure a slightly different type of entrainment. The complex implications this has for existing and future research are discussed.


Subject(s)
Communication; Humans; Linear Models; Time Factors
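
To make the local and global measures concrete, here is a minimal Python sketch of three of the entrainment measures named above, in the spirit of Levitan and Hirschberg (2011). The function names and the per-utterance mean-f0 values are illustrative assumptions, not taken from the study's materials.

import numpy as np

def local_proximity(f0_a, f0_b):
    """Mean absolute difference in an f0 feature across adjacent
    utterances of speakers A and B (smaller = more entrained).
    f0_a[i] is assumed to be A's utterance immediately preceding
    B's i-th utterance."""
    return np.mean(np.abs(np.asarray(f0_a) - np.asarray(f0_b)))

def local_synchrony(f0_a, f0_b):
    """Pearson correlation of the feature across adjacent utterance
    pairs: positive values mean the speakers move together."""
    return np.corrcoef(f0_a, f0_b)[0, 1]

def global_convergence(f0_a, f0_b):
    """Compare per-pair distances in the first vs. second half of the
    conversation; a negative difference indicates convergence."""
    d = np.abs(np.asarray(f0_a) - np.asarray(f0_b))
    half = len(d) // 2
    return d[half:].mean() - d[:half].mean()

# Illustrative usage with invented mean-f0 values (Hz) per utterance pair:
a = [210.0, 205.3, 198.7, 202.1, 199.4, 197.8]
b = [180.2, 188.9, 190.5, 195.0, 196.3, 196.9]
print(local_proximity(a, b), local_synchrony(a, b), global_convergence(a, b))
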
2.
PLoS Comput Biol; 19(6): e1011169, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37294830

ABSTRACT

Humans can quickly recognize objects in a dynamically changing world. This ability is showcased by observers' success at recognizing objects in rapidly changing image sequences presented at rates as fast as 13 ms/image. To date, the mechanisms that govern dynamic object recognition remain poorly understood. Here, we developed deep learning models for dynamic recognition and compared different computational mechanisms, contrasting feedforward with recurrent processing, single-image with sequential processing, and different forms of adaptation. We found that only models that integrate images sequentially via lateral recurrence mirrored human performance (N = 36) and were predictive of trial-by-trial responses across image durations (13-80 ms/image). Importantly, models with sequential lateral-recurrent integration also captured how human performance changes as a function of image presentation duration: models processing images for a few time steps captured human object recognition at shorter presentation durations, while models processing images for more time steps captured it at longer durations. Furthermore, augmenting such a recurrent model with adaptation markedly improved dynamic recognition performance and accelerated its representational dynamics, thereby predicting human trial-by-trial responses using fewer processing resources. Together, these findings provide new insights into the mechanisms that render object recognition so fast and effective in a dynamic visual world.


Subject(s)
Pattern Recognition, Visual; Visual Perception; Humans; Pattern Recognition, Visual/physiology; Visual Perception/physiology; Neural Networks, Computer; Recognition, Psychology/physiology; Acclimatization
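
As a rough illustration of the mechanism the study favors, the following toy PyTorch model integrates an image sequence through lateral recurrence at a single hidden stage and adds a simple activity-dependent adaptation term. The architecture, layer sizes, and adaptation rule are assumptions made for illustration; they are not the authors' actual models.

import torch
import torch.nn as nn

class LateralRecurrentRecognizer(nn.Module):
    """Toy model: a small feedforward encoder whose hidden stage is
    augmented with lateral recurrence, so that evidence is integrated
    across the images of a rapid sequence (one time step per image)."""
    def __init__(self, n_classes=10, hidden=64, adapt=0.0):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, hidden),
        )
        self.lateral = nn.Linear(hidden, hidden)  # recurrence within the stage
        self.readout = nn.Linear(hidden, n_classes)
        self.adapt = adapt  # strength of activity-dependent suppression

    def forward(self, seq):  # seq: (batch, time, 1, height, width)
        h = torch.zeros(seq.size(0), self.lateral.in_features)
        trace = torch.zeros_like(h)  # running activity used for adaptation
        for t in range(seq.size(1)):
            drive = self.encode(seq[:, t]) - self.adapt * trace
            h = torch.relu(drive + self.lateral(h))  # lateral integration
            trace = 0.9 * trace + h
        return self.readout(h)

model = LateralRecurrentRecognizer(adapt=0.1)
logits = model(torch.randn(2, 6, 1, 28, 28))  # two 6-image sequences
print(logits.shape)  # torch.Size([2, 10])

With adapt > 0, recently active units respond less to repeated drive, which is one simple way to cash out the adaptation idea discussed in the abstract.
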
3.
J Cogn; 4(1): 35, 2021.
Article in English | MEDLINE | ID: mdl-34430794

ABSTRACT

Intergroup dynamics shape the ways in which we interact with other people. We feel more empathy towards ingroup members than towards outgroup members, and can even feel pleasure, known as schadenfreude, when an outgroup member experiences misfortune. Here, we test the extent to which these intergroup biases emerge during interactions with robots. We measured trial-by-trial fluctuations in emotional reactivity to the outcomes of a competitive reaction time game to assess both empathy and schadenfreude in arbitrarily assigned human-human and human-robot teams. Across four experiments (total n = 361), we observed a consistent empathy and schadenfreude bias driven by team membership. People felt more empathy towards ingroup members than outgroup members and more schadenfreude towards outgroup members. This intergroup bias did not depend on the nature of the agent: the same effects were observed for human-human and human-robot teams, and people reported similar levels of empathy and schadenfreude towards human and robot players. The human likeness of the robot did not consistently influence the bias either; similar empathy and schadenfreude biases were observed for both humanoid and mechanoid robots. For all teams, the bias was modulated by the level of team identification: individuals who identified more strongly with their team showed stronger intergroup empathy and schadenfreude biases. Together, these results show that the intergroup dynamics that shape our interactions with people can also shape our interactions with robots, and they highlight the importance of taking intergroup biases into account when examining the social dynamics of human-robot interaction.
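
For readers who want to see the bias computation in miniature, here is a hypothetical analysis sketch in Python/pandas. The column names and rating values are invented, and the real study's measure of trial-by-trial emotional reactivity is more elaborate; the sketch only shows the basic contrast between reactions to ingroup and outgroup outcomes.

import pandas as pd

# Hypothetical trial-level data: "rating" is self-reported pleasure after
# watching the target player lose a round; low ratings index empathy-like
# concern, high ratings index schadenfreude-like pleasure.
trials = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "target": ["ingroup", "outgroup"] * 4,
    "rating": [2.0, 4.5, 1.5, 5.0, 2.5, 4.0, 1.0, 5.5],
})

# Per-participant mean rating by target group, then the intergroup contrast:
means = trials.groupby(["participant", "target"])["rating"].mean().unstack()
bias = means["outgroup"] - means["ingroup"]  # > 0: more pleasure at outgroup misfortune
print(bias)
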
