Results 1 - 4 of 4
1.
Comput Intell Neurosci; 2021: 6151651, 2021.
Article in English | MEDLINE | ID: mdl-34616446

ABSTRACT

Utterance clustering is an actively researched topic in audio signal processing and machine learning. This study aims to improve the performance of utterance clustering by processing multichannel (stereo) audio signals. Processed audio signals were generated by combining the left- and right-channel audio signals in several different ways and then extracting embedded features (also called d-vectors) from those processed signals. The study applied a Gaussian mixture model for supervised utterance clustering: in the training phase, a parameter-sharing Gaussian mixture model was trained for each speaker, and in the testing phase, the speaker with the maximum likelihood was selected as the detected speaker. Experiments with real audio recordings of multiperson discussion sessions showed that the proposed method using multichannel audio signals achieved significantly better performance than a conventional method using mono audio signals under more complex conditions.
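
A minimal sketch of the testing-phase decision rule described above, assuming d-vectors have already been extracted from the combined stereo channels: one Gaussian mixture model is fit per speaker, and a test utterance is assigned to the speaker whose model gives it the highest likelihood. Plain scikit-learn mixtures stand in for the paper's parameter-sharing variant, and the function names are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_speaker_models(dvectors_by_speaker, n_components=4):
    """Fit one GMM per speaker on that speaker's training d-vectors.

    Note: a plain per-speaker GMM, not the paper's parameter-sharing variant.
    """
    models = {}
    for speaker, dvectors in dvectors_by_speaker.items():
        gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
        gmm.fit(np.asarray(dvectors))  # shape: (n_utterances, dvector_dim)
        models[speaker] = gmm
    return models

def detect_speaker(models, dvector):
    """Pick the speaker whose GMM assigns the test d-vector the highest log-likelihood."""
    x = np.asarray(dvector).reshape(1, -1)
    return max(models, key=lambda speaker: models[speaker].score(x))
```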


Subjects
Cluster Analysis
2.
PLoS One; 15(7): e0235973, 2020.
Article in English | MEDLINE | ID: mdl-32658900

ABSTRACT

Various motivational theories emphasize that desired emotional outcomes guide behavioral choices. Although motivational theory and research have emphasized that behavior is affected by desired emotional outcomes, little research has focused on the impact of anticipated feelings about engaging in a behavior. The current research seeks to partly fill that void. Specifically, we borrow from affective forecasting research in suggesting that forecasts about engaging in performance-relevant behaviors can be more or less accurate. Furthermore, we suggest that the degree of accuracy has implications for self-reported task performance. To examine these ideas, we conducted two studies in which individuals made affective predictions about engaging in tasks and then later reported how they actually felt during task engagement. We also assessed their self-reported task performance. In Study 1, 214 workers provided affective forecasts about upcoming work tasks, and in Study 2, 185 students made forecasts about studying for an exam. Results based on polynomial regression were largely consistent across the studies. The accuracy of the forecasts did not conform to the pattern of affective forecasting accuracy typically found outside the performance domain. Furthermore, anticipated and experienced affect jointly predicted self-reported task performance in a consistent manner. Collectively, these findings suggest that taking anticipated affect, and its relationship with later experienced affect, into account provides a more comprehensive account of affect's role in task performance.
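
For concreteness, the polynomial regression the abstract refers to can be sketched as a second-order model regressing self-reported performance on anticipated affect, experienced affect, and their quadratic and interaction terms, in the response-surface style common to this literature. The specification and variable names below are illustrative assumptions, not the studies' exact model.

```python
import numpy as np
import statsmodels.api as sm

def fit_affect_performance_model(anticipated, experienced, performance):
    """Second-order polynomial regression: performance ~ A + E + A^2 + A*E + E^2.

    An assumed specification for illustration; columns are
    [const, A, E, A^2, A*E, E^2].
    """
    A = np.asarray(anticipated, dtype=float)
    E = np.asarray(experienced, dtype=float)
    X = sm.add_constant(np.column_stack([A, E, A**2, A * E, E**2]))
    return sm.OLS(np.asarray(performance, dtype=float), X).fit()
```

Inspecting the fitted coefficients (e.g., `model.params`) then shows how anticipated and experienced affect jointly relate to reported performance.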


Subjects
Affect/physiology, Emotions/physiology, Motivation/physiology, Self Report, Students/psychology, Task Performance and Analysis, Adult, Female, Forecasting, Humans, Male, Social Behavior
3.
Sci Rep; 10(1): 3867, 2020 Mar 2.
Article in English | MEDLINE | ID: mdl-32123191

ABSTRACT

It has long been claimed that certain configurations of facial movements are universally recognized as emotional expressions because they evolved to signal emotional information in situations that posed fitness challenges for our hunting and gathering hominin ancestors. Experiments from the last decade have called this particular evolutionary hypothesis into doubt by studying emotion perception in a wider sample of small-scale societies with discovery-based research methods. We replicate these newer findings in the Hadza of Northern Tanzania; the Hadza are semi-nomadic hunters and gatherers who live in tight-knit social units and collect wild foods for a large portion of their diet, making them a particularly relevant population for testing evolutionary hypotheses about emotion. Across two studies, we found little evidence of universal emotion perception. Rather, our findings are consistent with the hypothesis that people infer emotional meaning in facial movements using emotion knowledge embrained by cultural learning.


Subjects
Cross-Cultural Comparison, Emotions, Ethnicity, Female, Humans, Male, Tanzania/ethnology
4.
Emotion; 19(7): 1292-1313, 2019 Oct.
Article in English | MEDLINE | ID: mdl-30475026

ABSTRACT

The majority of studies designed to assess cross-cultural emotion perception use a choice-from-array task in which participants are presented with brief emotion stories and asked to choose between target and foil cues. This task has been widely criticized, evoking a lively and prolonged debate about whether it inadvertently helps participants to perform better than they otherwise would, resulting in the appearance of universality. In 3 studies, we provide a strong test of the hypothesis that the classic choice-from-array task constitutes a potent source of context that shapes performance. Participants from a remote small-scale cultural sample (the Hadza hunter-gatherers of Tanzania) and 2 urban industrialized samples (China and the United States) selected target vocalizations that were contrived for 6 non-English, nonuniversal emotion categories at levels significantly above chance. In studies of anger, disgust, fear, happiness, sadness, and surprise, such above-chance performance is interpreted as evidence of universality. These studies therefore support the hypothesis that choice-from-array tasks themselves encourage evidence of cross-cultural emotion perception. We discuss these findings with reference to the history of cross-cultural emotion perception studies and suggest several processes that may, together, give rise to the appearance of universal emotions.
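
As a generic illustration of the "significantly above chance" criterion (not the studies' actual analysis), a one-sided binomial test can compare the observed target-selection count against the chance rate implied by the number of response options in a choice-from-array trial. The counts below are placeholders.

```python
from scipy.stats import binomtest

def above_chance_p(hits, trials, n_options):
    """One-sided binomial test of target selections against chance (1/n_options)."""
    return binomtest(hits, trials, p=1.0 / n_options, alternative="greater").pvalue

# Placeholder example: 90 correct choices in 150 two-option trials.
print(above_chance_p(90, 150, n_options=2))  # small p-value -> above chance
```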


Subjects
Cross-Cultural Comparison, Emotions/physiology, Adult, Female, Humans, Male, Perception