1.
Psychon Bull Rev ; 2023 Nov 27.
Article in English | MEDLINE | ID: mdl-38010455

ABSTRACT

As reliance on digital communication grows, so does the importance of communicating effectively with text. Yet when communicating with text, benefits from other channels, such as hand gesture, are diminished. Hand gestures support comprehension and disambiguate characteristics of the spoken message by providing information in a visual channel supporting speech. Can emoji (pictures used to supplement text communication) perform similar functions? Here, we ask whether emoji improve comprehension of indirect speech. Indirect speech is ambiguous, and appropriate comprehension depends on the receiver decoding context cues, such as hand gesture. We adapted gesture conditions from prior research (Kelly et al., 1999, Experiment 2) to a digital, text-based format, using emoji rather than gestures. Participants interpreted 12 hypothetical text-message exchanges that ended with indirect speech, communicated via text only, text+emoji, or emoji only, in a between-subjects design. As previously seen for hand gesture, emoji improved comprehension. Participants were more likely to correctly interpret indirect speech in the emoji-only condition than in the text+emoji and text-only conditions, and more likely in the text+emoji condition than in the text-only condition. Thus, emoji are not mere decoration, but rather are integrated with text to communicate and disambiguate complex messages. Similar to gesture in face-to-face communication, emoji improve comprehension during text-based communication.

2.
Atten Percept Psychophys ; 81(7): 2354-2364, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31044395

ABSTRACT

Viewing co-speech hand gestures with spoken phrases enhances memory for phrases, as compared to when the phrases are presented without gesture. Prior work investigating the mechanism underlying the effect of gesture on memory has implicated engagement of the motor system; when the hands are engaged in an unrelated motor task while viewing gesture, the beneficial effect of gesture is absent. However, one alternative interpretation of these findings is that the beneficial effect of gesture disappears due to mismatched contexts at encoding and retrieval: The hands are engaged during either encoding or retrieval, but not during both stages. Here we examined whether matching the motor context at encoding and retrieval plays a role in the beneficial effect of gesture on memory during a phrase recall task. Participants viewed phrases presented with and without gesture and were assigned to one of four conditions that determined whether they would complete an unrelated motor task at (1) encoding only, (2) retrieval only, (3) both encoding and retrieval, or (4) neither. During stages in which they were not completing a motor task, participants' hands rested in their laps. We found that gesture enhanced memory for phrases both when participants engaged in the unrelated motor task at encoding and retrieval and when they did not complete the motor task during either stage. Furthermore, phrases observed with gesture were more likely to be paraphrased than to be recalled literally. Together, these findings demonstrate that gesture can enhance memory even when the motor system is engaged in another task, as long as that same task is performed at retrieval.


Subjects
Gestures, Hand, Memory/physiology, Photic Stimulation/methods, Psychomotor Performance/physiology, Adolescent, Adult, Female, Humans, Male, Mental Recall/physiology, Middle Aged, Random Allocation, Video Recording/methods, Young Adult
3.
Front Psychol ; 10: 711, 2019.
Article in English | MEDLINE | ID: mdl-30984091

ABSTRACT

Dual-task costs are often significantly reduced or eliminated when both tasks use compatible stimulus-response (S-R) pairs. Either by design or unintentionally, S-R pairs used in dual-task experiments that produce small dual-task costs typically have two properties that may reduce dual-task interference. One property is that they are easy to keep separate; specifically, one task is often visual-spatial and contains little verbal information, and the other task is primarily auditory-verbal and has no significant spatial component. The other property is that the two sets of S-R pairs are often compatible at the set level; specifically, the collection of stimuli for each task is strongly related to the collection of responses for that task, even if there is no direct correspondence between the individual items in the sets. In this paper, we directly test which of these two properties drives the absence of large dual-task costs. We used stimuli (images of hands and auditory words) that, when previously paired with responses (button presses and vocal utterances), produced minimal dual-task costs, but we manipulated the shape of the hands in the images and the auditory words. If set-level compatibility is driving efficient performance, then these changes should not affect dual-task costs. However, we found large changes in the dual-task costs depending on the specific stimuli and responses. We conclude that set-level compatibility is not sufficient to minimize dual-task costs. We connect these findings to divisions within the working memory system and discuss implications for understanding dual-task performance more broadly.

4.
Psychon Bull Rev ; 22(5): 1403-9, 2015 Oct.
Article in English | MEDLINE | ID: mdl-25754530

ABSTRACT

Dual-task costs can be greatly reduced or even eliminated when both tasks use highly compatible S-R associations. According to Greenwald (Journal of Experimental Psychology: Human Perception and Performance, 30, 632-636, 2003), this occurs because the appropriate response can be accessed without engaging performance-limiting response selection processes, a proposal consistent with the embodied cognition framework in that it suggests that stimuli can automatically activate motor codes (e.g., Pezzulo et al., New Ideas in Psychology, 31(3), 270-290, 2013). To test this account, we reversed the stimulus-response mappings for one or both tasks so that some participants had to "do the opposite" of what they perceived. In these reversed conditions, stimuli resembled the environmental outcome of the alternative (incorrect) response. Nonetheless, reversed tasks were performed without costs even when paired with an unreversed task. This finding suggests that the separation of the central codes across tasks (e.g., Wickens, 1984) is more critical than the specific S-R relationships; dual-task costs can be avoided when the tasks engage distinct modality-based systems.


Subjects
Attention, Visual Pattern Recognition, Psychomotor Performance, Reaction Time, Reversal Learning, Speech Perception, Choice Behavior, Color Perception, Female, Humans, Imitative Behavior, Male, Semantics, Young Adult
5.
Psychon Bull Rev ; 20(5): 1005-10, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23444106

ABSTRACT

Implicit learning in the serial reaction time (SRT) task is sometimes disrupted by the presence of a secondary distractor task (e.g., Schumacher & Schwarb Journal of Experimental Psychology: General 138:270-290, 2009) and at other times is not (e.g., Cohen, Ivry, & Keele Journal of Experimental Psychology: Learning, Memory, and Cognition 16:17-30, 1990). In the present study, we used an instructional manipulation to investigate how participants' conceptualizations of the task affect sequence learning under dual-task conditions. Two experimental groups differed only in terms of the instructions and presequence training. One group was instructed that they were completing two separate tasks, whereas the other group was instructed that they were performing a single, integrated task. The separate group showed sequence learning, while the integrated group did not. These findings suggest that the conceptualization of task boundaries affects the availability of the sequential information necessary for implicit learning.


Subjects
Concept Formation/physiology, Psychomotor Performance/physiology, Serial Learning/physiology, Adult, Humans, Random Allocation, Task Performance and Analysis, Young Adult
6.
J Exp Psychol Hum Percept Perform ; 39(2): 413-32, 2013 Apr.
Article in English | MEDLINE | ID: mdl-22866763

ABSTRACT

Why are dual-task costs reduced with ideomotor (IM) compatible tasks (Greenwald & Shulman, 1973; Lien, Proctor & Allen, 2002)? In the present experiments, we first examine three different measures of single-task performance (pure single-task blocks, mixed blocks, and long stimulus onset asynchrony [SOA] trials in dual-task blocks) and two measures of dual-task performance (simultaneous stimulus presentation blocks and simultaneous stimulus presentation trials in blocks with mixed SOAs), and show that these different measures produce different estimates of the cost. Next we examine whether the near elimination of costs can be explained by assuming that one or both of the tasks bypasses capacity-limited central operations. The results indicate that both tasks must be IM-compatible to nearly eliminate the dual-task costs, suggesting that the relationship between the tasks plays a critical role in overlapping performance.


Subjects
Attention, Color Perception, Visual Pattern Recognition, Psychomotor Performance, Reaction Time, Speech Perception, Adolescent, Female, Humans, Male, Orientation, Speech Recognition Software, Statistics as Topic, Young Adult
7.
Cogn Emot ; 25(1): 73-88, 2011 Jan.
Article in English | MEDLINE | ID: mdl-21278907

ABSTRACT

Studies investigating the effect of emotional expression on spatial orienting to a gazed-at location have produced mixed results. The present study investigated the role of affective context in the integration of emotion processing and gaze-triggered orienting. In three experiments, a face gazed nonpredictively to the left or right, and then its expression became fearful or happy. Participants identified (Experiments 1 and 2) or detected (Experiment 3) a peripheral target presented 225 or 525 ms after the gaze cue onset. In Experiments 1 and 3 the targets were either threatening (a snarling dog) or nonthreatening (a smiling baby); in Experiment 2 the targets were neutral. With emotionally valenced targets, the gaze-cuing effect was larger when the face was fearful than when it was happy, but only at the longer cue-target interval. With neutral targets, there was no interaction between gaze and expression. Our results indicate that a meaningful context optimizes attentional integration of gaze and expression information.


Subjects
Emotions, Facial Expression, Fear, Orientation, Psychomotor Performance, Adult, Attention, Female, Humans, Male, Photic Stimulation, Reaction Time, Space Perception