Results 1 - 9 of 9
1.
Int J Psychophysiol; 193: 112235, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37604281

ABSTRACT

It is widely accepted that impaired safety learning to a safe stimulus is a pathological feature of anxiety disorders. Safety learning refers to learning that a stimulus is associated with the absence of threat. The cognitive mechanisms that underlie successful threat and safety learning are, however, poorly understood. This study aimed to identify physiological markers, including neural oscillations and event-related potentials (ERPs), that predict successful threat and safety learning. To detect potential differences in these markers, we measured EEG in a fear learning framework combined with a subsequent memory paradigm. Thirty-seven participants were asked to memorize a series of associations between faces and an aversive unconditioned stimulus (US) or its omission. We found a decrease in alpha-band power over occipital brain regions during learning for both threatening faces (conditioned stimuli, CS+) and safe faces (control stimuli, CS-) that were subsequently remembered as being associated with a US or not. No effects in the theta band were found. With regard to ERPs, a late positive potential (LPP) and a P300 component were larger for remembered than for forgotten CS-US associations. The P300 was also enhanced for remembered USs and US omissions, replicating previous findings. These results point to the importance of cognitive resource allocation as an underlying mechanism of fear learning, and to electrophysiological measurements as potential biomarkers of successful threat and safety learning.
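The occipital alpha-power contrast described in this abstract can be sketched in a few lines. This is a minimal illustration on synthetic signals, not the study's actual pipeline; the function name, sampling rate, and epoch construction are assumptions for the example:

```python
import numpy as np
from scipy.signal import welch

def alpha_power(epoch, fs, band=(8.0, 12.0)):
    """Mean spectral power in the alpha band (8-12 Hz) for one EEG epoch."""
    freqs, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), 256))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Synthetic check: a 10 Hz oscillation carries far more alpha power than
# low-amplitude noise, the kind of difference a subsequent-memory contrast
# (remembered vs. forgotten trials) would quantify.
fs = 250                                   # sampling rate in Hz (assumed)
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
alpha_epoch = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
noise_epoch = 0.1 * rng.standard_normal(t.size)
```

In a design like the study's, such per-trial estimates at occipital channels would be averaged separately for subsequently remembered and forgotten CS+/CS- trials before the comparison.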

2.
Brain Sci; 12(4), 2022 Mar 24.
Article in English | MEDLINE | ID: mdl-35447965

ABSTRACT

Previous research suggests that predictive mechanisms are essential in perceiving social interactions. However, these studies did not isolate action prediction (a priori expectations about how partners in an interaction react to one another) from action integration (a posteriori processing of both partners' actions). This study investigated action prediction during social interactions while controlling for integration confounds. Twenty participants viewed 3D animations depicting an action-reaction interaction between two actors. At the start of each action-reaction interaction, one actor performs a social action. Immediately after, instead of presenting the other actor's reaction, a black screen covers the animation for a short time (occlusion duration) until a still frame depicting a precise moment of the reaction is shown (reaction frame). The moment shown in the reaction frame is either temporally aligned with the occlusion duration or deviates from it by 150 ms or 300 ms. Fifty percent of the action-reaction trials were semantically congruent and the remainder were incongruent; e.g., one actor offers to shake hands and the other reciprocally shakes their hand (congruent action-reaction) versus one actor offers to shake hands and the other leans down (incongruent action-reaction). Participants made fast congruency judgments. We hypothesized that judging the congruency of action-reaction sequences is aided by temporal predictions. The findings supported this hypothesis: linear speed-accuracy scores showed that congruency judgments were facilitated when the reaction frame was temporally aligned with the occlusion duration, compared to 300 ms deviations, suggesting that observers internally simulate the temporal unfolding of an observed social interaction. Furthermore, we explored the link between autistic traits and sensitivity to these temporal deviations.
Overall, the study offers new evidence of prediction mechanisms underpinning the perception of social interactions in isolation from action-integration confounds.
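The abstract's "linear speed-accuracy scores" are not given a formula here; a widely used choice is Vandierendonck's linear integrated speed-accuracy score (LISAS), which combines mean correct reaction time with the error rate on a common scale. A hypothetical sketch, assuming that measure:

```python
import numpy as np

def lisas(correct_rts, accuracy):
    """Linear integrated speed-accuracy score: mean correct RT plus the
    error proportion rescaled onto the RT scale (lower = better)."""
    rts = np.asarray(correct_rts, dtype=float)
    acc = np.asarray(accuracy, dtype=float)   # 1 = correct, 0 = error
    pe = 1.0 - acc.mean()        # proportion of errors
    s_rt = rts.std(ddof=1)       # RT standard deviation
    s_pe = acc.std(ddof=1)       # error standard deviation
    if s_pe == 0.0:              # error-free data: score reduces to mean RT
        return rts.mean()
    return rts.mean() + (s_rt / s_pe) * pe
```

Scoring speed and accuracy jointly this way guards against speed-accuracy trade-offs masking a condition effect, which matters when comparing aligned versus deviant occlusion durations.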

3.
Cortex; 135: 352-354, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33131804
4.
Clin Psychol Sci; 8(4): 756-772, 2020 Jul.
Article in English | MEDLINE | ID: mdl-34414018

ABSTRACT

Although behavioral therapies are effective for posttraumatic stress disorder (PTSD), access for patients is limited. Attention-bias modification (ABM), a cognitive-training intervention designed to reduce attention bias for threat, can be broadly disseminated using technology. We remotely tested an ABM mobile app for PTSD. Participants (N = 689) were randomly assigned to personalized ABM, nonpersonalized ABM, or placebo training. ABM was a modified dot-probe paradigm delivered daily for 12 sessions. Personalized ABM included words selected using a recommender algorithm; placebo training included only neutral words. Primary outcomes (PTSD and anxiety) and secondary outcomes (depression and PTSD symptom clusters) were collected at baseline, after training, and at 5-week follow-up. Mechanisms assessed during treatment were attention bias and self-reported threat sensitivity. No group differences emerged on outcomes or attention bias. Nonpersonalized ABM showed greater declines in self-reported threat sensitivity than placebo (p = .044). This study constitutes the largest mobile-based trial of ABM to date. The findings do not support the effectiveness of mobile ABM for PTSD.
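In the dot-probe paradigm this abstract describes, attention bias toward threat is conventionally indexed as the reaction-time difference between trials where the probe replaces the neutral word and trials where it replaces the threat word. A minimal sketch of that conventional index (the function name and numbers are illustrative, not taken from the trial):

```python
import numpy as np

def attention_bias_index(rt_probe_at_neutral, rt_probe_at_threat):
    """Dot-probe bias score in ms: positive values indicate vigilance
    toward threat (faster responses when the probe replaces the threat
    word than when it replaces the neutral word)."""
    return float(np.mean(rt_probe_at_neutral) - np.mean(rt_probe_at_threat))

# Illustrative participant: 40 ms of threat vigilance.
bias = attention_bias_index([560, 540], [500, 520])
```

ABM training then biases probe placement (e.g., probes predominantly replacing neutral words) with the aim of driving this score toward zero or below.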

5.
Psychon Bull Rev; 25(5): 1751-1769, 2018 Oct.
Article in English | MEDLINE | ID: mdl-29119405

ABSTRACT

Research in a number of related fields has recently begun to focus on the perceptual, cognitive, and motor workings of cooperative behavior. There appears to be enough coherence in these efforts to refer to the study of the mechanisms underlying human cooperative behavior as the field of joint-action (Knoblich, Butterfill, & Sebanz, 2011; Sebanz, Bekkering, & Knoblich, 2006). Yet, the development of theory in this field has not kept pace with the proliferation of research findings. We propose a hierarchical predictive framework for the study of joint-action that we call the predictive joint-action model (PJAM). The presentation of this theoretical framework is organized into three sections. In the first section, we summarize hierarchical predictive principles and discuss their application to joint-action. In the second section, we juxtapose PJAM's assumptions with empirical evidence from the current literature on joint-action. In the third section, we discuss the overall success of the hierarchical predictive approach to account for the burgeoning empirical literature on joint-action research. Finally, we consider the model's capacity to generate novel and testable hypotheses about joint-action. This is done with the larger goal of uncovering the empirical and theoretical pieces that are still missing in a comprehensive understanding of joint action.


Subject(s)
Cooperative Behavior, Social Behavior, Cognition, Humans, Interpersonal Relations, Theoretical Models, Research Design
6.
Neuroimage; 152: 425-436, 2017 May 15.
Article in English | MEDLINE | ID: mdl-28284802

ABSTRACT

Working together feels easier with some people than with others. We asked participants to perform a visual search task either alone or with a partner while simultaneously measuring each participant's EEG. Local phase synchronization and inter-brain phase synchronization were generally higher when subjects jointly attended to a visual search task than when they attended to the same task individually. Some participants searched the visual display more efficiently and made faster decisions when working as a team, whereas other dyads did not benefit from working together. These inter-team differences in behavioral performance gain in the visual search task were reliably associated with inter-team differences in local and inter-brain phase synchronization. Our results suggest that phase synchronization constitutes a neural correlate of social facilitation, and may help to explain why some teams perform better than others.
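Local and inter-brain phase synchronization of the kind reported above is commonly quantified with the phase-locking value (PLV) computed on narrow-band (e.g., band-pass filtered) signals. The abstract does not name its exact estimator, so this sketch shows one standard choice under that assumption:

```python
import numpy as np
from scipy.signal import hilbert

def plv(sig_a, sig_b):
    """Phase-locking value between two narrow-band signals:
    1 = constant phase difference, near 0 = no stable phase relation."""
    phase_a = np.angle(hilbert(sig_a))   # instantaneous phase via analytic signal
    phase_b = np.angle(hilbert(sig_b))
    return float(np.abs(np.mean(np.exp(1j * (phase_a - phase_b)))))

# Synthetic signals: two locked copies vs. a pair whose phases drift apart.
fs = 250
t = np.arange(0, 2, 1 / fs)
locked = np.sin(2 * np.pi * 10 * t)      # perfectly phase-locked with itself
drifting = np.sin(2 * np.pi * 17 * t)    # different frequency: phase drifts
```

Computed between homologous electrodes of the two team members it yields an inter-brain measure; computed between electrode pairs within one participant it yields a local measure, each compared across joint and individual task conditions.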


Subject(s)
Cerebral Cortex/physiology, Cortical Synchronization, Decision Making/physiology, Social Facilitation, Adolescent, Adult, Female, Humans, Male, Neural Pathways, Photic Stimulation, Psychomotor Performance, Reaction Time, Visual Perception/physiology, Young Adult
7.
Proc Natl Acad Sci U S A; 113(31): 8669-74, 2016 Aug 02.
Article in English | MEDLINE | ID: mdl-27436897

ABSTRACT

Studies of social perception report acute human sensitivity to where another's attention is aimed. Here we ask whether humans are also sensitive to how the other's attention is deployed. Observers viewed videos of actors reaching to targets without knowing that those actors were sometimes choosing to reach to one of the targets (endogenous control) and sometimes being directed to reach to one of the targets (exogenous control). Experiments 1 and 2 showed that observers responded more rapidly when actors chose where to reach, yet were at chance when guessing whether a reach was chosen or directed. This implicit sensitivity to attention control held when the actors' faces or limbs were masked (experiment 3) and when only the earliest phase of the actors' movements was visible (experiment 4). Individual differences in sensitivity to choice correlated with an independent measure of social aptitude. We conclude that humans are sensitive to attention control through an implicit kinematic process linked to empathy. The findings support the hypothesis that social cognition involves the predictive modeling of others' attentional states.


Subject(s)
Attention/physiology, Choice Behavior/physiology, Empathy, Social Perception, Adolescent, Adult, Biomechanical Phenomena, Female, Humans, Male, Movement/physiology, Photic Stimulation, Visual Perception/physiology, Young Adult
8.
Neuropsychologia; 75: 402-10, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26100561

ABSTRACT

Seeing a speaker's facial gestures can significantly improve speech comprehension, especially in noisy environments. However, the nature of the visual information from the speaker's facial movements that is relevant for this enhancement is still unclear. Like auditory speech signals, visual speech signals unfold over time and contain both dynamic configural information and luminance-defined local motion cues; two information sources that are thought to engage anatomically and functionally separate visual systems. Whereas some past studies have highlighted the importance of local, luminance-defined motion cues in audiovisual speech perception, the contribution of dynamic configural information signalling changes in form over time has not yet been assessed. We therefore attempted to single out the contribution of dynamic configural information to audiovisual speech processing. To this end, we measured word identification performance in noise using unimodal auditory stimuli and audiovisual stimuli. In the audiovisual condition, speaking faces were presented as point-light displays achieved via motion capture of the original talker. Point-light displays could be isoluminant, to minimise the contribution of effective luminance-defined local motion information, or have added luminance contrast, allowing the combined effect of dynamic configural cues and local motion cues. Audiovisual enhancement was found in both the isoluminant and contrast-based luminance conditions compared to the auditory-only condition, demonstrating for the first time the specific contribution of dynamic configural cues to audiovisual speech improvement. These findings imply that globally processed changes in a speaker's facial shape contribute significantly to the perception of articulatory gestures and the analysis of audiovisual speech.


Subject(s)
Gestures, Speech Perception, Visual Perception, Acoustic Stimulation, Adult, Cues, Female, Humans, Male, Middle Aged, Noise, Photic Stimulation, Young Adult
9.
Exp Brain Res; 227(3): 311-22, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23686149

ABSTRACT

The exploration of a familiar object by hand can benefit its identification by eye. What is unclear is how much this multisensory cross-talk reflects shared shape representations versus generic semantic associations. Here, we compare several simultaneous priming conditions to isolate the potential contributions of shape and semantics in haptic-to-visual priming. Participants explored a familiar object manually (haptic prime) while trying to name a visual object that was gradually revealed in increments of spatial resolution. Shape priming was isolated in a comparison of identity priming (shared semantic category and shape) with category priming (same category, but different shapes). Semantic priming was indexed by the comparisons of category priming with unrelated haptic primes. The results showed that both factors mediated priming, but that their relative weights depended on the reliability of the visual information. Semantic priming dominated in Experiment 1, when participants were free to use high-resolution visual information, but shape priming played a stronger role in Experiment 2, when participants were forced to respond with less reliable visual information. These results support the structural description hypothesis of haptic-visual priming (Reales and Ballesteros in J Exp Psychol Learn Mem Cogn 25:644-663, 1999) and are also consistent with the optimal integration theory (Ernst and Banks in Nature 415:429-433, 2002), which proposes a close coupling between the reliability of sensory signals and their weight in decision making.


Subject(s)
Form Perception/physiology, Recognition (Psychology)/physiology, Touch Perception/physiology, Visual Perception/physiology, Adult, Female, Humans, Male, Physical Stimulation, Reaction Time/physiology