Results 1 - 20 of 86
1.
Biol Psychol ; 183: 108665, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37619811

ABSTRACT

Previous research on emotional face processing has shown that emotional faces such as fearful faces may be processed without visual awareness. However, evidence for nonconscious attention capture by fearful faces is limited. In fact, studies using sensory manipulation of awareness (e.g., backward masking paradigms) have shown that fearful faces do not attract attention during subliminal viewing, nor when they are task-irrelevant. Here, we used a three-phase inattentional blindness paradigm and electroencephalography to examine whether faces (fearful and neutral) capture attention under different conditions of awareness and task-relevancy. We found that the electrophysiological marker for attention capture, the N2-posterior-contralateral (N2pc), was elicited by face stimuli only when participants were aware of the faces and when they were task-relevant (phase 3). When participants were unaware of the presence of faces (phase 1) or when the faces were irrelevant to the task (phase 2), no N2pc was observed. Together with our previous work, we conclude that fearful faces, or faces in general, do not attract attention unless we want them to.
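The N2pc referenced throughout these abstracts is a lateralised difference wave: activity at posterior electrodes contralateral to the attended stimulus minus activity at ipsilateral electrodes. A minimal sketch of that computation follows; the electrode pairing (e.g., PO7/PO8), the 200-300 ms window, and the array layout are conventional assumptions for illustration, not parameters reported by these studies.

```python
import numpy as np

def n2pc_amplitude(contra, ipsi, times, window=(0.2, 0.3)):
    """Mean N2pc amplitude: contralateral minus ipsilateral difference wave.

    contra, ipsi: (n_trials, n_times) arrays of posterior-electrode ERPs
    (e.g., PO7/PO8) time-locked to display onset, in microvolts.
    The 200-300 ms window is a conventional N2pc range, assumed here
    rather than taken from any of the studies above.
    """
    diff = contra.mean(axis=0) - ipsi.mean(axis=0)   # grand-average difference wave
    mask = (times >= window[0]) & (times <= window[1])
    return float(diff[mask].mean())                  # more negative = stronger capture
```

A reliably negative value in the window is what these studies treat as evidence of attention capture; an amplitude near zero (as in the unaware or task-irrelevant phases) indicates its absence.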

2.
J Vis ; 23(7): 15, 2023 07 03.
Article in English | MEDLINE | ID: mdl-37486298

ABSTRACT

Visual attention and visual working memory (VWM) are intertwined processes that allow navigation of the visual world. These systems can compete for highly limited cognitive resources, creating interference effects when both operate in tandem. Performing an attentional task while maintaining a VWM load often leads to a loss of memory information. These losses are seen even with very simple visual search tasks. Previous research has argued that this may be due to the attentional selection process of choosing the target item from among surrounding nontarget items. Over two experiments, the current study disentangles the roles of search and selection in visual search and their influence on a retained VWM load. Experiment 1 revealed that, when search stimuli were relatively simple, target-absent searches (which did not require attentional selection) did not provoke memory interference, whereas target-present searches did. In Experiment 2, the number of potential targets was varied in the search displays. In one condition, participants were required to select any one of the items displayed, requiring an attentional selection but no need to search for a specific item. Importantly, this condition led to memory interference to the same extent as a condition where a single target was presented among nontargets. Together, these results show that the process of attentional selection is a sufficient cause for interference with a concurrently maintained VWM load.


Subject(s)
Memory, Short-Term , Visual Perception , Humans
3.
iScience ; 26(7): 107148, 2023 Jul 21.
Article in English | MEDLINE | ID: mdl-37408689

ABSTRACT

It has been repeatedly claimed that emotional faces readily capture attention, and that they may be processed without awareness. Yet some observations cast doubt on these assertions. Part of the problem may lie in the experimental paradigms employed. Here, we used a free viewing visual search task during electroencephalographic recordings, where participants searched for either fearful or neutral facial expressions among distractor expressions. Fixation-related potentials were computed for fearful and neutral targets, and the responses were compared for stimuli that were consciously reported versus those that were not. We showed that awareness was associated with an electrophysiological negativity starting at around 110 ms, while emotional expressions were distinguished on the N170 and early posterior negativity only when stimuli were consciously reported. These results suggest that during unconstrained visual search, the earliest electrical correlate of awareness may emerge as early as 110 ms, and fixating at an emotional face without reporting it may not produce any unconscious processing.

4.
Neuropsychologia ; 188: 108634, 2023 09 09.
Article in English | MEDLINE | ID: mdl-37391127

ABSTRACT

When searching for a lost item, we tune attention to the known properties of the object. Previously, it was believed that attention is tuned to the veridical attributes of the search target (e.g., orange), or an attribute that is slightly shifted away from irrelevant features towards a value that can more optimally distinguish the target from the distractors (e.g., red-orange; optimal tuning). However, recent studies showed that attention is often tuned to the relative feature of the search target (e.g., redder), so that all items that match the relative features of the target equally attract attention (e.g., all redder items; relational account). Optimal tuning was shown to occur only at a later stage of identifying the target. However, the evidence for this division mainly relied on eye tracking studies that assessed the first eye movements. The present study tested whether this division can also be observed when the task is completed with covert attention and without moving the eyes. We used the N2pc in the EEG of participants to assess covert attention, and found comparable results: Attention was initially tuned to the relative colour of the target, as shown by a significantly larger N2pc to relatively matching distractors than to a target-coloured distractor. However, in the response accuracies, a slightly shifted, "optimal" distractor interfered most strongly with target identification. These results confirm that early (covert) attention is tuned to the relative properties of an item, in line with the relational account, while later decision-making processes may be biased to optimal features.


Subject(s)
Color Perception , Eye Movements , Humans , Color Perception/physiology , Reaction Time/physiology , Eye-Tracking Technology , Electroencephalography , Visual Perception/physiology
5.
Front Neurosci ; 17: 1152220, 2023.
Article in English | MEDLINE | ID: mdl-37034154

ABSTRACT

In the current EEG study, we used a dot-probe task in conjunction with backward masking to examine the neural activity underlying awareness and spatial processing of fearful faces and the neural processes for subsequent cued spatial targets. We presented face images under different viewing conditions (subliminal and supraliminal) and manipulated the relation between a fearful face in the pair and a subsequent target. Our mass univariate analysis showed that fearful faces elicit the N2-posterior-contralateral, indexing spatial attention capture, only when they are presented supraliminally. Consistent with this, the multivariate pattern analysis revealed a successful decoding of the location of the fearful face only in the supraliminal viewing condition. Additionally, the spatial attention capture by fearful faces modulated the processing of subsequent lateralised targets that were spatially congruent with the fearful face, in both behavioural and electrophysiological data. There was no evidence for nonconscious processing of the fearful faces in the current paradigm. We conclude that spatial attentional capture by fearful faces requires visual awareness and it is modulated by top-down task demands.

6.
Cognition ; 236: 105420, 2023 07.
Article in English | MEDLINE | ID: mdl-36905828

ABSTRACT

Previous research has identified three mechanisms that guide visual attention: bottom-up feature contrasts, top-down tuning, and the trial history (e.g., priming effects). However, only a few studies have simultaneously examined all three mechanisms. Hence, it is currently unclear how they interact or which mechanisms dominate over others. With respect to local feature contrasts, it has been claimed that a pop-out target can only be selected immediately in dense displays when the target has a high local feature contrast, but not when the displays are sparse, which leads to an inverse set-size effect. The present study critically evaluated this view by systematically varying local feature contrasts (i.e., set size), top-down knowledge, and the trial history in pop-out search. We used eye tracking to distinguish between early selection and later identification-related processes. The results revealed that early visual selection was mainly dominated by top-down knowledge and the trial history: When attention was biased to the target feature, either by valid pre-cueing (top-down) or automatic priming, the target could be localised immediately, regardless of display density. Bottom-up feature contrasts only modulated selection when the target was unknown and attention was biased to the non-targets. We also replicated the often-reported finding of reliable feature contrast effects in the mean RTs, but showed that these were due to later, target identification processes (e.g., in the target dwell times). Thus, contrary to the prevalent view, bottom-up feature contrasts in dense displays do not seem to directly guide attention, but only facilitate nontarget rejection, probably by facilitating nontarget grouping.


Subject(s)
Cues , Eye-Tracking Technology , Humans , Knowledge , Reaction Time , Visual Perception , Pattern Recognition, Visual
7.
Trends Cogn Sci ; 27(4): 391-403, 2023 04.
Article in English | MEDLINE | ID: mdl-36841692

ABSTRACT

Theories of attention posit that attentional guidance operates on information held in a target template within memory. The template is often thought to contain veridical target features, akin to a photograph, and to guide attention to objects that match the exact target features. However, recent evidence suggests that attentional guidance is highly flexible and often guided by non-veridical features, a subset of features, or only associated features. We integrate these findings and propose that attentional guidance maximizes search efficiency based on a 'good-enough' principle to rapidly localize candidate target objects. Candidates are then serially interrogated to make target-match decisions using more precise information. We suggest that good-enough guidance optimizes the speed-accuracy-effort trade-offs inherent in each stage of visual search.


Subject(s)
Memory, Short-Term , Visual Perception , Humans
8.
Psychol Res ; 87(7): 2031-2038, 2023 Oct.
Article in English | MEDLINE | ID: mdl-36633707

ABSTRACT

Visual working memory (VWM) allows for the brief retention of approximately three to four items. Interestingly, when these items are similar to each other in a feature domain, memory recall performance is elevated compared to when they are dissimilar. This similarity benefit is currently not accounted for by models of VWM. Previous research has suggested that this similarity benefit may arise from selective attentional prioritisation in the maintenance phase. However, the similarity effect has not been contrasted under circumstances where dissimilar item types can adequately compete for memory resources. In Experiment 1, similarity benefits were seen for all-similar over all-dissimilar displays. This was also seen in mixed displays: change detection performance was higher when one of the two similar items changed, compared to when the dissimilar item changed. Surprisingly, the similarity effect was stronger in these mixed displays than when comparing the all-similar and all-dissimilar displays. Experiment 2 investigated this further by examining how attention was allocated in the memory encoding phase via eye movements. Results revealed that attention prioritised similar over dissimilar items in the mixed displays. Similar items were more likely to receive the first fixation and were fixated more often than dissimilar items. Furthermore, dwell times were elongated for dissimilar items, suggesting that encoding was less efficient. These results suggest that there is an attentional strategy towards prioritising similar items over dissimilar items, and that this strategy's influence can be observed in the memory encoding phase.


Subject(s)
Attention , Memory, Short-Term , Humans , Mental Recall , Cognition , Eye Movements , Visual Perception
9.
Atten Percept Psychophys ; 85(2): 418-437, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36653521

ABSTRACT

It is well known that visual search for a mirror target (i.e., a horizontally flipped item) is more difficult than search for other-oriented items (e.g., vertically flipped items). Previous studies have typically attributed costs of mirror search to early, attention-guiding processes but could not rule out contributions from later processes. In the present study we used eye tracking to distinguish between early, attention-guiding processes and later target identification processes. The results of four experiments revealed a marked human weakness in identifying mirror targets: Observers appear to frequently fail to classify a mirror target as a target on first fixation and to continue with search even after having directly looked at the target. Awareness measures corroborated that the location of a mirror target could not be reported above chance level after it had been fixated once. This mirror blindness effect explained a large proportion (45-87%) of the overall costs of mirror search, suggesting that part of the difficulties with mirror search are rooted in later, object identification processes (not attentional guidance). Mirror blindness was significantly reduced, but not completely eliminated, when both the target and non-targets were held constant, showing that even perfect top-down knowledge cannot fully prevent it. The finding that non-target certainty reduced mirror blindness suggests that object identification is not solely achieved by comparing a selected item to a target template. These results demonstrate that templates that guide search toward targets are not identical to the templates used to conclusively identify those targets.


Subject(s)
Knowledge , Pattern Recognition, Visual , Humans , Visual Perception
10.
Brain Sci ; 12(7)2022 Jun 24.
Article in English | MEDLINE | ID: mdl-35884630

ABSTRACT

Previous research on the relationship between attention and emotion processing has focused essentially on consciously viewed, supraliminal stimuli, while the attention-emotion interplay remains unexplored in situations where visual awareness is restricted. Here, we presented participants with face pairs in a backward masking paradigm and examined the electrophysiological activity in response to fearful and neutral expressions under different conditions of attention (spatially attended vs. unattended) and stimulus visibility (subliminal vs. supraliminal). We found an enhanced N2 (visual awareness negativity, VAN) and an enhanced P3 for supraliminal compared to subliminal faces. The VAN, indexing early perceptual awareness, was enhanced when the faces were spatially attended compared to when they were unattended, showing that the VAN does not require focused spatial attention but can be enhanced by it. Fearful relative to neutral expressions enhanced the early neural activity (N2) regardless of spatial attention but only in the supraliminal viewing condition. However, fear-related enhancements on later neural activity (P3) were found when stimuli were both attended and presented supraliminally. These findings suggest that visual awareness is needed for emotion processing during both early and late stages. Spatial attention is required for emotion processing at the later stage but not at the early stage.

11.
Atten Percept Psychophys ; 84(6): 1913-1924, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35859034

ABSTRACT

In visual search, attention can be directed towards items matching top-down goals, but this must compete with factors such as salience that can capture attention. However, under some circumstances it appears that attention can avoid known distractor features. Chang and Egeth (Psychological Science, 30 (12), 1724-1732, 2019) found that such inhibitory effects reflect a combination of distractor-feature suppression and target-feature enhancement. In the present study (N = 48), we extend these findings by revealing that suppression and enhancement effects guide overt attention. On search trials (75% of trials) participants searched for a diamond shape among several other shapes. On half of the search trials all objects were the same colour (e.g., green) and on the other half of the search trials one of the non-target shapes appeared in a different colour (e.g., red). On interleaved probe trials (25% of trials), subjects were presented with four ovals. One of the ovals was in either the colour of the target or the colour of the distractor from the search trials. The other three ovals were in neutral colours. Critically, we found that attention was overtly captured by target colours and avoided distractor colours when they were viewed in a background of neutral colours. In addition, we provided a time course of attentional control. Within visual search tasks we observed inhibition aiding early attentional effects, indexed by the time it took gaze to first reach the target, as well as later decision-making processes, indexed by the time for a decision to be made once the target was found.


Subject(s)
Eye Movements , Visual Perception , Attention/physiology , Humans , Inhibition, Psychological , Reaction Time/physiology , Visual Perception/physiology
12.
Neuropsychologia ; 172: 108283, 2022 07 29.
Article in English | MEDLINE | ID: mdl-35661782

ABSTRACT

It remains unclear to date whether spatial attention towards emotional faces is contingent on, or independent of visual awareness. To investigate this question, a bilateral attentional blink paradigm was used in which lateralised fearful faces were presented at various levels of detectability. Twenty-six healthy participants were presented with two rapid serial streams of human faces, while they attempted to detect a pair of target faces (T2) displayed in close or distant succession of a first target pair (T1). Spatial attention shifting to the T2 fearful faces, indexed by the N2-posterior-contralateral component, was dependent on visual awareness and its magnitude covaried with the visual awareness negativity, a neural marker of awareness at the perceptual level. Additionally, information consolidation in working memory, indexed by the sustained posterior contralateral negativity, positively correlated with the level of visual awareness and spatial attention shifting. These findings demonstrate that spatial attention shifting to fearful faces depends on visual awareness, and these early processes are closely linked to information maintenance in working memory.


Subject(s)
Attentional Blink , Attention , Facial Expression , Fear/psychology , Humans , Memory, Short-Term
13.
Brain Imaging Behav ; 16(5): 2426-2443, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35739373

ABSTRACT

Voxel-wise meta-analyses of task-evoked regional activity were conducted for healthy individuals during the unconscious processing of emotional and neutral faces with an aim to examine whether and how different experimental paradigms influenced brain activation patterns. Studies were categorized into sensory and attentional unawareness paradigms. Thirty-four fMRI studies including 883 healthy participants were identified. Across experimental paradigms, unaware emotional faces elicited stronger activation of the limbic system, striatum, inferior frontal gyrus, insula and the temporal lobe, compared to unaware neutral faces. Crucially, in attentional unawareness paradigms, unattended emotional faces elicited a right-lateralized increased activation (i.e., right amygdala, right temporal pole), suggesting a right hemisphere dominance for processing emotional faces during inattention. By contrast, in sensory unawareness paradigms, unseen emotional faces elicited increased activation of the left striatum, the left amygdala and the right middle temporal gyrus. Additionally, across paradigms, unconsciously processed positive emotions were found associated with more activation in temporal and parietal cortices whereas unconsciously processed negative emotions elicited stronger activation in subcortical regions, compared to neutral faces.


Subject(s)
Facial Expression , Magnetic Resonance Imaging , Humans , Emotions/physiology , Amygdala/diagnostic imaging , Amygdala/physiology , Brain Mapping , Brain/diagnostic imaging , Brain/physiology
14.
Cortex ; 151: 30-48, 2022 06.
Article in English | MEDLINE | ID: mdl-35390549

ABSTRACT

The relationship between visual attention and visual awareness has long been hotly debated. There has been limited evidence on whether the neural marker of spatial attention precedes or succeeds that of visual awareness in the processing of emotional faces. The current study aims to investigate the temporal sequence between the electrophysiological signatures of visual awareness (the visual awareness negativity - VAN) and spatial attention (the N2pc), in contexts where emotional faces are task-relevant (Experiment 1) or task-irrelevant (Experiment 2). Fifty-six healthy participants were presented with fearful and neutral faces under different levels of visibility using backward masking. They either performed a face detection task (Experiment 1) or a contrast detection task while ignoring the faces (Experiment 2). Compared to subliminal stimuli, supraliminal stimuli produced more negative ERPs at 170-270 msec and 210-310 msec in Experiments 1 and 2, respectively, identified as the VAN. The P3, a component also frequently considered to reflect awareness, produced a similar effect with larger amplitudes for supraliminal than subliminal stimuli in both experiments. With respect to spatial attention, a significant N2pc was observed in response to fearful faces but only in the supraliminal viewing condition of Experiment 1, in which faces were task-relevant. Crucially, the VAN was found to precede the N2pc in this case. Our results suggest that spatial attention as indexed by the N2pc, is oriented towards fearful faces when they are relevant to participants' task and are consciously processed. Moreover, an early phenomenal stage of awareness, reflected by the VAN, precedes spatial attention shifting to fearful faces.


Subject(s)
Emotions , Facial Expression , Emotions/physiology , Evoked Potentials/physiology , Fear , Humans , Reaction Time/physiology
15.
J Vis ; 22(2): 8, 2022 02 01.
Article in English | MEDLINE | ID: mdl-35156992

ABSTRACT

It is well known that attention can be automatically attracted to salient items. However, recent studies show that it is possible to avoid distraction by a salient item (with a known feature), leading to facilitated search. This article tests a proposed mechanism for distractor inhibition: that a mental representation of the distractor feature held in visual working memory (VWM) allows attention to be guided away from the distractor. We tested this explanation by examining color-based inhibition in visual search for a shape target with and without VWM load. In Experiment 1 the presence of a distractor facilitated visual search under low and high VWM loads, as reflected in faster response times when the distractor was present (compared to absent), and in fewer eye movements to the salient distractor than the non-target items. However, the eye movement inhibition effect was noticeably weakened in the load conditions. Experiment 2 explored further, to distinguish between inhibition of the distractor color and activation of the (irrelevant) target color. Intermittently presenting single-color search trials that contained either a target, a distractor, or a neutral-colored singleton revealed that the distractor color attracted attention less than the neutral color with and without VWM load. The target color, however, only attracted attention more than neutral colors under no load, whereas a VWM load completely eliminated this effect. This suggests that although VWM plays a role in guiding attention to the (irrelevant) target color, distractor-feature inhibition can operate independently.


Subject(s)
Inhibition, Psychological , Memory, Short-Term , Eye Movements , Humans , Memory, Short-Term/physiology , Reaction Time , Visual Perception/physiology
16.
IEEE Trans Cybern ; 52(1): 357-371, 2022 Jan.
Article in English | MEDLINE | ID: mdl-32149677

ABSTRACT

Brain electroencephalography (EEG), the complex, weak, multivariate, nonlinear, and nonstationary time series, has been recently widely applied in neurocognitive disorder diagnoses and brain-machine interface developments. With its specific features, unlabeled EEG is not well addressed by conventional unsupervised time-series learning methods. In this article, we handle the problem of unlabeled EEG time-series clustering and propose a novel EEG clustering algorithm, that we call mwcEEGc. The idea is to map the EEG clustering to the maximum-weight clique (MWC) searching in an improved Fréchet similarity-weighted EEG graph. The mwcEEGc considers the weights of both vertices and edges in the constructed EEG graph and clusters EEG based on their similarity weights instead of calculating the cluster centroids. To the best of our knowledge, it is the first attempt to cluster unlabeled EEG trials using MWC searching. The mwcEEGc achieves high-quality clusters with respect to intracluster compactness as well as intercluster scatter. We demonstrate the superiority of mwcEEGc over ten state-of-the-art unsupervised learning/clustering approaches by conducting detailed experiments with the standard clustering validity criteria on 14 real-world brain EEG datasets. We also show that mwcEEGc satisfies the theoretical properties of clustering, such as richness, consistency, and order independence.
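The general strategy described here, clustering by extracting heavily connected cliques from a similarity-weighted graph rather than by computing centroids, can be sketched in a few lines. This is a toy illustration only, not the mwcEEGc algorithm: Pearson correlation stands in for the paper's improved Fréchet similarity, and the clique search is greedy rather than exact maximum-weight.

```python
import numpy as np
from itertools import combinations

def clique_clusters(trials, threshold=0.8):
    """Greedy clique-based clustering of time-series trials.

    Toy sketch of clustering via clique extraction from a
    similarity-weighted graph (NOT the mwcEEGc algorithm):
    - connect trials whose pairwise similarity exceeds `threshold`;
    - repeatedly seed a clique at the most strongly connected vertex
      and grow it greedily, peeling each clique off as one cluster.
    """
    n = len(trials)
    sim = np.corrcoef(np.asarray(trials))          # similarity stand-in
    edges = {(i, j) for i, j in combinations(range(n), 2)
             if sim[i, j] >= threshold}

    def connected(a, b):
        return (min(a, b), max(a, b)) in edges

    remaining = set(range(n))
    clusters = []
    while remaining:
        # seed: remaining vertex with the largest summed edge weight
        seed = max(remaining,
                   key=lambda v: sum(sim[v, u] for u in remaining
                                     if u != v and connected(v, u)))
        clique = {seed}
        # greedily add vertices that stay fully connected to the clique
        for v in sorted(remaining - {seed}, key=lambda v: -sim[seed, v]):
            if all(connected(v, u) for u in clique):
                clique.add(v)
        clusters.append(sorted(clique))
        remaining -= clique
    return clusters
```

The abstract's point survives the simplification: cluster membership is decided entirely by pairwise similarity weights within the graph, with no centroid ever computed.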


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Algorithms , Brain/diagnostic imaging , Cluster Analysis , Time Factors
17.
Cortex ; 144: 151-167, 2021 11.
Article in English | MEDLINE | ID: mdl-34666299

ABSTRACT

Visual short-term memory (VSTM) is an important resource that allows temporarily storing visual information. Current theories posit that elementary features (e.g., red, green) are encoded and stored independently of each other in VSTM. However, they have difficulty explaining the similarity effect, that similar items can be remembered better than dissimilar items. In Experiment 1, we tested (N = 20) whether the similarity effect may be due to storing items in a context-dependent manner in VSTM (e.g., as the reddest/yellowest item). In line with a relational account of VSTM, we found that the similarity effect is not due to feature similarity, but to an enhanced sensitivity for detecting changes when the relative colour of a to-be-memorised item changes (e.g., from reddest to not-reddest) than when an item undergoes the same change but retains its relative colour (e.g., still reddest). Experiment 2 (N = 20) showed that VSTM load, as indexed by the CDA amplitude in the EEG, was smaller when the colours were ordered so that they all had the same relationship than when the same colours were out-of-order, requiring encoding different relative colours. With this, we report two new effects in VSTM - a relational detection advantage that describes an enhanced sensitivity to relative changes in change detection, and a relational CDA effect, which reflects that VSTM load, as indexed by the CDA, scales with the number of (different) relative features between the memory items. These findings support a relational account of VSTM and question the view that VSTM stores features such as colours independently of each other.


Subject(s)
Memory, Short-Term , Visual Perception , Humans , Mental Recall
18.
Cognition ; 212: 104732, 2021 07.
Article in English | MEDLINE | ID: mdl-33862440

ABSTRACT

The attentional template is often described as the mental representation that drives attentional selection and guidance, for instance, in visual search. Recent research suggests that this template is not a veridical representation of the sought-for target, but instead an altered representation that allows more efficient search. The current paper contrasts two such theories. Firstly, the Optimal Tuning account, which posits that the attentional template shifts to an exaggerated target value to maximise the signal-to-noise ratio between similar targets and non-targets. Secondly, the Relational account, which states that instead of tuning to feature values, attention is directed to the relative value created by the search context, e.g. all redder items or the reddest item. Both theories are empirically supported, but used different paradigms (perceptual decision tasks vs. visual search), and different attentional measures (probe response accuracy vs. gaze capture). The current design incorporates both paradigms and measures. The results reveal that while Optimal Tuning shifts are observed in probe trials they do not drive early attention or eye-movement behaviour in visual search. Instead, early attention follows the Relational Account, selecting all items with the relative target colour (e.g., redder). This suggests that the masked probe trials used in Optimal Tuning do not probe the attentional template that guides attention. In Experiment 3 we find that optimal tuning shifts correspond in magnitude to purely perceptual shifts created by contrast biases in the visual search arrays. This suggests that the shift in probe responses may in fact be a perceptual artefact rather than a strategic adaptation to optimise the signal-to-noise ratio. These results highlight the distinction between early attentional mechanisms and later, target identification mechanisms.
SIGNIFICANCE STATEMENT: Classical theories of attention suggest that attention is guided by a feature-specific target template. In recent designs this has been challenged by an apparent non-veridical tuning of the template in situations where the target stimulus is similar to non-targets. The current studies compare two theories that propose different explanations for non-veridical tuning; the Relational and the Optimal Tuning account. We show that the Relational account describes the mechanism that guides early search behaviour, while the Optimal Tuning account describes perceptual decision-making. Optimal Tuning effects may be due to an artefact that has not been described in visual search before (simultaneous contrast).
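The contrast between the two accounts can be made concrete on a toy one-dimensional colour axis. All numbers below are invented for illustration and are not stimulus values from these experiments; the point is only that a shifted ("optimal") template and a relational ("reddest") template can both select a non-target that is more extreme than the actual target.

```python
# Toy contrast of template-tuning schemes on a 1-D 'redness' axis.
# All values are hypothetical, for illustration only.

def pick_by_value(items, template):
    """Feature-value tuning (veridical or optimally shifted):
    select the item whose value is closest to the template value."""
    return min(range(len(items)), key=lambda i: abs(items[i] - template))

def pick_relational(items):
    """Relational tuning: select the item with the extreme relative
    feature in the display (here, the 'reddest', i.e., largest value)."""
    return max(range(len(items)), key=lambda i: items[i])

# Display: two nontargets (50), a redder-than-target distractor (65),
# and the target itself (60).
items = [50, 50, 65, 60]

veridical = pick_by_value(items, 60)   # exact target value: picks index 3 (the target)
optimal = pick_by_value(items, 70)     # template shifted beyond the target: picks index 2
relational = pick_relational(items)    # 'reddest' item: picks index 2
```

Under both the shifted and the relational template, the relatively redder distractor (65) wins the competition over the veridical target (60), which mirrors the finding that relatively matching distractors capture early attention.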


Subject(s)
Color Perception , Pattern Recognition, Visual , Adaptation, Physiological , Attention , Eye Movements , Humans , Reaction Time , Visual Perception
19.
Cortex ; 134: 52-64, 2021 01.
Article in English | MEDLINE | ID: mdl-33249300

ABSTRACT

Attention is an important function that allows us to selectively enhance the processing of relevant stimuli in our environment. Fittingly, a number of studies have revealed that potentially threatening/fearful stimuli capture attention more efficiently. Interestingly, in separate fMRI studies, threatening stimuli situated close to viewers were found to enhance brain activity in fear-relevant areas more than stimuli that were further away. Despite these observations, few studies have examined the effect of personal distance on attentional capture by emotional stimuli. Using electroencephalography (EEG), the current investigation addressed this question by investigating attentional capture of emotional faces that were either looming/receding, or were situated at different distances from the viewer. In Experiment 1, participants carried out an incidental task while looming or receding fearful and neutral faces were presented bilaterally. A significant lateralised N170 and N2pc were found for a looming upright fearful face, however no significant components were found for a looming upright neutral face or inverted fearful and neutral faces. In Experiment 2, participants made gender judgements of emotional faces that appeared on a screen situated within or beyond peripersonal space (respectively 50 cm or 120 cm). Although response times did not differ, significantly more errors were made when faces appeared in near as opposed to far space. Importantly, ERPs revealed a significant N2pc for fearful faces presented in peripersonal distance, compared to the far distance. Our findings show that personal distance markedly affects neural responses to emotional stimuli, with increased attention towards fearful upright faces that appear in close distance.


Subject(s)
Attention , Facial Expression , Electroencephalography , Evoked Potentials , Fear , Humans , Reaction Time
20.
J Cogn Neurosci ; 33(1): 63-76, 2021 01.
Article in English | MEDLINE | ID: mdl-32985948

ABSTRACT

Areas in frontoparietal cortex have been shown to be active in a range of cognitive tasks and have been proposed to play a key role in goal-driven activities (Dosenbach, N. U. F., Fair, D. A., Miezin, F. M., Cohen, A. L., Wenger, K. K., Dosenbach, R. A. T., et al. Distinct brain networks for adaptive and stable task control in humans. Proceedings of the National Academy of Sciences, U.S.A., 104, 11073-11078, 2007; Duncan, J. The multiple-demand (MD) system of the primate brain: Mental programs for intelligent behavior. Trends in Cognitive Sciences, 14, 172-179, 2010). Here, we examine the role this frontoparietal system plays in visual search. Visual search, like many complex tasks, consists of a sequence of operations: target selection, stimulus-response (SR) mapping, and response execution. We independently manipulated the difficulty of target selection and SR mapping in a novel visual search task that involved identical stimulus displays. Enhanced activity was observed in areas of frontal and parietal cortex during both difficult target selection and SR mapping. In addition, anterior insula and ACC showed preferential representation of SR-stage information, whereas the medial frontal gyrus, precuneus, and inferior parietal sulcus showed preferential representation of target selection-stage information. A connectivity analysis revealed dissociable neural circuits underlying visual search. We hypothesize that these circuits regulate distinct mental operations associated with the allocation of spatial attention, stimulus decisions, shifts of task set from selection to SR mapping, and SR mapping. Taken together, the results show frontoparietal involvement in all stages of visual search and a specialization with respect to cognitive operations.


Subject(s)
Magnetic Resonance Imaging , Visual Perception , Animals , Attention , Brain Mapping , Frontal Lobe , Parietal Lobe