1.
Commun Psychol ; 2(1): 56, 2024.
Article in English | MEDLINE | ID: mdl-38859821

ABSTRACT

Adaptive biases in favor of approaching, or "looming", sounds have been found across ages and species, suggesting an evolutionary origin and a universal basis. The human auditory system is well developed at birth, yet spatial hearing abilities continue to develop with age. To disentangle the speculated inborn, evolutionary component of the auditory looming bias from its learned counterpart, we collected high-density electroencephalographic data from human adults and newborns. As distance-motion cues we manipulated either the sound's intensity or its spectral shape, the latter being pinna-induced and thus prenatally inaccessible. Through cortical source localisation we demonstrated the emergence of the bias in both age groups at the level of Heschl's gyrus. Adults exhibited the bias in both attentive and inattentive states, although amplitude and latency differed by attention and cue type. In contrast to adults, newborns showed the bias only for intensity manipulations, not for spectral cues. We conclude that the looming bias comprises innate components while flexibly incorporating spatial cues acquired through lifelong exposure.

2.
J Exp Psychol Gen ; 152(3): 794-824, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36227301

ABSTRACT

To investigate whether language governs the visual features that can be discriminated (a radical assumption of linguistic relativity), we examined crosslinguistic differences between native Korean and German speakers during liminal perception of a target disk that was difficult to perceive because it was masked by a ring that followed and enclosed it (metacontrast masking). Target-mask fit varied: half of the masks tightly, and the other half loosely, encircled the targets. In Korean, such tight versus loose spatial relations are semantically distinguished and thus highly practiced, whereas in German they are collapsed within a single semantic category and thus not distinguished by language. We therefore expected higher sensitivity and greater attention to varying spatial target-mask distances in Korean than in German speakers. This was confirmed in Experiment 1, where Korean speakers consistently outperformed German speakers in discriminating liminal metacontrast-masked stimuli. To ensure that this effect was not attributable to generic differences in attention capture or to language-independent differences between participant groups, we investigated stimulus-driven attention capture by color singletons and conducted a control experiment using object-substitution masking, where tightness of fit was not manipulated. We found no differences between Korean and German speakers in stimulus-driven attention capture or perceptual sensitivity. This was confirmed in Experiment 3, where we manipulated types of masking within participants. In addition, we validated the tightness-of-fit manipulation in a language-related task (Experiment 4). Overall, our results are consistent with linguistic relativity, namely its assumed generalized language influences in nonlinguistic perceptual tasks.


Subject(s)
Language , Linguistics , Humans , Semantics , Perception , Perceptual Masking
3.
Front Psychol ; 13: 875744, 2022.
Article in English | MEDLINE | ID: mdl-35668967

ABSTRACT

How does the language we speak affect our perception? Here, we argue for linguistic relativity and propose an explanation through "language-induced automatized stimulus-driven attention" (LASA): our respective mother tongue automatically influences our attention and, hence, our perception, and in this sense determines what we see. Because LASA is highly practiced throughout life, it is difficult to suppress and shows up even in language-independent, non-linguistic tasks. We argue that attention is involved in language-dependent processing and point out that automatic, stimulus-driven forms of attention, albeit initially learned in the service of a linguistic skill, account for linguistic relativity because they become automatized and generalize to non-linguistic tasks. In support of this possibility, we review evidence for such automatized stimulus-driven attention in language-independent, non-linguistic tasks. We conclude that linguistic relativity is possible and in fact a reality, although it might not be as powerful as assumed by some of its strongest proponents.

4.
Front Psychol ; 13: 840746, 2022.
Article in English | MEDLINE | ID: mdl-35496171

ABSTRACT

In two experiments, we tested whether fearful facial expressions capture attention in an awareness-independent fashion. In Experiment 1, participants searched for a visible neutral face presented at one of two positions. Prior to the target, a backward-masked and thus invisible emotional (fearful/disgusted) or neutral face was presented as a cue, either at the target position or away from it. If negative emotional faces captured attention in a stimulus-driven way, we would expect a cueing effect: better performance when fearful or disgusted facial cues were presented at the target position than away from it. However, no evidence of attention capture was found, either in behavior (response times or error rates) or in event-related lateralizations (N2pc). In Experiment 2, we went one step further and also used fearful faces as visible targets, thereby seeking to boost awareness-independent capture of attention by fearful faces. Still, we found no significant attention-capture effect. Our results show that fearful facial expressions do not capture attention in an awareness-independent way. Results are discussed in light of existing theories.

5.
Audit Percept Cogn ; 4(1-2): 60-73, 2021 Apr 03.
Article in English | MEDLINE | ID: mdl-35494218

ABSTRACT

Our auditory system constantly keeps track of our environment, informing us about our surroundings and warning us of potential threats. The auditory looming bias is an early perceptual phenomenon reflecting listeners' higher alertness to approaching rather than receding auditory objects. Experimentally, this sensation has been elicited using both intensity-varying stimuli and spectrally varying stimuli of constant intensity. Following the intensity-based approach, recent research into the cortical mechanisms underlying the looming bias argues for top-down signaling from the prefrontal cortex to the auditory cortex that prioritizes approaching over receding sonic motion. Here we test the generalizability of that finding to spectrally induced looms by re-analyzing previously published data. Our results indicate the proposed top-down projection, but at time points slightly preceding the motion onset, which we interpret as a bias driven by anticipation. At time points following the motion onset, our findings show a bottom-up bias along the dorsal auditory pathway directed toward the prefrontal cortex.

6.
Front Hum Neurosci ; 14: 352, 2020.
Article in English | MEDLINE | ID: mdl-32982706

ABSTRACT

To investigate the relation between attention and awareness, we manipulated visibility/awareness and stimulus-driven attention capture for metacontrast-masked visual stimuli. By varying the time interval between target and mask, we manipulated target visibility, measured as target discrimination accuracies (ACCs; Experiments 1 and 2) and as subjective awareness ratings (Experiment 3). To modulate stimulus-driven attention capture, we presented the masked target either as a color singleton (the target stands out by its unique color among homogeneously colored non-singletons), as a non-singleton together with a distractor singleton elsewhere (an irrelevant distractor has a unique color, whereas the target is colored like the other stimuli), or without any singleton (no stimulus stands out; Experiment 1 only). Because color singletons capture attention in a stimulus-driven way, we expected target visibility/discrimination performance to be best for target singletons and worst with distractor singletons. In Experiments 1 and 2, we confirmed that the masking interval and the singleton manipulation influenced ACCs independently, and that attention capture by the singletons, with facilitated performance in target-singleton compared to distractor-singleton conditions, occurred regardless of the interval-induced (in)visibility of the targets. In Experiment 1, we also confirmed that attention capture was the same among participants with worse and with better visibility/discrimination performance. In Experiment 2, we confirmed attention capture by color singletons through better discrimination performance for probes presented at the singleton position compared to other positions. Finally, in Experiment 3, we found that attention capture by target singletons also increased target awareness and that this capture effect on subjective awareness was likewise independent of the effect of the masking interval. Together, these results provide new evidence that stimulus-driven attention and awareness operate independently of one another and that stimulus-driven attention capture can precede awareness.

7.
Vision Res ; 160: 43-51, 2019 07.
Article in English | MEDLINE | ID: mdl-31078664

ABSTRACT

To determine whether search for alphanumerical characters is based on features or on conceptual category membership, we conducted two experiments in which we presented upright and inverted characters as cues in a contingent-capture protocol. In this protocol, only cues matching the top-down search template (e.g., a letter cue when searching for target letters) capture attention and lead to validity effects: shorter search times and fewer errors for validly than for invalidly cued targets. Cues not matching the top-down template (e.g., a number cue when searching for target letters) do not capture attention. To distinguish a feature-based explanation from one based on conceptual category membership, we used both upright (canonical) and inverted characters as cues. These cues share the same features, but inverted cues cannot be conceptually categorized as easily as upright cues. Thus, we expected no difference between upright and inverted cues if search is feature-based, whereas inverted cues would elicit no, or at least considerably weaker, validity effects if search relies on conceptual category membership. Altogether, the results of both experiments (with overlapping and with separate sets of characters for cues and targets) provide evidence for search based on feature representations, as, among other things, significant validity effects were found with both upright and inverted characters as cues. However, an influence of category membership was also evident, as the validity effects of inverted characters were diminished.


Subject(s)
Attention/physiology , Visual Perception/physiology , Adult , Analysis of Variance , Color Perception/physiology , Cues , Female , Humans , Male , Photic Stimulation , Reaction Time
8.
Atten Percept Psychophys ; 81(6): 1846-1879, 2019 Aug.
Article in English | MEDLINE | ID: mdl-30924054

ABSTRACT

To investigate whether top-down contingent capture by color cues relies on verbal or semantic templates, we combined different stimuli representing colors physically or semantically in six contingent-capture experiments. In contingent capture, only cues that match the top-down search template lead to validity effects (shorter search times and fewer errors for validly than for invalidly cued targets) resulting from attentional capture by the cue. We compared validity effects of color cues and color-word cues in top-down search for color targets (Experiment 1a) and color-word targets (Experiment 2). We also compared validity effects of color cues and color-associated symbolic cues during search for color targets (Experiment 1b), and of color-word cues during search for both color and color-word targets (Experiment 3). Only cues of the same stimulus category as the target (either color or color-word cues) captured attention. This makes it unlikely that color search is based on verbal or semantic search templates. Additionally, the validity effect of matching color-word cues during search for color-word targets was changed neither by cue-target graphic (font) similarity versus dissimilarity (Experiment 4) nor by articulatory suppression (Experiment 5). These results suggest either a phonological long-term memory template or an orthographically mediated effect of the color-word cues during search for color words. Altogether, our findings are in line with a pronounced role of color-based templates in contingent capture by color and do not support semantic or verbal influences in this situation.


Subject(s)
Attention/physiology , Color Perception/physiology , Memory, Long-Term/physiology , Semantics , Verbal Learning/physiology , Adult , Color , Cues , Female , Humans , Male , Reaction Time