Results 1 - 11 of 11
1.
iScience ; 26(12): 108307, 2023 Dec 15.
Article in English | MEDLINE | ID: mdl-38025782

ABSTRACT

The neural and computational mechanisms underlying visual motion perception have been extensively investigated over several decades, but little attempt has been made to measure and analyze how human observers perceive the map of motion vectors, or optical flow, in complex naturalistic scenes. Here, we developed a psychophysical method to assess human-perceived motion flows using local vector matching and a flash probe. The estimated perceived flow for naturalistic movies agreed with the physically correct flow (ground truth) at many points, but also showed consistent deviations from the ground truth (flow illusions) at other points. Comparisons with the predictions of various computational models, including cutting-edge computer vision algorithms and coordinate transformation models, indicated that some flow illusions are attributable to lower-level factors such as spatiotemporal pooling and signal loss, while others reflect higher-level computations, including vector decomposition. Our study demonstrates a promising data-driven psychophysical paradigm for an advanced understanding of visual motion perception.
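As a concrete illustration of how deviations between a perceived and a ground-truth flow field ("flow illusions") can be quantified, here is a minimal sketch of a standard per-pixel endpoint-error measure; the array shapes, function name, and toy values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def flow_deviation(perceived, ground_truth):
    """Per-pixel endpoint error between two optical-flow fields.

    Both inputs are arrays of shape (H, W, 2) holding (vx, vy) motion
    vectors per location; the result is the Euclidean distance between
    the two vectors at each location, a standard optical-flow error metric.
    """
    diff = np.asarray(perceived) - np.asarray(ground_truth)
    return np.sqrt((diff ** 2).sum(axis=-1))

# Toy example: uniform rightward ground-truth flow, with the perceived
# flow deviating at one location (a hypothetical "flow illusion").
gt = np.zeros((4, 4, 2)); gt[..., 0] = 1.0   # (1, 0) everywhere
perc = gt.copy(); perc[0, 0] = [0.0, 1.0]    # one deviating vector
err = flow_deviation(perc, gt)               # nonzero only at (0, 0)
```

Points where `err` is consistently large across observers would be candidate illusion locations under this measure.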

2.
J Cogn Neurosci ; 35(2): 276-290, 2023 02 01.
Article in English | MEDLINE | ID: mdl-36306257

ABSTRACT

Attention to the relevant object and space is the brain's strategy for effectively processing the information of interest in complex environments with limited neural resources. Numerous studies have documented how attention is allocated in the visual domain, whereas the nature of attention in the auditory domain has been much less explored. Here, we show that the pupillary light response can serve as a physiological index of auditory attentional shifts and can also be used to probe the relationship between space-based and object-based attention. Experiments demonstrated that the pupillary response corresponded to the luminance condition in which the attended auditory object (e.g., a spoken sentence) was located, regardless of whether attention was directed by a spatial (left or right) or nonspatial (e.g., the gender of the talker) cue, and regardless of whether the sound was presented via headphones or loudspeakers. These effects on the pupillary light response could not be accounted for as a consequence of small (although observable) biases in gaze-position drift. The overall results imply a unified audiovisual representation of spatial attention: auditory object-based attention carries the spatial representation of the attended auditory object, even when attention is oriented to the object without explicit spatial guidance.


Subject(s)
Auditory Perception; Cues; Humans; Auditory Perception/physiology; Space Perception/physiology
3.
Brain Cogn ; 164: 105916, 2022 12.
Article in English | MEDLINE | ID: mdl-36260953

ABSTRACT

Reading comprehension requires the semantic integration of words across space and time. However, it remains unclear whether such semantic integration requires visual awareness. Unlike earlier studies that investigated semantic integration indirectly through its priming effect, we used functional magnetic resonance imaging (fMRI) to directly examine the processes of semantic integration with or without visual awareness. Specifically, we manipulated participants' visual awareness by continuous flash suppression (CFS) while they viewed a meaningful sequence of four Chinese words (i.e., an idiom) or its meaningless counterpart (i.e., a random sequence). Behaviorally, participants had better recognition memory for idioms than for random sequences only when their visual awareness was interfered with, rather than blocked, by CFS. Neurally, semantics-processing areas, such as the superior temporal gyrus and inferior frontal gyrus, were significantly activated only when participants were aware of the word sequences, be they meaningful or meaningless. By contrast, orthography-processing areas, such as the fusiform gyrus and inferior occipital gyrus, were significantly activated regardless of visual awareness or word sequence. Taken together, these results suggest that visual awareness modulates the functioning of the semantic neural network in the brain and facilitates reading comprehension.


Subject(s)
Semantics; Humans; Magnetic Resonance Imaging/methods; Brain Mapping; Temporal Lobe/diagnostic imaging; Brain/diagnostic imaging; Reading
4.
Neurosci Res ; 185: 29-39, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36113812

ABSTRACT

Crowding refers to impaired object recognition of a peripheral visual target caused by nearby flankers. It has been shown that the response to a word is faster when it is preceded by a semantically related rather than an unrelated crowded prime, demonstrating that semantic priming survives crowding. This study examines the neural correlates of semantic priming under visual crowding using magnetoencephalography with four conditions: prime (isolated, crowded) × prime-target relationship (related, unrelated). Participants judged whether the target was a word or a nonword. We found significant differences in theta-band (θ) activity at the left inferior frontal gyrus (IFG) for both isolated and crowded primes when comparing the unrelated and related conditions, although the activation was delayed with the crowded prime compared to the isolated prime. The locations within the IFG were also different: theta-band activation was at BA 45 in the isolated condition and at BA 47 in the crowded condition. Phase-locking value (PLV) analysis revealed that the bilateral IFG was more synchronized for unrelated prime-target pairs than for related pairs, regardless of whether the primes were isolated or crowded, indicating recruitment of the right hemisphere when the prime-target semantic relationship was remote. Finally, the distinct waveform patterns found in the isolated and crowded conditions in both the source localization and PLV analyses suggest different neural mechanisms for processing semantic information with isolated versus crowded primes.
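The phase-locking-value analysis mentioned in the abstract above has a standard textbook form, PLV = |⟨e^{iΔφ}⟩|. Below is a minimal sketch of that generic formula, not the paper's actual MEG pipeline; the signal parameters and function name are illustrative assumptions:

```python
import numpy as np

def phase_locking_value(phase1, phase2):
    """Generic phase-locking value between two phase series (radians).

    PLV = |mean(exp(i * (phase1 - phase2)))|: near 1 when the phase
    difference is constant across samples, near 0 when it is random.
    (MEG studies typically average across trials at each time point.)
    """
    delta = np.asarray(phase1) - np.asarray(phase2)
    return np.abs(np.mean(np.exp(1j * delta)))

t = np.linspace(0.0, 1.0, 500, endpoint=False)
theta = 2 * np.pi * 6.0 * t                        # 6 Hz phase ramp (theta band)
locked = phase_locking_value(theta, theta + 0.5)   # constant lag -> PLV near 1
rng = np.random.default_rng(0)
unlocked = phase_locking_value(theta, rng.uniform(-np.pi, np.pi, t.size))
```

Higher synchronization between left and right IFG, as reported for unrelated prime-target pairs, would correspond to larger PLV values in this formulation.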


Subject(s)
Magnetoencephalography; Semantics; Humans; Visual Perception; Prefrontal Cortex/physiology
5.
Vis cogn ; 28(3): 218-238, 2020.
Article in English | MEDLINE | ID: mdl-33100884

ABSTRACT

Humans are quick to notice if an object is unstable. Does that assessment require attention, or can instability serve as a preattentive feature that guides the deployment of attention? This paper describes a series of visual search experiments designed to address this question. Experiment 1 shows that less stable images among more stable images are found more efficiently than more stable among less stable: a search asymmetry that supports guidance by instability. Experiment 2 shows efficient search but no search asymmetry when the orientation of the objects is removed as a confound. Experiment 3 independently varies the orientation cues and perceived stability and finds a clear main effect of apparent stability. Experiment 4 shows converging evidence for a role of stability using different stimuli that lack an orientation cue; however, here search for both stable and unstable targets is inefficient. Experiment 5 is a control for Experiment 4, showing that the stability effect in Experiment 4 is not a simple side-effect of the geometry of the stimuli. On balance, the data support a role for instability in the guidance of attention in visual search.
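The abstract's distinction between "efficient" and "inefficient" search is conventionally read off the slope of the response-time-by-set-size function (shallow slopes, roughly under 10 ms/item, indicate efficient search). Here is a minimal sketch of that computation with made-up reaction times, not data from the paper:

```python
import numpy as np

def search_slope(set_sizes, rts):
    """Slope (ms per item) of the RT-by-set-size function, via least squares."""
    slope, _intercept = np.polyfit(set_sizes, rts, 1)
    return slope

set_sizes = np.array([4, 8, 16])
efficient_rts = np.array([520.0, 530.0, 550.0])      # shallow: 2.5 ms/item
inefficient_rts = np.array([600.0, 800.0, 1200.0])   # steep: 50 ms/item
```

By this convention the first pattern counts as efficient search (the target "pops out" regardless of display size), while the second reflects inefficient, attention-demanding search.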

6.
PLoS One ; 13(11): e0206799, 2018.
Article in English | MEDLINE | ID: mdl-30419039

ABSTRACT

Previous studies showed that emotional faces break through interocular suppression more easily than neutral faces under the continuous flash suppression (CFS) paradigm. However, there is controversy over whether emotional content or low-level properties contributed to these results. In this study, we directly manipulated the meaningfulness of facial expressions to test the role of emotional content in breaking CFS (b-CFS). In addition, an explicit emotion judgment for the different facial expressions (happy, neutral, and fearful) used in the b-CFS task was conducted afterwards to examine the relationship between b-CFS time and emotion judgment. In Experiment 1, face orientation and luminance polarity were manipulated to generate upright-positive and inverted-negative faces. In Experiment 2, Asian and Caucasian faces were presented to Taiwanese participants so that these stimuli served as own-race and other-race faces, respectively. We found robust face-familiarity effects in both experiments within the same experimental framework: upright-positive and own-race faces had shorter b-CFS times than inverted-negative and other-race faces, respectively, indicating potential involvement of high-level processing under interocular suppression. In Experiment 1, different b-CFS times were found between emotional and neutral faces in both the upright-positive and inverted-negative conditions. Furthermore, with sufficient duration (1000 ms), participants could still extract emotional content in explicit valence judgments even from inverted-negative faces, though to a smaller degree than from upright-positive faces. In Experiment 2, differential b-CFS times were found between emotional and neutral faces with own-race but not other-race faces. Correlation analyses from both experiments showed that the magnitude of emotion judgment was correlated with b-CFS time only for familiar (upright-positive / own-race) but not unfamiliar (inverted-negative / other-race) faces. These results suggest that emotional content can be extracted under interocular suppression with familiar faces, and that low-level properties may play a major role in the b-CFS times of unfamiliar faces.


Subject(s)
Emotions; Facial Recognition; Judgment; Social Perception; Facial Expression; Female; Group Processes; Humans; Male; Photic Stimulation; Racism/psychology; Recognition, Psychology
7.
Psychon Bull Rev ; 25(6): 2215-2223, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30128936

ABSTRACT

Whether emotional information from facial expressions can be processed unconsciously is still controversial; this debate is partially due to ambiguities in distinguishing the unconscious-conscious boundary and to possible contributions from low-level (rather than emotional) properties. To avoid these possible confounding factors, we adopted an affective-priming paradigm with the continuous flash suppression (CFS) method in order to render an emotional face invisible. After presenting an invisible face (prime) with either positive or negative valence under CFS, a visible word (target) with an emotionally congruent or incongruent valence was presented. The participants were required to judge the emotional valence (positive or negative) of the target. The face prime was presented for either a short (200 ms, Exp. 1) or a long (1,000 ms, Exp. 2) duration in order to test whether the priming effect would vary with the prime duration. The consistent priming effects across the two priming durations showed that, as compared to their incongruent counterparts, congruent facial expressions can facilitate emotional judgments of subsequent words. These results suggest that the emotional information from facial expressions can be processed unconsciously.


Subject(s)
Consciousness/physiology; Emotions/physiology; Facial Expression; Language; Pattern Recognition, Visual/physiology; Adult; Humans; Young Adult
8.
Cogn Affect Behav Neurosci ; 17(5): 954-972, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28681130

ABSTRACT

Previous studies found that word meaning can be processed unconsciously. Yet it remains unknown whether temporally segregated words can be integrated into a holistic, meaningful phrase without consciousness. The first four experiments examined this by sequentially presenting the first three words of Chinese four-word idioms as primes to one eye and dynamic Mondrians to the other (i.e., the continuous flash suppression paradigm; CFS). An unmasked target word followed the three masked words in a lexical decision task. Results from this invisible (CFS) condition were compared with a visible condition in which the preceding words were superimposed on the Mondrians and presented to both eyes. Lower performance in the behavioral experiments and a larger N400 event-related potential (ERP) component for incongruent- than for congruent-ending words were found in the visible condition. However, no such congruency effect was found in the invisible condition, even with enhanced statistical power and top-down attention, and with several potential confounding factors (contrast-dependent processing, long interval, no conscious training) excluded. Experiment 5 demonstrated that familiarity of word orientation without temporal integration can be processed unconsciously, excluding the possibility of a general insensitivity of our paradigm. The overall result pattern therefore suggests that consciousness plays an important role in semantic temporal integration in the conditions we tested.


Subject(s)
Cerebral Cortex/physiology; Consciousness/physiology; Evoked Potentials/physiology; Language; Pattern Recognition, Visual/physiology; Psychomotor Performance/physiology; Adolescent; Adult; Electroencephalography; Humans; Perceptual Masking/physiology; Semantics; Time Factors; Young Adult
9.
Conscious Cogn ; 54: 114-128, 2017 09.
Article in English | MEDLINE | ID: mdl-28606359

ABSTRACT

We examined whether semantic processing occurs without awareness using continuous flash suppression (CFS). In two priming tasks, participants were required to judge whether a target was a word or a non-word, and to report whether the masked prime was visible. Experiment 1 manipulated the lexical congruency of the prime-target pairs and Experiment 2 manipulated their semantic relatedness. Despite the absence of behavioral priming effects (Experiment 1), the ERP results revealed that an N4 component was sensitive to prime-target lexical congruency (Experiment 1) and semantic relatedness (Experiment 2) when the prime was rendered invisible under CFS. However, these effects were reversed with respect to those that emerged when the stimuli were perceived consciously. Our findings suggest that some form of lexical and semantic processing can occur during CFS-induced unawareness, but that it is associated with different electrophysiological outcomes.


Subject(s)
Awareness/physiology; Cerebral Cortex/physiology; Consciousness/physiology; Evoked Potentials/physiology; Language; Pattern Recognition, Visual/physiology; Adult; Electroencephalography; Female; Humans; Male; Reading; Semantics; Unconscious, Psychology; Young Adult
10.
Exp Brain Res ; 232(4): 1109-16, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24449005

ABSTRACT

People tend to look toward where a sound occurs; however, the role of spatial congruency between sound and sight in sound-facilitated visual detection remains controversial. We propose that the role of spatial congruency depends on the reliability of the information provided by the facilitator: if it is relatively unreliable, adding spatially congruent information can help unify different sensory inputs to compensate for this unreliability. To test this, we examined the influence of sound location on visual detection with a non-temporal task, presumably unfavorable for sound since audition is better suited to temporal resolution, and predicted that spatial congruency should matter in this situation. We used the continuous flash suppression paradigm, which renders the visual stimuli invisible, to keep the relationship between sound and sight opaque. The sound was presented on the same depth plane as the visual stimulus (the congruent condition) or on a different plane (the incongruent condition). The target was presented to one eye with its luminance contrast gradually increased, and it was continuously masked by flashed Mondrian masks presented to the other eye until it was released from suppression. We found that sound facilitated visual detection (measured by release-from-suppression time) in the spatially congruent condition but not in the spatially incongruent condition. Together with previous findings in the literature, this suggests that both task type and modality determine the reliability of the information for multisensory integration, and thus determine whether spatial congruency is critical.


Subject(s)
Acoustic Stimulation/methods; Auditory Perception/physiology; Photic Stimulation/methods; Spatial Behavior/physiology; Visual Perception/physiology; Adolescent; Female; Humans; Male; Reproducibility of Results; Young Adult
11.
Conscious Cogn ; 20(2): 223-33, 2011 Jun.
Article in English | MEDLINE | ID: mdl-20709576

ABSTRACT

Previous research has shown implicit semantic processing of faces or pictures, but whether symbolic carriers such as words can be processed this way remains controversial. Here we examine this issue by adopting the continuous flash suppression paradigm to ensure that the processing is indeed unconscious, without the involvement of partial awareness. Negative or neutral words projected to one eye were made invisible by the strong suppression induced by dynamic-noise patterns shown to the other eye through binocular rivalry. Inverted and scrambled words were used as controls to provide baselines at the orthographic and feature levels, respectively. Compared to neutral words, emotion-describing and emotion-inducing negative words required a longer time to be released from suppression, but only when the words were upright. These results suggest that words can be processed unconsciously up to the semantic level, since under interocular suppression completely invisible words can lead to different processing speeds due to the emotional information they carry.


Subject(s)
Awareness; Semantics; China; Emotions; Humans; Inhibition, Psychological; Photic Stimulation; Reaction Time; Subliminal Stimulation; Visual Perception