Results 1 - 20 of 33
1.
Commun Biol ; 6(1): 882, 2023 08 30.
Article in English | MEDLINE | ID: mdl-37648896

ABSTRACT

Perceptual judgements are formed through invisible cognitive processes. Reading out these judgements is essential for advancing our understanding of decision making and requires inferring covert cognitive states based on overt motor actions. Although intuition suggests that these actions must be related to the formation of decisions about where to move body parts, actions have been reported to be influenced by perceptual judgements even when the action is irrelevant to the perceptual judgement. However, despite performing multiple actions in our daily lives, how perceptual judgements influence multiple judgement-irrelevant actions is unknown. Here we show that perceptual judgements affect only saccadic eye movements when simultaneous judgement-irrelevant saccades and reaches are made, demonstrating that perceptual judgement-related signals continuously flow into the oculomotor system alone when multiple judgement-irrelevant actions are performed. This suggests that saccades are useful for making inferences about covert perceptual decisions, even when the actions are not tied to decision making.


Subject(s)
Eye Movements , Movement , Perception , Judgment , Saccades
2.
PLoS One ; 17(9): e0275059, 2022.
Article in English | MEDLINE | ID: mdl-36149886

ABSTRACT

Plasticity-related proteins (PRPs), which are synthesized in a synapse activation-dependent manner, are shared by multiple synapses to a limited spatial extent for a specific period. In addition, stimulated synapses can utilize shared PRPs through synaptic tagging and capture (STC). In particular, the phenomenon by which short-lived early long-term potentiation is transformed into long-lived late long-term potentiation using shared PRPs is called "late-associativity," which is the underlying principle of "cluster plasticity." We hypothesized that the competitive capture of PRPs by multiple synapses modulates late-associativity and affects the fate of each synapse in terms of whether it is integrated into a synapse cluster. We tested our hypothesis by developing a computational model to simulate STC, late-associativity, and the competitive capture of PRPs. The experimental results obtained using the model revealed that the number of competing synapses, timing of stimulation to each synapse, and basal PRP level in the dendritic compartment altered the effective temporal window of STC and influenced the conditions under which late-associativity occurs. Furthermore, it is suggested that the competitive capture of PRPs results in the selection of synapses to be integrated into a synapse cluster via late-associativity.
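The competitive-capture idea lends itself to a compact simulation. The sketch below is a deliberately simplified, hypothetical version of such a model, not the paper's implementation: all parameter values, the linear tag decay, the slow PRP degradation, and the proportional-sharing rule are illustrative assumptions. Stimulated synapses set a decaying tag, a synthesis event fills a shared PRP pool, tagged synapses split the pool in proportion to their tag strength, and only synapses that capture enough PRP express late-LTP.

```python
# Simplified sketch of synaptic tagging and capture (STC) with competition
# for a shared pool of plasticity-related proteins (PRPs).
# All parameters and update rules here are illustrative assumptions.

def simulate_stc(tag_times, prp_time, total_prp=1.0, tag_decay=0.05,
                 capture_threshold=0.2, dt=1.0, t_end=200.0):
    """Return (late_ltp_flags, captured_prp) for each synapse.

    tag_times : stimulation time of each synapse (sets its tag to 1.0)
    prp_time  : time at which PRPs become available in the compartment
    """
    n = len(tag_times)
    captured = [0.0] * n
    prp = 0.0
    t = 0.0
    while t < t_end:
        if abs(t - prp_time) < dt / 2:
            prp = total_prp                          # PRP synthesis event
        # each tag decays linearly after its synapse is stimulated
        tags = [max(0.0, 1.0 - tag_decay * (t - ts)) if t >= ts else 0.0
                for ts in tag_times]
        active = [i for i, g in enumerate(tags) if g > 0.0]
        if prp > 0.0 and active:
            # tagged synapses split available PRP in proportion to tag strength
            total_tag = sum(tags[i] for i in active)
            share = prp * 0.1                        # capture rate per step
            for i in active:
                captured[i] += share * tags[i] / total_tag
            prp -= share
        prp *= 0.99                                  # PRPs slowly degrade
        t += dt
    # late-LTP is expressed only if enough PRP was captured
    return [c >= capture_threshold for c in captured], captured

# two synapses: the second is tagged 30 time units after the first
late_ltp, captured = simulate_stc(tag_times=[0.0, 30.0], prp_time=10.0)
```

With these illustrative settings, both synapses cross the capture threshold, whereas a synapse tagged long after PRP synthesis captures too little, which is the intuition behind an effective temporal window of STC.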


Subject(s)
Neuronal Plasticity , Synapses , Long-Term Potentiation/physiology , Neuronal Plasticity/physiology , Synapses/metabolism
3.
Proc Natl Acad Sci U S A ; 119(4)2022 01 25.
Article in English | MEDLINE | ID: mdl-35046030

ABSTRACT

Purposeful motor actions depend on the brain's representation of the body, called the body schema, and disorders of the body schema have been reported to produce motor deficits. The body schema has been assumed for almost a century to be a common body representation supporting all types of motor actions, and previous studies have considered only a single motor action. Although we often execute multiple motor actions, how the body schema operates during such actions is unknown. To address this issue, I developed a technique to measure the body schema during multiple motor actions. Participants made simultaneous eye and reach movements to the same location at each of 10 landmarks on their hand. By analyzing the internal configuration of the locations of these points for each of the eye and reach movements, I produced maps of the mental representation of hand shape. Despite these two movements being simultaneously directed to the same bodily location, the resulting hand map (i.e., a part of the body schema) was much more distorted for reach movements than for eye movements. Furthermore, the weighting of visual and proprioceptive bodily cues used to build up this part of the body schema differed for each effector. These results demonstrate that the body schema is organized as multiple effector-specific body representations. I propose that the choice of effector directed toward one's body can determine which body representation in the brain is observed, and that this visualization approach may offer a new way to understand patients' body schema.


Subject(s)
Body Image , Adult , Eye Movements , Female , Human Body , Humans , Male , Motor Activity , Movement , Psychomotor Performance , Visual Perception , Young Adult
4.
Behav Res Methods ; 54(2): 729-751, 2022 04.
Article in English | MEDLINE | ID: mdl-34346042

ABSTRACT

Virtual reality (VR) is a new methodology for behavioral studies. In such studies, the millisecond accuracy and precision of stimulus presentation are critical for data replicability. Recently, Python, a widely used programming language for scientific research, has contributed to reliable accuracy and precision in experimental control. However, little is known about whether modern VR environments offer millisecond accuracy and precision for stimulus presentation, since most standard methods in laboratory studies are not optimized for VR environments. The purpose of this study was to systematically evaluate the accuracy and precision of visual and auditory stimuli generated in modern VR head-mounted displays (HMDs) from HTC and Oculus using Python 2 and 3. We used the newest Python tools for VR and the Black Box Toolkit to measure the actual time lag and jitter. The results showed an 18-ms time lag for visual stimuli in both HMDs. For auditory stimuli, the time lag varied between 40 and 60 ms, depending on the HMD. The jitters of these time lags were 1 ms for visual stimuli and 4 ms for auditory stimuli, which is sufficiently low for general experiments. These time lags remained stable even when auditory and visual stimuli were presented simultaneously. Interestingly, all results were perfectly consistent across the Python 2 and 3 environments. Thus, the present study will help establish more reliable stimulus control for psychological and neuroscientific research conducted in Python environments.
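The lag and jitter measures used in studies like this are straightforward to compute once the commanded and externally measured onset times are available. A minimal sketch follows; the timestamps are fabricated for illustration (a real measurement requires an external device such as a photosensor), and jitter is taken here as the sample standard deviation of the lags, one common convention among several.

```python
# Lag = mean difference between commanded and measured stimulus onsets;
# jitter = spread (sample standard deviation) of those differences.
# The timestamps below are fabricated for illustration.
from statistics import mean, stdev

def lag_and_jitter(commanded_ms, measured_ms):
    """Return (mean lag, jitter) in the same units as the inputs."""
    diffs = [m - c for c, m in zip(commanded_ms, measured_ms)]
    return mean(diffs), stdev(diffs)

commanded = [0.0, 100.0, 200.0, 300.0]     # onsets requested by the script
measured = [18.0, 119.0, 217.0, 318.0]     # onsets seen by a photosensor
lag, jitter = lag_and_jitter(commanded, measured)
print(f"lag = {lag:.1f} ms, jitter = {jitter:.2f} ms")
```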


Subject(s)
Virtual Reality , Humans
5.
Sci Rep ; 11(1): 3995, 2021 02 17.
Article in English | MEDLINE | ID: mdl-33597567

ABSTRACT

Two different motion mechanisms have been identified using the motion aftereffect (MAE): (1) a slow motion mechanism, accessed by the static MAE, which is sensitive to high spatial and low temporal frequencies; and (2) a fast motion mechanism, accessed by the flicker MAE, which is sensitive to low spatial and high temporal frequencies. We examined their respective responses to global motion after adaptation to a global motion pattern constructed of multiple compound Gabor patches arranged circularly. Each compound Gabor patch contained two gratings at different spatial frequencies (0.53 and 2.13 cpd) drifting in opposite directions. The participants reported the direction and duration of the MAE for a variety of global motion patterns. We found that static MAE durations depended on the global motion pattern, e.g., longer MAE durations for patches arranged to form rotation than for random motion (Exp 1), and increased with global motion strength (patch number in Exp 2). In contrast, flicker MAE durations were similar across the different patterns and adaptation strengths. Furthermore, global integration occurred at the adaptation stage rather than at the test stage (Exp 3). These results suggest that the slow motion mechanism, assessed by the static MAE, integrates motion signals over space, while the fast motion mechanism does not, at least under the conditions used.

6.
Sci Rep ; 11(1): 418, 2021 01 11.
Article in English | MEDLINE | ID: mdl-33432104

ABSTRACT

Awareness of the body is essential for accurate motor control. However, how this awareness influences motor control is poorly understood. The awareness of the body includes awareness of visible body parts as one's own (sense of body ownership) and awareness of voluntary actions over that visible body part (sense of agency). Here, I show that sense of agency over a visible hand improves the initiation of movement, regardless of sense of body ownership. The present study combined the moving rubber hand illusion, which allows experimental manipulation of agency and body ownership, and the finger-tracking paradigm, which allows behavioral quantification of motor control by the ability to coordinate eye with hand movements. This eye-hand coordination requires awareness of the hand to track the hand with the eye. I found that eye-hand coordination is improved when participants experience a sense of agency over a tracked artificial hand, regardless of their sense of body ownership. This improvement was selective for the initiation, but not maintenance, of eye-hand coordination. These results reveal that the prospective experience of explicit sense of agency improves motor control, suggesting that artificial manipulation of prospective agency may be beneficial to rehabilitation and sports training techniques.


Subject(s)
Awareness/physiology , Ownership , Psychomotor Performance/physiology , Self Concept , Body Image/psychology , Computer Graphics , Eye Movements/physiology , Hand/physiology , Humans , Illusions/physiology , Illusions/psychology , Movement/physiology , Proprioception/physiology , Prospective Studies , Touch Perception/physiology , User-Computer Interface , Virtual Reality , Visual Perception/physiology
7.
Sci Rep ; 10(1): 9273, 2020 06 09.
Article in English | MEDLINE | ID: mdl-32518393

ABSTRACT

To establish a perceptually stable world despite the large retinal shifts caused by saccadic eye movements, the visual system reduces its sensitivity to the displacement of visual stimuli during saccades (saccadic suppression of displacement, SSD). Previous studies have demonstrated that inserting a temporal blank right after a saccade improves displacement detection performance. This 'blanking effect' suggests that visual information right after the saccade may play an important role in SSD. To understand the mechanisms underlying SSD, here we compared the effects of pre- and post-saccadic stimulus contrast on displacement detection during a saccade, with and without an inserted blank. Our results show that observers' sensitivity to visual displacement was reduced by increasing post-saccadic stimulus contrast, but that a blank relieved this impairment. We successfully explain these results with a model proposing that parvo-pathway signals suppress the magno-pathway processes responsible for detecting displacements across saccades. Our results suggest that the suppression of the magno pathway by parvo-pathway signals immediately after a saccade causes SSD, which helps to achieve the perceptual stability of the visual world across saccades.


Subject(s)
Saccades/physiology , Female , Fixation, Ocular/physiology , Humans , Male , Models, Biological , Photic Stimulation/methods , Retina/physiology , Visual Perception , Young Adult
8.
Vision Res ; 172: 11-26, 2020 07.
Article in English | MEDLINE | ID: mdl-32388210

ABSTRACT

Perception of motion in depth is one of the most important visual functions for living in the three-dimensional world. Two binocular cues have been investigated for motion in depth: inter-ocular velocity difference (IOVD) and changing disparity (CD). IOVD provides direction information directly by comparing velocity signals from the two retinas. In this study, we propose for the first time a motion-in-depth model of IOVD that predicts motion-in-depth direction. The model is based on a psychophysical assumption that there are four channels tuned to different directions in depth (Journal of Physiology 235 (1973) 17-29). We modeled these channels by combining outputs of low-level motion detectors that are sensitive to left and right retinal stimulation. Using these channels, we constructed a model of motion in depth that successfully predicted a variety of psychophysical results including direction discrimination, perceived direction, spatial frequency tuning, effect of speed on rotation in depth, effect of lateral motion direction, and effect of binocular and temporal correlations.
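The core IOVD computation can be caricatured in a few lines. The sketch below is a toy illustration of the four-channel idea, not the paper's model: each hypothetical channel prefers one sign combination of the two monocular velocities, and opposite signs in the two eyes signal motion in depth. The sign convention (positive means rightward retinal motion; an object approaching along the midline drifts rightward in the left eye and leftward in the right eye) is an assumption of this sketch.

```python
# Toy four-channel IOVD sketch (illustrative, not the paper's model).
# Positive velocity = rightward retinal motion. An object approaching along
# the midline drifts rightward in the left eye and leftward in the right
# eye; lateral motion gives same-signed velocities in the two eyes.
def iovd_direction(v_left, v_right):
    """Return (preferred direction, channel responses) for two monocular
    velocities, using four channels tuned to the four sign combinations."""
    relu = lambda x: max(0.0, x)                 # half-wave rectification
    channels = {
        "toward":    relu(v_left) + relu(-v_right),
        "away":      relu(-v_left) + relu(v_right),
        "rightward": relu(v_left) + relu(v_right),
        "leftward":  relu(-v_left) + relu(-v_right),
    }
    return max(channels, key=channels.get), channels

direction, _ = iovd_direction(1.0, -1.0)   # opposite signs: motion in depth
```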


Subject(s)
Depth Perception/physiology , Models, Theoretical , Motion Perception/physiology , Cues , Humans , Psychophysics , Retina/physiology , Vision, Binocular/physiology , Visual Cortex/physiology
9.
Sci Rep ; 9(1): 652, 2019 01 24.
Article in English | MEDLINE | ID: mdl-30679685

ABSTRACT

The experiences that body parts are owned and localized in space are two key aspects of body awareness. Although initial work assumed that the perceived location of one's body part can be used as a behavioral measure to assess the feeling of owning a body part, recent studies call into question the relationship between localization and ownership of body parts. Yet, little is known about the processes underlying these two aspects of body-part awareness. Here, I applied a statistically optimal cue combination paradigm to a perceptual illusion in which ownership over an artificial hand is experienced, and found that variances predicted by a model of optimal cue combination are similar to those observed in localization of the participant's hand, but systematically diverge from those observed in ownership of the artificial hand. These findings provide strong evidence for separate processes between ownership and localization of body parts, and indicate a need to revise current models of body part ownership. Results from this study suggest that the neural substrates for perceptual identification of one's body parts-such as body ownership-are distinct from those underlying spatial localization of the body parts, thus implying a functional distinction between "who" and "where" in the processing of body part information.
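The variance predictions referred to here follow from the standard maximum-likelihood cue combination scheme, in which each cue is weighted by its inverse variance and the combined estimate has lower variance than either cue alone. A minimal sketch with hypothetical numbers (in the study itself, cue variances are estimated from participants' judgments, not assumed):

```python
# Maximum-likelihood ("statistically optimal") cue combination: two
# independent Gaussian cues are weighted by their inverse variances, and
# the combined estimate has lower variance than either cue alone.
def combine_cues(mu_vis, var_vis, mu_prop, var_prop):
    """Return (mean, variance) of the optimal combined estimate."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_prop)
    w_prop = 1 - w_vis
    mu = w_vis * mu_vis + w_prop * mu_prop
    var = (var_vis * var_prop) / (var_vis + var_prop)
    return mu, var

# Hypothetical cue values (cm): vision is more reliable than proprioception,
# so the combined estimate is pulled toward the visual cue.
mu, var = combine_cues(mu_vis=0.0, var_vis=1.0, mu_prop=2.0, var_prop=4.0)
```

The diagnostic used in the paper is precisely this predicted variance: localization judgments matched it, while ownership judgments systematically diverged from it.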


Subject(s)
Human Body , Sensation , Adult , Humans , Likelihood Functions , Male , Models, Biological , Proprioception/physiology , Young Adult
10.
J Vis ; 18(9): 17, 2018 09 04.
Article in English | MEDLINE | ID: mdl-30242388

ABSTRACT

When a rotating object (inducer) is briefly replaced by a static face image (test stimulus), the orientation of the face appears to shift in the rotation direction of the inducer (object orientation induction, OOI). The OOI effect suggests that there is a process that continuously analyzes and updates the orientation of an object in motion. We investigated the perception of object orientation in motion, examining potential factors that contribute to OOI. Experiment 1 showed that the phenomenon is general to objects rather than specific to faces; OOI could be observed with non-face objects. Experiment 2 showed that OOI is a 3D effect, as the orientation shift for a bent-wire object depended on its configuration in the depth dimension. Experiment 3 showed that salient features indicating the intrinsic orientation of the inducing object are necessary for producing OOI. Experiment 4 showed that a change in the facing direction of the inducer object is a crucial factor for OOI, whereas neither the object's shape nor its identity is important: a strong OOI effect was observed even when the inducer kept changing its shape and identity, as long as its direction change generated continuous rotation. Finally, Experiment 5 showed that OOI is a phenomenon of the fast visual processing pathway: a single inducer presented for less than 100 ms influenced the perceived orientation of the test stimulus. Together, these results suggest that there is a predictive process that continuously analyzes and updates the orientation of rotating objects, independently of their identification.


Subject(s)
Motion Perception/physiology , Orientation, Spatial/physiology , Pattern Recognition, Visual/physiology , Rotation , Adult , Humans , Male , Young Adult
11.
Sci Rep ; 8(1): 7171, 2018 05 08.
Article in English | MEDLINE | ID: mdl-29740127

ABSTRACT

Spatial representation of the area surrounding a viewer, including regions outside the visual field, is crucial for moving around the three-dimensional world. To obtain such spatial representations, we predict that there is a learning process that integrates visual inputs from different viewpoints covering all 360° of visual angle. We report here the learning effect of spatial layouts on six displays arranged to surround the viewer, showing shortened visual search times for surrounding layouts that are repeatedly used (contextual cueing effect). The learning effect appears both in the time to reach the display containing the target and in the time to reach the target within that display, which indicates an implicit learning effect on the spatial configurations of stimulus elements across displays. Furthermore, since the learning effect is found even between layouts and targets presented on displays located 120° apart, this effect should be based on a representation that covers visual information far outside the visual field.


Subject(s)
Learning/physiology , Space Perception/physiology , Visual Fields/physiology , Visual Perception/physiology , Attention , Humans , Orientation/physiology , Photic Stimulation , Reaction Time/physiology
13.
Vision Res ; 129: 1-12, 2016 12.
Article in English | MEDLINE | ID: mdl-27773657

ABSTRACT

Two phenomena have been reported to affect the perceived displacement of a visual target during saccadic eye movements: the blanking effect and landmark effect. In the blanking effect, temporarily blanking the target after a saccade improves displacement judgments. In the landmark effect, illusory target displacement occurs when a continuously presented landmark is displaced during a saccade, and the target is temporarily blanked after the saccade without displacement. We show that the strengths of the blanking and landmark effects vary with stimulus contrast. In the blanking effect, target displacement detection rate increased with luminance contrast of the target. In the landmark effect, illusory target displacement decreased with luminance contrast of the target. Moreover, the landmark effect was found even for stimuli without luminance contrast (equiluminant color stimuli), while the blanking effect disappeared. These results can be attributed to a reduction in sensitivity of target displacement by a reduction of luminance contrast, which suggests that changes in luminance, or transient signals, play a critical role in visual stability across saccades.


Subject(s)
Contrast Sensitivity/physiology , Fixation, Ocular/physiology , Saccades/physiology , Adult , Analysis of Variance , Female , Humans , Lighting , Photic Stimulation/methods , Visual Perception , Young Adult
14.
Sci Rep ; 6: 35513, 2016 10 19.
Article in English | MEDLINE | ID: mdl-27759056

ABSTRACT

Visual attention spreads over a range around its focus, as the spotlight metaphor describes. The spatial spread of attentional enhancement and local selection/inhibition are crucial factors determining the profile of spatial attention. Enhancement and ignoring/suppression are opposite effects of attention and appear to be mutually exclusive, yet no unified view of these factors has been provided, despite its necessity for understanding the functions of spatial attention. This report provides electroencephalographic and behavioral evidence for attentional spread at an early stage and selection/inhibition at a later stage of visual processing. The steady-state visual evoked potential showed broad spatial tuning, whereas the P3 component of the event-related potential showed local selection or inhibition of adjacent areas. Based on these results, we propose a two-stage model of spatial attention, with broad spread at an early stage and local selection at a later stage.


Subject(s)
Attention Deficit Disorder with Hyperactivity/physiopathology , Attention/physiology , Behavior/physiology , Biobehavioral Sciences , Evoked Potentials, Visual , Space Perception/physiology , Visual Perception/physiology , Adult , Electroencephalography , Humans , Male , Photic Stimulation , Visual Fields , Young Adult
15.
Vision Res ; 117: 59-66, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26475088

ABSTRACT

Despite decades of attempts to create models for predicting gaze locations using saliency maps, a highly accurate gaze prediction model for general conditions has yet to be devised. In this study, we propose a gaze prediction method based on head direction that can improve the accuracy of any such model. We used a probability distribution of eye position given head direction (static eye-head coordination) and added this information to a model of saliency-based visual attention. Using empirical data on eye and head directions recorded while observers viewed natural scenes, we estimated a probability distribution of eye position. We then combined the relationship between eye position and head direction with visual saliency to predict gaze locations. The model showed that information on head direction improved prediction accuracy. Further, there was no difference in gaze prediction accuracy between the two models using head-direction information with and without eye-head coordination. Therefore, information on head direction is useful for predicting gaze location when it is available. Furthermore, this gaze prediction model can be applied relatively easily to many everyday situations, such as walking.
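The combination step described here amounts to weighting each location's saliency by the prior probability of the eye being there given the head direction, then predicting gaze at the most probable location. A minimal 1-D sketch, using an assumed Gaussian prior and fabricated numbers (the paper estimates the eye-position distribution empirically rather than assuming its shape):

```python
# Weight a bottom-up saliency map by a prior probability of eye position
# given head direction; predict gaze at the maximum of the product.
# The Gaussian prior and all numbers here are illustrative assumptions.
import math

def predict_gaze(saliency, head_x, sigma=2.0):
    """Return the index of the predicted gaze location.

    saliency : 1-D list of saliency values over horizontal positions
    head_x   : head direction, expressed as the position it points toward
    """
    weighted = [s * math.exp(-((x - head_x) ** 2) / (2 * sigma ** 2))
                for x, s in enumerate(saliency)]
    return max(range(len(weighted)), key=weighted.__getitem__)

saliency = [0.1, 0.9, 0.2, 0.8, 0.1]   # two salient peaks, at indices 1 and 3
prediction = predict_gaze(saliency, head_x=4.0)
```

With the head pointed toward index 4, the prior favors the nearer salient peak; with the head pointed toward index 0, the prediction shifts to the other peak.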


Subject(s)
Attention , Eye/anatomy & histology , Fixation, Ocular/physiology , Head/anatomy & histology , Psychomotor Performance , Visual Perception/physiology , Adolescent , Adult , Female , Humans , Male , Young Adult
16.
J Vis ; 15(11): 12, 2015 Aug 01.
Article in English | MEDLINE | ID: mdl-26275213

ABSTRACT

Pursuit eye movements correlate with perceived motion in both velocity and direction, even without retinal motion. Cortical cells in the monkey medial temporal region generate signals for initiating pursuit eye movements and respond to retinal motion for perception. However, recent studies suggest multiple motion processes, fast and slow, even for low-level motion. Here we investigated whether the relationship with pursuit eye movements is different for fast and slow motion processes, using a motion aftereffect technique with superimposed low- and high-spatial-frequency gratings. A previous study showed that the low- and high-spatial-frequency gratings adapt the fast and slow motion processes, respectively, and that a static test probes the slow motion process and a flicker test probes the fast motion process (Shioiri & Matsumiya, 2009). In the present study, an adaptation stimulus was composed of two gratings with different spatial frequencies and orientations but the same temporal frequency, moving in the orthogonal direction of ±45° from the vertical. We measured the directions of perceived motion and pursuit eye movements to a test stimulus presented after motion adaptation with changing relative contrasts of the two adapting gratings. Pursuit eye movements were observed in the same direction as that of the motion aftereffects, independent of the relative contrasts of the two adapting gratings, for both the static and flicker tests. These results suggest that pursuit eye movements and perception share motion signals in both slow and fast motion processes.


Subject(s)
Motion Perception/physiology , Pursuit, Smooth/physiology , Adaptation, Physiological , Adult , Eye Movements/physiology , Humans , Male , Orientation , Photic Stimulation/methods , Retina/physiology , Temporal Lobe , Young Adult
17.
PLoS One ; 10(3): e0121035, 2015.
Article in English | MEDLINE | ID: mdl-25799510

ABSTRACT

We investigated coordinated movements between the eyes and head ("eye-head coordination") in relation to vision for action. Several studies have measured eye and head movements during a single gaze shift, focusing on the mechanisms of motor control during eye-head coordination. However, in everyday life, gaze shifts occur sequentially and are accompanied by movements of the head and body. Under such conditions, visual cognitive processing influences eye movements and might also influence eye-head coordination, because sequential gaze shifts include cycles of visual processing (fixation) and data acquisition (gaze shifts). In the present study, we examined how the eyes and head move in coordination during visual search in a large visual field. Subjects moved their eyes, head, and body without restriction inside a 360° visual display system. We found patterns of eye-head coordination that differed from those observed in single gaze-shift studies. First, we frequently observed multiple saccades during one continuous head movement, and the contribution of head movement to gaze shifts increased as the number of saccades increased. This relationship between head movements and sequential gaze shifts suggests eye-head coordination over several saccade-fixation sequences; this could be related to cognitive processing because saccade-fixation cycles are the result of visual cognitive processing. Second, the distribution bias of eye position during gaze fixation was highly correlated with head orientation. The distribution peak of eye position was biased in the same direction as head orientation. This influence of head orientation suggests that eye-head coordination is involved in gaze fixation, when the visual system processes retinal information. This further supports the role of eye-head coordination in visual cognitive processing.


Subject(s)
Psychomotor Performance/physiology , Vision, Ocular/physiology , Adult , Eye Movements , Female , Head Movements , Humans , Male , Young Adult
18.
Multisens Res ; 27(2): 127-37, 2014.
Article in English | MEDLINE | ID: mdl-25296475

ABSTRACT

The face aftereffect (FAE; the illusion of faces after adaptation to a face) has been reported to occur without retinal overlap between adaptor and test, but recent studies revealed that the FAE is not constant across all test locations, which suggests that the FAE is also retinotopic. However, it remains unclear whether the characteristic of the retinotopy of the FAE for one facial aspect is the same as that of the FAE for another facial aspect. In the research reported here, an examination of the retinotopy of the FAE for facial expression indicated that the facial expression aftereffect occurs without retinal overlap between adaptor and test, and depends on the retinal distance between them. Furthermore, the results indicate that, although dependence of the FAE on adaptation-test distance is similar between facial expression and facial identity, the FAE for facial identity is larger than that for facial expression when a test face is presented in the opposite hemifield. On the basis of these results, I discuss adaptation mechanisms underlying facial expression processing and facial identity processing for the retinotopy of the FAE.


Subject(s)
Adaptation, Physiological/physiology , Face , Figural Aftereffect/physiology , Pattern Recognition, Visual/physiology , Retina/physiology , Adult , Discrimination, Psychological , Female , Humans , Male , Young Adult
19.
Vis Neurosci ; 31(6): 387-400, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25136894

ABSTRACT

The appearance of colors can be affected by their spatiotemporal context. The shift in color appearance according to the surrounding colors is called color induction or chromatic induction; in particular, a shift toward the opponent color of the surround is called chromatic contrast. To investigate whether chromatic induction occurs even when the chromatic surround is imperceptible, we measured chromatic induction during interocular suppression. A multicolor or uniform color field was presented as the surround stimulus, and a colored continuous flash suppression (CFS) stimulus was presented to the dominant eye of each subject. The subjects were asked to report the appearance of the test field only when the stationary surround stimulus was rendered invisible by interocular suppression with CFS. The resulting shifts in color appearance due to chromatic induction were significant even under the conditions of interocular suppression for all surround stimuli. The magnitude of chromatic induction differed with the surround conditions, and this difference was preserved regardless of the viewing conditions. The chromatic induction effect was reduced by CFS, in proportion to the magnitude of chromatic induction under natural (i.e., no-CFS) viewing conditions. An analysis with linear model fitting revealed the presence of at least two kinds of subprocesses for chromatic induction, residing at higher and lower levels than the site of interocular suppression. One mechanism yields different degrees of chromatic induction based on the complexity of the surround and is unaffected by interocular suppression, while the other changes its output with interocular suppression acting as a gain control. Our results imply that the total chromatic induction effect is achieved via a linear summation of outputs from mechanisms that reside at different levels of visual processing.


Subject(s)
Adaptation, Ocular/physiology , Color Perception/physiology , Color , Contrast Sensitivity/physiology , Visual Fields/physiology , Female , Humans , Male , Photic Stimulation , Psychometrics
20.
Curr Biol ; 24(2): 165-169, 2014 Jan 20.
Article in English | MEDLINE | ID: mdl-24374307

ABSTRACT

The question of how our body parts successfully interact with objects in the outside world is a fundamental problem in cognitive science and neuroscience. This problem is closely related to biologically important behaviors such as avoiding collisions or safely reaching for an object. Although previous studies have suggested that perceiving the space around one's own body is essential for interacting successfully with objects, how one's own body parts influence the ability to perceive the space around the body is unknown. Here, we report a visual motion aftereffect (MAE) that shows spatial selectivity in hand-centered coordinates. The MAE is an illusion of visual motion resulting from adaptation to a moving pattern and normally occurs with retinal overlap between adaptor and test. We found that the MAE occurs without retinal overlap between the adaptor and test when they are presented at the same position relative to a seen hand. This MAE appeared only when participants voluntarily controlled the hand that was felt to be their own. Our results reveal that sense of owning an actively moved body part generates a perceptual representation of the space encoded in body-part-centered coordinates that might be useful for guiding movements of one's body parts.


Subject(s)
Figural Aftereffect , Motion , Movement , Hand/physiology , Humans , Retina/physiology