Results 1 - 20 of 36,358
1.
Proc Natl Acad Sci U S A ; 121(24): e2317707121, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38830105

ABSTRACT

Human pose, defined as the spatial relationships between body parts, carries instrumental information supporting the understanding of motion and action of a person. A substantial body of previous work has identified cortical areas responsive to images of bodies and different body parts. However, the neural basis underlying the visual perception of body part relationships has received less attention. To broaden our understanding of body perception, we analyzed high-resolution fMRI responses to a wide range of poses from over 4,000 complex natural scenes. Using ground-truth annotations and an application of three-dimensional (3D) pose reconstruction algorithms, we compared similarity patterns of cortical activity with similarity patterns built from human pose models with different levels of depth availability and viewpoint dependency. Targeting the challenge of explaining variance in complex natural image responses with interpretable models, we achieved statistically significant correlations between pose models and cortical activity patterns (though performance levels are substantially lower than the noise ceiling). We found that the 3D view-independent pose model, compared with two-dimensional models, better captures the activation from distinct cortical areas, including the right posterior superior temporal sulcus (pSTS). These areas, together with other pose-selective regions in the lateral occipitotemporal cortex (LOTC), form a broader, distributed cortical network with greater view-tolerance in more anterior patches. We interpret these findings in light of the computational complexity of natural body images, the wide range of visual tasks supported by pose structures, and possible shared principles for view-invariant processing between articulated objects and ordinary, rigid objects.
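The representational-similarity comparison described above can be illustrated with a brief sketch. This is not the authors' analysis code; the array names, region choice, and dissimilarity metrics are assumptions for illustration only.

```python
# Illustrative sketch (not the authors' code): compare a model representational
# dissimilarity matrix (RDM) built from 3D pose descriptors with an RDM built
# from fMRI response patterns in one region of interest.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_images, n_joints, n_voxels = 200, 17, 500

# Hypothetical data: 3D joint coordinates per image and voxel responses
# from one region of interest (e.g., right pSTS).
pose_3d = rng.normal(size=(n_images, n_joints * 3))
roi_responses = rng.normal(size=(n_images, n_voxels))

# Build RDMs: pairwise dissimilarities between all image pairs.
model_rdm = pdist(pose_3d, metric="euclidean")
neural_rdm = pdist(roi_responses, metric="correlation")

# Model-brain correspondence as a rank correlation between the two RDMs.
rho, p = spearmanr(model_rdm, neural_rdm)
print(f"model-brain RDM correlation: rho={rho:.3f}, p={p:.3g}")
```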


Subject(s)
Brain , Magnetic Resonance Imaging , Humans , Magnetic Resonance Imaging/methods , Male , Female , Adult , Brain/physiology , Brain/diagnostic imaging , Brain Mapping/methods , Visual Perception/physiology , Posture/physiology , Young Adult , Imaging, Three-Dimensional/methods , Photic Stimulation/methods , Algorithms
2.
PeerJ ; 12: e17295, 2024.
Article in English | MEDLINE | ID: mdl-38827290

ABSTRACT

This study aimed to examine the influence of sport skill levels on behavioural and neuroelectric performance in visuospatial attention and memory. Visuospatial tasks were administered to 54 participants, including 18 elite and 18 amateur table tennis players and 18 nonathletes, while event-related potentials were recorded. In all the visuospatial attention and memory conditions, table tennis players displayed faster reaction times than nonathletes, regardless of skill level, although there was no difference in accuracy between groups. In addition, regardless of task conditions, both player groups had a greater P3 amplitude than nonathletes, and elite players exhibited a greater P3 amplitude than amateur players. The results of this study indicate that table tennis players, irrespective of their skill level, exhibit enhanced visuospatial capabilities. Notably, athletes at the elite level appear to benefit from an augmented allocation of attentional resources when engaging in visuospatial tasks.


Subject(s)
Attention , Cognition , Evoked Potentials , Reaction Time , Humans , Male , Young Adult , Attention/physiology , Cognition/physiology , Evoked Potentials/physiology , Reaction Time/physiology , Female , Tennis/physiology , Tennis/psychology , Adult , Space Perception/physiology , Athletes/psychology , Athletic Performance/physiology , Visual Perception/physiology , Electroencephalography , Adolescent
3.
Cereb Cortex ; 34(5), 2024 May 02.
Article in English | MEDLINE | ID: mdl-38700440

ABSTRACT

While the auditory and visual systems each provide distinct information to our brain, they also work together to process and prioritize input to address ever-changing conditions. Previous studies highlighted the trade-off between auditory change detection and visual selective attention; however, the relationship between them is still unclear. Here, we recorded electroencephalography signals from 106 healthy adults in three experiments. Our findings revealed a positive correlation at the population level between the amplitudes of event-related potential indices associated with auditory change detection (mismatch negativity) and visual selective attention (posterior contralateral N2) when elicited in separate tasks. This correlation persisted even when participants performed a visual task while disregarding simultaneous auditory stimuli. Interestingly, as visual attention demand increased, participants whose posterior contralateral N2 amplitude increased the most exhibited the largest reduction in mismatch negativity, suggesting a within-subject trade-off between the two processes. Taken together, our results suggest an intimate relationship and potential shared mechanism between auditory change detection and visual selective attention. We liken this to a total capacity limit that varies between individuals, which could drive correlated individual differences in auditory change detection and visual selective attention, and also within-subject competition between the two, with task-based modulation of visual attention causing a within-participant decrease in auditory change detection sensitivity.


Subject(s)
Attention , Auditory Perception , Electroencephalography , Visual Perception , Humans , Attention/physiology , Male , Female , Young Adult , Adult , Auditory Perception/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Photic Stimulation/methods , Evoked Potentials/physiology , Brain/physiology , Adolescent
4.
Elife ; 12, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38700934

ABSTRACT

Probing memory of a complex visual image within a few hundred milliseconds after its disappearance reveals significantly greater fidelity of recall than if the probe is delayed by as little as a second. Classically interpreted, the former taps into a detailed but rapidly decaying visual sensory or 'iconic' memory (IM), while the latter relies on capacity-limited but comparatively stable visual working memory (VWM). While iconic decay and VWM capacity have been extensively studied independently, currently no single framework quantitatively accounts for the dynamics of memory fidelity over these time scales. Here, we extend a stationary neural population model of VWM with a temporal dimension, incorporating rapid sensory-driven accumulation of activity encoding each visual feature in memory, and a slower accumulation of internal error that causes memorized features to randomly drift over time. Instead of facilitating read-out from an independent sensory store, an early cue benefits recall by lifting the effective limit on VWM signal strength imposed when multiple items compete for representation, allowing memory for the cued item to be supplemented with information from the decaying sensory trace. Empirical measurements of human recall dynamics validate these predictions while excluding alternative model architectures. A key conclusion is that differences in capacity classically thought to distinguish IM and VWM are in fact contingent upon a single resource-limited WM store.
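The verbal description of the model above (fast sensory-driven accumulation plus slow random drift, with signal strength shared across memorized items) can be sketched roughly as follows. The functional forms and parameter values are placeholders for illustration, not the published model.

```python
# Minimal sketch (assumptions, not the authors' model): memorized feature values
# gain encoding strength quickly after stimulus onset and then drift as a random
# walk, so recall error grows with probe delay and with memory load.
import numpy as np

rng = np.random.default_rng(1)

def simulate_recall_error(n_items, delay_s, n_trials=5000,
                          tau_encode=0.2, total_gain=8.0, drift_sd=0.3):
    """Return mean absolute recall error (radians) at a given probe delay."""
    # Normalization: total encoding gain is shared across memorized items.
    gain = total_gain / n_items
    # Encoding strength saturates with time constant tau_encode (s).
    strength = gain * (1.0 - np.exp(-delay_s / tau_encode))
    # Diffusion: the feature estimate drifts with variance growing over time.
    drift = rng.normal(0.0, drift_sd * np.sqrt(delay_s), size=n_trials)
    # Read-out noise shrinks as encoding strength grows.
    readout = rng.normal(0.0, 1.0 / np.sqrt(strength + 1e-9), size=n_trials)
    return np.abs(drift + readout).mean()

for delay in (0.1, 0.3, 1.0, 3.0):
    err = simulate_recall_error(n_items=4, delay_s=delay)
    print(f"delay {delay:4.1f} s -> mean abs error {err:.2f} rad")
```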


Subject(s)
Memory, Short-Term , Models, Neurological , Humans , Memory, Short-Term/physiology , Visual Perception/physiology , Adult , Mental Recall/physiology , Male , Female , Young Adult
5.
Cereb Cortex ; 34(5), 2024 May 02.
Article in English | MEDLINE | ID: mdl-38725291

ABSTRACT

A widely used psychotherapeutic treatment for post-traumatic stress disorder (PTSD) involves performing bilateral eye movement (EM) during trauma memory retrieval. However, how this treatment, known as eye movement desensitization and reprocessing (EMDR), alleviates trauma-related symptoms is unclear. While conventional theories suggest that bilateral EM interferes with concurrently retrieved trauma memories by taxing limited working memory resources, here we propose that bilateral EM instead facilitates information processing. In two EEG experiments, we replicated the bilateral EM procedure of EMDR, having participants engage in continuous bilateral EM or receive bilateral sensory stimulation (BS) as a control while retrieving short- or long-term memories. During EM or BS, we presented bystander images or memory cues to probe neural representations of perceptual and memory information. Multivariate pattern analysis of the EEG signals revealed that bilateral EM enhanced neural representations of simultaneously processed perceptual and memory information. This enhancement was accompanied by heightened visual responses and increased neural excitability in the occipital region. Furthermore, bilateral EM increased information transmission from the occipital to the frontoparietal region, indicating facilitated information transition from low-level perceptual representation to high-level memory representation. These findings argue for theories that emphasize information facilitation rather than disruption in the EMDR treatment.
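A time-resolved decoding analysis in the spirit of the multivariate pattern analysis mentioned above might look like the following sketch; the data are simulated and the classifier choice is an assumption, not the authors' pipeline.

```python
# Hedged sketch (simulated data, assumed pipeline): time-resolved decoding of a
# binary stimulus category from EEG spatial patterns with scikit-learn.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_channels, n_times = 200, 64, 100
eeg = rng.normal(size=(n_trials, n_channels, n_times))  # trials x channels x samples
labels = rng.integers(0, 2, size=n_trials)              # e.g., memory-cue category

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracy = np.empty(n_times)
for t in range(n_times):
    # Decode from the spatial pattern at each time point (5-fold cross-validation).
    accuracy[t] = cross_val_score(clf, eeg[:, :, t], labels, cv=5).mean()

print("peak decoding accuracy:", accuracy.max())
```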


Subject(s)
Electroencephalography , Eye Movement Desensitization Reprocessing , Humans , Female , Male , Young Adult , Adult , Eye Movement Desensitization Reprocessing/methods , Eye Movements/physiology , Stress Disorders, Post-Traumatic/physiopathology , Stress Disorders, Post-Traumatic/therapy , Stress Disorders, Post-Traumatic/psychology , Visual Perception/physiology , Memory/physiology , Brain/physiology , Photic Stimulation/methods , Memory, Short-Term/physiology
6.
Sci Rep ; 14(1): 10164, 2024 05 03.
Article in English | MEDLINE | ID: mdl-38702338

ABSTRACT

Orientation processing is one of the most fundamental functions in both visual and somatosensory perception. Converging findings suggest that orientation processing in both modalities is closely linked: somatosensory neurons share a similar orientation organisation as visual neurons, and the visual cortex has been found to be heavily involved in tactile orientation perception. Hence, we hypothesized that somatosensation would exhibit a similar orientation adaptation effect, and this adaptation effect would be transferable between the two modalities, considering the above-mentioned connection. The tilt aftereffect (TAE) is a demonstration of orientation adaptation and is used widely in behavioural experiments to investigate orientation mechanisms in vision. By testing the classic TAE paradigm in both tactile and crossmodal orientation tasks between vision and touch, we were able to show that tactile perception of orientation shows a very robust TAE, similar to its visual counterpart. We further show that orientation adaptation in touch transfers to produce a TAE when tested in vision, but not vice versa. Additionally, when examining the test sequence following adaptation for serial effects, we observed another asymmetry between the two conditions where the visual test sequence displayed a repulsive intramodal serial dependence effect while the tactile test sequence exhibited an attractive serial dependence. These findings provide concrete evidence that vision and touch engage a similar orientation processing mechanism. However, the asymmetry in the crossmodal transfer of TAE and serial dependence points to a non-reciprocal connection between the two modalities, providing further insights into the underlying processing mechanism.


Subject(s)
Adaptation, Physiological , Touch Perception , Visual Perception , Humans , Male , Female , Adult , Touch Perception/physiology , Visual Perception/physiology , Young Adult , Orientation/physiology , Touch/physiology , Orientation, Spatial/physiology , Vision, Ocular/physiology , Visual Cortex/physiology
7.
Sci Rep ; 14(1): 10183, 2024 05 03.
Article in English | MEDLINE | ID: mdl-38702452

ABSTRACT

The perception of halos and other night vision disturbances is a common complaint in clinical practice. Such visual disturbances must be assessed in order to fully characterize each patient's visual performance, which is particularly relevant when carrying out a range of daily tasks. Visual problems are usually assessed using achromatic stimuli, yet the stimuli encountered in daily life have very different chromaticities. Hence, it is important to assess the effect of the chromaticity of visual stimuli on night vision disturbances. The aim of this work is to study the influence of the chromaticity of different visual stimuli on night vision disturbances by analyzing straylight and visual discrimination under low-light conditions. To this end, we assessed the monocular and binocular visual discrimination of 27 subjects under low illumination using the Halo test. The subjects' visual discrimination was assessed after exposure to different visual stimuli: achromatic, red, green, and blue, both at the monitor's maximum luminance and with the same luminance value maintained across the different visual stimuli. Monocular straylight was also measured for achromatic, red, green, and blue stimuli. The blue stimulus had the greatest effect on halos in both monocular and binocular conditions. Visual discrimination was similar for the red, green, and achromatic stimuli, but worsened at lower luminance. The greatest influence of straylight was observed for the blue stimulus. In addition, visual discrimination correlated with straylight measurements for achromatic stimuli, wherein greater straylight values correlated with an increased perception of halos and other visual disturbances.


Subject(s)
Photic Stimulation , Humans , Male , Female , Adult , Night Vision/physiology , Young Adult , Light , Vision, Binocular/physiology , Visual Perception/physiology , Color Perception/physiology , Vision Disorders/physiopathology , Lighting , Middle Aged
8.
Sci Rep ; 14(1): 10261, 2024 05 04.
Article in English | MEDLINE | ID: mdl-38704441

ABSTRACT

Previous studies have suggested behavioral patterns, such as visual attention and eye movements, relate to individual personality traits. However, these studies mainly focused on free visual tasks, and the impact of visual field restriction remains inadequately understood. The primary objective of this study is to elucidate the patterns of conscious eye movements induced by visual field restriction and to examine how these patterns relate to individual personality traits. Building on previous research, we aim to gain new insights through two behavioral experiments, unraveling the intricate relationship between visual behaviors and individual personality traits. As a result, both Experiment 1 and Experiment 2 revealed differences in eye movements during free observation and visual field restriction. Particularly, simulation results based on the analyzed data showed clear distinctions in eye movements between free observation and visual field restriction conditions. This suggests that eye movements during free observation involve a mixture of conscious and unconscious eye movements. Furthermore, we observed significant correlations between conscious eye movements and personality traits, with more pronounced effects in the visual field restriction condition used in Experiment 2 compared to Experiment 1. These analytical findings provide a novel perspective on human cognitive processes through visual perception.


Subject(s)
Eye Movements , Personality , Visual Fields , Humans , Visual Fields/physiology , Eye Movements/physiology , Male , Personality/physiology , Female , Adult , Young Adult , Attention/physiology , Visual Perception/physiology
10.
Commun Biol ; 7(1): 550, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38719883

ABSTRACT

Perceptual and cognitive processing relies on flexible communication among cortical areas; however, the underlying neural mechanism remains unclear. Here we report a mechanism based on the realistic spatiotemporal dynamics of propagating wave patterns in neural population activity. Using a biophysically plausible, multiarea spiking neural circuit model, we demonstrate that these wave patterns, characterized by their rich and complex dynamics, can account for a wide variety of empirically observed neural processes. The coordinated interactions of these wave patterns give rise to distributed and dynamic communication (DDC) that enables flexible and rapid routing of neural activity across cortical areas. We elucidate how DDC unifies the previously proposed oscillation synchronization-based and subspace-based views of interareal communication, offering experimentally testable predictions that we validate through the analysis of Allen Institute Neuropixels data. Furthermore, we demonstrate that DDC can be effectively modulated during attention tasks through the interplay of neuromodulators and cortical feedback loops. This modulation process explains many neural effects of attention, underscoring the fundamental functional role of DDC in cognition.


Subject(s)
Attention , Models, Neurological , Attention/physiology , Humans , Cerebral Cortex/physiology , Animals , Nerve Net/physiology , Visual Perception/physiology , Neurons/physiology , Cognition/physiology
11.
Sci Rep ; 14(1): 10593, 2024 05 08.
Article in English | MEDLINE | ID: mdl-38719939

ABSTRACT

Previous research on the neural correlates of consciousness (NCC) in visual perception revealed an early event-related potential (ERP), the visual awareness negativity (VAN), to be associated with stimulus awareness. However, due to the use of brief stimulus presentations in previous studies, it remains unclear whether awareness-related negativities represent a transient onset-related response or correspond to the duration of a conscious percept. Studies are required that allow prolonged stimulus presentation under aware and unaware conditions. The present ERP study aimed to tackle this challenge by using a novel stimulation design. Male and female human participants (n = 62) performed a visual task while task-irrelevant line stimuli were presented in the background for either 500 or 1000 ms. The line stimuli sometimes contained a face, which required so-called visual one-shot learning to be seen. Half of the participants were informed about the presence of the face, resulting in faces being perceived by the informed but not by the uninformed participants. Comparing ERPs between the informed and uninformed group revealed an enhanced negativity over occipitotemporal electrodes that persisted for the entire duration of stimulus presentation. Our results suggest that sustained visual awareness negativities (SVAN) are associated with the duration of stimulus presentation.


Subject(s)
Consciousness , Electroencephalography , Evoked Potentials , Visual Perception , Humans , Male , Female , Consciousness/physiology , Visual Perception/physiology , Adult , Young Adult , Evoked Potentials/physiology , Photic Stimulation , Awareness/physiology , Evoked Potentials, Visual/physiology
12.
J Vis ; 24(5): 3, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38709511

ABSTRACT

In everyday life we frequently make simple visual judgments about object properties, for example, how big or wide is a certain object? Our goal is to test whether there are also task-specific oculomotor routines that support perceptual judgments, similar to the well-established exploratory routines for haptic perception. In a first study, observers saw different scenes with two objects presented in a photorealistic virtual reality environment. Observers were asked to judge which of two objects was taller or wider while gaze was tracked. All tasks were performed with the same set of virtual objects in the same scenes, so that we can compare spatial characteristics of exploratory gaze behavior to quantify oculomotor routines for each task. Width judgments showed fixations around the center of the objects with larger horizontal spread. In contrast, for height judgments, gaze was shifted toward the top of the objects with larger vertical spread. These results suggest specific strategies in gaze behavior that presumably are used for perceptual judgments. To test the causal link between oculomotor behavior and perception, in a second study, observers could freely gaze at the object or we introduced a gaze-contingent setup forcing observers to fixate specific positions on the object. Discrimination performance was similar between free-gaze and the gaze-contingent conditions for width and height judgments. These results suggest that although gaze is adapted for different tasks, performance seems to be based on a perceptual strategy, independent of potential cues that can be provided by the oculomotor system.


Subject(s)
Eye Movements , Fixation, Ocular , Judgment , Humans , Judgment/physiology , Male , Female , Adult , Eye Movements/physiology , Young Adult , Fixation, Ocular/physiology , Photic Stimulation/methods , Virtual Reality , Visual Perception/physiology
13.
Sci Rep ; 14(1): 10494, 2024 05 07.
Article in English | MEDLINE | ID: mdl-38714660

ABSTRACT

Binocular visual plasticity can be initiated via either bottom-up or top-down mechanisms, but it is unknown if these two forms of adult plasticity can be independently combined. In seven participants with normal binocular vision, sensory eye dominance was assessed using a binocular rivalry task, before and after a period of monocular deprivation and with and without selective attention directed towards one eye. On each trial, participants reported the dominant monocular target and the inter-ocular contrast difference between the stimuli was systematically altered to obtain estimates of ocular dominance. We found that both monocular light- and pattern-deprivation shifted dominance in favour of the deprived eye. However, this shift was completely counteracted if the non-deprived eye's stimulus was selectively attended. These results reveal that shifts in ocular dominance, driven by bottom-up and top-down selection, appear to act independently to regulate the relative contrast gain between the two eyes.


Subject(s)
Dominance, Ocular , Vision, Binocular , Humans , Vision, Binocular/physiology , Dominance, Ocular/physiology , Adult , Male , Female , Young Adult , Neuronal Plasticity/physiology , Photic Stimulation , Vision, Monocular/physiology , Visual Perception/physiology , Attention/physiology
14.
Cereb Cortex ; 34(5), 2024 May 02.
Article in English | MEDLINE | ID: mdl-38706138

ABSTRACT

Perceptual decision-making is affected by uncertainty arising from the reliability of incoming sensory evidence (perceptual uncertainty) and the categorization of that evidence relative to a choice boundary (categorical uncertainty). Here, we investigated how these factors impact the temporal dynamics of evidence processing during decision-making and subsequent metacognitive judgments. Participants performed a motion discrimination task while electroencephalography was recorded. We manipulated perceptual uncertainty by varying motion coherence, and categorical uncertainty by varying the angular offset of motion signals relative to a criterion. After each trial, participants rated their desire to change their mind. High uncertainty impaired perceptual and metacognitive judgments and reduced the amplitude of the centro-parietal positivity, a neural marker of evidence accumulation. Coherence and offset affected the centro-parietal positivity at different time points, suggesting that perceptual and categorical uncertainty affect decision-making in sequential stages. Moreover, the centro-parietal positivity predicted participants' metacognitive judgments: larger predecisional centro-parietal positivity amplitude was associated with less desire to change one's mind, whereas larger postdecisional centro-parietal positivity amplitude was associated with greater desire to change one's mind, but only following errors. These findings reveal a dissociation between predecisional and postdecisional evidence processing, suggesting that the CPP tracks potentially distinct cognitive processes before and after a decision.


Subject(s)
Decision Making , Electroencephalography , Judgment , Metacognition , Humans , Male , Female , Decision Making/physiology , Young Adult , Metacognition/physiology , Adult , Uncertainty , Judgment/physiology , Motion Perception/physiology , Brain/physiology , Photic Stimulation/methods , Visual Perception/physiology
15.
Cortex ; 175: 41-53, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38703715

ABSTRACT

Visual search is speeded when a target is repeatedly presented in an invariant scene context of nontargets (contextual cueing), demonstrating observers' capability for using statistical long-term memory (LTM) to make predictions about upcoming sensory events, thus improving attentional orienting. In the current study, we investigated whether expectations arising from individual, learned environmental structures can encompass multiple target locations. We recorded event-related potentials (ERPs) while participants performed a contextual cueing search task with repeated and non-repeated spatial item configurations. Notably, a given search display could be associated with either a single target location (standard contextual cueing) or two possible target locations. Our results showed that LTM-guided attention was always limited to only one target position in both single- and dual-target displays, as evidenced by expedited reaction times (RTs) and enhanced N1pc and N2pc deflections contralateral to one ("dominant") target of up to two repeating target locations. This contrasts with the processing of non-learned ("minor") target positions (in dual-target displays), which revealed slowed RTs alongside an initial N1pc "misguidance" signal that then vanished in the subsequent N2pc. This RT slowing was accompanied by enhanced N200 and N400 waveforms over fronto-central electrodes, suggesting that control mechanisms regulate the competition between dominant and minor targets. Our study thus reveals a dissociation in processing dominant versus minor targets: While LTM templates guide attention to dominant targets, minor targets necessitate control processes to overcome the automatic bias towards previously learned, dominant target locations.


Subject(s)
Attention , Cues , Electroencephalography , Evoked Potentials , Reaction Time , Humans , Attention/physiology , Male , Female , Evoked Potentials/physiology , Reaction Time/physiology , Young Adult , Adult , Electroencephalography/methods , Visual Perception/physiology , Photic Stimulation/methods , Orientation/physiology , Memory, Long-Term/physiology
16.
Cortex ; 175: 54-65, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38704919

ABSTRACT

The dorsal attention network (DAN) is a network of brain regions essential for attentional orienting, which includes the lateral intraparietal area (LIP) and frontal eye field (FEF). Recently, the putative human dorsal posterior infero-temporal area (phPITd) has been identified as a new node of the DAN. However, its functional relationship with other areas of the DAN and its specific role in visual attention remained unclear. In this study, we analyzed a large publicly available neuroimaging dataset to investigate the intrinsic functional connectivities (FCs) of the phPITd with other brain areas. The results showed that the intrinsic FCs of the phPITd with the areas of the visual network and the DAN were significantly stronger than those with the ventral attention network (VAN) areas and areas of other networks. We further conducted individual difference analyses with a sample size of 295 participants and a series of attentional tasks to investigate which attentional components each phPITd-based DAN edge predicts. Our findings revealed that the intrinsic FC of the left phPITd with the LIPv could predict individual ability in attentional orienting, but not in alerting, executive control, and distractor suppression. Our results not only provide direct evidence of the phPITd's functional relationship with the LIPv, but also offer a comprehensive understanding of its specific role in visual attention.
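The connectivity-behavior analysis described above follows a common recipe: correlate two regional time courses per participant, then relate the (Fisher-transformed) connectivity values to a behavioral score across participants. The sketch below uses simulated data and hypothetical variable names, not the study's dataset.

```python
# Illustrative sketch (hypothetical data): intrinsic functional connectivity as
# the correlation of two regional time courses, related to behavior across subjects.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_subjects, n_timepoints = 295, 400

fc = np.empty(n_subjects)
orienting_score = rng.normal(size=n_subjects)   # hypothetical attentional-orienting measure
for s in range(n_subjects):
    phpitd_ts = rng.normal(size=n_timepoints)   # left phPITd time course (simulated)
    lipv_ts = rng.normal(size=n_timepoints)     # LIPv time course (simulated)
    fc[s] = np.corrcoef(phpitd_ts, lipv_ts)[0, 1]

# Fisher z-transform the correlations before the across-subject test.
fc_z = np.arctanh(fc)
r, p = pearsonr(fc_z, orienting_score)
print(f"FC-behavior correlation: r={r:.3f}, p={p:.3g}")
```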


Subject(s)
Attention , Brain Mapping , Magnetic Resonance Imaging , Temporal Lobe , Visual Perception , Humans , Attention/physiology , Male , Female , Adult , Temporal Lobe/physiology , Temporal Lobe/diagnostic imaging , Young Adult , Magnetic Resonance Imaging/methods , Visual Perception/physiology , Orientation/physiology , Parietal Lobe/physiology , Parietal Lobe/diagnostic imaging , Nerve Net/physiology , Nerve Net/diagnostic imaging
17.
Brain Behav ; 14(5): e3517, 2024 May.
Article in English | MEDLINE | ID: mdl-38702896

ABSTRACT

INTRODUCTION: Attention and working memory are key cognitive functions that allow us to select and maintain information in our mind for a short time, being essential for our daily life and, in particular, for learning and academic performance. It has been shown that musical training can improve working memory performance, but it is still unclear if and how the neural mechanisms of working memory and particularly attention are implicated in this process. In this work, we aimed to identify the oscillatory signature of bimodal attention and working memory that contributes to improved working memory in musically trained children. MATERIALS AND METHODS: We recruited children with and without musical training and asked them to complete a bimodal (auditory/visual) attention and working memory task while their brain activity was measured using electroencephalography. Behavioral, time-frequency, and source reconstruction analyses were performed. RESULTS: Results showed that, overall, musically trained children performed better on the task than children without musical training. When comparing musically trained children with children without musical training, we found modulations in the alpha band pre-stimuli onset and at the beginning of stimuli onset in the frontal and parietal regions. These correlated with correct responses to the attended modality. Moreover, during the end phase of stimuli presentation, we found modulations correlating with correct responses independent of attention condition in the theta and alpha bands, in the left frontal and right parietal regions. CONCLUSIONS: These results suggest that musically trained children have improved neuronal mechanisms for both attention allocation and memory encoding. Our results can be important for developing interventions for people with attention and working memory difficulties.
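The time-frequency step described above is commonly implemented by convolving each epoch with complex Morlet wavelets and taking power in the theta and alpha bands; the sketch below shows that idea on simulated data, with illustrative parameters rather than the study's settings.

```python
# Sketch on simulated data (illustrative parameters): Morlet-wavelet power in
# the theta (4-7 Hz) and alpha (8-13 Hz) bands for a single EEG epoch.
import numpy as np

fs = 500.0                                    # sampling rate (Hz)
t = np.arange(-0.5, 1.0, 1.0 / fs)            # one epoch around stimulus onset
rng = np.random.default_rng(4)
epoch = rng.normal(size=t.size) + 0.5 * np.sin(2 * np.pi * 10 * t)  # noise + 10 Hz alpha

def morlet(freq, n_cycles, fs):
    """Complex Morlet wavelet at a given frequency (Hz)."""
    sigma = n_cycles / (2 * np.pi * freq)               # temporal std (s)
    tw = np.arange(-3 * sigma, 3 * sigma, 1.0 / fs)
    return np.exp(2j * np.pi * freq * tw) * np.exp(-tw**2 / (2 * sigma**2))

freqs = np.arange(4, 14)
power = np.empty((freqs.size, t.size))
for i, f in enumerate(freqs):
    conv = np.convolve(epoch, morlet(f, n_cycles=5, fs=fs), mode="same")
    power[i] = np.abs(conv) ** 2

print("mean theta power:", power[freqs < 8].mean())
print("mean alpha power:", power[freqs >= 8].mean())
```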


Subject(s)
Alpha Rhythm , Attention , Memory, Short-Term , Music , Theta Rhythm , Humans , Memory, Short-Term/physiology , Attention/physiology , Male , Female , Child , Theta Rhythm/physiology , Alpha Rhythm/physiology , Auditory Perception/physiology , Electroencephalography , Visual Perception/physiology , Brain/physiology
18.
Headache ; 64(5): 482-493, 2024 May.
Article in English | MEDLINE | ID: mdl-38693749

ABSTRACT

OBJECTIVE: In this cross-sectional observational study, we aimed to investigate sensory profiles and multisensory integration processes in women with migraine using virtual dynamic interaction systems. BACKGROUND: Compared to studies on unimodal sensory processing, fewer studies show that multisensory integration differs in patients with migraine. Multisensory integration of visual, auditory, verbal, and haptic modalities has not been evaluated in migraine. METHODS: A 12-min virtual dynamic interaction game consisting of four parts was played by the participants. During the game, the participants were exposed to either visual stimuli only or multisensory stimuli in which auditory, verbal, and haptic stimuli were added to the visual stimuli. A total of 78 women participants (28 with migraine without aura and 50 healthy controls) were enrolled in this prospective exploratory study. Patients with migraine and healthy participants who met the inclusion criteria were randomized separately into visual and multisensory groups: migraine multisensory (14 adults), migraine visual (14 adults), healthy multisensory (25 adults), and healthy visual (25 adults). The Sensory Profile Questionnaire was utilized to assess the participants' sensory profiles. The game scores and survey results were analyzed. RESULTS: With visual stimuli only, the gaming performance scores of patients with migraine without aura were similar to those of the healthy controls, at a median (interquartile range [IQR]) of 81.8 (79.5-85.8) and 80.9 (77.1-84.2) (p = 0.149). The error rate for visual stimuli in patients with migraine without aura was comparable to that of healthy controls, at a median (IQR) of 0.11 (0.08-0.13) and 0.12 (0.10-0.14), respectively (p = 0.166). With multisensory stimulation, the average gaming score was lower in patients with migraine without aura than in healthy individuals (median [IQR] 82.2 [78.8-86.3] vs. 78.6 [74.0-82.4], p = 0.028). In women with migraine, exposure to a new sensory modality added to the visual stimuli in the fourth, seventh, and tenth rounds (median [IQR] 78.1 [74.1-82.0], 79.7 [77.2-82.5], 76.5 [70.2-82.1]) yielded lower game scores than visual stimuli only (median [IQR] 82.3 [77.9-87.8], 84.2 [79.7-85.6], 80.8 [79.0-85.7], p = 0.044, p = 0.049, p = 0.016). According to the Sensory Profile Questionnaire results, the sensory sensitivity and sensory avoidance scores of patients with migraine (median [IQR] score 45.5 [41.0-54.7] and 47.0 [41.5-51.7]) were significantly higher than those of healthy participants (median [IQR] score 39.0 [34.0-44.2] and 40.0 [34.0-48.0], p < 0.001, p = 0.001). CONCLUSION: The virtual dynamic game approach showed for the first time that the gaming performance of patients with migraine without aura was negatively affected by the addition of auditory, verbal, and haptic stimuli onto visual stimuli. Multisensory integration of sensory modalities, including haptic stimuli, is disturbed even in the interictal period in women with migraine. Virtual games can be employed to assess the impact of sensory problems in the course of the disease. Also, sensory training could be a potential therapy target to improve multisensory processing in migraine.


Subject(s)
Migraine Disorders , Humans , Female , Adult , Cross-Sectional Studies , Migraine Disorders/physiopathology , Prospective Studies , Video Games , Visual Perception/physiology , Young Adult , Virtual Reality , Photic Stimulation/methods , Auditory Perception/physiology
19.
J Vis ; 24(5): 5, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38722273

ABSTRACT

A key question in perception research is how stimulus variations translate into perceptual magnitudes, that is, the perceptual encoding process. As experimenters, we cannot probe perceptual magnitudes directly, but infer the encoding process from responses obtained in a psychophysical experiment. The most prominent experimental technique to measure perceptual appearance is matching, where observers adjust a probe stimulus to match a target in its appearance along the dimension of interest. The resulting data quantify the perceived magnitude of the target in physical units of the probe, and are thus an indirect expression of the underlying encoding process. In this paper, we show analytically and in simulation that data from matching tasks do not sufficiently constrain perceptual encoding functions, because there exist an infinite number of pairs of encoding functions that generate the same matching data. We use simulation to demonstrate that maximum likelihood conjoint measurement (Ho, Landy, & Maloney, 2008; Knoblauch & Maloney, 2012) does an excellent job of recovering the shape of ground truth encoding functions from data that were generated with these very functions. Finally, we measure perceptual scales and matching data for White's effect (White, 1979) and show that the matching data can be predicted from the estimated encoding functions, down to individual differences.
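The central identifiability point can be demonstrated in a few lines: a match depends only on the composition of the probe's inverse encoder with the target's encoder, so transforming both encoders by the same monotone function leaves the matching data unchanged. The power-law encoders below are arbitrary examples, not the paper's functions.

```python
# Small numeric illustration: two different encoder pairs that yield identical
# matching data, because only the composition f_probe^-1(f_target(x)) is constrained.
import numpy as np

x = np.linspace(0.05, 1.0, 50)            # target intensities

def match(f_target, f_probe_inv, x):
    """Probe setting that equates perceived magnitudes: m = f_probe^-1(f_target(x))."""
    return f_probe_inv(f_target(x))

# Pair A: target and probe share a square-root encoder.
m_a = match(lambda v: np.sqrt(v), lambda u: u**2, x)

# Pair B: both encoders are transformed by the same monotone function g(u) = u**3,
# which cancels in the match.
m_b = match(lambda v: np.sqrt(v)**3, lambda u: u**(2 / 3), x)

print(np.allclose(m_a, m_b))              # True: identical matching data
```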


Subject(s)
Psychophysics , Humans , Psychophysics/methods , Visual Perception/physiology , Photic Stimulation/methods
20.
PLoS One ; 19(5): e0303400, 2024.
Article in English | MEDLINE | ID: mdl-38739635

ABSTRACT

Visual abilities tend to vary predictably across the visual field-for simple low-level stimuli, visibility is better along the horizontal vs. vertical meridian and in the lower vs. upper visual field. In contrast, face perception abilities have been reported to show either distinct or entirely idiosyncratic patterns of variation in peripheral vision, suggesting a dissociation between the spatial properties of low- and higher-level vision. To assess this link more clearly, we extended methods used in low-level vision to develop an acuity test for face perception, measuring the smallest size at which facial gender can be reliably judged in peripheral vision. In 3 experiments, we show the characteristic inversion effect, with better acuity for upright faces than inverted, demonstrating the engagement of high-level face-selective processes in peripheral vision. We also observe a clear advantage for gender acuity on the horizontal vs. vertical meridian and a smaller-but-consistent lower- vs. upper-field advantage. These visual field variations match those of low-level vision, indicating that higher-level face processing abilities either inherit or actively maintain the characteristic patterns of spatial selectivity found in early vision. The commonality of these spatial variations throughout the visual hierarchy means that the location of faces in our visual field systematically influences our perception of them.
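An acuity threshold of the kind described can be estimated by fitting a psychometric function to proportion correct as a function of face size; the sketch below uses simulated two-alternative forced-choice data and an assumed cumulative-Gaussian form, not the study's measurements.

```python
# Hedged sketch: estimate a gender-discrimination acuity threshold by fitting a
# cumulative-Gaussian psychometric function to simulated proportion-correct data.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(size, mu, sigma):
    # 2AFC task: performance runs from 0.5 (guessing) to 1.0.
    return 0.5 + 0.5 * norm.cdf(size, loc=mu, scale=sigma)

sizes = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])             # face size (deg)
p_correct = np.array([0.52, 0.60, 0.74, 0.88, 0.96, 0.99])   # simulated data

params, _ = curve_fit(psychometric, sizes, p_correct, p0=[1.5, 0.5])
mu, sigma = params
# Acuity threshold: the size supporting, e.g., 75% correct responses.
threshold = norm.ppf((0.75 - 0.5) / 0.5, loc=mu, scale=sigma)
print(f"estimated 75% correct threshold: {threshold:.2f} deg")
```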


Subject(s)
Facial Recognition , Visual Fields , Humans , Visual Fields/physiology , Female , Male , Adult , Facial Recognition/physiology , Young Adult , Photic Stimulation , Visual Perception/physiology , Visual Acuity/physiology , Face/physiology