1.
Sci Rep ; 12(1): 2291, 2022 02 10.
Article in English | MEDLINE | ID: mdl-35145166

ABSTRACT

Our visual environment impacts multiple aspects of cognition, including perception, attention and memory, yet most studies traditionally remove or control the external environment. As a result, we have a limited understanding of neurocognitive processes beyond the controlled lab environment. Here, we aim to study neural processes in real-world environments while maintaining a degree of control over perception. To achieve this, we combined mobile EEG (mEEG) and augmented reality (AR), which allows us to place virtual objects into the real world. We validated this AR and mEEG approach using a well-characterised cognitive response: the face inversion effect. Participants viewed upright and inverted faces in three EEG tasks: (1) a lab-based computer task, (2) walking through an indoor environment while seeing face photographs, and (3) walking through an indoor environment while seeing virtual faces. We find greater low-frequency EEG activity for inverted compared to upright faces in all experimental tasks, demonstrating that cognitively relevant signals can be extracted from mEEG and AR paradigms. This was established both in an epoch-based analysis aligned to face events and in a GLM-based approach that incorporates continuous EEG signals and face perception states. Together, this research helps pave the way to exploring neurocognitive processes in real-world environments while maintaining experimental control using AR.


Subject(s)
Augmented Reality , Cognition/physiology , Electroencephalography/methods , Environment , Facial Recognition/physiology , Neurosciences/methods , Neurosciences/trends , Adult , Female , Humans , Male , Walking/physiology , Young Adult
2.
Wellcome Open Res ; 7: 165, 2022.
Article in English | MEDLINE | ID: mdl-37274451

ABSTRACT

Background: The environments that we live in impact our ability to recognise objects: recognition is facilitated when objects appear in expected (congruent) locations compared to unexpected (incongruent) locations. However, these findings are based on experiments where the object is isolated from its environment. Moreover, it is not clear which components of the recognition process are impacted by the environment. In this experiment, we seek to examine the impact real-world environments have on object recognition. Specifically, we will use mobile electroencephalography (mEEG) and augmented reality (AR) to investigate how the visual and semantic processing aspects of object recognition are changed by the environment.

Methods: We will use AR to place congruent and incongruent virtual objects around indoor and outdoor environments. During the experiment, a total of 34 participants will walk around the environments and find these objects while we record their eye movements and neural signals. We will perform two primary analyses. First, we will analyse the event-related potential (ERP) data using paired-samples t-tests in the N300/400 time windows in an attempt to replicate congruency effects on the N300/400. Second, we will use representational similarity analysis (RSA) and computational models of vision and semantics to determine how visual and semantic processes are changed by congruency.

Conclusions: Based on previous literature, we hypothesise that scene-object congruence will facilitate object recognition. For ERPs, we predict a congruency effect in the N300/N400, and for RSA we predict that higher-level visual and semantic information will be represented earlier for congruent scenes than for incongruent scenes. By collecting mEEG data while participants explore a real-world environment, we will be able to determine the impact of a natural context on object recognition and on the different processing stages of object recognition.

3.
Sci Rep ; 7(1): 7128, 2017 08 02.
Article in English | MEDLINE | ID: mdl-28769042

ABSTRACT

The orientation of a visual grating can be decoded from human primary visual cortex (V1) using functional magnetic resonance imaging (fMRI) at conventional resolutions (2-3 mm voxel width, 3T scanner). It is unclear to what extent this information originates from different spatial scales of neuronal selectivity, ranging from orientation columns to global areal maps. According to the global-areal-map account, fMRI orientation decoding relies exclusively on fMRI voxels in V1 exhibiting a radial or vertical preference. Here we show, by contrast, that 2-mm isotropic voxels in a small patch of V1 within a quarterfield representation exhibit reliable opposite selectivities. Sets of voxels with opposite selectivities are locally intermingled and each set can support orientation decoding. This indicates that global areal maps cannot fully account for orientation information in fMRI and demonstrates that fMRI also reflects fine-grained patterns of neuronal selectivity.

4.
Neuropsychologia ; 88: 65-73, 2016 07 29.
Article in English | MEDLINE | ID: mdl-26427739

ABSTRACT

In everyday life our senses are exposed to a constant influx of sensory signals. The brain binds signals into a coherent percept based on temporal, spatial or semantic correspondences. In addition, synaesthetic correspondences may form important cues for multisensory binding. This study focussed on the synaesthetic correspondences between auditory pitch and visual size. While high pitch has been associated with small objects in static contexts, recent research has surprisingly found that increasing size is linked with rising pitch. The current study presented participants with small or large visual discs together with high- or low-pitched pure tones in an intersensory selective attention paradigm. Whilst fixating a central cross, participants discriminated between small and large size in the visual modality or between high and low pitch in the auditory modality. Across a series of five experiments, we observed convergent evidence that participants associated small visual size with low pitch and large visual size with high pitch. In other words, we observed the pitch-size mapping that has previously been observed only for dynamic contexts. We suggest that these contradictory findings may emerge because participants can interpret visual size as an index of permanent object size or of distance (e.g. in motion) from the observer. Moreover, the pitch-size mapping may depend not only on the relative but also on the absolute levels of pitch and size of the presented stimuli.


Subject(s)
Pattern Recognition, Visual , Pitch Perception , Acoustic Stimulation , Adult , Attention , Discrimination, Psychological , Female , Humans , Male , Photic Stimulation , Reaction Time , Young Adult
5.
J Assoc Res Otolaryngol ; 16(6): 747-62, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26162415

ABSTRACT

The frequency following response (FFR) is a scalp-recorded measure of phase-locked brainstem activity to stimulus-related periodicities. Three experiments investigated the specificity of the FFR for carrier and modulation frequency using adaptation. FFR waveforms evoked by alternating-polarity stimuli were averaged for each polarity and added, to enhance envelope information, or subtracted, to enhance temporal fine structure information. The first experiment investigated peristimulus adaptation of the FFR for pure and complex tones as a function of stimulus frequency and fundamental frequency (F0). It showed more adaptation of the FFR in response to sounds with higher frequencies or F0s than to sounds with lower frequencies or F0s. The second experiment investigated tuning to modulation rate in the FFR. The FFR to a complex tone with a modulation rate of 213 Hz was not reduced more by an adaptor with the same modulation rate than by an adaptor with a different modulation rate (90 or 504 Hz), thus providing no evidence that the FFR originates mainly from neurons that respond selectively to the modulation rate of the stimulus. The third experiment investigated tuning to audio frequency in the FFR using pure tones. An adaptor that had the same frequency as the target (213 or 504 Hz) did not generally reduce the FFR to the target more than an adaptor that differed in frequency (by 1.24 octaves). Thus, there was no evidence that the FFR originated mainly from neurons tuned to the frequency of the target. Instead, the results are consistent with the suggestion that the FFR for low-frequency pure tones at medium to high levels mainly originates from neurons tuned to higher frequencies. Implications for the use and interpretation of the FFR are discussed.


Subject(s)
Adaptation, Physiological , Hearing/physiology , Adolescent , Adult , Electrodiagnosis , Female , Healthy Volunteers , Humans , Male , Young Adult
6.
Front Psychol ; 4: 493, 2013.
Article in English | MEDLINE | ID: mdl-23964251

ABSTRACT

The orientation of a large grating can be decoded from V1 functional magnetic resonance imaging (fMRI) data, even at low resolution (3-mm isotropic voxels). This finding has suggested that columnar-level neuronal information might be accessible to fMRI at 3T. However, orientation decodability might alternatively arise from global orientation-preference maps. Such global maps across V1 could result from bottom-up processing, if the preferences of V1 neurons were biased toward particular orientations (e.g., radial from fixation, or cardinal, i.e., vertical or horizontal). Global maps could also arise from local recurrent or top-down processing, reflecting pre-attentive perceptual grouping, attention spreading, or predictive coding of global form. Here we investigate whether fMRI orientation decoding with 2-mm voxels requires (a) globally coherent orientation stimuli and/or (b) global-scale patterns of V1 activity. We used opposite-orientation gratings (balanced about the cardinal orientations) and spirals (balanced about the radial orientation), along with novel patch-swapped variants of these stimuli. The two stimuli of a patch-swapped pair have opposite orientations everywhere (like their globally coherent parent stimuli). However, the two stimuli appear globally similar: a patchwork of opposite orientations. We find that all stimulus pairs are robustly decodable, demonstrating that fMRI orientation decoding does not require globally coherent orientation stimuli. Furthermore, decoding remained robust after spatial high-pass filtering for all stimuli, showing that fine-grained components of the fMRI patterns reflect visual orientations. Consistent with previous studies, we found evidence for global radial and vertical preference maps in V1. However, these were weak or absent for patch-swapped stimuli, suggesting that global preference maps depend on globally coherent orientations and might arise through recurrent or top-down processes related to the perception of global form.

7.
Front Psychol ; 2: 391, 2012.
Article in English | MEDLINE | ID: mdl-22232613

ABSTRACT

This study demonstrates that moving sounds influence the direction in which one sees visual stimuli move. During the main experiment, sounds were presented consecutively at four speaker locations, inducing leftward or rightward auditory apparent motion. On the path of the auditory apparent motion, visual apparent-motion stimuli were presented with a high degree of directional ambiguity. The main outcome of this experiment is that participants perceived ambiguous visual apparent-motion stimuli (equally likely to be perceived as moving leftward or rightward) more often as moving in the same direction as the auditory apparent motion than in the opposite direction. In the control experiment, we replicated this finding and found no effect of sound motion direction on eye movements. This indicates that auditory motion can capture the visual motion percept when the visual motion direction is insufficiently determinate, without affecting eye movements.

8.
Behav Brain Res ; 228(1): 16-21, 2012 Mar 01.
Article in English | MEDLINE | ID: mdl-22173002

ABSTRACT

Many studies have used test batteries for the evaluation of affective behavior in rodents. This has the advantage that treatment effects can be examined on different aspects of the affective domain. However, behavior in one test may affect behavior in a following test. The present study examined possible order effects in rats that were tested in three different tests: Open Field (OF), Zero Maze (ZM) and Forced Swim Test (FST). The data of the present study indicated that behavior in the ZM was the least affected by the order of testing. In contrast, behavior in the FST (and to a lesser extent the OF) was dependent on the order of the test within the test battery. Repeated testing in the same test did not change behavior in the ZM. However, behavior in the OF and FST changed with repeated testing. The present study indicates that the performance of rats in a test can depend on its order within a test battery. Consequently, these data urge caution in the interpretation of treatment effects in studies in which test batteries are used.


Subject(s)
Behavioral Research/methods , Mood Disorders/psychology , Animals , Behavioral Research/statistics & numerical data , Disease Models, Animal , Male , Maze Learning , Motor Activity , Rats , Rats, Wistar , Statistics as Topic/methods , Swimming