Results 1 - 20 of 59
1.
J Neurosci ; 44(26), 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38740441

ABSTRACT

Humans make decisions about food every day. The visual system provides important information that forms a basis for these food decisions. Although previous research has focused on visual object and category representations in the brain, it is still unclear how visually presented food is encoded by the brain. Here, we investigate the time course of food representations in the brain. We used time-resolved multivariate analyses of electroencephalography (EEG) data, obtained from human participants (both sexes), to determine which food features are represented in the brain and whether focused attention is needed for this. We recorded EEG while participants engaged in two different tasks: in one task the stimuli were task relevant, whereas in the other they were not. Our findings indicate that the brain can differentiate between food and nonfood items from ∼112 ms after stimulus onset. The neural signal at later latencies contained information about food naturalness, the degree to which the food was transformed, and its perceived caloric content. This information was present regardless of the task. Information about whether food is immediately ready to eat, however, was only present when the food was task relevant and presented at a slow presentation rate. Furthermore, the recorded brain activity correlated with behavioral responses in an odd-item-out task. The fast representation of these food features, together with the finding that this information is used to guide food categorization decisions, suggests that these features are important dimensions along which the representation of foods is organized.


Subject(s)
Brain; Electroencephalography; Food; Photic Stimulation; Humans; Male; Female; Brain/physiology; Adult; Electroencephalography/methods; Young Adult; Photic Stimulation/methods; Reaction Time/physiology; Time Factors; Attention/physiology; Decision Making/physiology
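
A minimal sketch of the time-resolved MVPA approach described in the abstract above: a classifier is trained and cross-validated independently at every timepoint of an epoched EEG array, and the first timepoint at which accuracy rises above chance estimates when the information emerges. The data, the food/nonfood labels, and the onset threshold below are illustrative assumptions, not the authors' pipeline.

```python
# Time-resolved decoding sketch: fit and cross-validate a classifier at each
# timepoint of an epoched EEG array (trials x channels x time). Data are
# synthetic; labels (0 = nonfood, 1 = food) and shapes are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 120
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)

# Inject a weak class difference after a "stimulus onset" at sample 40,
# so decoding rises above chance from that point on.
X[y == 1, :, 40:] += 0.15

accuracy = np.empty(n_times)
for t in range(n_times):
    clf = LogisticRegression(max_iter=1000)
    # 5-fold cross-validated accuracy on the channel pattern at timepoint t
    accuracy[t] = cross_val_score(clf, X[:, :, t], y, cv=5).mean()

onset = int(np.argmax(accuracy > 0.55))  # crude estimate of information onset
print(f"decoding first exceeds 55% accuracy at sample {onset}")
```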
2.
PLoS Comput Biol ; 20(1): e1011760, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38190390

ABSTRACT

The basic computations performed in the human early visual cortex are the foundation for visual perception. While we know a lot about these computations, a key missing piece is how the coding of visual features relates to our perception of the environment. To investigate visual feature coding, feature interactions, and their relationship to human perception, we examined neural responses and perceptual similarity judgements for a large set of visual stimuli that varied parametrically along four feature dimensions. We measured neural responses using electroencephalography (N = 16) to 256 grating stimuli that varied in orientation, spatial frequency, contrast, and colour. We then mapped the response profiles of the neural coding of each visual feature and their interactions, and related these to independently obtained behavioural judgements of stimulus similarity. The results confirmed fundamental principles of feature coding in the visual system: all four features were processed simultaneously but differed in their dynamics, and there was distinctive conjunction coding for different combinations of features in the neural responses. Importantly, modelling of the behaviour revealed that every stimulus feature contributed to perceptual judgements, despite the untargeted nature of the behavioural task. Further, the relationship between neural coding and behaviour was evident from the initial processing stages, signifying that the fundamental features, not just their interactions, contribute to perception. This study highlights the importance of understanding how feature coding progresses through the visual hierarchy and the relationship between different stages of processing and perception.


Subject(s)
Visual Cortex; Visual Perception; Humans; Photic Stimulation/methods; Visual Perception/physiology; Electroencephalography; Visual Cortex/physiology; Brain Mapping
3.
Conscious Cogn ; 117: 103598, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38086154

ABSTRACT

Little is known about the perceptual characteristics of mental images or how they vary across sensory modalities. We conducted an exhaustive survey of how mental images are experienced across modalities, mainly targeting visual and auditory imagery of a single stimulus, the letter "O", to facilitate direct comparisons. We investigated temporal properties of mental images (e.g. onset latency, duration), spatial properties (e.g. apparent location), effort (e.g. ease, spontaneity, control), movement requirements (e.g. eye movements), real-imagined interactions (e.g. inner speech while reading), beliefs about imagery norms and terminologies, as well as respondent confidence. Participants also reported on the five traditional senses and their prominence during thinking, imagining, and dreaming. Overall, visual and auditory experiences dominated mental events, although auditory mental images were superior to visual mental images on almost every metric tested except spatial properties. Our findings suggest that modality-specific differences in mental imagery may parallel those of other sensory neural processes.


Subject(s)
Imagination; Sensation; Humans; Visual Perception; Imagery, Psychotherapy; Auditory Perception
4.
Neurosci Conscious ; 2023(1): niad018, 2023.
Article in English | MEDLINE | ID: mdl-37621984

ABSTRACT

Mental imagery is a process by which thoughts become experienced with sensory characteristics. Yet, it is not clear why mental images appear diminished compared to veridical images, nor how mental images are phenomenologically distinct from hallucinations, another type of non-veridical sensory experience. Current evidence suggests that imagination and veridical perception share neural resources. If so, we argue that considering how neural representations of externally generated stimuli (i.e. sensory input) and internally generated stimuli (i.e. thoughts) might interfere with one another can sufficiently differentiate between veridical, imaginary, and hallucinatory perception. Here we use a simple computational model of a serially connected, hierarchical network with bidirectional information flow to emulate the primate visual system. We show that modelling even first approximations of neural competition can more coherently explain imagery phenomenology than non-competitive models. Our simulations predict that, without competing sensory input, imagined stimuli should ubiquitously dominate hierarchical representations. However, with competition, imagination should dominate high-level representations but largely fail to outcompete sensory inputs at lower processing levels. To interpret our findings, we assume that low-level stimulus information (e.g. in early visual cortices) contributes most to the sensory aspects of perceptual experience, while high-level stimulus information (e.g. towards temporal regions) contributes most to its abstract aspects. Our findings therefore suggest that ongoing bottom-up inputs during waking life may prevent imagination from overriding veridical sensory experience. In contrast, internally generated stimuli may be hallucinated when sensory input is dampened or eradicated. Our approach can explain individual differences in imagery, along with aspects of daydreaming, hallucinations, and non-visual mental imagery.
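
The competition account lends itself to a toy simulation. The sketch below is a loose first approximation of the idea in this abstract, not the authors' model: an imagined stimulus is injected at the top of a small bidirectional hierarchy, a competing external stimulus at the bottom, and divisive normalisation at each level stands in for neural competition. The number of levels, gains, and drive strengths are all illustrative assumptions.

```python
# Toy hierarchy with bidirectional flow and divisive competition between an
# imagined stimulus (driven at the top) and an external stimulus (driven at
# the bottom). All parameters are illustrative assumptions.
import numpy as np

n_levels, fwd, back, leak = 6, 0.6, 0.6, 0.1

def step(a, b, sensory_drive, imagery_drive):
    na, nb = (1 - leak) * a, (1 - leak) * b
    na[-1] += imagery_drive          # internally generated input enters at the top
    nb[0] += sensory_drive           # external input enters at the bottom
    na[1:] += fwd * a[:-1]; na[:-1] += back * a[1:]   # feedforward + feedback flow
    nb[1:] += fwd * b[:-1]; nb[:-1] += back * b[1:]
    total = na + nb + 1e-9
    return na / total, nb / total    # divisive normalisation = competition

a, b = np.zeros(n_levels), np.zeros(n_levels)
for _ in range(300):
    a, b = step(a, b, sensory_drive=1.0, imagery_drive=1.0)
print("imagined stimulus share, low -> high level:", np.round(a, 2))
# With sensory_drive = 0.0 the imagined stimulus instead dominates every level,
# mirroring the prediction that dampened input permits hallucination.
```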

5.
Annu Rev Vis Sci ; 9: 313-335, 2023 09 15.
Article in English | MEDLINE | ID: mdl-36889254

ABSTRACT

Patterns of brain activity contain meaningful information about the perceived world. Recent decades have welcomed a new era in neural analyses, with computational techniques from machine learning applied to neural data to decode information represented in the brain. In this article, we review how decoding approaches have advanced our understanding of visual representations and discuss efforts to characterize both the complexity and the behavioral relevance of these representations. We outline the current consensus regarding the spatiotemporal structure of visual representations and review recent findings that suggest that visual representations are at once robust to perturbations, yet sensitive to different mental states. Beyond representations of the physical world, recent decoding work has shone a light on how the brain instantiates internally generated states, for example, during imagery and prediction. Going forward, decoding has remarkable potential to assess the functional relevance of visual representations for human behavior, reveal how representations change across development and during aging, and uncover their presentation in various mental disorders.


Subject(s)
Aging; Brain; Humans; Machine Learning
6.
Neuroimage ; 261: 119517, 2022 11 01.
Article in English | MEDLINE | ID: mdl-35901917

ABSTRACT

The ability to perceive moving objects is crucial for threat identification and survival. Recent neuroimaging evidence has shown that goal-directed movement is an important element of object processing in the brain. However, prior work has primarily used moving stimuli that are also animate, making it difficult to disentangle the effect of movement from aliveness or animacy in representational categorisation. In the current study, we investigated the relationship between how the brain processes movement and aliveness by including stimuli that are alive but still (e.g., plants), and stimuli that are not alive but move (e.g., waves). We examined electroencephalographic (EEG) data recorded while participants viewed static images of moving or non-moving objects that were either natural or artificial. Participants classified the images according to aliveness, or according to capacity for movement. Movement explained significant variance in the neural data over and above that of aliveness, showing that capacity for movement is an important dimension in the representation of visual objects in humans.


Subject(s)
Brain Mapping; Electroencephalography; Brain; Humans; Movement; Pattern Recognition, Visual; Photic Stimulation
7.
Vision Res ; 199: 108079, 2022 10.
Article in English | MEDLINE | ID: mdl-35749833

ABSTRACT

Can we trust our eyes? Until recently, we rarely had to question whether what we see is indeed what exists, but this is changing. Artificial neural networks can now generate realistic images that challenge our perception of what is real. This new reality can have significant implications for cybersecurity, counterfeiting, fake news, and border security. We investigated how the human brain encodes and interprets realistic artificially generated images using behaviour and brain imaging. We found that we could reliably decode AI-generated faces from people's neural activity. However, although at the group level people performed near chance when classifying real and realistic fake faces, participants tended to interchange the labels, classifying real faces as realistic fakes and vice versa. Understanding this difference between brain and behavioural responses may be key to determining the 'real' in our new reality. Stimuli, code, and data for this study can be found at https://osf.io/n2z73/.


Subject(s)
Brain Mapping; Brain; Artificial Intelligence; Brain Mapping/methods; Humans
8.
Sci Rep ; 12(1): 6968, 2022 04 28.
Article in English | MEDLINE | ID: mdl-35484363

ABSTRACT

Selective attention prioritises relevant information amongst competing sensory input. Time-resolved electrophysiological studies have shown stronger representation of attended compared to unattended stimuli, which has been interpreted as an effect of attention on information coding. However, because attention is often manipulated by making only the attended stimulus a target to be remembered and/or responded to, many reported attention effects have been confounded with target-related processes such as visual short-term memory or decision-making. In addition, attention effects could be influenced by temporal expectation about when something is likely to happen. The aim of this study was to investigate the dynamic effect of attention on visual processing using multivariate pattern analysis of electroencephalography (EEG) data, while (1) controlling for target-related confounds, and (2) directly investigating the influence of temporal expectation. Participants viewed rapid sequences of overlaid oriented grating pairs while detecting a "target" grating of a particular orientation. We manipulated attention (one grating was attended and the other ignored, cued by colour) and temporal expectation (stimulus onset timing was either predictable or not). We controlled for target-related processing confounds by only analysing non-target trials. Both attended and ignored gratings were initially coded equally in the pattern of responses across EEG sensors. An effect of attention, with preferential coding of the attended stimulus, emerged approximately 230 ms after stimulus onset. This attention effect occurred even when controlling for target-related processing confounds, and regardless of stimulus onset expectation. These results provide insight into the effect of feature-based attention on the dynamic processing of competing visual information.


Subject(s)
Attention; Motivation; Attention/physiology; Cues; Electroencephalography; Humans; Visual Perception/physiology
9.
Sci Data ; 9(1): 3, 2022 01 10.
Article in English | MEDLINE | ID: mdl-35013331

ABSTRACT

The neural basis of object recognition and semantic knowledge has been extensively studied but the high dimensionality of object space makes it challenging to develop overarching theories on how the brain organises object knowledge. To help understand how the brain allows us to recognise, categorise, and represent objects and object categories, there is a growing interest in using large-scale image databases for neuroimaging experiments. In the current paper, we present THINGS-EEG, a dataset containing human electroencephalography responses from 50 subjects to 1,854 object concepts and 22,248 images in the THINGS stimulus set, a manually curated and high-quality image database that was specifically designed for studying human vision. The THINGS-EEG dataset provides neuroimaging recordings to a systematic collection of objects and concepts and can therefore support a wide array of research to understand visual object processing in the human brain.


Subject(s)
Brain/physiology; Electroencephalography; Recognition, Psychology; Visual Perception; Adolescent; Adult; Female; Humans; Male; Semantics; Young Adult
10.
J Cogn Neurosci ; 34(2): 290-312, 2022 01 05.
Article in English | MEDLINE | ID: mdl-34813647

ABSTRACT

Attention can be deployed in different ways: When searching for a taxi in New York City, we can decide where to attend (e.g., to the street) and what to attend to (e.g., yellow cars). Although we use the same word to describe both processes, nonhuman primate data suggest that these produce distinct effects on neural tuning. This has been challenging to assess in humans, but here we used an opportunity afforded by multivariate decoding of MEG data. We found that attending to an object at a particular location and attending to a particular object feature produced effects that interacted multiplicatively. The two types of attention induced distinct patterns of enhancement in occipital cortex, with feature-selective attention producing relatively more enhancement of small feature differences and spatial attention producing relatively larger effects for larger feature differences. An information flow analysis further showed that stimulus representations in occipital cortex were Granger-caused by coding in frontal cortices earlier in time and that the timing of this feedback matched the onset of attention effects. The data suggest that spatial and feature-selective attention rely on distinct neural mechanisms that arise from frontal-occipital information exchange, interacting multiplicatively to selectively enhance task-relevant information.


Subject(s)
Attention; Frontal Lobe; Animals
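
The Granger-causal information-flow analysis mentioned in this abstract rests on a simple idea: frontal activity "Granger-causes" occipital activity if the frontal signal's past improves prediction of the occipital signal beyond the occipital signal's own past. Below is a minimal numpy sketch of that logic with synthetic signals and a single lag; the study's actual analysis is time-resolved and multivariate, which this does not attempt.

```python
# Granger-style test on synthetic signals: does adding the "frontal" past
# reduce residual variance when predicting the "occipital" signal?
# Signal names, coefficients, and the single lag are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
frontal = rng.standard_normal(n)
occipital = np.zeros(n)
for t in range(1, n):
    # True generative model: occipital depends on its own past plus
    # feedback from the frontal past.
    occipital[t] = (0.5 * occipital[t - 1] + 0.4 * frontal[t - 1]
                    + 0.3 * rng.standard_normal())

def residual_variance(y, X):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ beta)

y = occipital[1:]
reduced = residual_variance(y, occipital[:-1, None])   # own past only
full = residual_variance(y, np.column_stack([occipital[:-1], frontal[:-1]]))
print(f"reduced model: {reduced:.3f}, full model: {full:.3f}")
# A clearly smaller full-model variance is the Granger-style evidence of flow.
```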
11.
Proc Natl Acad Sci U S A ; 118(6), 2021 02 09.
Article in English | MEDLINE | ID: mdl-33526693

ABSTRACT

Grapheme-color synesthetes experience color when seeing achromatic symbols. We examined whether similar neural mechanisms underlie color perception and synesthetic colors using magnetoencephalography. Classification models trained on neural activity from viewing colored stimuli could distinguish synesthetic color evoked by achromatic symbols after a delay of ∼100 ms. Our results provide an objective neural signature for synesthetic experience and temporal evidence consistent with higher-level processing in synesthesia.


Subject(s)
Color Perception/physiology; Pattern Recognition, Visual/physiology; Synesthesia/physiopathology; Adolescent; Adult; Aged; Female; Humans; Magnetoencephalography; Male; Middle Aged; Photic Stimulation; Reaction Time/physiology; Synesthesia/diagnostic imaging; Young Adult
12.
Front Syst Neurosci ; 14: 600601, 2020.
Article in English | MEDLINE | ID: mdl-33328912

ABSTRACT

Most of the mammalian neocortex consists of a highly similar anatomical structure: a granular cell layer between superficial and deep layers. Even so, different cortical areas process different information. Taken together, this suggests that the cortex features a canonical functional microcircuit that supports region-specific information processing. For example, the primate primary visual cortex (V1) combines the two eyes' signals, extracts stimulus orientation, and integrates contextual information such as visual stimulation history. These processes co-occur during the same laminar stimulation sequence that is triggered by the onset of visual stimuli. Yet we still know little about the laminar processing differences that are specific to each of these types of stimulus information. Univariate analysis techniques have provided great insight by examining one electrode at a time or by studying average responses across multiple electrodes. Here we focus on multivariate statistics to examine response patterns across electrodes instead. Specifically, we applied multivariate pattern analysis (MVPA) to linear multielectrode array recordings of laminar spiking responses to decode information regarding the eye-of-origin, stimulus orientation, and stimulus repetition. MVPA differs from conventional univariate approaches in that it examines patterns of neural activity across simultaneously recorded electrode sites. We were curious whether this added dimensionality could reveal neural processes at the population level that are challenging to detect when measuring brain activity without the context of neighboring recording sites. We found that eye-of-origin information was decodable for the entire duration of stimulus presentation, but diminished in the deepest layers of V1. Conversely, orientation information was transient and equally pronounced across all layers. More importantly, using time-resolved MVPA, we were able to evaluate laminar response properties beyond those yielded by univariate analyses. Specifically, we performed a time generalization analysis by training a classifier at one point of the neural response and testing its performance throughout the remaining period of stimulation. Using this technique, we demonstrate repeating (reverberating) patterns of neural activity that have not previously been observed using standard univariate approaches.
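
A compact sketch of the time-generalization analysis described above, on synthetic data: a classifier trained at one timepoint is tested at every other timepoint, and above-chance off-diagonal accuracy reveals recurring (reverberating) patterns. The shapes, labels, and injected repeating pattern are illustrative assumptions.

```python
# Temporal generalization sketch: train at each timepoint, test at all
# timepoints. Synthetic "laminar" data contain a pattern that recurs later
# in the trial, so the train/test matrix shows off-diagonal decoding.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_trials, n_sites, n_times = 160, 24, 60
X = rng.standard_normal((n_trials, n_sites, n_times))
y = rng.integers(0, 2, n_trials)               # e.g. two stimulus orientations
pattern = rng.standard_normal(n_sites)
X[y == 1, :, 10:25] += 0.4 * pattern[:, None]  # initial response pattern...
X[y == 1, :, 40:55] += 0.4 * pattern[:, None]  # ...reverberating later

train = np.arange(n_trials) % 2 == 0           # split halves for train/test
test = ~train
acc = np.empty((n_times, n_times))
for t_train in range(n_times):
    clf = LogisticRegression(max_iter=1000).fit(X[train, :, t_train], y[train])
    for t_test in range(n_times):
        acc[t_train, t_test] = clf.score(X[test, :, t_test], y[test])

# Train early, test late: above-chance accuracy marks a repeating pattern.
print("train 15 / test 45:", acc[15, 45].round(2),
      "| train 15 / test 30:", acc[15, 30].round(2))
```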

13.
J Neurosci ; 40(35): 6779-6789, 2020 08 26.
Article in English | MEDLINE | ID: mdl-32703903

ABSTRACT

The ability to rapidly and accurately recognize complex objects is a crucial function of the human visual system. To recognize an object, we need to bind incoming visual features, such as color and form, together into cohesive neural representations and integrate these with our preexisting knowledge about the world. For some objects, typical color is a central feature for recognition; for example, a banana is typically yellow. Here, we applied multivariate pattern analysis on time-resolved neuroimaging (MEG) data to examine how object-color knowledge affects emerging object representations over time. Our results from 20 participants (11 female) show that the typicality of object-color combinations influences object representations, although not at the initial stages of object and color processing. We find evidence that color decoding peaks later for atypical object-color combinations compared with typical object-color combinations, illustrating the interplay between processing incoming object features and stored object knowledge. Together, these results provide new insights into the integration of incoming visual information with existing conceptual object knowledge.

SIGNIFICANCE STATEMENT: To recognize objects, we have to be able to bind object features, such as color and shape, into one coherent representation and compare it with stored object knowledge. The MEG data presented here provide novel insights into the integration of incoming visual information with our knowledge about the world. Using color as a model to understand the interaction between seeing and knowing, we show that there is a unique pattern of brain activity for congruently colored objects (e.g., a yellow banana) relative to incongruently colored objects (e.g., a red banana). This effect of object-color knowledge only occurs after single object features are processed, demonstrating that conceptual knowledge is accessed relatively late in the visual processing hierarchy.


Subject(s)
Brain/physiology; Color Perception; Pattern Recognition, Visual; Adult; Concept Formation; Female; Humans; Male
14.
J Neurosci ; 40(10): 2108-2118, 2020 03 04.
Article in English | MEDLINE | ID: mdl-32001611

ABSTRACT

In tonal music, continuous acoustic waveforms are mapped onto discrete, hierarchically arranged internal representations of pitch. To examine the neural dynamics underlying this transformation, we presented male and female human listeners with tones embedded within a Western tonal context while recording their cortical activity using magnetoencephalography. Machine learning classifiers were then trained to decode different tones from their underlying neural activation patterns at each peristimulus time sample, providing a dynamic measure of their dissimilarity in cortex. Comparing the time-varying dissimilarity between tones with the predictions of acoustic and perceptual models, we observed a temporal evolution in the brain's representational structure. Whereas initial dissimilarities mirrored their fundamental-frequency separation, dissimilarities beyond 200 ms reflected the perceptual status of each tone within the tonal hierarchy of Western music. These effects occurred regardless of stimulus regularities within the context or whether listeners were engaged in a task requiring explicit pitch analysis. Lastly, patterns of cortical activity that discriminated between tones became increasingly stable in time as the information coded by those patterns transitioned from low- to high-level properties. Current results reveal the dynamics with which the complex perceptual structure of Western tonal music emerges in cortex at the timescale of an individual tone.

SIGNIFICANCE STATEMENT: Little is understood about how the brain transforms an acoustic waveform into the complex perceptual structure of musical pitch. Applying neural decoding techniques to the cortical activity of human subjects engaged in music listening, we measured the dynamics of information processing in the brain on a moment-to-moment basis as subjects heard each tone. In the first 200 ms after onset, transient patterns of neural activity coded the fundamental frequency of tones. Subsequently, a period emerged during which more temporally stable activation patterns coded the perceptual status of each tone within the "tonal hierarchy" of Western music. Our results provide a crucial link between the complex perceptual structure of tonal music and the underlying neural dynamics from which it emerges.


Subject(s)
Cerebral Cortex/physiology; Models, Neurological; Pitch Perception/physiology; Adult; Female; Humans; Machine Learning; Magnetoencephalography; Male
15.
J Cogn Neurosci ; 32(1): 111-123, 2020 01.
Article in English | MEDLINE | ID: mdl-31560265

ABSTRACT

Human listeners are bombarded by acoustic information that the brain rapidly organizes into coherent percepts of objects and events in the environment, which aids speech and music perception. The efficiency of auditory object recognition belies the critical constraint that acoustic stimuli necessarily require time to unfold. Using magnetoencephalography, we studied the time course of the neural processes that transform dynamic acoustic information into auditory object representations. Participants listened to a diverse set of 36 tokens comprising everyday sounds from a typical human environment. Multivariate pattern analysis was used to decode the sound tokens from the magnetoencephalographic recordings. We show that sound tokens can be decoded from brain activity beginning 90 msec after stimulus onset with peak decoding performance occurring at 155 msec poststimulus onset. Decoding performance was primarily driven by differences between category representations (e.g., environmental vs. instrument sounds), although within-category decoding was better than chance. Representational similarity analysis revealed that these emerging neural representations were related to harmonic and spectrotemporal differences among the stimuli, which correspond to canonical acoustic features processed by the auditory pathway. Our findings begin to link the processing of physical sound properties with the perception of auditory objects and events in cortex.


Subject(s)
Auditory Pathways/physiology; Auditory Perception/physiology; Cerebral Cortex/physiology; Concept Formation/physiology; Magnetoencephalography/methods; Acoustics; Adult; Female; Functional Neuroimaging; Humans; Male; Time Factors; Young Adult
16.
Neuropsychologia ; 145: 106535, 2020 08.
Article in English | MEDLINE | ID: mdl-29037506

ABSTRACT

How is emotion represented in the brain: is it categorical or dimensional? In the present study, we applied multivariate pattern analysis (MVPA) to magnetoencephalography (MEG) to study the brain's temporally unfolding representations of different emotion constructs. First, participants rated 525 images on the dimensions of valence and arousal and by intensity of discrete emotion categories (happiness, sadness, fear, disgust, and anger). Thirteen new participants then viewed subsets of these images within an MEG scanner. We used Representational Similarity Analysis (RSA) to compare behavioral ratings to the unfolding neural representation of the stimuli in the brain. Ratings of valence and arousal explained significant proportions of the MEG data, even after corrections for low-level image properties. Additionally, behavioral ratings of the discrete emotions fear, disgust, and happiness significantly predicted early neural representations, whereas rating models of anger and sadness did not. Different emotion constructs also showed unique temporal signatures. Fear and disgust - both highly arousing and negative - were rapidly discriminated by the brain, but disgust was represented for an extended period of time relative to fear. Overall, our findings suggest that 1) dimensions of valence and arousal are quickly represented by the brain, as are some discrete emotions, and 2) different emotion constructs exhibit unique temporal dynamics. We discuss implications of these findings for theoretical understanding of emotion and for the interplay of discrete and dimensional aspects of emotional experience.


Subject(s)
Brain/physiology; Emotions; Anger; Arousal; Disgust; Fear; Female; Happiness; Humans; Male; Sadness; Young Adult
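
Representational Similarity Analysis, as used in the study above, reduces to a simple recipe: build a model dissimilarity matrix from behavioral ratings, build a neural dissimilarity matrix at each timepoint, and correlate the two. A minimal sketch on synthetic data follows; valence is the example rating dimension, and the shapes, encoding pattern, and onset sample are assumptions for illustration.

```python
# RSA sketch: correlate a rating-based model RDM with a time-resolved neural
# RDM. Data are synthetic; the valence ratings and the sample-20 "onset" are
# illustrative assumptions.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_images, n_sensors, n_times = 40, 30, 50
valence = rng.uniform(-1, 1, n_images)       # hypothetical behavioral ratings
model_rdm = pdist(valence[:, None])          # pairwise rating dissimilarity

# Synthetic sensor patterns that encode valence from sample 20 onward
meg = rng.standard_normal((n_images, n_sensors, n_times))
encoder = rng.standard_normal(n_sensors)
meg[:, :, 20:] += 0.8 * valence[:, None, None] * encoder[None, :, None]

rsa = np.empty(n_times)
for t in range(n_times):
    neural_rdm = pdist(meg[:, :, t])         # pairwise neural dissimilarity
    rsa[t], _ = spearmanr(model_rdm, neural_rdm)

print("mean model-neural correlation before onset:", rsa[:20].mean().round(2))
print("mean model-neural correlation after onset: ", rsa[20:].mean().round(2))
```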
17.
Vision (Basel) ; 3(4), 2019 Oct 21.
Article in English | MEDLINE | ID: mdl-31735854

ABSTRACT

Mental imagery is the ability to generate images in the mind in the absence of sensory input. Both perceptual visual processing and internally generated imagery engage large, overlapping networks of brain regions. However, it is unclear whether they are characterized by similar temporal dynamics. Recent magnetoencephalography work has shown that object category information was decodable from brain activity during mental imagery, but the timing was delayed relative to perception. The current study builds on these findings, using electroencephalography to investigate the dynamics of mental imagery. Sixteen participants viewed two images of the Sydney Harbour Bridge and two images of Santa Claus. On each trial, they viewed a sequence of the four images and were asked to imagine one of them, which was cued retroactively by its temporal location in the sequence. Time-resolved multivariate pattern analysis was used to decode the viewed and imagined stimuli. Although category and exemplar information was decodable for viewed stimuli, there were no informative patterns of activity during mental imagery. The current findings suggest stimulus complexity, task design and individual differences may influence the ability to successfully decode imagined images. We discuss the implications of these results in the context of prior findings of mental imagery.

18.
Neuroimage ; 202: 116083, 2019 11 15.
Article in English | MEDLINE | ID: mdl-31400529

ABSTRACT

How are visual inputs transformed into conceptual representations by the human visual system? The contents of human perception, such as objects presented on a visual display, can reliably be decoded from voxel activation patterns in fMRI, and in evoked sensor activations in MEG and EEG. A prevailing question is the extent to which brain activation associated with object categories is due to statistical regularities of visual features within object categories. Here, we assessed the contribution of mid-level features to conceptual category decoding using EEG and a novel fast periodic decoding paradigm. Our study used a stimulus set consisting of intact objects from the animate (e.g., fish) and inanimate (e.g., chair) categories and scrambled versions of the same objects that were unrecognizable and preserved their visual features (Long et al., 2018). By presenting the images at different periodic rates, we biased processing to different levels of the visual hierarchy. We found that scrambled objects and their intact counterparts elicited similar patterns of activation, which could be used to decode the conceptual category (animate or inanimate), even for the unrecognizable scrambled objects. Animacy decoding for the scrambled objects, however, was only possible at the slowest periodic presentation rate. Animacy decoding for intact objects was faster, more robust, and could be achieved at faster presentation rates. Our results confirm that the mid-level visual features preserved in the scrambled objects contribute to animacy decoding, but also demonstrate that the dynamics vary markedly for intact versus scrambled objects. Our findings suggest a complex interplay between visual feature coding and categorical representations that is mediated by the visual system's capacity to use image features to resolve a recognizable object.


Subject(s)
Pattern Recognition, Visual/physiology; Visual Cortex/physiology; Adolescent; Adult; Electroencephalography; Female; Humans; Middle Aged; Recognition, Psychology/physiology; Signal Processing, Computer-Assisted; Young Adult
19.
Neuroimage ; 200: 373-381, 2019 10 15.
Article in English | MEDLINE | ID: mdl-31254648

ABSTRACT

Colour is a defining feature of many objects, playing a crucial role in our ability to rapidly recognise things in the world around us and make categorical distinctions. For example, colour is a useful cue when distinguishing lemons from limes or blackberries from raspberries. That means our representation of many objects includes key colour-related information. The question addressed here is whether the neural representation activated by knowing that something is red is the same as that activated when we actually see something red, particularly in regard to timing. We addressed this question using neural timeseries (magnetoencephalography, MEG) data to contrast real colour perception and implied object colour activation. We applied multivariate pattern analysis (MVPA) to analyse the brain activation patterns evoked by colour accessed via real colour perception and implied colour activation. Applying MVPA to MEG data allows us here to focus on the temporal dynamics of these processes. Male and female human participants (N = 18) viewed isoluminant red and green shapes and grey-scale, luminance-matched pictures of fruits and vegetables that are red (e.g., tomato) or green (e.g., kiwifruit) in nature. We show that the brain activation pattern evoked by real colour perception is similar to implied colour activation, but that this pattern is instantiated at a later time. These results suggest that a common colour representation can be triggered by activating object representations from memory and perceiving colours.


Subject(s)
Cerebral Cortex/physiology; Color Perception/physiology; Functional Neuroimaging/methods; Magnetoencephalography/methods; Pattern Recognition, Visual/physiology; Psychomotor Performance/physiology; Adult; Female; Humans; Male; Young Adult
20.
Neuropsychologia ; 129: 310-317, 2019 06.
Article in English | MEDLINE | ID: mdl-31028755

ABSTRACT

The mere presence of information in the brain does not always mean that this information is available to consciousness. Experiments using paradigms such as binocular rivalry, visual masking, and the attentional blink have shown that visual information can be processed and represented by the visual system without reaching consciousness. Using multivariate pattern analysis (MVPA) and magnetoencephalography (MEG), we investigated the temporal dynamics of information processing for unconscious and conscious stimuli. We decoded stimulus information from the brain recordings while manipulating visual consciousness by presenting stimuli at threshold contrast in a backward masking paradigm. Participants' consciousness was measured using both a forced-choice categorisation task and self-report. We show that brain activity during both conscious and non-conscious trials contained stimulus information and that this information was enhanced in conscious trials. Overall, our results indicate that visual consciousness is characterised by enhanced neural activity representing the visual stimulus and that this effect arises as early as 180 ms post-stimulus onset.


Subject(s)
Brain/physiology; Consciousness/physiology; Visual Perception/physiology; Adolescent; Female; Healthy Volunteers; Humans; Magnetoencephalography; Male; Multivariate Analysis; Photic Stimulation; Time Factors; Unconsciousness; Young Adult