Results 1 - 20 of 70
1.
Nat Commun ; 15(1): 1002, 2024 Feb 02.
Article in English | MEDLINE | ID: mdl-38307834

ABSTRACT

Visual illusions and mental imagery are non-physical sensory experiences that involve cortical feedback processing in the primary visual cortex. Using laminar functional magnetic resonance imaging (fMRI) in two studies, we investigate whether information about these internal experiences is visible in the activation patterns of different layers of primary visual cortex (V1). We find that imagery content is decodable mainly from deep layers of V1, whereas seemingly 'real' illusory content is decodable mainly from superficial layers. Furthermore, illusory content shares information with perceptual content, whilst imagery content does not generalise to illusory or perceptual information. Together, our results suggest that illusions and imagery, which differ immensely in their subjective experiences, also involve partially distinct early visual microcircuits. However, overlapping microcircuit recruitment might emerge based on the nuanced nature of subjective conscious experience.


Subject(s)
Illusions , Visual Cortex , Humans , Illusions/physiology , Primary Visual Cortex , Visual Cortex/physiology , Photic Stimulation/methods , Feedback , Magnetic Resonance Imaging , Brain Mapping
2.
J Neurosci ; 44(16)2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38395614

ABSTRACT

Perception is an intricate interplay between feedforward visual input and internally generated feedback signals that comprise concurrent contextual and time-distant mnemonic (episodic and semantic) information. Yet, an unresolved question is how the composition of feedback signals changes across the lifespan and to what extent feedback signals undergo age-related dedifferentiation, that is, a decline in neural specificity. Previous research on this topic has focused on feedforward perceptual representation and episodic memory reinstatement, suggesting reduced fidelity of neural representations at the item and category levels. In this fMRI study, we combined an occlusion paradigm that filters feedforward input to the visual cortex with multivariate analysis techniques to investigate the information content of cortical feedback, focusing on age-related differences in its composition. We further asked to what extent differentiation in feedback signals (in the occluded region) is correlated with differentiation in feedforward signals. Comparing younger (18-30 years) and older (65-75 years) female and male adults, we found that contextual but not mnemonic feedback was prone to age-related dedifferentiation. Semantic feedback signals were even better differentiated in older adults, highlighting the growing importance of generalized knowledge with age. We also found that differentiation in feedforward signals was correlated with differentiation in episodic but not semantic feedback signals. Our results provide evidence for age-related adjustments in the composition of feedback signals and underscore the importance of examining dedifferentiation in aging for both feedforward and feedback processing.


Subject(s)
Memory, Episodic , Visual Cortex , Male , Humans , Female , Aged , Feedback , Longevity , Magnetic Resonance Imaging , Visual Perception
3.
Med Image Anal ; 93: 103090, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38241763

ABSTRACT

Many clinical and research studies of the human brain require accurate structural MRI segmentation. While traditional atlas-based methods can be applied to volumes from any acquisition site, recent deep learning algorithms ensure high accuracy only when tested on data from the same sites exploited in training (i.e., internal data). The performance degradation experienced on external data (i.e., unseen volumes from unseen sites) is due to the inter-site variability in intensity distributions and to unique artefacts caused by different MR scanner models and acquisition parameters. To mitigate this site dependency, often referred to as the scanner effect, we propose LOD-Brain, a 3D convolutional neural network with progressive levels-of-detail (LOD) able to segment brain data from any site. Coarser network levels are responsible for learning a robust anatomical prior helpful in identifying brain structures and their locations, while finer levels refine the model to handle site-specific intensity distributions and anatomical variations. We ensure robustness across sites by training the model on an unprecedentedly rich dataset aggregating data from open repositories: almost 27,000 T1w volumes from around 160 acquisition sites, at 1.5-3 T, from a population spanning 8 to 90 years of age. Extensive tests demonstrate that LOD-Brain produces state-of-the-art results, with no significant difference in performance between internal and external sites, and is robust to challenging anatomical variations. Its portability paves the way for large-scale applications across different healthcare institutions, patient populations, and imaging technology manufacturers. Code, model, and demo are available on the project website.


Subject(s)
Magnetic Resonance Imaging , Neuroimaging , Humans , Child , Adolescent , Young Adult , Adult , Middle Aged , Aged , Aged, 80 and over , Brain/diagnostic imaging , Algorithms , Artifacts
4.
Curr Biol ; 33(18): 3865-3871.e3, 2023 09 25.
Article in English | MEDLINE | ID: mdl-37643620

ABSTRACT

Neuronal activity in the primary visual cortex (V1) is driven by feedforward input from within the neurons' receptive fields (RFs) and modulated by contextual information in regions surrounding the RF. The effect of contextual information on spiking activity occurs rapidly and is therefore challenging to dissociate from feedforward input. To address this challenge, we recorded the spiking activity of V1 neurons in monkeys viewing either natural scenes or scenes where the information in the RF was occluded, effectively removing the feedforward input. We found that V1 neurons responded rapidly and selectively to occluded scenes. V1 responses elicited by occluded stimuli could be used to decode individual scenes and could be predicted from those elicited by non-occluded images, indicating that there is an overlap between visually driven and contextual responses. We used representational similarity analysis to show that the structure of V1 representations of occluded scenes measured with electrophysiology in monkeys correlates strongly with the representations of the same scenes in humans measured with functional magnetic resonance imaging (fMRI). Our results reveal that contextual influences rapidly alter V1 spiking activity in monkeys over distances of several degrees in the visual field, carry information about individual scenes, and resemble those in human V1. VIDEO ABSTRACT.
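The representational similarity analysis described above compares the geometry of scene representations across species and recording modalities by correlating representational dissimilarity matrices (RDMs). A minimal numpy sketch on synthetic data illustrates the logic; the shapes, unit counts, and variable names are illustrative only, not the study's actual pipeline:

```python
import numpy as np

def rdm(responses):
    """RDM: 1 - Pearson correlation between the response patterns
    (rows = conditions, columns = measurement channels)."""
    return 1.0 - np.corrcoef(responses)

def rsa_similarity(resp_a, resp_b):
    """Spearman correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices(resp_a.shape[0], k=1)
    a, b = rdm(resp_a)[iu], rdm(resp_b)[iu]
    ra = np.argsort(np.argsort(a))  # Spearman = Pearson on ranks
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(0)
shared = rng.standard_normal((10, 50))            # 10 scenes, shared latent structure
monkey = shared @ rng.standard_normal((50, 80))   # 80 simulated "neurons"
human = shared @ rng.standard_normal((50, 200))   # 200 simulated "voxels"
print(rsa_similarity(monkey, human))              # positive: shared scene geometry
```

Because both synthetic datasets inherit the same latent scene structure, their RDMs correlate despite entirely different measurement channels, which is the sense in which monkey electrophysiology and human fMRI representations can be compared.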


Subject(s)
Visual Cortex , Visual Perception , Animals , Humans , Visual Perception/physiology , Haplorhini , Primary Visual Cortex , Visual Cortex/physiology , Visual Fields , Photic Stimulation/methods
5.
Biology (Basel) ; 12(7)2023 Jul 20.
Article in English | MEDLINE | ID: mdl-37508451

ABSTRACT

Neurons in the primary visual cortex (V1) receive sensory inputs that describe small, local regions of the visual scene and cortical feedback inputs from higher visual areas processing the global scene context. Investigating the spatial precision of this visual contextual modulation will contribute to our understanding of the functional role of cortical feedback inputs in perceptual computations. We used human functional magnetic resonance imaging (fMRI) to test the spatial precision of contextual feedback inputs to V1 during natural scene processing. We measured brain activity patterns in the stimulated regions of V1 and in regions that we blocked from direct feedforward input, receiving information only from non-feedforward (i.e., feedback and lateral) inputs. We measured the spatial precision of contextual feedback signals by generalising brain activity patterns across parametrically spatially displaced versions of identical images using an MVPA cross-classification approach. We found that fMRI activity patterns in cortical feedback signals predicted scene-specific features in V1 with a precision of approximately 4 degrees. The stimulated regions of V1 carried more precise scene information than non-stimulated regions; however, these regions also contained information patterns that generalised up to 4 degrees. This result shows that contextual signals relating to the global scene are similarly fed back to V1 when feedforward inputs are either present or absent. Our results are in line with contextual feedback signals from extrastriate areas to V1, describing global scene information and contributing to perceptual computations such as the hierarchical representation of feature boundaries within natural scenes.
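The cross-classification logic — train a decoder on patterns evoked at one image position, then test it on displaced versions to measure how far scene information generalises — can be sketched with a toy nearest-centroid decoder. This is purely illustrative (displacement is simulated as signal attenuation, and all numbers are invented), not the study's MVPA pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox, n_trials = 60, 40
scene_patterns = rng.standard_normal((2, n_vox))  # two scene templates

def trials(scene, displacement):
    """Simulated voxel patterns: scene signal degraded as displacement grows."""
    signal = (1 - displacement) * scene_patterns[scene]
    return signal + rng.standard_normal((n_trials, n_vox))

def nearest_centroid_acc(train0, train1, test0, test1):
    """Train centroids on one position, score accuracy on another."""
    c0, c1 = train0.mean(0), train1.mean(0)
    classify = lambda x: int(np.linalg.norm(x - c1) < np.linalg.norm(x - c0))
    correct = sum(classify(x) == 0 for x in test0) + \
              sum(classify(x) == 1 for x in test1)
    return correct / (2 * n_trials)

# Train at the original position, test at increasing displacements.
tr0, tr1 = trials(0, 0.0), trials(1, 0.0)
for d in [0.0, 0.5, 1.0]:
    acc = nearest_centroid_acc(tr0, tr1, trials(0, d), trials(1, d))
    print(f"displacement {d}: accuracy {acc:.2f}")
```

Accuracy above chance at non-zero displacement is the signature of spatially tolerant (generalising) information; where accuracy falls to chance defines the precision of the signal.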

6.
Neuroimage ; 265: 119778, 2023 01.
Article in English | MEDLINE | ID: mdl-36462731

ABSTRACT

Efficient processing of the visual environment necessitates the integration of incoming sensory evidence with concurrent contextual inputs and mnemonic content from our past experiences. To examine how this integration takes place in the brain, we isolated different types of feedback signals from the neural patterns of non-stimulated areas of the early visual cortex in humans (i.e., V1 and V2). Using multivariate pattern analysis, we showed that both contextual and time-distant information coexist in V1 and V2 as feedback signals. In addition, we found that the extent to which mnemonic information is reinstated in V1 and V2 depends on whether the information is retrieved episodically or semantically. Critically, this reinstatement was independent of the retrieval route in the object-selective cortex. These results demonstrate that our early visual processing contains not just direct and indirect information from the visual surroundings, but also memory-based predictions.


Subject(s)
Visual Cortex , Visual Perception , Humans , Feedback , Memory , Multivariate Analysis , Brain Mapping
7.
Front Hum Neurosci ; 15: 750417, 2021.
Article in English | MEDLINE | ID: mdl-34803635

ABSTRACT

Peripheral vision has different functional priorities for mammals than foveal vision. One of its roles is to monitor the environment while central vision is focused on the current task. Becoming distracted too easily would be counterproductive from this perspective, so the brain should react only to behaviourally relevant changes. Gist processing is well suited to this purpose, and it is therefore not surprising that evidence from both functional brain imaging and behavioural research suggests a tendency to generalize and blend information in the periphery. This may be caused by the balance of perceptual influence in the periphery between bottom-up (i.e., sensory information) and top-down (i.e., prior or contextual information) processing channels. Here, we investigated this interaction behaviourally using a peripheral numerosity discrimination task with top-down and bottom-up manipulations. Participants compared numerosity between the left and right peripheries of a screen. Each periphery was divided into a centre and a surrounding area, only one of which was a task-relevant target region. Our top-down task manipulation was the instruction indicating which area to attend to - centre or surround. We varied signal strength by altering stimulus durations, i.e., the amount of information presented and processed (a combined bottom-up and recurrent top-down feedback factor). We found that numerosity perceived in target regions was affected by contextual information in neighbouring (but irrelevant) areas. This effect appeared as soon as stimulus duration allowed the task to be reliably performed and persisted even at the longest duration (1 s). We compared the pattern of results with an ideal-observer model and found a qualitative difference in the way centre and surround areas interacted perceptually in the periphery. When participants reported on the central area, the irrelevant surround affected the response as a weighted combination - consistent with the idea of a receptive field focused on the target area into which irrelevant surround stimulation leaks. When participants reported on the surround, the response was best described by a model in which attention occasionally switches from the task-relevant surround to the task-irrelevant centre - consistent with a selection model of two competing streams of information. Overall, our results show that the influence of spatial context in the periphery is mandatory but task dependent.
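The two candidate response models — a weighted combination of centre and surround versus an occasional attention switch between them — make different predictions that can be sketched with simulated numerosity reports. The Poisson numerosities, weight, and switch probability below are invented for illustration and are not the study's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials = 10_000
centre = rng.poisson(8.0, n_trials)     # numerosity in the centre area
surround = rng.poisson(16.0, n_trials)  # numerosity in the surround area

# Model 1 (report centre): weighted combination -- the irrelevant
# surround "leaks" into the estimate with a fixed weight.
w = 0.2
report_centre = (1 - w) * centre + w * surround

# Model 2 (report surround): attention switching -- on a fraction of
# trials the response reflects the irrelevant centre instead.
p_switch = 0.15
switched = rng.random(n_trials) < p_switch
report_surround = np.where(switched, centre, surround)

print(report_centre.mean())    # pulled above the true centre mean of 8
print(report_surround.mean())  # pulled below the true surround mean of 16
```

The weighted model biases every trial by the same amount, while the switching model leaves most trials unbiased and makes a minority fully wrong; the two produce distinct response distributions even when their mean biases look similar.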

8.
Neuropsychologia ; 163: 108070, 2021 12 10.
Article in English | MEDLINE | ID: mdl-34695420

ABSTRACT

For autistic individuals, sensory stimulation can be experienced as overwhelming. Models of predictive coding postulate that cortical mechanisms disamplify predictable information and amplify prediction errors that surpass a defined precision level. According to the HIPPEA theory (High, Inflexible Precision of Prediction Errors in Autism), neuronal processing in autism places an inflexibly high precision on prediction errors. We used an apparent motion paradigm to test this prediction. In apparent motion paradigms, the illusory motion of an object creates a prediction about where and when an internally generated token should move along the apparent motion trace. This illusion facilitates the perception of a flashing stimulus (target) appearing in-time with the apparent motion token, which is perceived as a predictable event (predictable target). In contrast, a flashing stimulus appearing out-of-time with the apparent motion illusion is an unpredictable target, which is less often detected even though it produces a prediction error signal. If a prediction error does not surpass a given precision threshold, the stimulation event is discounted and therefore less often detected than a predictable token. In autism, the precision threshold is lower, and the same prediction error (an unpredictable target) triggers detection similar to that of a predictable flash stimulus. To test this hypothesis, we recruited 11 autistic males and 9 neurotypical matched controls. The participants were tasked with detecting flashing stimuli placed on an apparent motion trace either in-time or out-of-time with the apparent motion illusion. Descriptively, 66% (6/9) of neurotypical and 64% (7/11) of autistic participants were better at detecting predictable targets. The prediction established by illusory motion thus appears to assist autistic and neurotypical individuals equally in the detection of predictable over unpredictable targets. Importantly, 55% (6/11) of autistic participants had faster responses for unpredictable targets, whereas only 22% (2/9) of neurotypical participants responded faster to unpredictable than to predictable targets. Hence, these tentative results suggest that for autistic participants, unpredictable targets produce an above-threshold prediction error, which leads to faster responses. This difference in unpredictable target detection can be encapsulated under the HIPPEA theory, suggesting that precision setting with respect to prediction errors could be aberrant in autistic individuals. These tentative results should be considered in light of the small sample. For this reason, we provide the full set of materials necessary to replicate and extend the results.


Subject(s)
Autism Spectrum Disorder , Autistic Disorder , Illusions , Humans , Male
9.
Hum Brain Mapp ; 42(17): 5563-5580, 2021 12 01.
Article in English | MEDLINE | ID: mdl-34598307

ABSTRACT

Ultra-high-field magnetic resonance imaging (MRI) enables sub-millimetre resolution imaging of the human brain, allowing the study of functional circuits of cortical layers at the meso-scale. An essential step in many functional and structural neuroimaging studies is segmentation, the operation of partitioning the MR images into anatomical structures. Despite recent efforts in brain imaging analysis, the literature lacks accurate and fast methods for segmenting 7-tesla (7T) brain MRI. We here present CEREBRUM-7T, an optimised end-to-end convolutional neural network, which allows fully automatic segmentation of a whole 7T T1w MRI brain volume at once, without partitioning the volume, pre-processing it, or aligning it to an atlas. The trained model is able to produce accurate multi-structure segmentation masks on six different classes plus background in only a few seconds. The experimental part, a combination of objective numerical evaluations and subjective analysis, confirms that the proposed solution outperforms the labels it was trained on and is suitable for neuroimaging studies, such as layer functional MRI studies. Taking advantage of a fine-tuning operation on a reduced set of volumes, we also show how it is possible to effectively apply CEREBRUM-7T to data from different sites. Furthermore, we release the code, 7T data, and other materials, including the training labels and the Turing test.


Subject(s)
Brain/anatomy & histology , Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neural Networks, Computer , Neuroimaging/methods , Humans
10.
J Vis ; 21(7): 5, 2021 07 06.
Article in English | MEDLINE | ID: mdl-34259828

ABSTRACT

The promise of artificial intelligence in understanding biological vision relies on the comparison of computational models with brain data, with the goal of capturing functional principles of visual information processing. Convolutional neural networks (CNNs) have successfully matched the transformations in hierarchical processing occurring along the brain's feedforward visual pathway, extending into ventral temporal cortex. However, it remains unclear whether CNNs can successfully describe feedback processes in early visual cortex. Here, we investigated similarities between human early visual cortex and a CNN with an encoder/decoder architecture, trained with self-supervised learning to fill occlusions and reconstruct an unseen image. Using representational similarity analysis (RSA), we compared 3T functional magnetic resonance imaging (fMRI) data from a non-stimulated patch of early visual cortex in human participants viewing partially occluded images with the activations of different CNN layers for the same images. Results show that our self-supervised image-completion network outperforms a classical supervised object-recognition network (VGG16) in terms of similarity to fMRI data. This work provides additional evidence that optimal models of the visual system might come from less feedforward architectures trained with less supervision. We also find that CNN decoder pathway activations are more similar to brain processing than encoder activations, suggesting an integration of mid- and low/middle-level features in early visual cortex. Challenging an artificial intelligence model to learn natural image representations via self-supervised learning and comparing them with brain data can help constrain our understanding of information processing, such as neuronal predictive coding.


Subject(s)
Magnetic Resonance Imaging , Visual Cortex , Artificial Intelligence , Humans , Neural Networks, Computer , Visual Cortex/diagnostic imaging , Visual Perception
11.
Prog Neurobiol ; 205: 102121, 2021 10.
Article in English | MEDLINE | ID: mdl-34273456

ABSTRACT

The brain is capable of integrating signals from multiple sensory modalities. Such multisensory integration can occur in areas that are commonly considered unisensory, such as the planum temporale (PT), which represents the auditory association cortex. However, the roles of different afferents (feedforward vs. feedback) to PT in multisensory processing are not well understood. Our study aims to understand this by examining laminar activity patterns in different topographical subfields of human PT under unimodal and multisensory stimulation. To this end, we adopted an advanced mesoscopic (sub-millimeter) fMRI methodology at 7 T by acquiring BOLD (blood-oxygen-level-dependent contrast, which has higher sensitivity) and VAPER (integrated blood volume and perfusion contrast, which has superior laminar specificity) signals concurrently, and performed all analyses in native fMRI space, benefiting from an identical acquisition between functional and anatomical images. We found a division of function between visual and auditory processing in PT and distinct feedback mechanisms in different subareas. Specifically, anterior PT was activated more by auditory inputs and received feedback modulation in superficial layers. This feedback depended on task performance and likely arose from top-down influences from higher-order multimodal areas. In contrast, posterior PT was preferentially activated by visual inputs and received visual feedback in both superficial and deep layers, which is likely projected directly from the early visual cortex. Together, these findings provide novel insights into the mechanism of multisensory interaction in human PT at the mesoscopic spatial scale.


Subject(s)
Brain Mapping , Brain , Acoustic Stimulation , Auditory Perception , Humans , Magnetic Resonance Imaging
12.
Behav Brain Sci ; 43: e142, 2020 06 19.
Article in English | MEDLINE | ID: mdl-32645808

ABSTRACT

Predictive processing as a computational motif of the neocortex needs to be elaborated into theories of higher cognitive functions that include simulating future behavioural outcomes. We contribute to the neuroscientific perspective of predictive processing as a foundation for the proposed representational architectures of the mind.


Subject(s)
Neocortex , Cognition , Neurons
13.
Curr Biol ; 30(15): 3039-3044.e2, 2020 08 03.
Article in English | MEDLINE | ID: mdl-32559449

ABSTRACT

Complex natural sounds, such as bird singing, people talking, or traffic noise, induce decodable fMRI activation patterns in early visual cortex of sighted blindfolded participants [1]. That is, early visual cortex receives non-visual and potentially predictive information from audition. However, it is unclear whether the transfer of auditory information to early visual areas is an epiphenomenon of visual imagery or, alternatively, whether it is driven by mechanisms independent from visual experience. Here, we show that we can decode natural sounds from activity patterns in early "visual" areas of congenitally blind individuals who lack visual imagery. Thus, visual imagery is not a prerequisite of auditory feedback to early visual cortex. Furthermore, the spatial pattern of sound decoding accuracy in early visual cortex was remarkably similar in blind and sighted individuals, with an increasing decoding accuracy gradient from foveal to peripheral regions. This suggests that the typical organization by eccentricity of early visual cortex develops for auditory feedback, even in the lifelong absence of vision. The same feedback to early visual cortex might support visual perception in the sighted [1] and drive the recruitment of this area for non-visual functions in blind individuals [2, 3].


Subject(s)
Blindness/congenital , Blindness/physiopathology , Sound , Visual Cortex/physiology , Acoustic Stimulation , Feedback, Sensory/physiology , Humans , Magnetic Resonance Imaging , Visual Cortex/diagnostic imaging
14.
Sci Rep ; 10(1): 7565, 2020 05 05.
Article in English | MEDLINE | ID: mdl-32371891

ABSTRACT

At ultra-high field, fMRI voxels can span the sub-millimeter range, allowing the recording of blood oxygenation level dependent (BOLD) responses at the level of fundamental units of neural computation, such as cortical columns and layers. This sub-millimeter resolution, however, is only nominal in nature, as a number of factors limit the spatial acuity of functional voxels. Multivoxel Pattern Analysis (MVPA) may provide a means to detect information at finer spatial scales that may otherwise not be visible at the single-voxel level due to limitations in sensitivity and specificity. Here, we evaluate the spatial scale of stimulus-specific BOLD responses in multivoxel patterns exploited by linear Support Vector Machine, Linear Discriminant Analysis and Naïve Bayesian classifiers across cortical depths in V1. To this end, we artificially misaligned the testing relative to the training portion of the data in increasing spatial steps, then investigated the breakdown of the classifiers' performance. A one-voxel shift led to a significant decrease in decoding accuracy (p < 0.05) across all cortical depths, indicating that stimulus-specific responses in a multivoxel pattern of BOLD activity exploited by multivariate decoders can be as precise as the nominal resolution of single voxels (here 0.8 mm isotropic). Our results further indicate that large draining vessels, prominently residing in proximity to the pial surface, do not, in this case, hinder the ability of MVPA to exploit fine-scale patterns of BOLD signals. We argue that tailored analytical approaches can help overcome limitations in high-resolution fMRI and permit studying the mesoscale organization of the human brain with higher sensitivity.
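The misalignment test — shift the test patterns by whole voxels relative to the training patterns and watch decoding collapse — can be sketched with a toy correlation-based decoder. Pattern sizes, noise level, and the decoder itself are illustrative assumptions, not the study's classifiers:

```python
import numpy as np

rng = np.random.default_rng(3)
n_vox = 100
# Two stimuli whose differences live at the single-voxel scale
# (random patterns have no spatial autocorrelation).
pat = rng.standard_normal((2, n_vox))

def run(shift, n_reps=200):
    """Template-matching decoding accuracy after shifting test
    patterns by `shift` voxels relative to the templates."""
    correct = 0
    for _ in range(n_reps):
        stim = rng.integers(2)
        test = np.roll(pat[stim] + 0.8 * rng.standard_normal(n_vox), shift)
        guess = int(np.dot(test, pat[1]) > np.dot(test, pat[0]))
        correct += guess == stim
    return correct / n_reps

for shift in [0, 1, 2]:
    print(f"{shift}-voxel shift: accuracy {run(shift):.2f}")
</n_vox>```

Because the simulated pattern has no structure coarser than one voxel, a single-voxel shift drops accuracy to chance, mirroring the paper's interpretation that decodable information can be as fine as the nominal voxel size.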


Subject(s)
Brain Mapping , Brain/physiology , Models, Theoretical , Oxygen/metabolism , Algorithms , Brain Mapping/methods , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Support Vector Machine
15.
Med Image Anal ; 62: 101688, 2020 05.
Article in English | MEDLINE | ID: mdl-32272345

ABSTRACT

Many functional and structural neuroimaging studies call for accurate morphometric segmentation of different brain structures starting from the image intensity values of MRI scans. Current automatic (multi-)atlas-based segmentation strategies often lack accuracy on difficult-to-segment brain structures and, since these methods rely on atlas-to-scan alignment, they can require long processing times. Alternatively, recent methods deploying solutions based on Convolutional Neural Networks (CNNs) are enabling the direct analysis of out-of-the-scanner data. However, current CNN-based solutions partition the test volume into 2D or 3D patches, which are processed independently. This entails a loss of global contextual information, thereby negatively impacting segmentation accuracy. In this work, we design and test an optimised end-to-end CNN architecture that makes the exploitation of global spatial information computationally tractable, allowing a whole MRI volume to be processed at once. We adopt a weakly supervised learning strategy by exploiting a large dataset of 947 out-of-the-scanner MR images (3 Tesla T1-weighted 1 mm isotropic MP-RAGE 3D sequences). The resulting model is able to produce accurate multi-structure segmentation results in only a few seconds. Different quantitative measures demonstrate an improved accuracy of our solution when compared to state-of-the-art techniques. Moreover, through a randomised survey involving expert neuroscientists, we show that subjective judgements favour our solution over widely adopted atlas-based software.


Subject(s)
Brain , Cerebrum , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Brain/diagnostic imaging , Humans , Neural Networks, Computer
16.
J Neurosci ; 39(47): 9410-9423, 2019 11 20.
Article in English | MEDLINE | ID: mdl-31611306

ABSTRACT

Human behavior is dependent on the ability of neuronal circuits to predict the outside world. Neuronal circuits in early visual areas make these predictions based on internal models that are delivered via non-feedforward connections. Despite our extensive knowledge of the feedforward sensory features that drive cortical neurons, we have a limited grasp on the structure of the brain's internal models. Progress in neuroscience therefore depends on our ability to replicate the models that the brain creates internally. Here we record human fMRI data while presenting partially occluded visual scenes. Visual occlusion allows us to experimentally control sensory input to subregions of visual cortex while internal models continue to influence activity in these regions. Because the observed activity is dependent on internal models, but not on sensory input, we have the opportunity to map visual features conveyed by the brain's internal models. Our results show that activity related to internal models in early visual cortex is more related to scene-specific features than to categorical or depth features. We further demonstrate that behavioral line drawings provide a good description of the internal model structure representing scene-specific features. These findings extend our understanding of internal models, showing that line drawings provide a window into our brains' internal models of vision. SIGNIFICANCE STATEMENT We find that fMRI activity patterns corresponding to occluded visual information in early visual cortex fill in scene-specific features. Line drawings of the missing scene information correlate with our recorded activity patterns, and thus with internal models. Despite our extensive knowledge of the sensory features that drive cortical neurons, we have a limited grasp on the structure of our brains' internal models. These results therefore constitute an advance for the field of neuroscience by extending our knowledge of the models that our brains construct to efficiently represent and predict the world. Moreover, they link a behavioral measure to these internal models, which play an active role in many components of human behavior, including visual predictions, action planning, and decision making.


Subject(s)
Pattern Recognition, Visual/physiology , Photic Stimulation/methods , Visual Cortex/diagnostic imaging , Visual Cortex/physiology , Visual Perception/physiology , Adult , Female , Humans , Magnetic Resonance Imaging/methods , Male , Young Adult
17.
J Neurosci Methods ; 328: 108319, 2019 12 01.
Article in English | MEDLINE | ID: mdl-31585315

ABSTRACT

BACKGROUND: Deep neural networks have revolutionised machine learning, with unparalleled performance in object classification. However, in brain imaging (e.g., fMRI), the direct application of Convolutional Neural Networks (CNNs) to decoding subject states or perception from imaging data seems impractical given the scarcity of available data. NEW METHOD: In this work we propose a robust method to transfer information from deep learning (DL) features to brain fMRI data with the goal of decoding. By adopting Reduced Rank Regression with Ridge Regularisation, we establish a multivariate link between imaging data and the fully connected layer (fc7) of a CNN. We exploit the reconstructed fc7 features by performing an object image classification task on two datasets: one of the largest fMRI databases, acquired on different scanners from more than two hundred subjects watching different movie clips, and another with fMRI data acquired while subjects watched static images. RESULTS: The fc7 features could be significantly reconstructed from the imaging data and led to significant decoding performance. COMPARISON WITH EXISTING METHODS: The decoding based on reconstructed fc7 features outperformed the decoding based on imaging data alone. CONCLUSION: In this work we show how to improve fMRI-based decoding by benefiting from the mapping between functional data and CNN features. The potential advantage of the proposed method is twofold: the extraction of stimulus representations by means of an automatic (unsupervised) procedure, and the embedding of high-dimensional neuroimaging data into a space designed for visual object discrimination, leading to a more manageable space from a dimensionality point of view.
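Reduced Rank Regression with ridge regularisation can be sketched as a ridge fit followed by an SVD projection of the fitted values onto a low-rank subspace — one common recipe for this estimator. The data shapes, rank, and penalty below are illustrative, not the paper's actual fMRI-to-fc7 configuration:

```python
import numpy as np

def reduced_rank_ridge(X, Y, rank, lam):
    """Ridge regression of Y on X, then project the coefficients onto
    the top-`rank` principal directions of the fitted values."""
    n_feat = X.shape[1]
    B_ridge = np.linalg.solve(X.T @ X + lam * np.eye(n_feat), X.T @ Y)
    # Principal subspace of the fitted values X @ B_ridge.
    _, _, Vt = np.linalg.svd(X @ B_ridge, full_matrices=False)
    V = Vt[:rank].T            # (n_targets, rank)
    return B_ridge @ V @ V.T   # rank-constrained coefficient matrix

rng = np.random.default_rng(4)
X = rng.standard_normal((300, 20))                               # imaging features
W = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 50))  # true rank-3 map
Y = X @ W + 0.1 * rng.standard_normal((300, 50))                 # target features
B = reduced_rank_ridge(X, Y, rank=3, lam=1.0)
print(np.linalg.matrix_rank(B))  # 3
```

The rank constraint is what makes the multivariate link robust when targets (like fc7 activations) are high-dimensional but driven by far fewer latent factors than there are output units.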


Subject(s)
Brain Mapping/methods , Brain/physiology , Deep Learning , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Transfer, Psychology , Visual Perception/physiology , Adult , Brain/diagnostic imaging , Humans
18.
Trends Neurosci ; 42(9): 589-603, 2019 09.
Article in English | MEDLINE | ID: mdl-31399289

ABSTRACT

There are three neural feedback pathways to the primary visual cortex (V1): corticocortical, pulvinocortical, and cholinergic. What are the respective functions of these three projections? Possible functions range from contextual modulation of stimulus processing and feedback of high-level information to predictive processing (PP). How are these functions subserved by different pathways and can they be integrated into an overarching theoretical framework? We propose that corticocortical and pulvinocortical connections are involved in all three functions, whereas the role of cholinergic projections is limited by their slow response to stimuli. PP provides a broad explanatory framework under which stimulus-context modulation and high-level processing are subsumed, involving multiple feedback pathways that provide mechanisms for inferring and interpreting what sensory inputs are about.


Subject(s)
Neural Pathways/physiology , Visual Cortex/physiology , Visual Pathways/physiology , Visual Perception/physiology , Animals , Humans , Neurons/physiology , Photic Stimulation/methods
19.
Neuroimage ; 197: 785-791, 2019 08 15.
Article in English | MEDLINE | ID: mdl-28687519

ABSTRACT

The cortex is a massively recurrent network, characterized by feedforward and feedback connections between brain areas as well as lateral connections within an area. Feedforward, horizontal, and feedback responses largely activate separate layers of a cortical unit, meaning they can be dissociated by lamina-resolved neurophysiological techniques. Such techniques are invasive and are therefore rarely used in humans. However, recent developments in high-spatial-resolution fMRI allow for non-invasive, in vivo measurements of brain responses specific to separate cortical layers. This provides an important opportunity to dissociate between feedforward and feedback brain responses, and to investigate communication between brain areas at a more fine-grained level than previously possible in humans. In this review, we highlight recent studies that successfully used laminar fMRI to isolate layer-specific feedback responses in human sensory cortex. In addition, we review several areas of cognitive neuroscience that stand to benefit from this new technological development, highlighting contemporary hypotheses that yield testable predictions for laminar fMRI. We hope to encourage researchers to embrace this development in fMRI research, as we expect that many future advancements in our current understanding of human brain function will be gained from measuring lamina-specific brain responses.


Subject(s)
Brain Mapping/methods , Brain/physiology , Cognitive Neuroscience/methods , Magnetic Resonance Imaging/methods , Animals , Cognitive Neuroscience/trends , Humans
20.
Front Neuroanat ; 12: 56, 2018.
Article in English | MEDLINE | ID: mdl-30065634

ABSTRACT

This review article addresses the function of the layers of the cerebral cortex. We develop the perspective that cortical layering needs to be understood in terms of its functional anatomy, i.e., the terminations of synaptic inputs on distinct cellular compartments and their effect on cortical activity. The cortex is a hierarchical structure in which feedforward and feedback pathways have a layer-specific termination pattern. We take the view that the influence of synaptic inputs arriving at different cortical layers can only be understood in terms of their complex interaction with cellular biophysics and the subsequent computation that occurs at the cellular level. We use high-resolution fMRI, which can resolve activity across layers, as a case study for implementing this approach by describing how cognitive events arising from the laminar distribution of inputs can be interpreted by taking into account the properties of neurons that span different layers. This perspective is based on recent advances in measuring subcellular activity in distinct feedforward and feedback axons and in dendrites as they span across layers.
