Results 1 - 20 of 22
1.
Behav Brain Sci ; 46: e404, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38054291

ABSTRACT

Deep neural networks (DNNs) provide a unique opportunity to move towards a generic modelling framework in psychology. The high representational capacity of these models combined with the possibility for further extensions has already allowed us to investigate the forest, namely the complex landscape of representations and processes that underlie human cognition, without forgetting about the trees, which include individual psychological phenomena.


Subject(s)
Brain , Cognition , Humans , Neural Networks, Computer
2.
PLoS Comput Biol ; 19(4): e1011086, 2023 04.
Article in English | MEDLINE | ID: mdl-37115763

ABSTRACT

Human vision is still largely unexplained. Computer vision has made impressive progress on this front, but it is still unclear to what extent artificial neural networks approximate human object vision at the behavioral and neural levels. Here, we investigated whether machine object vision mimics the representational hierarchy of human object vision with an experimental design that allows testing within-domain representations for animals and scenes, as well as across-domain representations reflecting their real-world contextual regularities, such as animal-scene pairs that often co-occur in the visual environment. We found that DCNNs trained on object recognition acquire representations, in their late processing stage, that closely capture human conceptual judgements about the co-occurrence of animals and their typical scenes. Likewise, the DCNNs' representational hierarchy shows surprising similarities with the representational transformations emerging in domain-specific ventrotemporal areas up to domain-general frontoparietal areas. Despite these remarkable similarities, the underlying information processing differs. The ability of neural networks to learn a human-like high-level conceptual representation of object-scene co-occurrence depends upon the amount of object-scene co-occurrence present in the image set, thus highlighting the fundamental role of training history. Further, although mid/high-level DCNN layers represent the category division between animals and scenes as observed in VTC, their information content shows reduced domain-specific representational richness. To conclude, by testing within- and between-domain selectivity while manipulating contextual regularities, we reveal unknown similarities and differences in the information processing strategies employed by human and artificial visual systems.


Subject(s)
Pattern Recognition, Visual , Visual Cortex , Humans , Brain Mapping , Magnetic Resonance Imaging , Visual Perception , Photic Stimulation
3.
Annu Rev Psychol ; 74: 113-135, 2023 01 18.
Article in English | MEDLINE | ID: mdl-36378917

ABSTRACT

Objects are the core meaningful elements in our visual environment. Classic theories of object vision focus upon object recognition and are elegant and simple. Some of their proposals still stand, yet the simplicity is gone. Recent evolutions in behavioral paradigms, neuroscientific methods, and computational modeling have allowed vision scientists to uncover the complexity of the multidimensional representational space that underlies object vision. We review these findings and propose that the key to understanding this complexity is to relate object vision to the full repertoire of behavioral goals that underlie human behavior, running far beyond object recognition. There might be no such thing as core object recognition, and if it exists, then its importance is more limited than traditionally thought.


Subject(s)
Neural Networks, Computer , Pattern Recognition, Visual , Humans , Visual Perception , Vision, Ocular , Biological Evolution
4.
Neuroimage ; 245: 118686, 2021 12 15.
Article in English | MEDLINE | ID: mdl-34728244

ABSTRACT

Representational similarity analysis (RSA) is a key element in the multivariate pattern analysis toolkit. The central construct of the method is the representational dissimilarity matrix (RDM), which can be generated for datasets from different modalities (neuroimaging, behavior, and computational models) and directly correlated in order to evaluate their second-order similarity. Given the inherent noisiness of neuroimaging signals, it is important to evaluate the reliability of neuroimaging RDMs in order to determine whether these comparisons are meaningful. Recently, multivariate noise normalization (NNM) has been proposed as a widely applicable method for boosting signal estimates for RSA, regardless of the choice of dissimilarity metric, based on evidence that the analysis improves the within-subject reliability of RDMs (Guggenmos et al. 2018; Walther et al. 2016). We revisited this issue with three fMRI datasets and evaluated the impact of NNM on within- and between-subject reliability and RSA effect sizes using multiple dissimilarity metrics. We also assessed its impact across regions of interest from the same dataset, its interaction with spatial smoothing, and compared it to GLMdenoise, which has also been proposed as a method that improves signal estimates for RSA (Charest et al. 2018). We found that across these tests the impact of NNM was highly variable, as also seems to be the case for other analysis choices. Overall, we suggest being conservative before adding steps and complexities to the (pre)processing pipeline for RSA.


Subject(s)
Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Neuroimaging/methods , Datasets as Topic , Humans , Parietal Lobe/diagnostic imaging , Reproducibility of Results , Temporal Lobe/diagnostic imaging , Visual Cortex/diagnostic imaging
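The second-order comparison at the heart of RSA, as summarized in the abstract above, can be sketched in a few lines. This is a generic toy illustration under invented data, not the paper's pipeline; names such as `patterns_a` are assumptions made for the example.

```python
# Toy RSA sketch: build an RDM per dataset, then correlate the RDMs.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_conditions, n_voxels = 12, 50

# Two hypothetical datasets sharing some signal (e.g., fMRI patterns
# and a computational model's features for the same 12 conditions).
patterns_a = rng.standard_normal((n_conditions, n_voxels))
patterns_b = patterns_a + rng.standard_normal((n_conditions, n_voxels))

# RDM = pairwise correlation distance between condition patterns;
# pdist returns the off-diagonal entries in condensed form.
rdm_a = pdist(patterns_a, metric="correlation")
rdm_b = pdist(patterns_b, metric="correlation")

# Second-order similarity: rank-correlate the two RDMs.
rho, _ = spearmanr(rdm_a, rdm_b)
```

With shared signal, `rho` comes out positive; with fully independent data it hovers near zero, which is why reliability checks of the kind the paper evaluates matter before interpreting such comparisons.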
5.
J Cogn Neurosci ; 33(8): 1487-1503, 2021 07 01.
Article in English | MEDLINE | ID: mdl-34496373

ABSTRACT

Selecting hand actions to manipulate an object is affected both by perceptual factors and by action goals. Affordances may contribute to "stimulus-response" congruency effects driven by habitual actions to an object. In previous studies, we have demonstrated an influence of the congruency between hand and object orientations on response times when reaching to turn an object, such as a cup. In this study, we investigated how the representation of hand postures triggered by planning to turn a cup was influenced by this congruency effect, in an fMRI scanning environment. Healthy participants were asked to reach and turn a real cup that was placed in front of them either in an upright orientation or upside-down. They were instructed to use a hand orientation that was either congruent or incongruent with the cup orientation. As expected, the motor responses were faster when the hand and cup orientations were congruent. There was increased activity in a network of brain regions involving object-directed actions during action planning, which included bilateral primary and extrastriate visual, medial, and superior temporal areas, as well as superior parietal, primary motor, and premotor areas in the left hemisphere. Specific activation of the dorsal premotor cortex was associated with hand-object orientation congruency during planning and prior to any action taking place. Activity in that area and its connectivity with the lateral occipito-temporal cortex increased when planning incongruent (goal-directed) actions. The increased activity in premotor areas in trials where the orientation of the hand was incongruent to that of the object suggests a role in eliciting competing representations specified by hand postures in lateral occipito-temporal cortex.


Subject(s)
Hand , Motor Cortex , Brain Mapping , Humans , Magnetic Resonance Imaging , Psychomotor Performance , Reaction Time
6.
Cortex ; 133: 358-370, 2020 12.
Article in English | MEDLINE | ID: mdl-33186833

ABSTRACT

The ability to build and expertly manipulate manual tools sets humans apart from other animals. Watching images of manual tools has been shown to elicit a distinct pattern of neural activity in a network of parietal areas, presumably because tools entail a potential for action-a unique feature related to their functional use and not shared with other manipulable objects. However, the question has been raised of whether this selectivity reflects the processing of low-level visual properties-such as the elongated shape that is idiosyncratic to most tool-objects-rather than action-specific features. To address this question, we created and behaviourally validated a stimulus set that dissociates objects that are manipulable and nonmanipulable, as well as objects with different degrees of body extension property (tools and non-tools), while controlling for object shape and low-level image properties. We tested the encoding of action-related features by investigating neural representations in two parietal regions of interest (intraparietal sulcus and superior parietal lobule) using functional MRI. Univariate differences between tools and non-tools were not observed when controlling for visual properties, but strong evidence for the action account was nevertheless revealed when using a multivariate approach. Overall, this study provides further evidence that the representational content in the dorsal visual stream reflects encoding of action-specific properties.


Subject(s)
Brain Mapping , Parietal Lobe , Humans , Magnetic Resonance Imaging , Parietal Lobe/diagnostic imaging , Pattern Recognition, Visual
7.
Sci Rep ; 10(1): 2453, 2020 02 12.
Article in English | MEDLINE | ID: mdl-32051467

ABSTRACT

Deep Convolutional Neural Networks (CNNs) are gaining traction as the benchmark model of visual object recognition, with performance now surpassing humans. While CNNs can accurately assign one image to potentially thousands of categories, network performance could be the result of layers that are tuned to represent the visual shape of objects, rather than object category, since both are often confounded in natural images. Using two stimulus sets that explicitly dissociate shape from category, we correlate these two types of information with each layer of multiple CNNs. We also compare CNN output with fMRI activation along the human ventral visual stream by correlating artificial with neural representations. We find that CNNs encode category information independently from shape, peaking at the final fully connected layer in all tested CNN architectures. When comparing CNNs with fMRI brain data, early visual cortex (V1) and early CNN layers encode shape information, whereas anterior ventral temporal cortex encodes category information, which correlates best with the final layer of CNNs. The interaction between shape and category that is found along the human ventral visual pathway is echoed in multiple deep networks. Our results suggest that CNNs represent category information independently from shape, much like the human visual system.


Subject(s)
Visual Cortex/physiology , Visual Perception , Adult , Brain Mapping , Female , Humans , Male , Nerve Net/physiology , Neural Networks, Computer , Pattern Recognition, Visual , Photic Stimulation , Visual Pathways/physiology , Young Adult
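The layer-wise correlation analysis this abstract describes can be illustrated with a toy version: build model RDMs from category and shape labels that are fully crossed, build an RDM from a (here simulated) layer's activations, and rank-correlate them. All names and data below are invented for illustration, not the study's materials.

```python
# Toy layer-RDM analysis: correlate a simulated layer RDM with model
# RDMs built from crossed category and shape labels.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
category = np.repeat(np.arange(3), 6)   # 3 categories x 6 images each
shape = np.tile(np.arange(6), 3)        # 6 shapes, crossed with category
n_images = category.size

def label_rdm(labels):
    # Model RDM: distance 0 for images sharing a label, 1 otherwise.
    return pdist(labels[:, None], metric=lambda u, v: float(u[0] != v[0]))

cat_rdm, shape_rdm = label_rdm(category), label_rdm(shape)

# Stand-in for a "late layer": category signal (one-hot) plus noise.
layer = np.hstack([np.eye(3)[category],
                   0.2 * rng.standard_normal((n_images, 20))])
layer_rdm = pdist(layer, metric="correlation")

r_cat, _ = spearmanr(layer_rdm, cat_rdm)
r_shape, _ = spearmanr(layer_rdm, shape_rdm)
```

Because the simulated layer carries only category signal, `r_cat` exceeds `r_shape`; in the study, this kind of comparison is run per layer to trace where shape and category information peak.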
8.
J Neurosci ; 39(33): 6513-6525, 2019 08 14.
Article in English | MEDLINE | ID: mdl-31196934

ABSTRACT

Recent studies showed agreement between how the human brain and neural networks represent objects, suggesting that we might start to understand the underlying computations. However, we know that the human brain is prone to biases at many perceptual and cognitive levels, often shaped by learning history and evolutionary constraints. Here, we explore one such perceptual phenomenon, perceiving animacy, and use the performance of neural networks as a benchmark. We performed an fMRI study that dissociated object appearance (what an object looks like) from object category (animate or inanimate) by constructing a stimulus set that includes animate objects (e.g., a cow), typical inanimate objects (e.g., a mug), and, crucially, inanimate objects that look like the animate objects (e.g., a cow mug). Behavioral judgments and deep neural networks categorized images mainly by animacy, setting all objects (lookalike and inanimate) apart from the animate ones. In contrast, activity patterns in ventral occipitotemporal cortex (VTC) were better explained by object appearance: animals and lookalikes were similarly represented and separated from the inanimate objects. Furthermore, the appearance of an object interfered with proper object identification, such as failing to signal that a cow mug is a mug. The preference in VTC to represent a lookalike as animate was even present when participants performed a task requiring them to report the lookalikes as inanimate. In conclusion, VTC representations, in contrast to neural networks, fail to represent objects when visual appearance is dissociated from animacy, probably due to a preferred processing of visual features typical of animate objects.

SIGNIFICANCE STATEMENT: How does the brain represent objects that we perceive around us? Recent advances in artificial intelligence have suggested that object categorization and its neural correlates have now been approximated by neural networks. Here, we show that neural networks can predict animacy according to human behavior but do not explain visual cortex representations. In ventral occipitotemporal cortex, neural activity patterns were strongly biased toward object appearance, to the extent that objects with visual features resembling animals were represented closely to real animals and separated from other objects from the same category. This organization that privileges animals and their features over objects might be the result of learning history and evolutionary constraints.


Subject(s)
Neural Networks, Computer , Pattern Recognition, Visual/physiology , Visual Cortex/physiology , Visual Pathways/physiology , Adult , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male
9.
Neuroimage ; 181: 446-452, 2018 11 01.
Article in English | MEDLINE | ID: mdl-30033392

ABSTRACT

Understanding other people's actions and mental states includes the interpretation of body postures and movements. In particular, hand postures are an important channel to signal both action and communicative intentions. Recognizing hand postures is computationally challenging because hand postures often differ only in the subtle configuration of relative finger positions and because visual characteristics of hand postures change across viewpoints. To allow for accurate interpretation, the brain needs to represent hand postures in a view-invariant but posture-specific manner. Here we test for such representations in hand-, body-, and object-selective regions of the lateral occipitotemporal cortex (LOTC). We used multivariate pattern analysis of fMRI data to test for view-specific and view-invariant representations of individual hand postures, separately for two domains: action-related postures (e.g., a precision grasp) and communicative postures (e.g., thumbs up). Results showed that hand-selective LOTC, but not nearby body- and object-selective LOTC, represented hand postures in a view-invariant manner, with relatively similar activity patterns to the same hand posture seen from different viewpoints. View invariance was equally strong for action and communicative postures. By contrast, object-selective cortex represented hand postures in a view-specific manner. These results indicate a role for hand-selective LOTC in solving the view-invariance problem for individual hand postures. View-invariant representations of hand postures in this region may then be accessed and further interpreted by multiple downstream systems to inform high-level judgments related to action understanding, emotion recognition, and non-verbal communication.


Subject(s)
Brain Mapping/methods , Gestures , Hand/physiology , Magnetic Resonance Imaging/methods , Motor Activity/physiology , Occipital Lobe/physiology , Pattern Recognition, Visual/physiology , Posture/physiology , Temporal Lobe/physiology , Adult , Female , Humans , Male , Occipital Lobe/diagnostic imaging , Temporal Lobe/diagnostic imaging , Young Adult
10.
Neuropsychologia ; 105: 153-164, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28619529

ABSTRACT

A dominant view in the cognitive neuroscience of object vision is that regions of the ventral visual pathway exhibit some degree of category selectivity. However, recent findings obtained with multivariate pattern analyses (MVPA) suggest that apparent category selectivity in these regions is dependent on more basic visual features of stimuli, in which case a rethinking of the function and organization of the ventral pathway may be in order. We suggest that addressing this issue of functional specificity requires clear coding hypotheses, about object category and visual features, that make contrasting predictions about neuroimaging results in ventral pathway regions. One way to differentiate between categorical and featural coding hypotheses is to test for residual categorical effects: effects of category selectivity that cannot be accounted for by visual features of stimuli. A strong method for testing these effects, we argue, is to make object category and target visual features orthogonal in stimulus design. Recent studies that adopt this approach support a feature-based categorical coding hypothesis, according to which regions of the ventral stream do indeed code for object category, but in a format at least partially based on the visual features of stimuli.


Subject(s)
Brain Mapping , Pattern Recognition, Visual/physiology , Visual Cortex/physiology , Visual Pathways/physiology , Humans , Photic Stimulation , Visual Cortex/diagnostic imaging , Visual Pathways/diagnostic imaging
11.
Cereb Cortex ; 27(1): 310-321, 2017 01 01.
Article in English | MEDLINE | ID: mdl-28108492

ABSTRACT

The dorsal, parietal visual stream is activated when seeing objects, but the exact nature of parietal object representations is still under discussion. Here we test 2 specific hypotheses. First, parietal cortex is biased to host some representations more than others, with a different bias compared with ventral areas. A prime example would be object action representations. Second, parietal cortex forms a general multiple-demand network with frontal areas, showing similar task effects and representational content compared with frontal areas. To differentiate between these hypotheses, we implemented a human neuroimaging study with a stimulus set that dissociates associated object action from object category while manipulating task context to be either action- or category-related. Representations in parietal as well as prefrontal areas represented task-relevant object properties (action representations in the action task), with no sign of the irrelevant object property (category representations in the action task). In contrast, irrelevant object properties were represented in ventral areas. These findings emphasize that human parietal cortex does not preferentially represent particular object properties irrespective of task, but together with frontal areas is part of a multiple-demand and content-rich cortical network representing task-relevant object properties.


Subject(s)
Parietal Lobe/physiology , Visual Perception/physiology , Adult , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male
12.
Neuroimage ; 148: 197-200, 2017 03 01.
Article in English | MEDLINE | ID: mdl-28069538

ABSTRACT

Representational similarity analysis (RSA) is an important part of the methodological toolkit in neuroimaging research. The focus of the approach is the construction of representational dissimilarity matrices (RDMs), which provide a single format for making comparisons between different neural data types, computational models, and behavior. We highlight two issues for the construction and comparison of RDMs. First, the diagonal values of RDMs, which should reflect the within-condition reliability of neural patterns, are typically not estimated in RSA. However, without such an estimate, one lacks a measure of the reliability of an RDM as a whole. Thus, when carrying out RSA, one should calculate the diagonal values of RDMs and not take them for granted. Second, although the diagonal values of a correlation matrix can be used to estimate the reliability of neural patterns, these values must nonetheless be excluded when comparing RDMs. Via a simple simulation we show that inclusion of these values can generate convincing-looking, but entirely illusory, correlations between independent and entirely unrelated data sets. Both of these points are further illustrated by a critical discussion of Coggan et al. (2016), who investigated the extent to which category selectivity in the ventral temporal cortex can be accounted for by low-level image properties of visual object stimuli. We observe that their results may depend on the improper inclusion of diagonal values in their analysis.


Subject(s)
Image Processing, Computer-Assisted , Neuroimaging/methods , Algorithms , Brain Mapping , Computer Simulation , Humans , Temporal Lobe/physiology
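The illusory-correlation point can be reproduced in a few lines: vectorize two correlation matrices computed from entirely unrelated data, and the shared diagonal of ones alone produces a strong positive correlation. This toy simulation follows the logic of the abstract above, not the paper's exact code.

```python
# Toy demonstration: including the diagonal when comparing similarity
# matrices manufactures correlation between unrelated datasets.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n_conditions, n_features = 10, 30

# Two entirely independent datasets.
a = rng.standard_normal((n_conditions, n_features))
b = rng.standard_normal((n_conditions, n_features))

sim_a = np.corrcoef(a)  # condition x condition correlation matrices;
sim_b = np.corrcoef(b)  # both diagonals are exactly 1 by construction

# Full matrices (diagonal included) vs. off-diagonal entries only.
r_with_diag, _ = pearsonr(sim_a.ravel(), sim_b.ravel())
iu = np.triu_indices(n_conditions, k=1)
r_off_diag, _ = pearsonr(sim_a[iu], sim_b[iu])
```

`r_with_diag` comes out strongly positive even though the data are unrelated, while `r_off_diag` hovers near zero, which is exactly why the diagonal must be excluded when comparing RDMs.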
13.
PLoS Comput Biol ; 12(4): e1004896, 2016 04.
Article in English | MEDLINE | ID: mdl-27124699

ABSTRACT

Theories of object recognition agree that shape is of primordial importance, but there is no consensus about how shape might be represented, and so far attempts to implement a model of shape perception that would work with realistic stimuli have largely failed. Recent studies suggest that state-of-the-art convolutional 'deep' neural networks (DNNs) capture important aspects of human object perception. We hypothesized that these successes might be partially related to a human-like representation of object shape. Here we demonstrate that sensitivity for shape features, characteristic of human and primate vision, emerges in DNNs when trained for generic object recognition from natural photographs. We show that these models explain human shape judgments for several benchmark behavioral and neural stimulus sets on which earlier models mostly failed. In particular, although never explicitly trained for such stimuli, DNNs develop acute sensitivity to minute variations in shape and to non-accidental properties that have long been implicated as the basis for object recognition. Even more strikingly, when tested with a challenging stimulus set in which shape and category membership are dissociated, the most complex model architectures capture human shape sensitivity as well as some aspects of the category structure that emerges from human judgments. As a whole, these results indicate that convolutional neural networks not only learn physically correct representations of object categories but also develop perceptually accurate representational spaces of shapes. An even more complete model of human object representations might be in sight by training deep architectures for multiple tasks, as is so characteristic of human development.


Subject(s)
Models, Neurological , Neural Networks, Computer , Pattern Recognition, Visual , Computational Biology , Humans
14.
J Neurosci ; 36(2): 432-44, 2016 Jan 13.
Article in English | MEDLINE | ID: mdl-26758835

ABSTRACT

The dorsal and ventral visual pathways represent both visual and conceptual object properties. Yet the relative contribution of these two factors to the representational content of visual areas is unclear. Indeed, research investigating brain category representations rarely dissociates visual and semantic properties of objects. We present a human event-related fMRI study with a two-factorial stimulus set of 54 images that explicitly dissociates shape from category to investigate their independent contributions as well as their interactions through representational similarity analyses. Results reveal a contribution from each dimension in both streams, with a transition from shape to category along the posterior-to-anterior anatomical axis. The nature of category representations differs in the two pathways: ventral areas represent object animacy and dorsal areas represent object action properties. Furthermore, information about shape evolves from low-level pixel-based to high-level perceived shape following a posterior-to-anterior gradient similar to the shape-to-category emergence. To conclude, results show that representations of shape and category independently coexist, but at the same time they are closely related throughout the visual hierarchy. SIGNIFICANCE STATEMENT: Research investigating visual cortex conceptual category representations rarely takes into account visual properties of objects. In this report, we explicitly dissociate shape from category and investigate the independent contributions and interactions of these two highly correlated dimensions.


Subject(s)
Brain Mapping , Concept Formation/physiology , Pattern Recognition, Visual/physiology , Visual Cortex/physiology , Visual Pathways/physiology , Adult , Cluster Analysis , Female , Humans , Image Processing, Computer-Assisted , Judgment , Magnetic Resonance Imaging , Male , Oxygen/blood , Photic Stimulation , Psychophysics , Visual Cortex/blood supply , Visual Pathways/blood supply , Young Adult
15.
Neuropsychologia ; 84: 81-8, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26344476

ABSTRACT

It is now established that the perception of tools engages a left-lateralized network of frontoparietal and occipitotemporal cortical regions. Nevertheless, the precise computational role played by these areas is not yet well understood. To address this question, we used functional MRI to investigate the distribution of responses to pictures of tools and hands relative to other object categories in the so-called "tool" areas. Although hands and tools are visually unalike and belong to different object categories, they are functionally linked by their common role in object manipulation. This distinction can provide insight into the differential functional roles of areas within the "tool" network. Results demonstrated that images of hands and tools activate a common network of brain areas in the left intraparietal sulcus (IPS), left lateral occipitotemporal cortex (LOTC), and ventral occipitotemporal cortex (VOTC). Importantly, multivoxel pattern analysis revealed that the distribution of hand and tool response patterns in these regions differs. These observations support the idea that the left IPS, left LOTC, and VOTC might have distinct computational roles with regard to tool use. Specifically, these results suggest that while the left IPS supports tool action-related computations and VOTC primarily encodes category-specific aspects of objects, the left LOTC bridges ventral occipitotemporal perception-related and parietal action-related representations by encoding both types of object information.


Subject(s)
Occipital Lobe/physiology , Parietal Lobe/physiology , Pattern Recognition, Visual/physiology , Temporal Lobe/physiology , Brain Mapping/methods , Hand , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Neuropsychological Tests , Photic Stimulation/methods
16.
J Neurosci ; 35(38): 12977-85, 2015 Sep 23.
Article in English | MEDLINE | ID: mdl-26400929

ABSTRACT

Regions in human lateral and ventral occipitotemporal cortices (OTC) respond selectively to pictures of the human body and its parts. What are the organizational principles underlying body part responses in these regions? Here we used representational similarity analysis (RSA) of fMRI data to test multiple possible organizational principles: shape similarity, physical proximity, cortical homunculus proximity, and semantic similarity. Participants viewed pictures of whole persons, chairs, and eight body parts (hands, arms, legs, feet, chests, waists, upper faces, and lower faces). The similarity of multivoxel activity patterns for all body part pairs was established in whole person-selective OTC regions. The resulting neural similarity matrices were then compared with similarity matrices capturing the hypothesized organizational principles. Results showed that the semantic similarity model best captured the neural similarity of body parts in lateral and ventral OTC, which followed an organization in three clusters: (1) body parts used as action effectors (hands, feet, arms, and legs), (2) noneffector body parts (chests and waists), and (3) face parts (upper and lower faces). Whole-brain RSA revealed, in addition to OTC, regions in parietal and frontal cortex in which neural similarity was related to semantic similarity. In contrast, neural similarity in occipital cortex was best predicted by shape similarity models. We suggest that the semantic organization of body parts in high-level visual cortex relates to the different functions associated with the three body part clusters, reflecting the unique processing and connectivity demands associated with the different types of information (e.g., action, social) that different body parts (e.g., limbs, faces) convey.
Significance statement: While the organization of body part representations in motor and somatosensory cortices has been well characterized, the principles underlying body part representations in visual cortex have not yet been explored. In the present fMRI study we used multivoxel pattern analysis and representational similarity analysis to characterize the organization of body maps in human occipitotemporal cortex (OTC). Results indicate that visual and shape dimensions do not fully account for the organization of body part representations in OTC. Instead, the representational structure of body maps in OTC appears strongly related to functional-semantic properties of body parts. We suggest that this organization reflects the unique processing and connectivity demands associated with the different types of information that different body parts convey.


Subject(s)
Brain Mapping , Human Body , Occipital Lobe/physiology , Pattern Recognition, Visual/physiology , Temporal Lobe/physiology , Adult , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Nerve Net/blood supply , Nerve Net/physiology , Occipital Lobe/blood supply , Oxygen/blood , Photic Stimulation , Regression Analysis , Temporal Lobe/blood supply , Young Adult
17.
J Neurosci ; 33(46): 18247-58, 2013 Nov 13.
Article in English | MEDLINE | ID: mdl-24227734

ABSTRACT

The principles driving the functional organization of object representations in high-level visual cortex are not yet fully understood. In four human fMRI experiments, we provide evidence that the organization of high-level visual cortex partly reflects the degree to which objects are typically controlled by the body to interact with the world, thereby extending the body's boundaries. Univariate whole-brain analysis showed an overlap between responses to body effectors (e.g., hands, feet, and limbs) and object effectors (e.g., hammers, combs, and tennis rackets) in lateral occipitotemporal cortex (LOTC) and parietal cortex. Region of interest analyses showed that a hand-selective region in left LOTC responded selectively to object effectors relative to a range of noneffector object control conditions (e.g., graspable objects, "act-on" objects, musical instruments). Object ratings showed that the strong response to object effectors in hand-selective LOTC was not due to general action-related object properties shared with these control conditions, such as hand priming, hand grasping, and hand-action centrality. Finally, whole-brain representational similarity analysis revealed that the similarity of multivoxel object response patterns in left lateral occipitotemporal cortex selectively predicted the degree to which objects were rated as being controlled by and extending the body. Together, these results reveal a clustering of body and object effector representations, indicating that the organization of object representations in high-level visual cortex partly reflects how objects relate to the body.


Subject(s)
Brain Mapping/methods , Pattern Recognition, Visual/physiology , Photic Stimulation/methods , Psychomotor Performance/physiology , Temporal Lobe/physiology , Visual Cortex/physiology , Adult , Female , Humans , Male , Middle Aged , Young Adult
18.
J Cogn Neurosci ; 25(8): 1225-34, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23647514

ABSTRACT

Previous studies have provided evidence for a tool-selective region in left lateral occipitotemporal cortex (LOTC). This region responds selectively to pictures of tools and to characteristic visual tool motion. The present human fMRI study tested whether visual experience is required for the development of tool-selective responses in left LOTC. Words referring to tools, animals, and nonmanipulable objects were presented auditorily to 14 congenitally blind and 16 sighted participants. Sighted participants additionally viewed pictures of these objects. In whole-brain group analyses, sighted participants showed tool-selective activity in left LOTC in both visual and auditory tasks. Importantly, virtually identical tool-selective LOTC activity was found in the congenitally blind group performing the auditory task. Furthermore, both groups showed equally strong tool-selective activity for auditory stimuli in a tool-selective LOTC region defined by the picture-viewing task in the sighted group. Detailed analyses in individual participants showed significant tool-selective LOTC activity in 13 of 14 blind participants and 14 of 16 sighted participants. The strength and anatomical location of this activity were indistinguishable across groups. Finally, both blind and sighted groups showed significant resting state functional connectivity between left LOTC and a bilateral frontoparietal network. Together, these results indicate that tool-selective activity in left LOTC develops without ever having seen a tool or its motion. This finding puts constraints on the possible role that this region could have in tool processing and, more generally, provides new insights into the principles shaping the functional organization of OTC.


Subject(s)
Blindness/pathology , Choice Behavior/physiology , Functional Laterality/physiology , Nerve Net/physiology , Occipital Lobe/physiology , Temporal Lobe/physiology , Acoustic Stimulation , Adult , Blindness/physiopathology , Brain Mapping , Female , Humans , Image Processing, Computer-Assisted , Judgment , Magnetic Resonance Imaging , Male , Nerve Net/blood supply , Occipital Lobe/blood supply , Oxygen/blood , Photic Stimulation , Rest , Temporal Lobe/blood supply , Young Adult
19.
J Neurophysiol ; 107(5): 1443-56, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22131379

ABSTRACT

The perception of object-directed actions performed by either hands or tools recruits regions in left fronto-parietal cortex. Here, using functional MRI (fMRI), we tested whether the common role of hands and tools in object manipulation is also reflected in the distribution of response patterns to these categories in visual cortex. In two experiments we found that static pictures of hands and tools activated closely overlapping regions in left lateral occipitotemporal cortex (LOTC). Left LOTC responses to tools selectively overlapped with responses to hands but not with responses to whole bodies, nonhand body parts, other objects, or visual motion. Multivoxel pattern analysis in left LOTC indicated a high degree of similarity between response patterns to hands and tools but not between hands or tools and other body parts. Finally, functional connectivity analysis showed that the left LOTC hand/tool region was selectively connected, relative to neighboring body-, motion-, and object-responsive regions, with regions in left intraparietal sulcus and left premotor cortex that have previously been implicated in hand/tool action-related processing. Taken together, these results suggest that action-related object properties shared by hands and tools are reflected in the organization of high-order visual cortex. We propose that the functional organization of high-order visual cortex partly reflects the organization of downstream functional networks, such as the fronto-parietal action network, due to differences within visual cortex in the connectivity to these networks.


Subject(s)
Functional Laterality/physiology , Occipital Lobe/physiology , Photic Stimulation/methods , Reaction Time/physiology , Temporal Lobe/physiology , Visual Perception/physiology , Hand , Humans , Magnetic Resonance Imaging/methods , Psychomotor Performance/physiology
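Two of the abstracts above (entries 17 and 19) rely on multivoxel pattern analysis and representational similarity analysis (RSA): pairwise dissimilarities between condition response patterns are assembled into a representational dissimilarity matrix (RDM), and RDMs are compared across regions or models by rank correlation of their upper triangles. The following is a minimal illustrative sketch of that generic procedure, not the actual analysis pipeline of any study listed here; the function names and the simulated data are hypothetical.

```python
import numpy as np


def rdm(patterns):
    """Correlation-distance RDM from an (n_conditions, n_voxels) array.

    Entry (i, j) is 1 minus the Pearson correlation between the
    response patterns of conditions i and j; the diagonal is 0.
    """
    return 1.0 - np.corrcoef(patterns)


def _rank(x):
    # Simple rank transform (no tie handling; adequate for
    # continuous-valued dissimilarities).
    return np.argsort(np.argsort(x)).astype(float)


def compare_rdms(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs.

    Only off-diagonal cells are compared, as is standard in RSA,
    since the diagonal is zero by construction.
    """
    iu = np.triu_indices_from(rdm_a, k=1)
    ra, rb = _rank(rdm_a[iu]), _rank(rdm_b[iu])
    return np.corrcoef(ra, rb)[0, 1]


# Hypothetical example: simulated voxel patterns for 6 conditions.
rng = np.random.default_rng(0)
patterns_roi_1 = rng.normal(size=(6, 50))
patterns_roi_2 = rng.normal(size=(6, 50))

similarity = compare_rdms(rdm(patterns_roi_1), rdm(patterns_roi_2))
```

In a whole-brain searchlight variant of this idea (as in entry 17's analysis), the same RDM comparison is repeated for a small sphere of voxels centered on each brain location in turn.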
20.
J Neurophysiol ; 103(6): 3389-97, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20393066

ABSTRACT

Accumulating evidence points to a map of visual regions encoding specific categories of objects. For example, a region in the human extrastriate visual cortex, the extrastriate body area (EBA), has been implicated in the visual processing of bodies and body parts. Although hand-selective neurons have been reported in the monkey, it is unclear whether areas selective for individual body parts such as the hand exist in humans. Here, we conducted two functional MRI experiments to test for hand-preferring responses in the human extrastriate visual cortex. We found evidence for a hand-preferring region in left lateral occipitotemporal cortex in all 14 participants. This region, located in the lateral occipital sulcus, partially overlapped with left EBA, but could be functionally and anatomically dissociated from it. In experiment 2, we further investigated the functional profile of hand- and body-preferring regions by measuring responses to hands, fingers, feet, assorted body parts (arms, legs, torsos), and non-biological handlike stimuli such as robotic hands. The hand-preferring region responded most strongly to hands, followed by robotic hands, fingers, and feet, whereas its response to assorted body parts did not significantly differ from baseline. By contrast, EBA responded most strongly to body parts, followed by hands and feet, and did not significantly respond to robotic hands or fingers. Together, these results provide evidence for a representation of the hand in extrastriate visual cortex that is distinct from the representation of other body parts.


Subject(s)
Brain Mapping , Functional Laterality/physiology , Hand , Human Body , Pattern Recognition, Visual/physiology , Visual Cortex/physiology , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Oxygen/blood , Photic Stimulation/methods , Reaction Time/physiology , Visual Cortex/blood supply