Results 1 - 20 of 26
1.
Behav Brain Funct ; 20(1): 8, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38637870

ABSTRACT

One important role of the temporo-parietal junction (TPJ) is its contribution to the perception of the global gist of hierarchically organized stimuli in which individual elements create a global visual percept. However, the link between clinical findings in simultanagnosia and neuroimaging in healthy subjects is missing for real-world global stimuli, such as visual scenes. It is well known that hierarchical, global stimuli activate TPJ regions and that simultanagnosia patients show deficits in the recognition of hierarchical stimuli and real-world visual scenes. Yet the role of the TPJ in real-world scene processing is entirely unexplored. In the present study, we first localized TPJ regions that responded significantly to the global gist of hierarchical stimuli and then investigated their responses to visual scenes, as well as to single objects and faces as control stimuli. All three stimulus classes evoked significantly positive univariate responses in the previously localized TPJ regions. In a multivariate analysis, voxel patterns in the TPJ could be classified significantly above chance level for all three stimulus classes. These results demonstrate that the TPJ is significantly involved in the processing of complex visual stimuli, that this involvement is not restricted to visual scenes, and that the TPJ responds to different classes of visual stimuli with distinct signatures of neuronal activation.
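
The multivariate analysis summarized above follows a standard decoding logic: voxel patterns from the localized TPJ regions are fed to a cross-validated classifier and accuracy is compared against chance (1/3 for three stimulus classes). The sketch below illustrates only that logic; the voxel patterns, class signatures, and classifier settings are synthetic assumptions, not the study's actual preprocessing or ROI data.

```python
# Minimal sketch of cross-validated MVPA classification in an ROI, assuming
# trial-wise voxel patterns have already been extracted. Data are synthetic.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials_per_class, n_voxels = 40, 200
labels = np.repeat(["scene", "object", "face"], n_trials_per_class)

# Synthetic voxel patterns: each class carries a weak class-specific signature.
signatures = {c: rng.normal(0, 1, n_voxels) for c in np.unique(labels)}
patterns = np.array([signatures[c] * 0.3 + rng.normal(0, 1, n_voxels) for c in labels])

# Five-fold cross-validated decoding; chance level is 1/3 for three classes.
clf = LinearSVC(max_iter=10000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, patterns, labels, cv=cv)
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.33)")
```

In practice, significance against chance would be established with a permutation test rather than by comparing the mean accuracy to 1/3 directly.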


Subject(s)
Magnetic Resonance Imaging , Parietal Lobe , Humans , Parietal Lobe/physiology , Recognition, Psychology , Neuroimaging , Multivariate Analysis , Photic Stimulation , Pattern Recognition, Visual/physiology , Visual Perception/physiology , Brain Mapping/methods
2.
Neuroimage ; 278: 120271, 2023 09.
Article in English | MEDLINE | ID: mdl-37442310

ABSTRACT

Humans have the unique ability to decode the rapid stream of language elements that constitute speech, even when it is contaminated by noise. Two reliable observations about noisy speech perception are that seeing the face of the talker improves intelligibility and that individuals differ widely in their ability to perceive noisy speech. We introduce a multivariate BOLD fMRI measure that explains both observations. In two independent fMRI studies, clear and noisy speech was presented in visual, auditory and audiovisual formats to thirty-seven participants, who rated intelligibility. An event-related design was used to sort noisy speech trials by their intelligibility. Individual-differences multidimensional scaling was applied to fMRI response patterns in superior temporal cortex, and the dissimilarity between responses to clear speech and noisy (but intelligible) speech was measured. Neural dissimilarity was lower for audiovisual speech than for auditory-only speech, corresponding to the greater intelligibility of noisy audiovisual speech. Dissimilarity was also lower in participants with better noisy-speech perception, corresponding to individual differences. These relationships held for both single-word and entire-sentence stimuli, suggesting that they were driven by intelligibility rather than by the specific stimuli tested. A neural measure of perceptual intelligibility may aid in the development of strategies for helping those with impaired speech perception.
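
The key quantity in the abstract above is a neural dissimilarity between response patterns for clear and noisy speech, related across participants to behavioral intelligibility. The study derived this measure with individual-differences multidimensional scaling; the sketch below substitutes a simpler correlation distance (1 - Pearson r) on synthetic per-participant patterns to convey the underlying idea. All variables and the generative story are illustrative assumptions, not the published pipeline.

```python
# Sketch: per-participant dissimilarity between clear- and noisy-speech
# response patterns, correlated with a behavioral intelligibility score.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_participants, n_voxels = 37, 150

dissimilarity, behavior = [], []
for _ in range(n_participants):
    clear = rng.normal(0, 1, n_voxels)
    # Assumed generative story: better perceivers have noisy-speech patterns
    # that lie closer to their clear-speech patterns.
    skill = rng.uniform(0.1, 0.9)
    noisy = skill * clear + (1 - skill) * rng.normal(0, 1, n_voxels)
    dissimilarity.append(1 - pearsonr(clear, noisy)[0])   # correlation distance
    behavior.append(skill + rng.normal(0, 0.05))          # intelligibility score

r, p = pearsonr(dissimilarity, behavior)
print(f"neural dissimilarity vs. intelligibility: r = {r:.2f}, p = {p:.3g}")
```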


Subject(s)
Speech Perception , Speech , Humans , Magnetic Resonance Imaging , Individuality , Visual Perception/physiology , Speech Perception/physiology , Temporal Lobe/diagnostic imaging , Temporal Lobe/physiology , Speech Intelligibility , Acoustic Stimulation/methods
3.
Neuroimage ; 251: 119021, 2022 05 01.
Article in English | MEDLINE | ID: mdl-35192941

ABSTRACT

Object constancy is one of the most crucial mechanisms of the human visual system, enabling viewpoint-invariant object recognition. However, the neuronal foundations of object constancy are largely unknown. Research has shown that the ventral visual stream is involved in the processing of various kinds of object stimuli and that several regions along the ventral stream may be sensitive to the orientation of an object in space. To systematically address the question of viewpoint-sensitive object perception, we conducted a study with stroke patients as well as an fMRI experiment with healthy participants, presenting object stimuli in several spatial orientations, i.e., in typical and atypical viewing conditions. In the fMRI experiment, we found stronger BOLD signals and above-chance classification accuracies for objects presented in atypical viewing conditions in fusiform face-sensitive and lateral occipito-temporal object-preferring areas. In the behavioral patient study, we observed that lesions of the right fusiform gyrus were associated with lower object recognition performance for atypical views. The complementary results from both experiments emphasize the contributions of fusiform and lateral occipital areas to visual object constancy and indicate that visual object constancy is enabled in particular through increased neuronal activity and specific activation patterns for objects in demanding viewing conditions.


Subject(s)
Occipital Lobe , Visual Perception , Brain Mapping , Humans , Magnetic Resonance Imaging , Occipital Lobe/diagnostic imaging , Occipital Lobe/physiology , Pattern Recognition, Visual/physiology , Temporal Lobe/diagnostic imaging , Temporal Lobe/physiology , Visual Perception/physiology
4.
Neuroimage ; 247: 118796, 2022 02 15.
Article in English | MEDLINE | ID: mdl-34906712

ABSTRACT

Regions of the human posterior superior temporal gyrus and sulcus (pSTG/S) respond to the visual mouth movements that constitute visual speech and to the auditory vocalizations that constitute auditory speech, and neural responses in pSTG/S may underlie the perceptual benefit of visual speech for the comprehension of noisy auditory speech. We examined this possibility through the lens of multivoxel pattern responses in pSTG/S. BOLD fMRI data were collected from 22 participants presented with English sentences in five different formats: visual-only; auditory with and without added auditory noise; and audiovisual with and without auditory noise. Participants reported the intelligibility of each sentence with a button press, and trials were sorted post hoc into those that were more or less intelligible. Response patterns were measured in regions of the pSTG/S identified with an independent localizer. Noisy audiovisual sentences with very similar physical properties evoked very different response patterns depending on their intelligibility. When a noisy audiovisual sentence was reported as intelligible, the pattern was nearly identical to that elicited by clear audiovisual sentences. In contrast, an unintelligible noisy audiovisual sentence evoked a pattern like that of visual-only sentences. This effect was less pronounced for noisy auditory-only sentences, which evoked similar response patterns regardless of intelligibility. The successful integration of visual and auditory speech thus produces a characteristic neural signature in pSTG/S, highlighting the importance of this region in generating the perceptual benefit of visual speech.


Subject(s)
Auditory Perception/physiology , Temporal Lobe/physiology , Visual Perception/physiology , Acoustic Stimulation , Adolescent , Adult , Auditory Cortex/physiology , Brain Mapping , Cognition , Comprehension/physiology , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Speech/physiology , Speech Perception/physiology , Young Adult
5.
Cortex ; 142: 357-369, 2021 09.
Article in English | MEDLINE | ID: mdl-34358731

ABSTRACT

Functional neuroimaging and patient studies have demonstrated significant involvement of ventral area V4α, located in the anterior ventral pathway, in color vision. A small number of case studies have reported lesions in close vicinity to this region that led to symptoms of hemiachromatopsia, indicating hemifield-specific processing of color information. Here, we present the first group study investigating hemiachromatopsia after injury to anterior ventral brain areas. Using lateralized stimulus presentations in several color perception tasks, we observed symptoms of hemiachromatopsia that were specific to patients with unilateral lesions of the ventral pathway. In particular, we identified unilateral lesions of area V4α as an important contributor to color perception deficits under demanding viewing conditions. Our results suggest that color information processed along the anterior ventral path is hemifield-specific and that selective deficits in color perception cannot be fully compensated by the intact contralesional visual stream.


Subject(s)
Brain , Color Perception , Humans , Visual Perception
6.
Neuroimage ; 234: 117982, 2021 07 01.
Article in English | MEDLINE | ID: mdl-33757908

ABSTRACT

Lesions of posterior temporo-parietal brain regions are associated with deficits in the perception of global, hierarchical shapes, but also with impairments in the processing of objects presented under demanding viewing conditions. Evidence from neuroimaging studies and from lesion patterns observed in patients with simultanagnosia and agnosia for object orientation suggests that similar brain regions are involved in the perception of global shapes and in the processing of objects in atypical ('non-canonical') orientations. In a localizer experiment, we identified individual temporo-parietal brain areas involved in global shape perception and found significantly higher BOLD signals in these areas during the processing of non-canonical compared to canonical objects. Using a multivariate approach, we demonstrated that posterior temporo-parietal brain areas show distinct voxel patterns for non-canonical and canonical objects and that voxel patterns for global shapes are more similar to those for objects in non-canonical than in canonical viewing conditions. These results suggest that temporo-parietal brain areas are not only involved in global shape perception but might serve a more general mechanism of complex object perception. Our results challenge a strict attribution of object processing to the ventral visual stream by suggesting specific dorsal contributions under more demanding viewing conditions.


Subject(s)
Magnetic Resonance Imaging/methods , Parietal Lobe/physiology , Photic Stimulation/methods , Recognition, Psychology/physiology , Temporal Lobe/physiology , Visual Perception/physiology , Adult , Female , Humans , Male , Parietal Lobe/diagnostic imaging , Temporal Lobe/diagnostic imaging , Young Adult
7.
Cortex ; 133: 371-383, 2020 12.
Article in English | MEDLINE | ID: mdl-33221701

ABSTRACT

The McGurk effect is a widely used measure of multisensory integration during speech perception. Two observations have raised questions about the validity of the effect as a tool for understanding speech perception. First, there is high variability in perception of the McGurk effect across different stimuli and observers. Second, across observers there is low correlation between McGurk susceptibility and recognition of visual speech paired with auditory speech-in-noise, another common measure of multisensory integration. Using the framework of the causal inference of multisensory speech (CIMS) model, we explored the relationship between the McGurk effect, syllable perception, and sentence perception in seven experiments with a total of 296 different participants. Perceptual reports revealed a relationship between the efficacy of different McGurk stimuli created from the same talker and perception of the auditory component of the McGurk stimuli presented in isolation, both with and without added noise. The CIMS model explained this strong stimulus-level correlation using the principles of noisy sensory encoding followed by optimal cue combination within a common representational space across speech types. Because the McGurk effect (but not speech-in-noise) requires the resolution of conflicting cues between modalities, there is an additional source of individual variability that can explain the weak observer-level correlation between McGurk and noisy speech. Power calculations show that detecting this weak correlation requires studies with many more participants than those conducted to date. Perception of the McGurk effect and other types of speech can be explained by a common theoretical framework that includes causal inference, suggesting that the McGurk effect is a valid and useful experimental tool.
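
The CIMS framework referenced above rests on two ingredients: noisy sensory encoding and optimal (reliability-weighted) cue combination, gated by an inference about whether the auditory and visual cues share a common cause. The sketch below is a deliberately simplified stand-in, with a threshold-based common-cause decision, a one-dimensional representational space, and arbitrary parameter values; it is not the published model implementation.

```python
# Simplified illustration of noisy encoding + causal inference + optimal fusion.
import numpy as np

rng = np.random.default_rng(2)

def perceive(aud_loc, vis_loc, sigma_a=1.0, sigma_v=0.7, criterion=2.5):
    """One trial: noisy encoding, a simplified common-cause decision, then optimal fusion."""
    x_a = aud_loc + rng.normal(0, sigma_a)   # noisy auditory encoding
    x_v = vis_loc + rng.normal(0, sigma_v)   # noisy visual encoding

    # Simplified stand-in for the causal-inference step: assume a common cause
    # when the cue discrepancy is small relative to the combined encoding noise.
    common_cause = abs(x_a - x_v) < criterion * np.hypot(sigma_a, sigma_v)

    # Reliability-weighted (optimal) cue combination in a shared representational space.
    w_a = sigma_v**2 / (sigma_a**2 + sigma_v**2)
    return w_a * x_a + (1 - w_a) * x_v if common_cause else x_a

# McGurk-like trial: conflicting auditory ("ba") and visual ("ga") cues, coded as
# arbitrary positions 0 and 2 in a one-dimensional representational space.
estimates = [perceive(aud_loc=0.0, vis_loc=2.0) for _ in range(1000)]
print(f"mean percept position: {np.mean(estimates):.2f} (0 = auditory, 2 = visual)")
```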


Subject(s)
Illusions , Speech Perception , Acoustic Stimulation , Auditory Perception , Humans , Photic Stimulation , Recognition, Psychology , Speech , Visual Perception
8.
PLoS One ; 15(4): e0230866, 2020.
Article in English | MEDLINE | ID: mdl-32352984

ABSTRACT

Faces are one of the most important stimuli that we encounter, but humans vary dramatically in their behavior when viewing a face: some individuals preferentially fixate the eyes, others fixate the mouth, and still others show an intermediate pattern. The determinants of these large individual differences are unknown. However, individuals with Autism Spectrum Disorder (ASD) spend less time fixating the eyes of a viewed face than controls, suggesting the hypothesis that autistic traits in healthy adults might explain individual differences in face-viewing behavior. Autistic traits were measured in 98 healthy adults recruited from an academic setting using the Autism-Spectrum Quotient, a validated 50-statement questionnaire. Fixations were measured using a video-based eye tracker while participants viewed two different types of audiovisual movies: short videos of a talker speaking single syllables and longer videos of talkers speaking sentences in a social context. For both types of movies, there was a positive correlation between Autism-Spectrum Quotient score and the percentage of time spent fixating the lower half of the face, explaining between 4% and 10% of the variance in individual face-viewing behavior. This effect suggests that in healthy adults, autistic traits are one of many factors that contribute to individual differences in face-viewing behavior.


Subject(s)
Attention , Autism Spectrum Disorder/psychology , Face , Adolescent , Adult , Eye Movements , Female , Humans , Male , Middle Aged , Young Adult
9.
Psychon Bull Rev ; 27(1): 70-77, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31845209

ABSTRACT

Visual information from the face of an interlocutor complements auditory information from their voice, enhancing intelligibility. However, there are large individual differences in the ability to comprehend noisy audiovisual speech. Another axis of individual variability is the extent to which humans fixate the mouth or the eyes of a viewed face. We speculated that, across a lifetime of face viewing, individuals who prefer to fixate the mouth of a viewed face might accumulate stronger associations between visual and auditory speech, resulting in improved comprehension of noisy audiovisual speech. To test this idea, we assessed interindividual variability in two tasks. Participants (n = 102) varied greatly in their ability to understand noisy audiovisual sentences (accuracy ranging from 2% to 58%) and in the time they spent fixating the mouth of a talker enunciating clear audiovisual syllables (3% to 98% of total time). These two variables were positively correlated: a 10% increase in time spent fixating the mouth equated to a 5.6% increase in multisensory gain. This finding demonstrates an unexpected link, mediated by histories of visual exposure, between two fundamental human abilities: processing faces and understanding speech.
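
The reported relationship is a simple linear one: multisensory gain increases by about 5.6 percentage points for every 10 percentage points more time spent fixating the mouth, i.e., a slope of roughly 0.56. The sketch below fits that slope to synthetic data for n = 102 participants; the noise level and intercept are arbitrary assumptions.

```python
# Sketch of the reported mouth-fixation vs. multisensory-gain relationship
# on synthetic data; only the slope of ~0.56 is taken from the abstract.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 102
mouth_fixation = rng.uniform(3, 98, n)                    # % of viewing time on the mouth
gain = 5 + 0.56 * mouth_fixation + rng.normal(0, 10, n)   # multisensory gain (% points)

slope, intercept, r, p, se = stats.linregress(mouth_fixation, gain)
print(f"slope = {slope:.2f}; a 10% increase in mouth fixation ~ {10 * slope:.1f}% more gain")
print(f"r = {r:.2f}, p = {p:.2g}")
```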


Subject(s)
Auditory Perception , Noise , Speech Perception , Visual Perception , Adolescent , Adult , Comprehension , Eye , Eye Movement Measurements , Female , Fixation, Ocular , Humans , Individuality , Male , Middle Aged , Mouth , Young Adult
10.
J Vis ; 19(13): 2, 2019 11 01.
Article in English | MEDLINE | ID: mdl-31689715

ABSTRACT

Human faces contain dozens of visual features, but viewers preferentially fixate just two of them: the eyes and the mouth. Face-viewing behavior is usually studied by manually drawing regions of interest (ROIs) on the eyes, mouth, and other facial features. ROI analyses are problematic as they require arbitrary experimenter decisions about the location and number of ROIs, and they discard data because all fixations within each ROI are treated identically and fixations outside of any ROI are ignored. We introduce a data-driven method that uses principal component analysis (PCA) to characterize human face-viewing behavior. All fixations are entered into a PCA, and the resulting eigenimages provide a quantitative measure of variability in face-viewing behavior. In fixation data from 41 participants viewing four face exemplars under three stimulus and task conditions, the first principal component (PC1) separated the eye and mouth regions of the face. PC1 scores varied widely across participants, revealing large individual differences in preference for eye or mouth fixation, and PC1 scores varied by condition, revealing the importance of behavioral task in determining fixation location. Linear mixed effects modeling of the PC1 scores demonstrated that task condition accounted for 41% of the variance, individual differences accounted for 28% of the variance, and stimulus exemplar for less than 1% of the variance. Fixation eigenimages provide a useful tool for investigating the relative importance of the different factors that drive human face-viewing behavior.
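
The eigenimage approach described above amounts to flattening each fixation density map into a vector, running PCA across maps, and reading the leading components back as face-sized images. The sketch below reproduces only that mechanic on synthetic maps biased toward an eye or a mouth region; the grid size, map generation, and number of maps are assumptions for illustration.

```python
# Sketch of fixation eigenimages: PCA over flattened fixation density maps.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
h, w, n_maps = 32, 32, 120          # face-sized grid, one map per participant/condition

def fixation_map(eye_bias):
    """Synthetic fixation density, biased toward the eye (top) or mouth (bottom) region."""
    m = np.zeros((h, w))
    row = int(h * (0.3 if eye_bias else 0.75))
    for _ in range(200):
        r = np.clip(int(rng.normal(row, 3)), 0, h - 1)
        c = np.clip(int(rng.normal(w // 2, 4)), 0, w - 1)
        m[r, c] += 1
    return m.ravel() / m.sum()

maps = np.array([fixation_map(eye_bias=rng.random() < 0.5) for _ in range(n_maps)])

pca = PCA(n_components=5)
scores = pca.fit_transform(maps)                    # per-map PC scores
eigenimage1 = pca.components_[0].reshape(h, w)      # PC1 visualized as a face-sized image
print(f"variance explained by PC1: {pca.explained_variance_ratio_[0]:.2f}")
print(f"PC1 scores range from {scores[:, 0].min():.3f} to {scores[:, 0].max():.3f}")
```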


Subject(s)
Eye Movements/physiology , Facial Recognition/physiology , Fixation, Ocular/physiology , Principal Component Analysis , Adolescent , Adult , Female , Humans , Male , Young Adult
11.
Neuroimage ; 183: 25-36, 2018 12.
Article in English | MEDLINE | ID: mdl-30092347

ABSTRACT

During face-to-face communication, the mouth of the talker is informative about speech content, while the eyes of the talker convey other information, such as gaze location. Viewers most often fixate either the mouth or the eyes of the talker's face, presumably allowing them to sample these different sources of information. To study the neural correlates of this process, healthy humans freely viewed talking faces while brain activity was measured with BOLD fMRI and eye movements were recorded with a video-based eye tracker. Post hoc trial sorting was used to divide the data into trials in which participants fixated the mouth of the talker and trials in which they fixated the eyes. Although the audiovisual stimulus was identical, the two trial types evoked differing responses in subregions of the posterior superior temporal sulcus (pSTS). The anterior pSTS preferred trials in which participants fixated the mouth of the talker, while the posterior pSTS preferred fixations on the eyes of the talker. A second fMRI experiment demonstrated that the anterior, mouth-preferring pSTS regions responded more strongly to auditory and audiovisual speech than the posterior, eye-preferring pSTS regions. These results provide evidence for functional specialization within the pSTS under more realistic viewing and stimulus conditions than in previous neuroimaging studies.


Subject(s)
Brain Mapping/methods , Eye Movements/physiology , Eye , Facial Recognition/physiology , Mouth , Social Perception , Speech Perception/physiology , Temporal Lobe/physiology , Adolescent , Adult , Eye Movement Measurements , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Temporal Lobe/diagnostic imaging , Young Adult
12.
Neuroimage ; 181: 359-369, 2018 11 01.
Article in English | MEDLINE | ID: mdl-30010007

ABSTRACT

Recent neuroimaging studies identified posterior regions in the temporal and parietal lobes as neuro-functional correlates of subitizing and global Gestalt perception. Beyond this notable overlap at the neuronal level, both mechanisms are remarkably similar at the behavioral level: each represents a specific form of visual top-down processing in which single elements are integrated into a superordinate entity. In the present study, we investigated whether subitizing draws on principles of global Gestalt perception that enable rapid top-down processes of visual quantification. We designed two functional neuroimaging experiments: a task identifying voxels responding to global Gestalt stimuli in posterior temporo-parietal brain regions, and a visual quantification task on dot patterns with magnitudes within and outside the subitizing range. We hypothesized that voxels activated during global Gestalt perception should respond more strongly to dot patterns within than outside the subitizing range. The results confirmed this prediction for left-hemispheric posterior temporo-parietal brain areas. Additionally, we trained a classifier on response patterns from global Gestalt perception to predict neural responses during visual quantification. With this approach, voxel patterns in the TPJ Gestalt ROIs of both hemispheres allowed us to classify whether a trial required subitizing. The present study demonstrates that the mechanisms of subitizing seem to build on processes of high-level visual perception.
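
The classifier analysis described above is a cross-task (cross-decoding) design: a model trained on voxel patterns from the Gestalt localizer is tested on patterns from the quantification task. The sketch below shows that logic with synthetic patterns in which a single shared 'global integration' axis carries the signal for both tasks; that shared axis, the signal strength, and all sizes are illustrative assumptions.

```python
# Sketch of cross-task decoding: train on Gestalt-localizer patterns,
# test on quantification-task patterns. All data are synthetic.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(5)
n_voxels, n_train, n_test = 150, 80, 60

# Assumption: a shared "global integration" axis drives both tasks.
shared_axis = rng.normal(0, 1, n_voxels)

def simulate_patterns(labels):
    """Voxel patterns whose projection onto the shared axis depends on the label."""
    noise = rng.normal(0, 1, (len(labels), n_voxels))
    return noise + np.outer(labels, shared_axis) * 0.5

train_labels = rng.integers(0, 2, n_train)   # 1 = global Gestalt, 0 = control stimulus
test_labels = rng.integers(0, 2, n_test)     # 1 = subitizing range, 0 = estimation range
X_train = simulate_patterns(train_labels)
X_test = simulate_patterns(test_labels)

clf = LinearSVC(max_iter=10000).fit(X_train, train_labels)
acc = (clf.predict(X_test) == test_labels).mean()
print(f"cross-task decoding accuracy: {acc:.2f} (chance = 0.50)")
```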


Subject(s)
Brain Mapping/methods , Magnetic Resonance Imaging/methods , Mathematical Concepts , Parietal Lobe/physiology , Pattern Recognition, Visual/physiology , Perceptual Closure/physiology , Support Vector Machine , Temporal Lobe/physiology , Adult , Female , Humans , Male , Young Adult
13.
Behav Brain Funct ; 14(1): 9, 2018 May 10.
Article in English | MEDLINE | ID: mdl-29747668

ABSTRACT

BACKGROUND: Recent research indicates that the processing of proportion magnitude is associated with activation in the intraparietal sulcus. Thus, brain areas associated with the processing of numbers (i.e., absolute magnitude) were also activated during the processing of symbolic fractions as well as non-symbolic proportions. Here, we systematically investigated the cognitive processing of symbolic (e.g., fractions and decimals) and non-symbolic proportions (e.g., dot patterns and pie charts) in a two-stage procedure. First, we investigated relative magnitude-related activations of proportion processing. Second, we evaluated whether symbolic and non-symbolic proportions share common neural substrates. METHODS: We conducted an fMRI study using magnitude comparison tasks with symbolic and non-symbolic proportions, respectively. As an indicator of magnitude-related processing of proportions, the distance effect was evaluated. RESULTS: A conjunction analysis indicated joint activation of specific occipito-parietal areas, including the right intraparietal sulcus (IPS), during proportion magnitude processing. More specifically, the results indicate that the IPS, which is commonly associated with absolute magnitude processing, is also involved in processing relative magnitude information, irrespective of symbolic or non-symbolic presentation format. However, we also found distinct activation patterns for the magnitude processing of the different presentation formats. CONCLUSION: Our findings suggest that processing of the separate presentation formats is associated not only with magnitude manipulations in the IPS, but also with increasing demands on executive functions and strategy use associated with frontal brain regions, as well as with visual attention and encoding in occipital regions. Thus, the magnitude processing of proportions may not exclusively reflect the processing of number magnitude information but rather also more domain-general processes.
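
The distance effect used above as an indicator of magnitude processing is typically quantified by regressing response time (or a neural response) on the numerical distance between the two compared magnitudes, with a reliably negative behavioral slope taken as the signature. The sketch below computes such a regression on synthetic proportion-comparison data; the response-time model and noise level are arbitrary assumptions, not the study's analysis.

```python
# Sketch of quantifying a distance effect in a proportion-comparison task.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
n_trials = 200

pairs = rng.uniform(0.1, 0.9, (n_trials, 2))               # two proportions per trial
distance = np.abs(pairs[:, 0] - pairs[:, 1])                # numerical distance regressor
rt = 900 - 400 * distance + rng.normal(0, 60, n_trials)     # ms; smaller distance -> slower

slope, intercept, r, p, se = stats.linregress(distance, rt)
print(f"distance effect: {slope:.0f} ms per unit distance (p = {p:.3g})")
```

The same trial-wise distance regressor can be entered parametrically into the fMRI model to test for distance-modulated activation.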


Subject(s)
Frontal Lobe/physiology , Magnetic Resonance Imaging/methods , Mathematical Concepts , Occipital Lobe/physiology , Pattern Recognition, Visual/physiology , Psychomotor Performance/physiology , Adult , Female , Humans , Male , Photic Stimulation/methods , Random Allocation , Young Adult
14.
Front Hum Neurosci ; 12: 54, 2018.
Article in English | MEDLINE | ID: mdl-29515382

ABSTRACT

Performance in visual quantification tasks shows two characteristic patterns as a function of set size: a precise subitizing process for small sets (up to four elements) contrasts with an approximate estimation process for larger sets. The spatial arrangement of elements in a set also influences visual quantification performance, with frequently perceived arrangements (e.g., dice patterns) being enumerated faster than random arrangements. Neuropsychological and imaging studies identified the intraparietal sulcus (IPS) as a key brain area for quantification, both within and above the subitizing range. However, it is not yet clear if and how set size and the spatial arrangement of elements in a set modulate IPS activity during quantification. In an fMRI study, participants enumerated briefly presented dot patterns with random, canonical or dice arrangements within and above the subitizing range. We evaluated how the amplitude and pattern of IPS activity were influenced by the size and spatial arrangement of a set. We found a discontinuity in the amplitude of the IPS response between the subitizing and estimation ranges, with a steep increase in activity for sets exceeding four elements. In the estimation range, random dot arrangements elicited a stronger IPS response than canonical arrangements, which in turn elicited a stronger response than dice arrangements. Furthermore, IPS activity patterns differed systematically between arrangements. We thus found a signature in the IPS response for a transition between subitizing and estimation processes during quantification. Differences in the amplitude and pattern of IPS activity for different spatial arrangements indicate a more precise representation of non-symbolic numerical magnitude for dice and canonical than for random arrangements. These findings challenge the idea of an abstract coding of numerosity in the IPS, even within a single notation.

15.
J Cogn Neurosci ; 30(2): 131-143, 2018 02.
Article in English | MEDLINE | ID: mdl-28949822

ABSTRACT

We examined a stroke patient (HWS) with a unilateral lesion of the right medial ventral visual stream, involving the right fusiform and parahippocampal gyri. In a number of object recognition tests with lateralized presentations of target stimuli, HWS showed significant symptoms of hemiagnosia, with contralesional recognition deficits for everyday objects. We further explored the patient's capacities for visual expertise that had been acquired before the onset of the current perceptual impairment. We confronted him with objects for which he had already been an expert before stroke onset and compared this performance with the recognition of familiar everyday objects. HWS was able to identify significantly more of the specific ("expert") objects than of the everyday objects on the affected contralesional side. This observation of better expert object recognition in visual hemiagnosia allows for several interpretations. The results may be caused by enhanced information processing for expert objects in the ventral system of the affected or the intact hemisphere. Expert knowledge could trigger top-down mechanisms supporting object recognition despite impaired basic functions of object processing. More importantly, the current work demonstrates that top-down mechanisms of visual expertise influence object recognition at an early stage, probably before visual object information propagates to modules of higher object recognition. Because HWS showed a lesion of the fusiform gyrus together with spared capacities for expert object recognition, the current study emphasizes possible contributions of areas outside the ventral stream to visual expertise.


Subject(s)
Agnosia/psychology , Pattern Recognition, Visual , Recognition, Psychology , Agnosia/diagnostic imaging , Agnosia/etiology , Agnosia/physiopathology , Functional Laterality , Humans , Male , Middle Aged , Pattern Recognition, Visual/physiology , Recognition, Psychology/physiology , Stroke/complications , Stroke/diagnostic imaging , Stroke/physiopathology , Stroke/psychology , Temporal Lobe/diagnostic imaging , Temporal Lobe/physiopathology
16.
Cortex ; 98: 149-162, 2018 01.
Article in English | MEDLINE | ID: mdl-28709682

ABSTRACT

Electrophysiological studies in monkeys and human neuroimaging studies have reported a lateralization of signal processing in object perception. However, it is unclear whether these results point to unique, topographically organized signal processing in either hemisphere, or whether they reflect a rather negligible spatial organization of otherwise redundant object perception systems in both hemispheres. We tested a group of 10 patients with lesions of ventral object processing regions and spared primary visual functions, using lateralized presentations of different categories of object stimuli. Object perception in the contralesional visual field was impaired, while object perception in the ipsilesional hemifield was intact. These results demonstrate that the object perception system needs two intact ventral pathways for unimpaired object perception across the whole visual field; the loss of one system cannot be fully compensated by its contralateral homolog or by spared parts of the lesioned ventral stream.


Subject(s)
Agnosia/physiopathology , Functional Laterality/physiology , Visual Fields/physiology , Visual Perception/physiology , Aged , Female , Humans , Male , Middle Aged , Photic Stimulation , Visual Pathways/physiology
17.
Neuropsychologia ; 99: 279-285, 2017 05.
Article in English | MEDLINE | ID: mdl-28343958

ABSTRACT

Simultanagnosia is a neuropsychological deficit of higher visual processes caused by temporo-parietal brain damage. It is characterized by a specific failure to recognize a global visual Gestalt, such as a visual scene or a complex object, consisting of local elements. In this study we investigated to what extent this deficit should be understood as a deficit specific to the visual domain or whether it should be seen as defective Gestalt processing per se. To examine whether simultanagnosia occurs across sensory domains, we designed several auditory experiments sharing typical characteristics of the visual tasks that are known to be particularly demanding for patients suffering from simultanagnosia. We also included control tasks for auditory working memory deficits and for auditory extinction. We tested four simultanagnosia patients who suffered from severe symptoms in the visual domain. Two of them indeed showed significant impairments in the recognition of simultaneously presented sounds. However, the same two patients also suffered from severe auditory working memory deficits and from symptoms comparable to auditory extinction, both of which sufficiently explain the impairments in simultaneous auditory perception. We thus conclude that deficits in auditory Gestalt perception do not appear to be characteristic of simultanagnosia and that the human brain evidently uses independent mechanisms for visual and auditory Gestalt perception.


Subject(s)
Agnosia/psychology , Auditory Perception , Pattern Recognition, Physiological , Visual Perception , Agnosia/diagnostic imaging , Atrophy , Female , Hearing Tests , Humans , Male , Memory, Short-Term , Middle Aged , Neuropsychological Tests , Parietal Lobe/diagnostic imaging , Temporal Lobe/diagnostic imaging
18.
Brain Struct Funct ; 222(5): 2059-2070, 2017 Jul.
Article in English | MEDLINE | ID: mdl-27807627

ABSTRACT

Modern voxel-based lesion-symptom mapping (VLSM) analysis techniques provide powerful tools to examine the relationship between structure and function of the healthy human brain. However, there is still uncertainty about the appropriate type and time point of imaging and of behavioral testing for such analyses. Here we tested the validity of the three most common combinations of structural imaging data and behavioral scores used in VLSM analyses. Given the established knowledge about the neural substrate of the primary motor system in humans, we asked the mundane question of where the motor system is represented in the normal human brain, analyzing individual arm motor function in 60 unselected stroke patients. Only the combination of acute behavioral scores and acute structural imaging precisely identified the principal brain area for the emergence of hemiparesis after stroke, i.e., the corticospinal tract (CST). In contrast, VLSM analyses based on chronic behavior, in combination with either chronic or acute imaging, required the exclusion of patients who had recovered from an initial paresis in order to reveal valid anatomical results. Thus, if the primary research aim of a VLSM lesion analysis is to uncover the neural substrates of a certain function in the healthy human brain, and if no longitudinal designs with repeated evaluations are planned, the combination of acute imaging and acute behavior represents the ideal dataset.
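
At its core, VLSM tests, voxel by voxel, whether patients with a lesion at that voxel differ behaviorally from patients without one. The sketch below shows a minimal version of this mass-univariate logic on synthetic lesion masks and motor scores; real analyses additionally control for lesion volume and correct for multiple comparisons (e.g., by permutation), and the study's acute-versus-chronic comparison is not modeled here.

```python
# Minimal sketch of voxel-based lesion-symptom mapping on synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_patients, n_voxels = 60, 500

lesions = rng.random((n_patients, n_voxels)) < 0.2        # binary lesion masks (True = damaged)
# Assumption: damage to a small "tract" of voxels lowers the behavioral score.
tract = slice(100, 120)
motor_score = 80 - 30 * lesions[:, tract].mean(axis=1) + rng.normal(0, 5, n_patients)

t_map = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    damaged, spared = motor_score[lesions[:, v]], motor_score[~lesions[:, v]]
    if damaged.size >= 5 and spared.size >= 5:            # only test sufficiently covered voxels
        t_map[v] = stats.ttest_ind(spared, damaged, equal_var=False).statistic

print("peak voxel:", np.nanargmax(t_map), "t =", round(float(np.nanmax(t_map)), 2))
```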


Subject(s)
Brain Mapping , Brain/physiology , Adult , Aged , Female , Humans , Magnetic Resonance Imaging/methods , Male , Middle Aged , Paresis/physiopathology , Pyramidal Tracts/physiology , Severity of Illness Index , Stroke/physiopathology
19.
Neuropsychologia ; 89: 66-73, 2016 08.
Article in English | MEDLINE | ID: mdl-27267104

ABSTRACT

Simultanagnosia caused by posterior temporo-parietal brain damage is characterized by an inability to recognize a global Gestalt from an arrangement of single objects, while perception of the single objects themselves appears largely intact. We asked whether recognition of single objects in simultanagnosia is still intact when objects are very large, i.e., when they exceed the size of a typical computer screen. Single objects were presented in three different sizes: 'regular', 'medium', and 'large'. Simultanagnosia patients demonstrated a decrease in recognition performance with increasing object size; recognition of 'large' objects was significantly impaired, while perception of 'regular'-sized objects was unaffected. These results argue against the traditional view of preserved recognition of single objects in simultanagnosia. They provide evidence for a more general perceptual impairment that emerges irrespective of whether single or multiple objects are presented, whenever the visual system has to assemble information over larger spatial distances or under other demanding viewing conditions. It appears that perception of large single objects requires intact abilities of dorsal Gestalt processing, in addition to regular functions of ventral object recognition.


Subject(s)
Agnosia/physiopathology , Pattern Recognition, Visual/physiology , Recognition, Psychology/physiology , Size Perception/physiology , Aged , Agnosia/diagnostic imaging , Cerebral Cortex/diagnostic imaging , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Neuroimaging , Photic Stimulation , Principal Component Analysis , Psychophysics , Statistics, Nonparametric
20.
Hum Brain Mapp ; 37(9): 3061-79, 2016 09.
Article in English | MEDLINE | ID: mdl-27130734

ABSTRACT

In recent theoretical considerations as well as in neuroimaging findings, the left angular gyrus (AG) has been associated with the retrieval of arithmetic facts. This interpretation was corroborated by higher AG activity when processing trained as compared with untrained multiplication problems. However, so far, neural correlates of processing trained versus untrained problems have only been compared after training. We employed an established learning paradigm (i.e., extensive training of multiplication problems) but measured brain activation both before and after training to evaluate the neural correlates of arithmetic fact acquisition more specifically. When comparing activation patterns for trained and untrained problems in the post-training session, higher AG activation for trained problems was replicated. However, when activation for trained problems was compared with activation for the same problems in the pre-training session, no signal change in the AG was observed. Instead, our results point toward a central role of hippocampal, parahippocampal, and retrosplenial structures in arithmetic fact retrieval. We suggest that the AG might not be associated with the actual retrieval of arithmetic facts, and we outline an attentional account of the role of the AG in arithmetic fact retrieval that is compatible with recent attention-to-memory hypotheses.


Subject(s)
Brain/physiology , Learning/physiology , Mathematical Concepts , Parietal Lobe/physiology , Brain Mapping , Female , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Male , Young Adult