Results 1 - 20 of 20
1.
Neuroimage ; 260: 119438, 2022 10 15.
Article in English | MEDLINE | ID: mdl-35792291

ABSTRACT

Since the second half of the twentieth century, intracranial electroencephalography (iEEG), including both electrocorticography (ECoG) and stereo-electroencephalography (sEEG), has provided an intimate view into the human brain. At the interface between fundamental research and the clinic, iEEG provides both high temporal resolution and high spatial specificity but comes with constraints, such as the sparse electrode sampling tailored to each individual. Over the years, researchers in neuroscience have developed practices to make the most of the iEEG approach. Here we offer a critical review of iEEG research practices in a didactic framework for newcomers, as well as addressing issues encountered by proficient researchers. The scope is threefold: (i) review common practices in iEEG research, (ii) suggest potential guidelines for working with iEEG data and answer frequently asked questions based on the most widespread practices, and (iii) based on current neurophysiological knowledge and methodologies, pave the way to good practice standards in iEEG research. The organization of this paper follows the steps of iEEG data processing. The first section contextualizes iEEG data collection. The second section focuses on localization of intracranial electrodes. The third section highlights the main pre-processing steps. The fourth section presents iEEG signal analysis methods. The fifth section discusses statistical approaches. The sixth section draws some unique perspectives on iEEG research. Finally, to ensure a consistent nomenclature throughout the manuscript and to align with other guidelines, e.g., Brain Imaging Data Structure (BIDS) and the OHBM Committee on Best Practices in Data Analysis and Sharing (COBIDAS), we provide a glossary to disambiguate terms related to iEEG research.


Subject(s)
Electrocorticography , Electroencephalography , Brain/physiology , Brain Mapping/methods , Electrocorticography/methods , Electrodes , Electroencephalography/methods , Humans
2.
J Neurosci ; 40(44): 8530-8542, 2020 10 28.
Article in English | MEDLINE | ID: mdl-33023923

ABSTRACT

Natural conversation is multisensory: when we can see the speaker's face, visual speech cues improve our comprehension. The neuronal mechanisms underlying this phenomenon remain unclear. The two main alternatives are visually mediated phase modulation of neuronal oscillations (excitability fluctuations) in auditory neurons and visual input-evoked responses in auditory neurons. Investigating this question using naturalistic audiovisual speech with intracranial recordings in humans of both sexes, we find evidence for both mechanisms. Remarkably, auditory cortical neurons track the temporal dynamics of purely visual speech using the phase of their slow oscillations and phase-related modulations in broadband high-frequency activity. Consistent with known perceptual enhancement effects, the visual phase reset amplifies the cortical representation of concomitant auditory speech. In contrast to this, and in line with earlier reports, visual input reduces the amplitude of evoked responses to concomitant auditory input. We interpret the combination of improved phase tracking and reduced response amplitude as evidence for more efficient and reliable stimulus processing in the presence of congruent auditory and visual speech inputs.

SIGNIFICANCE STATEMENT: Watching the speaker can facilitate our understanding of what is being said. The mechanisms responsible for this influence of visual cues on the processing of speech remain incompletely understood. We studied these mechanisms by recording the electrical activity of the human brain through electrodes implanted surgically inside the brain. We found that visual inputs can operate by directly activating auditory cortical areas, and also indirectly by modulating the strength of cortical responses to auditory input. Our results help to understand the mechanisms by which the brain merges auditory and visual speech into a unitary perception.


Subject(s)
Auditory Cortex/physiology , Evoked Potentials/physiology , Nonverbal Communication/physiology , Adult , Drug Resistant Epilepsy/surgery , Electrocorticography , Evoked Potentials, Auditory/physiology , Evoked Potentials, Visual/physiology , Female , Humans , Middle Aged , Neurons/physiology , Nonverbal Communication/psychology , Photic Stimulation , Young Adult
3.
Neuroimage ; 222: 116970, 2020 11 15.
Article in English | MEDLINE | ID: mdl-32454204

ABSTRACT

Facing perceptual uncertainty, the brain combines information from different senses to make optimal perceptual decisions and to guide behavior. However, decision making has been investigated mostly in unimodal contexts. Thus, how the brain integrates multisensory information during decision making is still unclear. Two opposing, but not mutually exclusive, scenarios are plausible: either the brain thoroughly combines the signals from different modalities before starting to build a supramodal decision, or unimodal signals are integrated during decision formation. To answer this question, we devised a paradigm mimicking naturalistic situations where human participants were exposed to continuous cacophonous audiovisual inputs containing an unpredictable signal cue in one or two modalities and had to perform a signal detection task or a cue categorization task. First, model-based analyses of behavioral data indicated that multisensory integration takes place alongside perceptual decision making. Next, using supervised machine learning on concurrently recorded EEG, we identified neural signatures of two processing stages: sensory encoding and decision formation. Generalization analyses across experimental conditions and time revealed that multisensory cues were processed faster during both stages. We further established that acceleration of neural dynamics during sensory encoding and decision formation was directly linked to multisensory integration. Our results were consistent across both signal detection and categorization tasks. Taken together, the results revealed a continuous dynamic interplay between multisensory integration and decision making processes (mixed scenario), with integration of multimodal information taking place both during sensory encoding as well as decision formation.
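The time-resolved decoding used in this study can be sketched with scikit-learn: fit one classifier per training time point, then score it at every test time point (temporal generalization). Everything below is synthetic and illustrative; the data shapes, the injected signal at time point 30, and the choice of logistic regression are assumptions, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic epochs: (trials, channels, time points); class 1 carries a
# signal on channel 0 from time point 30 onward (illustrative only).
n_trials, n_channels, n_times = 200, 8, 60
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)
X[y == 1, 0, 30:] += 2.0

train = np.arange(n_trials) % 2 == 0   # even trials for training
test = ~train

# Temporal generalization: one classifier per training time point,
# evaluated at every testing time point.
scores = np.zeros((n_times, n_times))
for t_train in range(n_times):
    clf = LogisticRegression(max_iter=1000).fit(X[train, :, t_train], y[train])
    for t_test in range(n_times):
        scores[t_train, t_test] = clf.score(X[test, :, t_test], y[test])
```

Off-diagonal cells of `scores` reveal whether a code learned at one latency generalizes to another, which is how processing stages such as sensory encoding and decision formation can be dissociated in time.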


Subject(s)
Cerebral Cortex/physiology , Concept Formation/physiology , Decision Making/physiology , Electroencephalography , Functional Neuroimaging , Models, Theoretical , Signal Detection, Psychological/physiology , Supervised Machine Learning , Adult , Auditory Perception/physiology , Electroencephalography/methods , Female , Functional Neuroimaging/methods , Humans , Male , Pattern Recognition, Visual/physiology , Psychomotor Performance/physiology , Young Adult
4.
J Cogn Neurosci ; 32(8): 1562-1576, 2020 08.
Article in English | MEDLINE | ID: mdl-32319865

ABSTRACT

Anticipation of an impending stimulus shapes the state of the sensory systems, optimizing neural and behavioral responses. Here, we studied the role of brain oscillations in mediating spatial and temporal anticipations. Because spatial attention and temporal expectation are often associated with visual and auditory processing, respectively, we directly contrasted the visual and auditory modalities and asked whether these anticipatory mechanisms are similar in both domains. We recorded the magnetoencephalogram in healthy human participants performing an auditory and visual target discrimination task, in which cross-modal cues provided both temporal and spatial information with regard to upcoming stimulus presentation. Motivated by prior findings, we were specifically interested in delta (1-3 Hz) and alpha (8-13 Hz) band oscillatory state in anticipation of target presentation and their impact on task performance. Our findings support the view that spatial attention has a stronger effect in the visual domain, whereas temporal expectation effects are more prominent in the auditory domain. For the spatial attention manipulation, we found a typical pattern of alpha lateralization in the visual system, which correlated with response speed. Providing a rhythmic temporal cue led to increased postcue synchronization of low-frequency rhythms, although this effect was more broadband in nature, suggesting a general phase reset rather than frequency-specific neural entrainment. In addition, we observed delta-band synchronization with a frontal topography, which correlated with performance, especially in the auditory task. Combined, these findings suggest that spatial and temporal anticipations operate via a top-down modulation of the power and phase of low-frequency oscillations, respectively.
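The alpha lateralization reported above is commonly quantified as a normalized contrast of band power between the hemispheres contralateral and ipsilateral to the attended location. A minimal sketch; the normalization convention and the toy power values are assumptions, not taken from this study.

```python
import numpy as np

def alpha_lateralization_index(power_contra, power_ipsi):
    """Normalized difference of alpha-band power between the hemisphere
    contralateral and ipsilateral to the attended location. Negative values
    reflect the classic contralateral alpha suppression."""
    contra = np.asarray(power_contra, dtype=float)
    ipsi = np.asarray(power_ipsi, dtype=float)
    return (contra - ipsi) / (contra + ipsi)

# Illustrative values: spatial attention suppresses contralateral alpha.
ali = alpha_lateralization_index(power_contra=[0.8, 0.7], power_ipsi=[1.2, 1.1])
```

Because the index is bounded between -1 and 1, it can be compared across participants or correlated with behavior (e.g., response speed) without being dominated by absolute power differences.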


Subject(s)
Alpha Rhythm , Motivation , Acoustic Stimulation , Attention , Auditory Perception , Humans , Photic Stimulation
5.
Brain Connect ; 7(10): 648-660, 2017 12.
Article in English | MEDLINE | ID: mdl-28978234

ABSTRACT

Brain stimulation is increasingly viewed as an effective approach to treat neuropsychiatric disease. The brain's organization in distributed networks suggests that the activity of a remote brain structure could be modulated by stimulating cortical areas that strongly connect to the target. Most connections between cerebral areas are asymmetric, and a better understanding of the relative direction of information flow along connections could improve the targeting of stimulation to influence deep brain structures. The hippocampus and amygdala, two deep-situated structures that are crucial to memory and emotions, respectively, have been implicated in multiple neurological and psychiatric disorders. We explored the directed connectivity between the hippocampus and amygdala and the cerebral cortex in patients implanted with intracranial electrodes using corticocortical evoked potentials (CCEPs) evoked by single-pulse electrical stimulation. The hippocampus and amygdala were connected with most of the cortical mantle, either directly or indirectly, with the inferior temporal cortex being most directly connected. Because CCEPs assess the directionality of connections, we could determine that incoming connections from cortex to hippocampus were more direct than outgoing connections from hippocampus to cortex. We found a similar, although smaller, tendency for connections between the amygdala and cortex. Our results support the roles of the hippocampus and amygdala to be integrators of widespread cortical influence. These results can inform the targeting of noninvasive neurostimulation to influence hippocampus and amygdala function.
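CCEP estimation is conceptually simple: epoch the recording around each stimulation pulse, average the trials, and quantify the early evoked deflection. A toy sketch on synthetic trials; the 10-50 ms "N1" window and all amplitudes here are illustrative assumptions, not this study's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Time axis in ms around the pulse, at an assumed 1 kHz sampling rate.
t = np.arange(-100.0, 300.0)

# Synthetic single-pulse trials: an N1-like dip ~20 ms after the pulse,
# buried in noise on each of 50 trials (amplitudes in µV, invented).
evoked = -40.0 * np.exp(-0.5 * ((t - 20.0) / 8.0) ** 2)
trials = evoked + 15.0 * rng.standard_normal((50, t.size))

# The CCEP is the stimulation-locked average; averaging suppresses
# background activity by roughly 1/sqrt(n_trials).
ccep = trials.mean(axis=0)

# Quantify the early response as the peak deflection in a 10-50 ms window.
win = (t >= 10) & (t <= 50)
n1_amplitude = ccep[win].min()
n1_latency_ms = t[win][np.argmin(ccep[win])]
```

Comparing such early-response amplitudes and latencies between stimulation directions (cortex-to-hippocampus versus hippocampus-to-cortex) is what allows the directness of connections to be assessed.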


Subject(s)
Amygdala/physiopathology , Brain Mapping , Epilepsy/pathology , Evoked Potentials/physiology , Hippocampus/physiopathology , Neural Pathways/physiopathology , Adolescent , Adult , Cerebral Cortex , Electric Stimulation , Electrodes, Implanted , Electroencephalography , Female , Functional Laterality , Humans , Male , Middle Aged , Nerve Net/physiopathology , Young Adult
6.
J Neurosci Methods ; 281: 40-48, 2017 Apr 01.
Article in English | MEDLINE | ID: mdl-28192130

ABSTRACT

BACKGROUND: Intracranial electrical recordings (iEEG) and brain stimulation (iEBS) are invaluable human neuroscience methodologies. However, the value of such data is often unrealized as many laboratories lack tools for localizing electrodes relative to anatomy. To remedy this, we have developed a MATLAB toolbox for intracranial electrode localization and visualization, iELVis. NEW METHOD: iELVis uses existing tools (BioImage Suite, FSL, and FreeSurfer) for preimplant magnetic resonance imaging (MRI) segmentation, neuroimaging coregistration, and manual identification of electrodes in postimplant neuroimaging. Subsequently, iELVis implements methods for correcting electrode locations for postimplant brain shift with millimeter-scale accuracy and provides interactive visualization on 3D surfaces or in 2D slices with optional functional neuroimaging overlays. iELVis also localizes electrodes relative to FreeSurfer-based atlases and can combine data across subjects via the FreeSurfer average brain. RESULTS: It takes 30-60 min of user time and 12-24 h of computer time to localize and visualize electrodes from one brain. We demonstrate iELVis's functionality by showing that three methods for mapping primary hand somatosensory cortex (iEEG, iEBS, and functional MRI) provide highly concordant results. COMPARISON WITH EXISTING METHODS: iELVis is the first public software for electrode localization that corrects for brain shift, maps electrodes to an average brain, and supports neuroimaging overlays. Moreover, its interactive visualizations are powerful and its tutorial material is extensive. CONCLUSIONS: iELVis promises to speed the progress and enhance the robustness of intracranial electrode research. The software and extensive tutorial materials are freely available as part of the EpiSurg software project: https://github.com/episurg/episurg.
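iELVis's actual brain-shift correction is more constrained than this, but the core geometric step, pulling a displaced electrode coordinate back to the reconstructed cortical surface, can be sketched as a nearest-vertex projection. The toy "surface" and coordinates below are invented for illustration; a real pipeline works on FreeSurfer pial meshes and additionally preserves inter-electrode geometry.

```python
import numpy as np

def snap_to_surface(electrodes, vertices):
    """Project each electrode coordinate to its nearest surface vertex.
    A crude stand-in for proper brain-shift correction, which additionally
    constrains the spacing between electrodes on a grid or strip."""
    electrodes = np.atleast_2d(electrodes)
    # Pairwise Euclidean distances: (n_electrodes, n_vertices)
    d = np.linalg.norm(electrodes[:, None, :] - vertices[None, :, :], axis=2)
    return vertices[d.argmin(axis=1)]

# Toy "pial surface": a 5 mm grid of vertices on the z = 0 plane.
xs, ys = np.meshgrid(np.linspace(-50, 50, 21), np.linspace(-50, 50, 21))
vertices = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])

# Electrode coordinates displaced a few mm off the surface, as after
# postimplant brain shift (values invented).
displaced = np.array([[10.2, -4.9, 3.0], [-24.8, 30.1, -2.5]])
snapped = snap_to_surface(displaced, vertices)
```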


Subject(s)
Algorithms , Brain/diagnostic imaging , Brain/physiology , Electrocorticography/instrumentation , Electrodes, Implanted , Magnetic Resonance Imaging/methods , Atlases as Topic , Brain/surgery , Electrocorticography/methods , Humans , Imaging, Three-Dimensional , Motion , Neuroimaging/methods , Pattern Recognition, Automated/methods , Postoperative Period , Preoperative Period , Software
7.
Brain Struct Funct ; 222(2): 1093-1107, 2017 03.
Article in English | MEDLINE | ID: mdl-27318997

ABSTRACT

The main model of visual processing in primates proposes an anatomo-functional distinction between the dorsal stream, specialized in spatio-temporal information, and the ventral stream, processing essentially form information. However, these two pathways also communicate to share much visual information. These dorso-ventral interactions have been studied using form-from-motion (FfM) stimuli, revealing that FfM perception first activates dorsal regions (e.g., MT+/V5), followed by successive activations of ventral regions (e.g., LOC). However, relatively little is known about the implications of focal brain damage of visual areas on these dorso-ventral interactions. In the present case report, we investigated the dynamics of dorsal and ventral activations related to FfM perception (using topographical ERP analysis and electrical source imaging) in a patient suffering from a deficit in FfM perception due to right extrastriate brain damage in the ventral stream. Despite the patient's FfM impairment, both successful (observed for the highest level of FfM signal) and absent/failed FfM perception evoked the same temporal sequence of three processing states observed previously in healthy subjects. During the first period, brain source localization revealed cortical activations along the dorsal stream, consistent with preserved elementary motion processing. During the latter two periods, the patterns of activity differed from normal subjects: activations were observed in the ventral stream (as reported for normal subjects), but also in the dorsal pathway, with the strongest and most sustained activity localized in the parieto-occipital regions. On the other hand, absent/failed FfM perception was characterized by weaker brain activity, restricted to the more lateral regions. This study shows that in the present case report, successful FfM perception, while following the same temporal sequence of processing steps as in normal subjects, evoked different patterns of brain activity. By revealing a brain circuit involving the most rostral part of the dorsal pathway, this study provides further support for neuro-imaging studies and brain lesion investigations that have suggested the existence of different brain circuits associated with different profiles of interaction between the dorsal and the ventral streams.


Subject(s)
Form Perception/physiology , Motion Perception/physiology , Perceptual Disorders/physiopathology , Visual Cortex/physiopathology , Aged , Electroencephalography , Evoked Potentials, Visual , Female , Functional Laterality , Humans , Image Processing, Computer-Assisted , Perceptual Disorders/pathology , Photic Stimulation , Visual Cortex/pathology , Visual Pathways/pathology , Visual Pathways/physiopathology
8.
Neuroimage ; 147: 219-232, 2017 02 15.
Article in English | MEDLINE | ID: mdl-27554533

ABSTRACT

While there is a strong interest in meso-scale field potential recording using intracranial electroencephalography with penetrating depth electrodes (i.e. stereotactic EEG or S-EEG) in humans, the signal recorded in the white matter remains ignored. White matter is generally considered electrically neutral and often included in the reference montage. Moreover, re-referencing electrophysiological data is a critical preprocessing choice that could drastically impact signal content and consequently the results of any given analysis. In the present stereotactic electroencephalography study, we first illustrate empirically the consequences of commonly used references (subdermal, white matter, global average, local montage) on inter-electrode signal correlation. Since most of these reference montages incorporate white matter signal, we next consider the difference between signals recorded in cortical gray matter and white matter. Our results reveal that electrode contacts located in the white matter record a mixture of activity, with part arising from the volume conduction (zero time delay) of activity from nearby gray matter. Furthermore, our analysis shows that white matter signal may be correlated with distant gray matter signal. While residual passive electrical spread from nearby matter may account for this relationship, our results suggest the possibility that this long distance correlation arises from the white matter fiber tracts themselves (i.e. activity from distant gray matter traveling along axonal fibers with time lag larger than zero); yet definitive conclusions about the origin of the white matter signal would require further experimental substantiation. By characterizing the properties of signals recorded in white matter and in gray matter, this study illustrates the importance of including anatomical prior knowledge when analyzing S-EEG data.
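The referencing schemes whose consequences this study examines amount to simple montage arithmetic. Below is a generic sketch of two of them, the global average reference and a bipolar version of a local montage, applied to a synthetic (contacts x samples) array; this is not the paper's analysis code.

```python
import numpy as np

def common_average_reference(data):
    """Subtract the mean across contacts at each sample (global average).
    Any signal common to all contacts, including white-matter activity
    mixed into the average, is redistributed across channels."""
    return data - data.mean(axis=0, keepdims=True)

def bipolar_reference(data):
    """Difference of neighboring contacts along a depth electrode (a local
    montage); yields n-1 bipolar channels that emphasize local activity."""
    return np.diff(data, axis=0)

rng = np.random.default_rng(2)
data = rng.standard_normal((6, 1000))   # 6 contacts, 1000 samples

car = common_average_reference(data)
bip = bipolar_reference(data)
```

The study's caution applies directly here: if some of the six contacts sit in white matter, their signal enters `data.mean(axis=0)` and hence every re-referenced channel, which is why anatomical localization should inform the montage choice.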


Subject(s)
Electroencephalography/methods , Gray Matter/physiology , White Matter/physiology , Adult , Electrodes, Implanted , Epilepsy/diagnosis , Epilepsy/physiopathology , Epilepsy/surgery , Female , Humans , Male , Stereotaxic Techniques , Young Adult
9.
J Neurosci ; 35(22): 8546-57, 2015 Jun 03.
Article in English | MEDLINE | ID: mdl-26041921

ABSTRACT

Even simple tasks rely on information exchange between functionally distinct and often relatively distant neuronal ensembles. Considerable work indicates oscillatory synchronization through phase alignment is a major agent of inter-regional communication. In the brain, different oscillatory phases correspond to low- and high-excitability states. Optimally aligned phases (or high-excitability states) promote inter-regional communication. Studies have also shown that sensory stimulation can modulate or reset the phase of ongoing cortical oscillations. For example, auditory stimuli can reset the phase of oscillations in visual cortex, influencing processing of a simultaneous visual stimulus. Such cross-regional phase reset represents a candidate mechanism for aligning oscillatory phase for inter-regional communication. Here, we explored the role of local and inter-regional phase alignment in driving a well established behavioral correlate of multisensory integration: the redundant target effect (RTE), which refers to the fact that responses to multisensory inputs are substantially faster than to unisensory stimuli. In a speeded detection task, human epileptic patients (N = 3) responded to unisensory (auditory or visual) and multisensory (audiovisual) stimuli with a button press, while electrocorticography was recorded over auditory and motor regions. Visual stimulation significantly modulated auditory activity via phase reset in the delta and theta bands. During the period between stimulation and subsequent motor response, transient synchronization between auditory and motor regions was observed. Phase synchrony to multisensory inputs was faster than to unisensory stimulation. This sensorimotor phase alignment correlated with behavior such that stronger synchrony was associated with faster responses, linking the commonly observed RTE with phase alignment across a sensorimotor network.
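Inter-regional phase synchrony of the kind reported here is often quantified with a phase-locking value (PLV) computed on band-limited phases from the Hilbert transform. A self-contained sketch on synthetic signals; the 6 Hz rhythm, the fixed lag, and the noise level are invented, and the paper's actual estimator may differ.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_phase(signal, fs, low, high):
    """Instantaneous phase in a band: band-pass filter + Hilbert transform."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return np.angle(hilbert(filtfilt(b, a, signal)))

def plv(phase_a, phase_b):
    """Phase-locking value between two sites: 1 = perfectly constant
    phase lag across time, 0 = no consistent phase relation."""
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

fs = 500
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(3)

# Two sites sharing a 6 Hz (theta) rhythm at a fixed lag, plus noise.
site_a = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)
site_b = np.sin(2 * np.pi * 6 * t - 1.0) + 0.5 * rng.standard_normal(t.size)

locking = plv(band_phase(site_a, fs, 4, 8), band_phase(site_b, fs, 4, 8))
```

In practice the PLV is computed across trials within short windows rather than across a whole recording, so that transient, stimulus-locked synchronization (as between auditory and motor regions here) can be resolved in time.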


Subject(s)
Auditory Perception/physiology , Brain Mapping , Cerebral Cortex/physiopathology , Epilepsy/pathology , Evoked Potentials/physiology , Visual Perception/physiology , Acoustic Stimulation , Adolescent , Adult , Electroencephalography , Female , Humans , Male , Middle Aged , Photic Stimulation , Reaction Time/physiology
10.
Eur J Neurosci ; 41(7): 925-39, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25688539

ABSTRACT

When sensory inputs are presented serially, response amplitudes to stimulus repetitions generally decrease as a function of presentation rate, diminishing rapidly as inter-stimulus intervals (ISIs) fall below 1 s. This 'adaptation' is believed to represent mechanisms by which sensory systems reduce responsivity to consistent environmental inputs, freeing resources to respond to potentially more relevant inputs. While auditory adaptation functions have been relatively well characterized, considerably less is known about visual adaptation in humans. Here, high-density visual-evoked potentials (VEPs) were recorded while two paradigms were used to interrogate visual adaptation. The first presented stimulus pairs with varying ISIs, comparing VEP amplitude to the second stimulus with that of the first (paired-presentation). The second involved blocks of stimulation (N = 100) at various ISIs and comparison of VEP amplitude between blocks of differing ISIs (block-presentation). Robust VEP modulations were evident as a function of presentation rate in the block-paradigm, with strongest modulations in the 130-150 ms and 160-180 ms visual processing phases. In paired-presentations, with ISIs of just 200-300 ms, an enhancement of VEP was evident when comparing S2 with S1, with no significant effect of presentation rate. Importantly, in block-presentations, adaptation effects were statistically robust at the individual participant level. These data suggest that a more taxing block-presentation paradigm is better suited to engage visual adaptation mechanisms than a paired-presentation design. The increased sensitivity of the visual processing metric obtained in the block-paradigm has implications for the examination of visual processing deficits in clinical populations.


Subject(s)
Adaptation, Physiological/physiology , Brain/physiology , Evoked Potentials, Visual/physiology , Visual Pathways/physiology , Visual Perception/physiology , Adult , Brain Mapping , Female , Humans , Male , Photic Stimulation , Time Factors
11.
Brain Struct Funct ; 219(4): 1369-83, 2014 Jul.
Article in English | MEDLINE | ID: mdl-23708059

ABSTRACT

The auditory system is organized such that progressively more complex features are represented across successive cortical hierarchical stages. Just when and where the processing of phonemes, fundamental elements of the speech signal, is achieved in this hierarchy remains a matter of vigorous debate. Non-invasive measures of phonemic representation have been somewhat equivocal. While some studies point to a primary role for middle/anterior regions of the superior temporal gyrus (STG), others implicate the posterior STG. Differences in stimulation, task and inter-individual anatomical/functional variability may account for these discrepant findings. Here, we sought to clarify this issue by mapping phonemic representation across left perisylvian cortex, taking advantage of the excellent sampling density afforded by intracranial recordings in humans. We asked whether one or both major divisions of the STG were sensitive to phonemic transitions. The high signal-to-noise characteristics of direct intracranial recordings allowed for analysis at the individual participant level, circumventing issues of inter-individual anatomic and functional variability that may have obscured previous findings at the group level of analysis. The mismatch negativity (MMN), an electrophysiological response elicited by changes in repetitive streams of stimulation, served as our primary dependent measure. Oddball configurations of pairs of phonemes, spectro-temporally matched non-phonemes, and simple tones were presented. The loci of the MMN clearly differed as a function of stimulus type. Phoneme representation was most robust over middle/anterior STG/STS, but was also observed over posterior STG/SMG. These data point to multiple phonemic processing zones along perisylvian cortex, both anterior and posterior to primary auditory cortex. This finding is considered within the context of a dual stream model of auditory processing in which functionally distinct ventral and dorsal auditory processing pathways may be engaged by speech stimuli.


Subject(s)
Auditory Cortex/physiology , Auditory Pathways/physiology , Auditory Perception/physiology , Speech/physiology , Temporal Lobe/physiology , Acoustic Stimulation , Adolescent , Brain Mapping/methods , Electroencephalography , Female , Functional Laterality , Humans , Language , Male , Young Adult
12.
Neuroimage ; 90: 360-73, 2014 Apr 15.
Article in English | MEDLINE | ID: mdl-24365674

ABSTRACT

The adult human visual system can efficiently fill-in missing object boundaries when low-level information from the retina is incomplete, but little is known about how these processes develop across childhood. A decade of visual-evoked potential (VEP) studies has produced a theoretical model identifying distinct phases of contour completion in adults. The first, termed a perceptual phase, occurs from approximately 100-200 ms and is associated with automatic boundary completion. The second is termed a conceptual phase occurring between 230 and 400 ms. The latter has been associated with the analysis of ambiguous objects which seem to require more effort to complete. The electrophysiological markers of these phases have both been localized to the lateral occipital complex, a cluster of ventral visual stream brain regions associated with object-processing. We presented Kanizsa-type illusory contour stimuli, often used for exploring contour completion processes, to neurotypical persons ages 6-31 (N=63), while parametrically varying the spatial extent of these induced contours, in order to better understand how filling-in processes develop across childhood and adolescence. Our results suggest that, while adults complete contour boundaries in a single discrete period during the automatic perceptual phase, children display an immature response pattern, engaging in more protracted processing across both timeframes and appearing to recruit more widely distributed regions which resemble those evoked during adult processing of higher-order ambiguous figures. However, children older than 5 years of age were remarkably like adults in that the effects of contour processing were invariant to manipulation of contour extent.


Subject(s)
Brain Mapping/methods , Evoked Potentials, Visual/physiology , Form Perception/physiology , Occipital Lobe/growth & development , Occipital Lobe/physiology , Adolescent , Adult , Child , Female , Humans , Male , Photic Stimulation , Signal Processing, Computer-Assisted , Young Adult
13.
J Neurosci ; 33(48): 18849-54, 2013 Nov 27.
Article in English | MEDLINE | ID: mdl-24285891

ABSTRACT

Neocortical neuronal activity is characterized by complex spatiotemporal dynamics. Although slow oscillations have been shown to travel over space in terms of consistent phase advances, it is unknown how this phenomenon relates to neuronal activity in other frequency bands. We here present electrocorticographic data from three male and one female human subject and demonstrate that gamma power is phase locked to traveling alpha waves. Given that alpha activity has been proposed to coordinate neuronal processing reflected in the gamma band, we suggest that alpha waves are involved in coordinating neuronal processing in both space and time.
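The reported locking of gamma power to alpha phase is a form of phase-amplitude coupling. One common way to expose it is to bin gamma power by alpha phase and check whether the profile is flat. A synthetic sketch; the coupled test signal, band edges, and bin count are all assumptions, not the study's analysis.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, low, high):
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

fs = 1000
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(4)

# Synthetic coupling: 60 Hz gamma amplitude waxes with the 10 Hz alpha cycle.
alpha = np.sin(2 * np.pi * 10 * t)
gamma = (1 + alpha) * np.sin(2 * np.pi * 60 * t)
x = alpha + 0.3 * gamma + 0.2 * rng.standard_normal(t.size)

alpha_phase = np.angle(hilbert(bandpass(x, fs, 8, 12)))
gamma_power = np.abs(hilbert(bandpass(x, fs, 40, 80))) ** 2

# Bin gamma power by alpha phase; coupling appears as a non-flat profile.
bins = np.linspace(-np.pi, np.pi, 19)
idx = np.digitize(alpha_phase, bins) - 1
profile = np.array([gamma_power[idx == k].mean() for k in range(18)])
modulation = (profile.max() - profile.min()) / profile.mean()
```

Extending this to traveling waves, as the study does, additionally requires tracking how the preferred alpha phase shifts systematically across the electrode array.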


Subject(s)
Alpha Rhythm/physiology , Electroencephalography , Neocortex/physiology , Adult , Data Interpretation, Statistical , Electroencephalography Phase Synchronization , Female , Humans , Male , Neocortex/cytology , Neurons/physiology
14.
Neuroimage ; 79: 19-29, 2013 Oct 01.
Article in English | MEDLINE | ID: mdl-23624493

ABSTRACT

Findings in animal models demonstrate that activity within hierarchically early sensory cortical regions can be modulated by cross-sensory inputs through resetting of the phase of ongoing intrinsic neural oscillations. Here, subdural recordings evaluated whether phase resetting by auditory inputs would impact multisensory integration processes in human visual cortex. Results clearly showed auditory-driven phase reset in visual cortices and, in some cases, frank auditory event-related potentials (ERP) were also observed over these regions. Further, when audiovisual bisensory stimuli were presented, this led to robust multisensory integration effects which were observed in both the ERP and in measures of phase concentration. These results extend findings from animal models to human visual cortices, and highlight the impact of cross-sensory phase resetting by a non-primary stimulus on multisensory integration in ostensibly unisensory cortices.


Subject(s)
Acoustic Stimulation/methods , Auditory Perception/physiology , Brain Mapping/methods , Electroencephalography/methods , Evoked Potentials, Auditory/physiology , Evoked Potentials, Visual/physiology , Visual Cortex/physiology , Biological Clocks/physiology , Cues , Humans
15.
J Neurosci ; 32(44): 15338-44, 2012 Oct 31.
Article in English | MEDLINE | ID: mdl-23115172

ABSTRACT

The frequency of environmental vibrations is sampled by two of the major sensory systems, audition and touch, notwithstanding that these signals are transduced through very different physical media and entirely separate sensory epithelia. Psychophysical studies have shown that manipulating frequency in audition or touch can have a significant cross-sensory impact on perceived frequency in the other sensory system, pointing to intimate links between these senses during computation of frequency. In this regard, the frequency of a vibratory event can be thought of as a multisensory perceptual construct. In turn, electrophysiological studies point to temporally early multisensory interactions that occur in hierarchically early sensory regions where convergent inputs from the auditory and somatosensory systems are to be found. A key question pertains to the level of processing at which the multisensory integration of featural information, such as frequency, occurs. Do the sensory systems calculate frequency independently before this information is combined, or is this feature calculated in an integrated fashion during preattentive sensory processing? The well characterized mismatch negativity, an electrophysiological response that indexes preattentive detection of a change within the context of a regular pattern of stimulation, served as our dependent measure. High-density electrophysiological recordings were made in humans while they were presented with separate blocks of somatosensory, auditory, and audio-somatosensory "standards" and "deviants," where the deviant differed in frequency. Multisensory effects were identified beginning at ∼200 ms, with the multisensory mismatch negativity (MMN) significantly different from the sum of the unisensory MMNs. This provides compelling evidence for preattentive coupling between the somatosensory and auditory channels in the cortical representation of frequency.
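The comparison at the heart of this design, the multisensory MMN versus the sum of the unisensory MMNs, is simple waveform arithmetic under an additive model. A toy sketch with invented Gaussian difference waves; the amplitudes, latencies, and the 150-250 ms window are illustrative, not the study's values.

```python
import numpy as np

# Illustrative MMN difference waves (deviant minus standard), in µV,
# at 1 ms resolution over 0-400 ms. All values are synthetic.
t = np.arange(0, 400)

def mmn_wave(peak_ms, amp):
    return amp * np.exp(-0.5 * ((t - peak_ms) / 30.0) ** 2)

mmn_aud = mmn_wave(180, -1.2)
mmn_som = mmn_wave(170, -0.9)
mmn_multi = mmn_wave(175, -2.8)   # deeper than additive in this toy example

# Additive model: predicted multisensory MMN = sum of the unisensory MMNs.
predicted = mmn_aud + mmn_som
window = (t >= 150) & (t <= 250)

# Positive value: the recorded multisensory MMN is more negative (larger)
# than the additive prediction, i.e., superadditive integration.
superadditivity = predicted[window].mean() - mmn_multi[window].mean()
```

In the actual study this contrast is evaluated statistically across participants and time points; any reliable departure from the additive prediction is taken as evidence that the two channels interact rather than being processed independently.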


Subject(s)
Brain Mapping , Hearing/physiology , Touch/physiology , Adult , Cluster Analysis , Electroencephalography , Electrophysiological Phenomena , Female , Humans , Male , Photic Stimulation , Pitch Perception/physiology , Vibration , Young Adult
16.
J Neurosci ; 31(27): 9971-81, 2011 Jul 06.
Article in English | MEDLINE | ID: mdl-21734288

ABSTRACT

The simultaneous presentation of a stimulus in one sensory modality often enhances target detection in another sensory modality, but the neural mechanisms that govern these effects are still under investigation. Here, we test a hypothesis proposed in the neurophysiological literature: that auditory facilitation of visual-target detection operates through cross-sensory phase reset of ongoing neural oscillations (Lakatos et al., 2009). To date, measurement limitations have prevented this potentially powerful neural mechanism from being directly linked with its predicted behavioral consequences. The present experiment uses a psychophysical approach in humans to demonstrate, for the first time, stimulus-locked periodicity in visual-target detection, following a temporally informative sound. Our data further demonstrate that periodicity in behavioral performance is strongly influenced by the probability of audiovisual co-occurrence. We argue that fluctuations in visual-target detection result from cross-sensory phase reset, both at the moment it occurs and persisting for seconds thereafter. The precise frequency at which this periodicity operates remains to be determined through a method that allows for a higher sampling rate.
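The stimulus-locked periodicity in detection performance described above can be quantified by binning hit rate across the sound-to-target delay and inspecting its spectrum. This sketch uses invented numbers: the 20 ms bin width, the 1 s delay range, and the embedded 10 Hz oscillation are illustrative assumptions, not the study's data.

```python
# Minimal sketch: FFT of a behavioral hit-rate time series to find
# periodicity in visual-target detection after an informative sound.
import numpy as np

bin_ms = 20                                      # assumed delay bin width
delays = np.arange(0, 1000, bin_ms) / 1000.0     # 0-1 s after the sound
hit_rate = 0.5 + 0.1 * np.sin(2 * np.pi * 10 * delays)  # fake 10 Hz cycle

# Subtract the mean so the DC component does not dominate the spectrum
spectrum = np.abs(np.fft.rfft(hit_rate - hit_rate.mean()))
freqs = np.fft.rfftfreq(hit_rate.size, d=bin_ms / 1000.0)

peak = freqs[np.argmax(spectrum)]
print(f"dominant behavioral frequency: {peak:.1f} Hz")
```

The abstract's closing caveat maps directly onto this sketch: with 20 ms bins the spectrum only resolves frequencies up to 25 Hz, so pinning down the precise oscillation frequency requires a finer sampling of delays.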


Subject(s)
Auditory Perception/physiology , Periodicity , Reaction Time/physiology , Set, Psychology , Signal Detection, Psychological/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Adult , Female , Fourier Analysis , Humans , Male , Models, Biological , Photic Stimulation/methods , Psychophysics , Young Adult
17.
J Neurosci ; 31(9): 3400-6, 2011 Mar 02.
Article in English | MEDLINE | ID: mdl-21368051

ABSTRACT

Certain features of objects or events can be represented by more than a single sensory system, such as roughness of a surface (sight, sound, and touch), the location of a speaker (audition and sight), and the rhythm or duration of an event (by all three major sensory systems). Thus, these properties can be said to be sensory-independent or amodal. A key question is whether common multisensory cortical regions process these amodal features, or whether each sensory system contains its own specialized region(s) for processing them. We tackled this issue by investigating simple duration-detection mechanisms across audition and touch; these systems were chosen because fine duration discriminations are possible in both. The mismatch negativity (MMN) component of the human event-related potential provides a sensitive metric of duration processing and has been elicited independently during both auditory and somatosensory investigations. Employing high-density electroencephalographic recordings in conjunction with intracranial subdural recordings, we asked whether fine duration discriminations, represented by the MMN, were generated in the same cortical regions regardless of the sensory modality being probed. Scalp recordings pointed to statistically distinct MMN topographies across senses, implying differential underlying cortical generator configurations. Intracranial recordings confirmed these noninvasive findings, showing generators of the auditory MMN along the superior temporal gyrus with no evidence of a somatosensory MMN in this region, whereas a robust somatosensory MMN was recorded from postcentral gyrus in the absence of an auditory MMN. The current data clearly argue against a common circuitry account for amodal duration processing.
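One common way to compare scalp topographies across conditions, as in the scalp-recording analysis above, is global map dissimilarity (DISS) between two average-referenced, GFP-normalized voltage maps. The sketch below is illustrative: the 64-channel maps are random stand-ins, not recorded MMN data, and the abstract does not specify that this particular index was used.

```python
# Global map dissimilarity (DISS) between two scalp voltage maps.
# DISS is 0 for identical topographies and 2 for inverted-polarity maps.
import numpy as np

def gfp(v):
    """Global field power: spatial standard deviation of the map."""
    return np.sqrt(np.mean((v - v.mean()) ** 2))

def diss(u, v):
    """DISS of two maps after average-referencing and GFP scaling."""
    u = (u - u.mean()) / gfp(u)
    v = (v - v.mean()) / gfp(v)
    return np.sqrt(np.mean((u - v) ** 2))

rng = np.random.default_rng(1)
auditory_mmn_map = rng.normal(size=64)   # 64 electrodes (assumed montage)
somato_mmn_map = rng.normal(size=64)

print(diss(auditory_mmn_map, somato_mmn_map))
print(diss(auditory_mmn_map, auditory_mmn_map))   # identical maps -> 0.0
print(diss(auditory_mmn_map, -auditory_mmn_map))  # inverted maps -> 2.0
```

Because GFP scaling removes overall amplitude, a nonzero DISS reflects a difference in the spatial configuration of the maps, which is the kind of evidence the abstract uses to infer distinct cortical generator configurations across senses.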


Subject(s)
Auditory Perception/physiology , Nerve Net/physiology , Psychomotor Performance/physiology , Touch/physiology , Acoustic Stimulation/methods , Adult , Auditory Cortex/physiology , Evoked Potentials, Auditory/physiology , Female , Humans , Male , Time Factors , Young Adult
18.
J Neurosci ; 30(21): 7202-14, 2010 May 26.
Article in English | MEDLINE | ID: mdl-20505087

ABSTRACT

Substantial data from the cognitive neurosciences point to the importance of bodily processing for the development of a comprehensive theory of the self. A key aspect of the bodily self is self-location, the experience that the self is localized at a specific position in space within one's bodily borders (embodied self-location). Although the neural mechanisms of self-location have been studied by manipulating the spatial location of one's visual perspective during mental imagery, such experiments were conducted in constrained, non-ecological contexts, such as explicitly instructed imagery in a prone or seated position, although most human interactions occur spontaneously while standing or walking. Using a motor paradigm, we investigated the behavioral and neural mechanisms of spontaneous self-location and mental body transformations during active human interaction. In an own-body imagery task involving spontaneous and explicit changes in self-location in standing participants, we report that spontaneous interactions with an avatar are neurally indistinguishable from explicit own-body transformation with disembodied self-location but differ from explicit own-body transformation with embodied self-location at 400-600 ms after stimulus onset. We discuss these findings with respect to the neural mechanisms of perspective-taking and self-location in spontaneous human interaction.


Subject(s)
Body Image , Imagination/physiology , Motor Activity/physiology , Self Concept , Space Perception/physiology , Adult , Analysis of Variance , Brain Mapping , Electroencephalography/methods , Evoked Potentials/physiology , Female , Functional Laterality/physiology , Humans , Male , Models, Neurological , Photic Stimulation/methods , Psychomotor Performance , Psychophysiology , Reaction Time/physiology , Young Adult
19.
Neuroimage ; 48(2): 405-14, 2009 Nov 01.
Article in English | MEDLINE | ID: mdl-19540924

ABSTRACT

When presented with dynamic scenes, the brain integrates visual elements across space and time. Such non-retinotopic processing has been intensively studied from a psychophysical point of view, but little is known about the underlying neural processes. Here we used high-density EEG to reveal neural correlates of non-retinotopic feature integration. In an offset-discrimination task we presented sequences of lines for which feature integration depended on a small, endogenous shift of attention. Attention effects were observed in the stimulus-locked evoked potentials but non-retinotopic feature integration was reflected in voltage topographies time-locked to the behavioral response, lasting for about 400 ms. Statistical parametric mapping of estimated current densities revealed that this integration reduced electrical activity in an extensive network of brain areas, with the effects progressing from high-level visual, via frontal, to central ones. The results suggest that endogenously timed neural processes, rather than bottom-up ones, underlie non-retinotopic feature integration.
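The response-locked analysis described above, where effects appear in voltage topographies time-locked to the behavioral response rather than to stimulus onset, amounts to re-epoching the continuous EEG around each response. This sketch assumes invented data, a 500 Hz sampling rate, and a -100..+400 ms epoch window.

```python
# Sketch of response-locked epoching and averaging. The "EEG" and the
# response times are random illustrative data, not the study's recordings.
import numpy as np

rng = np.random.default_rng(2)
fs = 500                                   # assumed sampling rate (Hz)
eeg = rng.normal(size=(32, 60 * fs))       # 32 channels, 60 s of signal
response_samples = np.array([5, 12, 23, 37, 48]) * fs  # response onsets

pre, post = int(0.1 * fs), int(0.4 * fs)   # -100..+400 ms around response
epochs = np.stack([eeg[:, r - pre:r + post] for r in response_samples])

# Response-locked evoked potential: average across responses
rerp = epochs.mean(axis=0)
print(rerp.shape)  # (channels, samples per epoch)
```

The ~400 ms duration of the integration effect reported above is what motivates the choice of a post-response window of that order; in a real analysis the window and baseline would be set from the data and design.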


Subject(s)
Attention/physiology , Brain/physiology , Visual Perception/physiology , Adult , Analysis of Variance , Brain Mapping , Electroencephalography , Evoked Potentials , Female , Humans , Male , Neuropsychological Tests , Photic Stimulation , Psychophysics , Reaction Time , Task Performance and Analysis , Time Factors , Video Recording , Young Adult