Results 1 - 20 of 23
1.
Sci Rep ; 14(1): 2353, 2024 01 29.
Article in English | MEDLINE | ID: mdl-38287084

ABSTRACT

Visual hallucinations can be phenomenologically divided into those of a simple or complex nature. Both simple and complex hallucinations can occur in pathological and non-pathological states, and can also be induced experimentally by visual stimulation or deprivation-for example using a high-frequency, eyes-open flicker (Ganzflicker) and perceptual deprivation (Ganzfeld). Here we leverage the differences in visual stimulation that these two techniques involve to investigate the role of bottom-up and top-down processes in shifting the complexity of visual hallucinations, and to assess whether these techniques involve a shared underlying hallucinatory mechanism despite their differences. For each technique, we measured the frequency and complexity of the hallucinations produced, utilising button presses, retrospective drawing, interviews, and questionnaires. For both experimental techniques, simple hallucinations were more common than complex hallucinations. Crucially, we found that Ganzflicker was more effective than Ganzfeld at eliciting simple hallucinations, while complex hallucinations remained equivalent across the two conditions. As a result, the likelihood that an experienced hallucination was complex was higher during Ganzfeld. Despite these differences, we found a correlation between the frequency and total time spent hallucinating in Ganzflicker and Ganzfeld conditions, suggesting some shared mechanisms between the two methodologies. We attribute the tendency to experience frequent simple hallucinations in both conditions to a shared low-level core hallucinatory mechanism, such as excitability of visual cortex, potentially amplified in Ganzflicker compared to Ganzfeld due to heightened bottom-up input. The tendency to experience complex hallucinations, in contrast, may be related to top-down processes less affected by visual stimulation.
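A small numerical illustration (hypothetical counts, not the study's data) of why an equal complex-hallucination frequency combined with fewer simple hallucinations makes any given experienced hallucination more likely to be complex under Ganzfeld:

```python
# Illustrative sketch only: the counts below are hypothetical placeholders.
# If complex-hallucination counts are similar across conditions while
# simple-hallucination counts are higher under Ganzflicker, the share of
# hallucinations that are complex is necessarily higher under Ganzfeld.

counts = {
    "Ganzflicker": {"simple": 40, "complex": 10},  # hypothetical
    "Ganzfeld":    {"simple": 20, "complex": 10},  # hypothetical
}

for condition, c in counts.items():
    p_complex = c["complex"] / (c["simple"] + c["complex"])
    print(f"{condition}: P(complex | hallucination) = {p_complex:.2f}")

# Ganzflicker: 0.20, Ganzfeld: 0.33. Same absolute complex frequency,
# but a higher conditional likelihood of complexity under Ganzfeld.
```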


Subject(s)
Hallucinations , Visual Cortex , Humans , Retrospective Studies , Hallucinations/etiology
2.
Neurosci Biobehav Rev ; 140: 104772, 2022 09.
Article in English | MEDLINE | ID: mdl-35835286

ABSTRACT

Most research on the neurobiology of language ignores consciousness and vice versa. Here, language, with an emphasis on inner speech, is hypothesised to generate and sustain self-awareness, i.e., higher-order consciousness. Converging evidence supporting this hypothesis is reviewed. To account for these findings, a 'HOLISTIC' model of neurobiology of language, inner speech, and consciousness is proposed. It involves a 'core' set of inner speech production regions that initiate the experience of feeling and hearing words. These take on affective qualities, deriving from activation of associated sensory, motor, and emotional representations, involving a largely unconscious dynamic 'periphery', distributed throughout the whole brain. Responding to those words forms the basis for sustained network activity, involving 'default mode' activation and prefrontal and thalamic/brainstem selection of contextually relevant responses. Evidence for the model is reviewed, supporting neuroimaging meta-analyses conducted, and comparisons with other theories of consciousness made. The HOLISTIC model constitutes a more parsimonious and complete account of the 'neural correlates of consciousness' that has implications for a mechanistic account of mental health and wellbeing.


Subject(s)
Consciousness , Language , Brain , Humans , Mouth , Neurobiology
3.
Neuropsychologia ; 169: 108194, 2022 05 03.
Article in English | MEDLINE | ID: mdl-35245529

ABSTRACT

Rodent and human studies have implicated an amygdala-prefrontal circuit during threat processing. One possibility is that while amygdala activity underlies core features of anxiety (e.g. detection of salient information), prefrontal cortices (i.e. dorsomedial prefrontal/anterior cingulate cortex) entrain its responsiveness. To date, this has been established in tightly controlled paradigms (predominantly using static face perception tasks) but has not been extended to more naturalistic settings. Consequently, using 'movie fMRI'-in which participants watch ecologically-rich movie stimuli rather than constrained cognitive tasks-we sought to test whether individual differences in anxiety correlate with the degree of face-dependent amygdala-prefrontal coupling in two independent samples. Analyses suggested increased face-dependent superior parietal activation and decreased speech-dependent auditory cortex activation as a function of anxiety. However, we failed to find evidence for anxiety-dependent connectivity in either our stimulus-dependent or stimulus-independent analyses. Our findings suggest that work using experimentally constrained tasks may not replicate in more ecologically valid settings and, moreover, highlight the importance of testing the generalizability of neuroimaging findings outside of the original context.
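The abstract does not specify the connectivity model; a common way to operationalize stimulus-dependent coupling of this kind is a psychophysiological-interaction-style regression per participant, with the interaction coefficient then correlated against trait anxiety across participants. A minimal sketch with simulated placeholder data (all names and values hypothetical, not the authors' pipeline):

```python
# Hedged sketch: per-participant PPI-style regression, then a group-level
# correlation between the coupling coefficient and trait anxiety.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subj, n_tr = 30, 500

anxiety = rng.normal(size=n_subj)            # trait anxiety per participant
betas = np.empty(n_subj)                     # face-dependent coupling per participant

for s in range(n_subj):
    amygdala = rng.normal(size=n_tr)                  # seed timeseries (placeholder)
    face = (rng.random(n_tr) > 0.5).astype(float)     # face-on-screen regressor
    prefrontal = 0.2 * amygdala + 0.1 * face + rng.normal(size=n_tr)

    # Design: intercept, seed, stimulus, and their interaction (the PPI term).
    X = np.column_stack([np.ones(n_tr), amygdala, face, amygdala * face])
    coef, *_ = np.linalg.lstsq(X, prefrontal, rcond=None)
    betas[s] = coef[3]                       # interaction = face-dependent coupling

# Across participants: does coupling scale with anxiety?
r, p = stats.pearsonr(anxiety, betas)
print(f"r = {r:.2f}, p = {p:.3f}")
```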


Subject(s)
Amygdala , Motion Pictures , Amygdala/diagnostic imaging , Anxiety/diagnostic imaging , Anxiety Disorders , Humans , Magnetic Resonance Imaging/methods , Neural Pathways/diagnostic imaging , Prefrontal Cortex
4.
IEEE Trans Med Imaging ; 41(6): 1431-1442, 2022 06.
Article in English | MEDLINE | ID: mdl-34968175

ABSTRACT

We consider the challenges in extracting stimulus-related neural dynamics from other intrinsic processes and noise in naturalistic functional magnetic resonance imaging (fMRI). Most studies rely on inter-subject correlations (ISC) of low-level regional activity and neglect varying responses in individuals. We propose a novel, data-driven approach based on low-rank plus sparse (L + S) decomposition to isolate stimulus-driven dynamic changes in brain functional connectivity (FC) from the background noise, by exploiting shared network structure among subjects receiving the same naturalistic stimuli. The time-resolved multi-subject FC matrices are modeled as a sum of a low-rank component of correlated FC patterns across subjects, and a sparse component of subject-specific, idiosyncratic background activities. To recover the shared low-rank subspace, we introduce a fused version of principal component pursuit (PCP) by adding a fusion-type penalty on the differences between the columns of the low-rank matrix. The method improves the detection of stimulus-induced group-level homogeneity in the FC profile while capturing inter-subject variability. We develop an efficient algorithm via a linearized alternating direction method of multipliers to solve the fused-PCP. Simulations show accurate recovery by the fused-PCP even when a large fraction of FC edges are severely corrupted. When applied to natural fMRI data, our method reveals FC changes that were time-locked to auditory processing during movie watching, with dynamic engagement of sensorimotor systems for speech-in-noise. It also provides a better mapping to auditory content in the movie than ISC.
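The abstract names the ingredients of the model: a low-rank term, a sparse term, and a fusion-type penalty on differences between columns of the low-rank matrix. The following is a hedged sketch of the objective's general shape only; the pairing of columns (adjacent columns), the norm on the fusion term, and the weights lambda and gamma are assumptions, not taken from the paper.

```latex
% M: observed multi-subject FC matrix; L: shared low-rank component;
% S: sparse subject-specific component; \lambda, \gamma: assumed weights.
\min_{L,\,S}\; \|L\|_{*} \;+\; \lambda\,\|S\|_{1}
  \;+\; \gamma \sum_{j \ge 2} \bigl\| L_{\cdot j} - L_{\cdot (j-1)} \bigr\|
\quad \text{subject to} \quad M = L + S
```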


Subject(s)
Brain , Magnetic Resonance Imaging , Algorithms , Brain/diagnostic imaging , Brain/physiology , Brain Mapping/methods , Humans , Magnetic Resonance Imaging/methods , Motion Pictures
5.
Cereb Cortex ; 32(11): 2447-2468, 2022 05 31.
Article in English | MEDLINE | ID: mdl-34585723

ABSTRACT

It is assumed that there is a static set of "language regions" in the brain. Yet, language comprehension engages regions well beyond these, and patients regularly produce familiar "formulaic" expressions when language regions are severely damaged. These observations suggest that the neurobiology of language is not fixed but varies with experiences, like the extent of word sequence learning. We hypothesized that perceiving overlearned sentences is supported by speech production and not putative language regions. Participants underwent 2 sessions of behavioral testing and functional magnetic resonance imaging (fMRI). During the intervening 15 days, they repeated 2 sentences 30 times each, twice a day. In both fMRI sessions, they "passively" listened to those sentences, novel sentences, and produced sentences. Behaviorally, evidence for overlearning included a 2.1-s decrease in reaction times to predict the final word in overlearned sentences. This corresponded to the recruitment of sensorimotor regions involved in sentence production, inactivation of temporal and inferior frontal regions involved in novel sentence listening, and a 45% change in global network organization. Thus, there was a profound whole-brain reorganization following sentence overlearning, out of "language" and into sensorimotor regions. The latter are generally preserved in aphasia and Alzheimer's disease, perhaps explaining residual abilities with formulaic expressions in both.


Subject(s)
Language , Speech Perception , Brain Mapping , Comprehension/physiology , Humans , Magnetic Resonance Imaging/methods , Overlearning , Speech/physiology , Speech Perception/physiology
6.
J Cogn Neurosci ; 33(8): 1517-1534, 2021 07 01.
Article in English | MEDLINE | ID: mdl-34496370

ABSTRACT

The role of the cerebellum in speech perception remains a mystery. Given its uniform architecture, we tested the hypothesis that it implements a domain-general predictive mechanism whose role in speech is determined by connectivity. We collated all neuroimaging studies reporting cerebellar activity in the Neurosynth database (n = 8206). From this set, we found all studies involving passive speech and sound perception (n = 72, 64% speech, 12.5% sounds, 12.5% music, and 11% tones) and speech production and articulation (n = 175). Standard and coactivation neuroimaging meta-analyses were used to compare cerebellar and associated cortical activations between passive perception and production. We found distinct regions of perception- and production-related activity in the cerebellum and regions of perception-production overlap. Each of these regions had distinct patterns of cortico-cerebellar connectivity. To test for domain-generality versus specificity, we identified all psychological and task-related terms in the Neurosynth database that predicted activity in cerebellar regions associated with passive perception and production. Regions in the cerebellum activated by speech perception were associated with domain-general terms related to prediction. One hallmark of predictive processing is metabolic savings (i.e., decreases in neural activity when events are predicted). To test the hypothesis that the cerebellum plays a predictive role in speech perception, we compared cortical activation between studies reporting cerebellar activation and those without cerebellar activation during speech perception. When the cerebellum was active during speech perception, there was far less cortical activation than when it was inactive. The results suggest that the cerebellum implements a domain-general mechanism related to prediction during speech perception.


Subject(s)
Music , Speech Perception , Cerebellum/diagnostic imaging , Humans , Speech
7.
Proc Biol Sci ; 288(1955): 20210500, 2021 07 28.
Article in English | MEDLINE | ID: mdl-34284631

ABSTRACT

The ecology of human language is face-to-face interaction, comprising cues such as prosody, co-speech gestures and mouth movements. Yet, the multimodal context is usually stripped away in experiments as dominant paradigms focus on linguistic processing only. In two studies we presented video clips of an actress producing naturalistic passages to participants while recording their electroencephalogram. We quantified multimodal cues (prosody, gestures, mouth movements) and measured their effect on a well-established electroencephalographic marker of processing load in comprehension (N400). We found that brain responses to words were affected by informativeness of co-occurring multimodal cues, indicating that comprehension relies on linguistic and non-linguistic cues. Moreover, they were affected by interactions between the multimodal cues, indicating that the impact of each cue dynamically changes based on the informativeness of other cues. Thus, results show that multimodal cues are integral to comprehension; hence, our theories must move beyond the limited focus on speech and linguistic processing.
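The abstract reports main effects of cue informativeness and cue-by-cue interactions on the N400. One standard way to test such a structure (not necessarily the authors' analysis) is a mixed-effects regression of single-word N400 amplitudes on the cue measures and their interactions. A sketch with simulated placeholder data; all column names and effect sizes are hypothetical:

```python
# Hedged sketch: mixed-effects model of N400 amplitude on cue informativeness,
# including all interactions, with a random intercept per participant.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "participant": rng.integers(0, 30, n).astype(str),
    "prosody":     rng.normal(size=n),   # informativeness of prosody per word
    "gesture":     rng.normal(size=n),   # informativeness of co-speech gesture
    "mouth":       rng.normal(size=n),   # informativeness of mouth movements
})
# Simulated N400: main effects plus a gesture-by-mouth interaction.
df["n400"] = (-0.3 * df.gesture - 0.2 * df.mouth
              + 0.15 * df.gesture * df.mouth + rng.normal(size=n))

# Main effects and all two-/three-way interactions of the cues.
model = smf.mixedlm("n400 ~ prosody * gesture * mouth", df, groups=df["participant"])
print(model.fit().summary())
```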


Subject(s)
Comprehension , Speech Perception , Electroencephalography , Evoked Potentials , Female , Gestures , Humans , Language , Male , Speech
8.
Proc Natl Acad Sci U S A ; 117(51): 32791-32798, 2020 12 22.
Article in English | MEDLINE | ID: mdl-33293422

ABSTRACT

It is well established that speech perception is improved when we are able to see the speaker talking along with hearing their voice, especially when the speech is noisy. While we have a good understanding of where speech integration occurs in the brain, it is unclear how visual and auditory cues are combined to improve speech perception. One suggestion is that integration can occur as both visual and auditory cues arise from a common generator: the vocal tract. Here, we investigate whether facial and vocal tract movements are linked during speech production by comparing videos of the face and fast magnetic resonance (MR) image sequences of the vocal tract. The joint variation in the face and vocal tract was extracted using an application of principal components analysis (PCA), and we demonstrate that MR image sequences can be reconstructed with high fidelity using only the facial video and PCA. Reconstruction fidelity was significantly higher when images from the two sequences corresponded in time, and including implicit temporal information by combining contiguous frames also led to a significant increase in fidelity. A "Bubbles" technique was used to identify which areas of the face were important for recovering information about the vocal tract, and vice versa, on a frame-by-frame basis. Our data reveal that there is sufficient information in the face to recover vocal tract shape during speech. In addition, the facial and vocal tract regions that are important for reconstruction are those that are used to generate the acoustic speech signal.
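A minimal sketch of one standard way to do this kind of cross-modal reconstruction with a single PCA, assuming components are fit on concatenated face and vocal-tract feature vectors and scores are then estimated from the face block alone; the data, dimensions, and exact procedure here are placeholders rather than the paper's method:

```python
# Hedged sketch: joint PCA over [face; vocal-tract] features, reconstruction of
# the vocal-tract block from the face block only. Random data stand in for
# temporally corresponding video frames and MR frames.
import numpy as np

rng = np.random.default_rng(2)
n_frames, d_face, d_tract, k = 300, 50, 80, 10

# Simulate frames in which both modalities are driven by shared latent factors.
latent = rng.normal(size=(n_frames, k))
face  = latent @ rng.normal(size=(k, d_face))  + 0.1 * rng.normal(size=(n_frames, d_face))
tract = latent @ rng.normal(size=(k, d_tract)) + 0.1 * rng.normal(size=(n_frames, d_tract))

joint = np.hstack([face, tract])
mean = joint.mean(axis=0)
# PCA via SVD of the centered joint matrix; rows of Vt are joint components.
_, _, Vt = np.linalg.svd(joint - mean, full_matrices=False)
W = Vt[:k]
W_face, W_tract = W[:, :d_face], W[:, d_face:]

# Estimate component scores from the face block only, then reconstruct the tract block.
scores, *_ = np.linalg.lstsq(W_face.T, (face - mean[:d_face]).T, rcond=None)
tract_hat = (W_tract.T @ scores).T + mean[d_face:]

corr = np.corrcoef(tract_hat.ravel(), tract.ravel())[0, 1]
print(f"reconstruction correlation: {corr:.2f}")
```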


Subject(s)
Face , Speech Perception , Vocal Cords , Adult , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Nontherapeutic Human Experimentation , Principal Component Analysis , Speech Acoustics , Visual Perception
9.
Sci Data ; 7(1): 347, 2020 10 13.
Article in English | MEDLINE | ID: mdl-33051448

ABSTRACT

Neuroimaging has advanced our understanding of human psychology using reductionist stimuli that often do not resemble information the brain naturally encounters. It has improved our understanding of the network organization of the brain mostly through analyses of 'resting-state' data for which the functions of networks cannot be verifiably labelled. We make a 'Naturalistic Neuroimaging Database' (NNDb v1.0) publicly available to allow for a more complete understanding of the brain under more ecological conditions during which networks can be labelled. Eighty-six participants underwent behavioural testing and watched one of 10 full-length movies while functional magnetic resonance imaging was acquired. Resulting timeseries data are shown to be of high quality, with good signal-to-noise ratio, few outliers and low movement. Data-driven functional analyses provide further evidence of data quality. They also demonstrate accurate timeseries/movie alignment and how movie annotations might be used to label networks. The NNDb can be used to answer questions previously unaddressed with standard neuroimaging approaches, advancing our knowledge of how the brain works in the real world.


Subject(s)
Brain Mapping , Brain/physiology , Magnetic Resonance Imaging , Databases, Factual , Humans
10.
Sci Rep ; 10(1): 11298, 2020 07 09.
Article in English | MEDLINE | ID: mdl-32647183

ABSTRACT

Stories play a fundamental role in human culture. They provide a mechanism for sharing cultural identity, imparting knowledge, revealing beliefs, reinforcing social bonds and providing entertainment that is central to all human societies. Here we investigated the extent to which the delivery medium of a story (audio or visual) affected self-reported and physiologically measured engagement with the narrative. Although participants self-reported greater involvement for watching video relative to listening to auditory scenes, stronger physiological responses were recorded for auditory stories. Sensors placed at their wrists showed higher and more variable heart rates, greater electrodermal activity, and even higher body temperatures. We interpret these findings as evidence that the stories were more cognitively and emotionally engaging at a physiological level when presented in an auditory format. This may be because listening to a story, rather than watching a video, is a more active process of co-creation, and this imaginative process in the listener's mind is detectable on the skin at their wrist.


Subject(s)
Auditory Perception , Narration , Visual Perception , Adolescent , Adult , Body Temperature , Emotions , Heart Rate , Humans , Middle Aged , Self Report , Young Adult
12.
Brain Lang ; 164: 77-105, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27821280

ABSTRACT

Does "the motor system" play "a role" in speech perception? If so, where, how, and when? We conducted a systematic review that addresses these questions using both qualitative and quantitative methods. The qualitative review of behavioural, computational modelling, non-human animal, brain damage/disorder, electrical stimulation/recording, and neuroimaging research suggests that distributed brain regions involved in producing speech play specific, dynamic, and contextually determined roles in speech perception. The quantitative review employed region and network based neuroimaging meta-analyses and a novel text mining method to describe relative contributions of nodes in distributed brain networks. Supporting the qualitative review, results show a specific functional correspondence between regions involved in non-linguistic movement of the articulators, covertly and overtly producing speech, and the perception of both nonword and word sounds. This distributed set of cortical and subcortical speech production regions are ubiquitously active and form multiple networks whose topologies dynamically change with listening context. Results are inconsistent with motor and acoustic only models of speech perception and classical and contemporary dual-stream models of the organization of language and the brain. Instead, results are more consistent with complex network models in which multiple speech production related networks and subnetworks dynamically self-organize to constrain interpretation of indeterminant acoustic patterns as listening context requires.


Subject(s)
Hearing/physiology , Psychomotor Performance/physiology , Speech Perception/physiology , Speech/physiology , Tongue/physiology , Animals , Brain/physiology , Humans
13.
Philos Trans R Soc Lond B Biol Sci ; 369(1651): 20130297, 2014 Sep 19.
Article in English | MEDLINE | ID: mdl-25092665

ABSTRACT

What do we hear when someone speaks and what does auditory cortex (AC) do with that sound? Given how meaningful speech is, it might be hypothesized that AC is most active when other people talk so that their productions get decoded. Here, neuroimaging meta-analyses show the opposite: AC is least active and sometimes deactivated when participants listened to meaningful speech compared to less meaningful sounds. Results are explained by an active hypothesis-and-test mechanism where speech production (SP) regions are neurally re-used to predict auditory objects associated with available context. By this model, more AC activity for less meaningful sounds occurs because predictions are less successful from context, requiring further hypotheses be tested. This also explains the large overlap of AC co-activity for less meaningful sounds with meta-analyses of SP. An experiment showed a similar pattern of results for non-verbal context. Specifically, words produced less activity in AC and SP regions when preceded by co-speech gestures that visually described those words compared to those words without gestures. Results collectively suggest that what we 'hear' during real-world speech perception may come more from the brain than our ears and that the function of AC is to confirm or deny internal predictions about the identity of sounds.


Subject(s)
Auditory Cortex/physiology , Gestures , Language , Models, Psychological , Neuroimaging/methods , Speech Perception/physiology , Auditory Perception/physiology , Brain Mapping , Databases, Factual , Humans , Likelihood Functions , Models, Neurological
14.
Dev Psychobiol ; 54(3): 332-42, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22415920

ABSTRACT

In this review, we consider the literature on sensitive periods for language acquisition from the perspective of the stroke recovery literature treated in this Special Issue. Conceptually, the two areas of study are linked in a number of ways. For example, the fact that learning itself can set the stage for future failures to learn (in second language learning) or to remediate (as described in constraint therapy) is an important insight in both areas, as is the increasing awareness that limits on learning can be overcome by creating the appropriate environmental context. Similar practical issues, such as distinguishing native-like language acquisition or recovery of function from compensatory mechanisms, arise in both areas as well.


Subject(s)
Critical Period, Psychological , Language , Neuronal Plasticity/physiology , Recovery of Function/physiology , Stroke/physiopathology , Humans
15.
Q J Exp Psychol (Hove) ; 64(7): 1442-56, 2011 Jul.
Article in English | MEDLINE | ID: mdl-21604232

ABSTRACT

During a conversation, we hear the sound of the talker as well as the intended message. Traditional models of speech perception posit that acoustic details of a talker's voice are not encoded with the message whereas more recent models propose that talker identity is automatically encoded. When shadowing speech, listeners often fail to detect a change in talker identity. The present study was designed to investigate whether talker changes would be detected when listeners are actively engaged in a normal conversation, and visual information about the speaker is absent. Participants were called on the phone, and during the conversation the experimenter was surreptitiously replaced by another talker. Participants rarely noticed the change. However, when explicitly monitoring for a change, detection increased. Voice memory tests suggested that participants remembered only coarse information about both voices, rather than fine details. This suggests that although listeners are capable of change detection, voice information is not continuously monitored at a fine-grain level of acoustic representation during natural conversation and is not automatically encoded. Conversational expectations may shape the way we direct attention to voice characteristics and perceive differences in voice.


Subject(s)
Deafness/physiopathology , Language , Recognition, Psychology/physiology , Speech Acoustics , Speech Perception/physiology , Telephone , Acoustic Stimulation/methods , Female , Humans , Male , Phonetics , Young Adult
16.
J Neurosci ; 30(3): 1110-7, 2010 Jan 20.
Article in English | MEDLINE | ID: mdl-20089919

ABSTRACT

Functional magnetic resonance imaging (fMRI) studies of speech sound categorization often compare conditions in which a stimulus is presented repeatedly to conditions in which multiple stimuli are presented. This approach has established that a set of superior temporal and inferior parietal regions respond more strongly to conditions containing stimulus change. Here, we examine whether this contrast is driven by habituation to a repeating condition or by selective responding to change. Experiment 1 directly tests this by comparing the observed response to long trains of stimuli against a constructed hemodynamic response modeling the hypothesis that no habituation occurs. The results are consistent with the view that the enhanced response to conditions involving phonemic variability reflects change detection. In a second experiment, the specificity of these responses to linguistically relevant stimulus variability was studied by including a condition in which the talker, rather than phonemic category, was variable from stimulus to stimulus. In this context, strong change detection responses were observed to changes in talker, but not to changes in phoneme category. The results prompt a reconsideration of two assumptions common to fMRI studies of speech sound categorization: they suggest that temporoparietal responses in passive paradigms such as those used here are better characterized as reflecting change detection than habituation, and that their apparent selectivity to speech sound categories may reflect a more general preference for variability in highly salient or behaviorally relevant stimulus dimensions.
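A minimal sketch of the logic of a constructed no-habituation prediction (not the authors' exact model): convolve the stimulus-onset train with a canonical HRF at constant amplitude, so any systematic shortfall of the observed response relative to this prediction would indicate habituation. The TR and HRF parameters below are conventional assumed values, not taken from the paper:

```python
# Hedged sketch: constant-amplitude prediction for a long stimulus train.
import numpy as np
from scipy.stats import gamma

tr = 2.0                                     # seconds per volume (assumed)
t = np.arange(0, 30, tr)
# Canonical double-gamma HRF (SPM-like parameterization, assumed).
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6
hrf /= hrf.max()

n_vols = 200
onsets = np.zeros(n_vols)
onsets[20:80:2] = 1.0                        # a long train of identical stimuli

# No-habituation hypothesis: every stimulus contributes the same amplitude.
predicted = np.convolve(onsets, hrf)[:n_vols]

# The test amounts to comparing the observed response against `predicted`,
# e.g. checking for a systematic undershoot late in the train.
print(predicted.round(2))
```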


Subject(s)
Brain Mapping , Inhibition, Psychological , Magnetic Resonance Imaging , Parietal Lobe/blood supply , Speech Perception/physiology , Temporal Lobe/blood supply , Acoustic Stimulation/methods , Female , Humans , Image Processing, Computer-Assisted/methods , Oxygen/blood , Parietal Lobe/physiology , Phonetics , Psycholinguistics/methods , Statistics as Topic , Temporal Lobe/physiology , Time Factors
17.
Hum Brain Mapp ; 30(11): 3509-26, 2009 Nov.
Article in English | MEDLINE | ID: mdl-19384890

ABSTRACT

Everyday communication is accompanied by visual information from several sources, including co-speech gestures, which provide semantic information listeners use to help disambiguate the speaker's message. Using fMRI, we examined how gestures influence neural activity in brain regions associated with processing semantic information. The BOLD response was recorded while participants listened to stories under three audiovisual conditions and one auditory-only (speech alone) condition. In the first audiovisual condition, the storyteller produced gestures that naturally accompany speech. In the second, the storyteller made semantically unrelated hand movements. In the third, the storyteller kept her hands still. In addition to inferior parietal and posterior superior and middle temporal regions, bilateral posterior superior temporal sulcus and left anterior inferior frontal gyrus responded more strongly to speech when it was further accompanied by gesture, regardless of the semantic relation to speech. However, the right inferior frontal gyrus was sensitive to the semantic import of the hand movements, demonstrating more activity when hand movements were semantically unrelated to the accompanying speech. These findings show that perceiving hand movements during speech modulates the distributed pattern of neural activation involved in both biological motion perception and discourse comprehension, suggesting listeners attempt to find meaning, not only in the words speakers produce, but also in the hand movements that accompany speech.


Subject(s)
Brain Mapping , Brain/blood supply , Brain/physiology , Gestures , Semantics , Speech/physiology , Acoustic Stimulation/methods , Adolescent , Adult , Female , Humans , Image Processing, Computer-Assisted/methods , Linear Models , Magnetic Resonance Imaging/methods , Male , Motion Perception/physiology , Oxygen/blood , Photic Stimulation/methods , Time Factors , Young Adult
18.
Curr Biol ; 19(8): 661-7, 2009 Apr 28.
Article in English | MEDLINE | ID: mdl-19327997

ABSTRACT

Although the linguistic structure of speech provides valuable communicative information, nonverbal behaviors can offer additional, often disambiguating cues. In particular, being able to see the face and hand movements of a speaker facilitates language comprehension [1]. But how does the brain derive meaningful information from these movements? Mouth movements provide information about phonological aspects of speech [2-3]. In contrast, cospeech gestures display semantic information relevant to the intended message [4-6]. We show that when language comprehension is accompanied by observable face movements, there is strong functional connectivity between areas of cortex involved in motor planning and production and posterior areas thought to mediate phonological aspects of speech perception. In contrast, language comprehension accompanied by cospeech gestures is associated with tuning of and strong functional connectivity between motor planning and production areas and anterior areas thought to mediate semantic aspects of language comprehension. These areas are not tuned to hand and arm movements that are not meaningful. Results suggest that when gestures accompany speech, the motor system works with language comprehension areas to determine the meaning of those gestures. Results also suggest that the cortical networks underlying language comprehension, rather than being fixed, are dynamically organized by the type of contextual information available to listeners during face-to-face communication.


Subject(s)
Comprehension/physiology , Gestures , Language , Nerve Net , Adolescent , Communication , Female , Hand , Humans , Magnetic Resonance Imaging , Movement , Nerve Net/anatomy & histology , Nerve Net/physiology , Semantics , Verbal Behavior , Young Adult
19.
Neuroimage ; 39(2): 693-706, 2008 Jan 15.
Article in English | MEDLINE | ID: mdl-17964812

ABSTRACT

The increasingly complex research questions addressed by neuroimaging research impose substantial demands on computational infrastructures. These infrastructures need to support management of massive amounts of data in a way that affords rapid and precise data analysis, to allow collaborative research, and to achieve these aims securely and with minimum management overhead. Here we present an approach that overcomes many current limitations in data analysis and data sharing. This approach is based on open source database management systems that support complex data queries as an integral part of data analysis, flexible data sharing, and parallel and distributed data processing using cluster computing and Grid computing resources. We assess the strengths of these approaches as compared to current frameworks based on storage of binary or text files. We then describe in detail the implementation of such a system and provide a concrete description of how it was used to enable a complex analysis of fMRI time series data.
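A toy illustration of the general idea that queries become part of the analysis when timeseries live in an open-source DBMS; the table layout and column names are hypothetical, not the schema described in the paper:

```python
# Hedged illustration: selection and aggregation of fMRI timeseries happen
# inside the database, rather than as file-wrangling before analysis.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE timeseries (
        subject TEXT,
        region  TEXT,
        volume  INTEGER,
        bold    REAL
    )
""")
conn.executemany(
    "INSERT INTO timeseries VALUES (?, ?, ?, ?)",
    [("sub-01", "STG", v, 0.1 * v) for v in range(10)],
)

# A query that is itself an analysis step: per-region mean signal within a
# task window, computed in the database before anything touches Python arrays.
rows = conn.execute("""
    SELECT region, AVG(bold)
    FROM timeseries
    WHERE subject = 'sub-01' AND volume BETWEEN 2 AND 8
    GROUP BY region
""").fetchall()
print(rows)
```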


Subject(s)
Computer Communication Networks/statistics & numerical data , Database Management Systems , Image Processing, Computer-Assisted/methods , Information Storage and Retrieval/methods , Nervous System/anatomy & histology , Humans , Image Processing, Computer-Assisted/statistics & numerical data , Magnetic Resonance Imaging/statistics & numerical data , Nervous System/pathology , Positron-Emission Tomography/statistics & numerical data
20.
Neuron ; 56(6): 1116-26, 2007 Dec 20.
Article in English | MEDLINE | ID: mdl-18093531

ABSTRACT

Is there a neural representation of speech that transcends its sensory properties? Using fMRI, we investigated whether there are brain areas where neural activity during observation of sublexical audiovisual input corresponds to a listener's speech percept (what is "heard") independent of the sensory properties of the input. A target audiovisual stimulus was preceded by stimuli that (1) shared the target's auditory features (auditory overlap), (2) shared the target's visual features (visual overlap), or (3) shared neither the target's auditory nor visual features but were perceived as the target (perceptual overlap). In two left-hemisphere regions (pars opercularis, planum polare), the target evoked less activity when it was preceded by the perceptually overlapping stimulus than when preceded by stimuli that shared one of its sensory components. This pattern of neural facilitation indicates that these regions code sublexical speech at an abstract level corresponding to that of the speech percept.


Subject(s)
Brain Mapping , Brain/physiology , Mental Processes , Pattern Recognition, Visual/physiology , Speech Perception/physiology , Acoustic Stimulation/methods , Brain/blood supply , Electroencephalography , Functional Laterality/physiology , Humans , Image Processing, Computer-Assisted/methods , Individuality , Magnetic Resonance Imaging/methods , Oxygen/blood , Photic Stimulation/methods , Reaction Time