Results 1 - 8 of 8

1.
Hum Brain Mapp ; 30(9): 2804-12, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19117274

ABSTRACT

Neural correlates of driving and of decision making have been investigated separately, but little is known about the neural mechanisms underlying decision making in driving. Previous research distinguishes two types of decision making: reward-weighted and cost-weighted. Many neuroimaging studies have examined reward-weighted decision making, but few have examined cost-weighted decision making. Considering that driving involves serious risk, decision making in driving is assumed to be cost weighted. Therefore, neural substrates of cost-weighted decision making can be assessed by investigating drivers' decision making. In this study, neural correlates of resolving uncertainty in drivers' decision making were investigated. Turning right in left-hand traffic at a signalized intersection was simulated with computer-graphics-based animated videos. When the driver's view was occluded by a large truck, the uncertainty about oncoming traffic was resolved by an in-car video assist system that presented the driver's occluded view. Resolving the uncertainty reduced activity in a distributed set of areas including the amygdala and anterior cingulate. These results implicate the amygdala and anterior cingulate in cost-weighted decision making.


Subject(s)
Automobile Driving/psychology , Brain/physiology , Cognition/physiology , Decision Making/physiology , Mental Processes/physiology , Psychomotor Performance/physiology , Adult , Amygdala/anatomy & histology , Amygdala/physiology , Brain/anatomy & histology , Brain Mapping , Evoked Potentials/physiology , Female , Functional Laterality/physiology , Gyrus Cinguli/anatomy & histology , Gyrus Cinguli/physiology , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Neural Pathways/anatomy & histology , Neural Pathways/physiology , Neurons/physiology , Neuropsychological Tests , Photic Stimulation , Risk Assessment/methods , Young Adult
2.
Neuroreport ; 17(12): 1353-7, 2006 Aug 21.
Article in English | MEDLINE | ID: mdl-16951584

ABSTRACT

Neural processes underlying identification of durational contrasts were studied by comparing English and Japanese speakers on Japanese short/long vowel identification relative to consonant identification. Enhanced activity for the non-native contrast (Japanese short/long vowel identification by English speakers) was observed in brain regions involved with articulatory-auditory mapping (Broca's area, superior temporal gyrus, planum temporale, and cerebellum), but not in the supramarginal gyrus. Greater supramarginal gyrus activity for consonant identification than for short/long vowel identification by Japanese speakers implies that this region is more important for phonetic contrasts differing in place of articulation than for contrasts in vowel duration. These results support the hypothesis that the neural processes used to facilitate perception depend on the relative contribution of information important for articulatory planning and control.


Subject(s)
Brain Mapping , Brain/physiology , Multilingualism , Phonetics , Speech Perception/physiology , Adult , Brain/anatomy & histology , Brain/blood supply , Female , Functional Laterality , Humans , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Male , Middle Aged , Oxygen/blood
3.
Neuroimage ; 31(3): 1327-42, 2006 Jul 01.
Article in English | MEDLINE | ID: mdl-16546406

ABSTRACT

This 3-T fMRI study investigates brain regions similarly and differentially involved with listening to, and covert production of, singing relative to speech. Given the greater use of auditory-motor self-monitoring and imagery with respect to consonance in singing, brain regions involved with these processes are predicted to be differentially active for singing more than for speech. The stimuli consisted of six Japanese songs. A block design was employed in which the tasks for the subject were to listen passively to singing of the song lyrics, listen passively to speaking of the song lyrics, covertly sing the visually presented song lyrics, covertly speak the visually presented song lyrics, and rest. The conjunction of the passive listening and covert production tasks used in this study allows general neural processes underlying both perception and production to be discerned that are due neither exclusively to stimulus-induced auditory processing nor to low-level articulatory motor control. Brain regions involved with both perception and production for singing as well as speech were found to include the left planum temporale/superior temporal parietal region, as well as left and right premotor cortex, the lateral aspect of the VI lobule of the posterior cerebellum, anterior superior temporal gyrus, and planum polare. Greater activity for the singing over the speech condition for both the listening and covert production tasks was found in the right planum temporale. Greater activity for singing over speech was also present in brain regions involved with consonance: orbitofrontal cortex (listening task) and subcallosal cingulate (covert production task). The results are consistent with the planum temporale mediating representational transformation across auditory and motor domains in response to consonance for singing over that of speech.
Hemispheric laterality was assessed by paired t tests between active voxels in the contrast of interest and the left-right flipped contrast of interest calculated from images normalized to the left-right reflected template. Consistent with some hypotheses regarding hemispheric specialization, a pattern of differential laterality for speech over singing (both covert production and listening tasks) occurs in the left temporal lobe, whereas singing over speech (listening task only) occurs in the right temporal lobe.


Subject(s)
Auditory Perception/physiology , Cerebral Cortex/physiology , Image Processing, Computer-Assisted , Imagination/physiology , Imaging, Three-Dimensional , Magnetic Resonance Imaging , Music , Speech Perception/physiology , Speech/physiology , Voice/physiology , Adult , Brain/physiology , Brain Mapping , Cerebellum/physiology , Dominance, Cerebral/physiology , Female , Gyrus Cinguli/physiology , Humans , Male , Middle Aged , Nerve Net/physiology , Practice, Psychological , Temporal Lobe/physiology
4.
Neuroimage ; 28(3): 553-62, 2005 Nov 15.
Article in English | MEDLINE | ID: mdl-16055350

ABSTRACT

Left fusiform gyrus and left angular gyrus are considered to be involved with visual form processing and with associating visual and auditory (phonological) information in reading, respectively. However, a number of studies fail to show the contribution of these regions to these aspects of reading. Considerable differences in the types of stimuli and tasks used across studies may account for the discrepancy in results. This functional magnetic resonance imaging (fMRI) study controls aspects of the experimental stimuli and tasks to specifically investigate brain regions involved with visual form processing and character-to-phonological (i.e., simple grapheme-to-phonological) conversion processing for single letters. Subjects performed a two-back identification task using known Japanese and previously unknown Korean and Thai phonograms before and after training on one of the unknown orthographies. Japanese subjects learned either five Korean or five Thai phonograms. Brain regions related to visual form processing were assessed by comparing activity related to native (Japanese) phonograms with that for non-native (Korean and Thai) phonograms. There was no significant differential brain activity for visual form processing. Brain regions related to character-to-phonological conversion processing were assessed by comparing pre- and post-training activity for trained non-native phonograms with that for native phonograms and non-trained non-native phonograms. Significant differential activation post- relative to pre-training, exclusively for the trained non-native phonograms, was found in left angular gyrus. In addition, psychophysiologic interaction (PPI) analysis revealed greater integration of left angular gyrus with primary visual cortex as well as with superior temporal gyrus for the trained phonograms post- relative to pre-training.
The results suggest that left angular gyrus is involved with character-to-phonological conversion in letter perception.


Subject(s)
Cerebral Cortex/physiology , Learning/physiology , Reading , Adult , Cerebral Cortex/anatomy & histology , Female , Humans , Image Processing, Computer-Assisted , Language , Magnetic Resonance Imaging , Male , Psychomotor Performance/physiology , Temporal Lobe/anatomy & histology , Temporal Lobe/physiology , Visual Cortex/anatomy & histology , Visual Cortex/physiology
5.
J Cogn Neurosci ; 16(5): 805-16, 2004 Jun.
Article in English | MEDLINE | ID: mdl-15200708

ABSTRACT

Perception of speech is improved when presentation of the audio signal is accompanied by concordant visual speech gesture information. This enhancement is most prevalent when the audio signal is degraded. One potential means by which the brain affords perceptual enhancement is thought to be the integration of concordant information from multiple sensory channels at common sites of convergence: multisensory integration (MSI) sites. Some studies have identified potential sites in the superior temporal gyrus/sulcus (STG/S) that are responsive to multisensory information from the auditory speech signal and visual speech movement. One limitation of these studies is that they do not control for activity resulting from attentional modulation cued by such things as visual information signaling the onsets and offsets of the acoustic speech signal, or for activity resulting from MSI of properties of the auditory speech signal with aspects of gross visual motion that are not specific to place of articulation information. This fMRI experiment uses spatial-wavelet bandpass-filtered Japanese sentences presented with background multispeaker audio noise to discern brain activity reflecting MSI induced by auditory and visual correspondence of place of articulation information, while controlling for activity resulting from the above-mentioned factors. The experiment consists of a low-frequency (LF) filtered condition containing gross visual motion of the lips, jaw, and head without specific place of articulation information; a midfrequency (MF) filtered condition containing place of articulation information; and an unfiltered (UF) condition. Sites of MSI selectively induced by auditory and visual correspondence of place of articulation information were determined by the presence of activity for both the MF and UF conditions relative to the LF condition.
Based on these criteria, sites of MSI were found predominantly in the left middle temporal gyrus (MTG) and the left STG/S (including the auditory cortex). By controlling for additional factors that could also induce greater activity resulting from visual motion information, this study identifies potential MSI sites that we believe are involved with improved speech perception intelligibility.


Subject(s)
Auditory Perception/physiology , Cerebral Cortex/physiology , Gestures , Speech Perception/physiology , Visual Perception/physiology , Adult , Brain Mapping , Cerebral Cortex/anatomy & histology , Female , Functional Laterality/physiology , Humans , Magnetic Resonance Imaging/methods , Male , Physical Stimulation/methods , Psychomotor Performance/physiology
6.
Neuroimage ; 22(3): 1182-94, 2004 Jul.
Article in English | MEDLINE | ID: mdl-15219590

ABSTRACT

This experiment investigates neural processes underlying perceptual identification of the same phonemes by native- and second-language speakers. A model is proposed implicating the use of articulatory-auditory and articulatory-orosensory mappings to facilitate perceptual identification under conditions in which the phonetic contrast is ambiguous, as it is for second-language speakers. In contrast, native-language speakers are predicted to use auditory-based phonetic representations to a greater extent than second-language speakers for perceptual identification. The English /r-l/ phonetic contrast, although easy for native English speakers, is extremely difficult for native Japanese speakers who learned English as a second language after childhood. Twenty-two native English and twenty-two native Japanese speakers participated in this study. While undergoing event-related fMRI, subjects were aurally presented with syllables starting with /r/, /l/, or a vowel and were required to rapidly identify the phoneme perceived by pushing one of three buttons with the left thumb. Consistent with the proposed model, the results show greater activity for second- over native-language speakers during perceptual identification of /r/ and /l/ relative to vowels in brain regions implicated in instantiating forward and inverse articulatory-auditory and articulatory-orosensory models [Broca's area, anterior insula, anterior superior temporal sulcus/gyrus (STS/G), planum temporale (PT), superior temporal parietal area (Stp), supramarginal gyrus (SMG), and cerebellum]. The results further show that activity in brain regions implicated in instantiating these internal models is correlated with better /r/ and /l/ identification performance for second-language speakers.
Greater activity for native-language speakers, especially in the anterior STG/S, for /r/ and /l/ perceptual identification is consistent with the hypothesis that native-language speakers use auditory phonetic representations more extensively than second-language speakers.


Subject(s)
Auditory Perception/physiology , Brain Mapping , Brain/physiology , Multilingualism , Phonetics , Speech Perception/physiology , Adult , Auditory Pathways/physiology , Canada , Female , Humans , Japan , Magnetic Resonance Imaging , Male , Mouth/physiology , Sensation/physiology , Speech Articulation Tests
7.
Neuroreport ; 14(17): 2213-8, 2003 Dec 02.
Article in English | MEDLINE | ID: mdl-14625450

ABSTRACT

This fMRI study explores brain regions involved with the perceptual enhancement afforded by observation of visual speech gesture information. Subjects passively identified words presented in the following conditions: audio-only, audiovisual, audio-only with noise, audiovisual with noise, and visual-only. The brain may use concordant audio and visual information to enhance perception by integrating the information at a converging multisensory site. Consistent with the response properties of multisensory integration sites, enhanced activity in middle and superior temporal gyrus/sulcus was greatest when concordant audiovisual stimuli were presented with acoustic noise. Activity found in brain regions involved with planning and execution of speech production, in response to visual speech presented with degraded or absent auditory stimulation, is consistent with the use of an additional pathway through which speech perception is facilitated by a process of internally simulating the intended speech act of the observed speaker.


Subject(s)
Acoustic Stimulation/methods , Brain/physiology , Photic Stimulation/methods , Speech Perception/physiology , Visual Perception/physiology , Adult , Humans , Magnetic Resonance Imaging/methods , Male , Middle Aged , Neural Pathways/physiology , Psychomotor Performance/physiology
8.
Neuroimage ; 19(1): 113-24, 2003 May.
Article in English | MEDLINE | ID: mdl-12781731

ABSTRACT

Adult native Japanese speakers have difficulty perceiving the English /r-l/ phonetic contrast even after years of exposure. However, after extensive perceptual identification training, long-lasting improvement in identification performance can be attained. This fMRI study investigates localized changes in brain activity associated with 1 month of extensive feedback-based perceptual identification training by native Japanese speakers learning the English /r-l/ phonetic contrast. Before and after training, separate functional brain imaging sessions were conducted for identification of the English /r-l/ contrast (difficult for Japanese speakers), the /b-g/ contrast (easy), and the /b-v/ contrast (difficult), in which signal-correlated noise served as the reference control condition. Neural plasticity, denoted by exclusive enhancement of brain activity for the /r-l/ contrast, involves not only reorganization in brain regions concerned with acoustic-phonetic processing (superior and medial temporal areas) but also the recruitment of additional bilateral cortical regions (supramarginal gyrus, planum temporale, Broca's area, premotor cortex, supplementary motor area) and subcortical regions (cerebellum, basal ganglia, substantia nigra) involved with auditory-articulatory (perceptual-motor) mappings related to verbal speech processing and learning. Contrary to what one might expect, brain activity for perception of a difficult contrast does not come to resemble that of an easy contrast as learning proceeds. Rather, the results support the hypothesis that improved identification performance may be due to the acquisition of auditory-articulatory mappings that allow perception to be made in reference to potential action.


Subject(s)
Language , Learning/physiology , Neuronal Plasticity , Phonetics , Speech Perception/physiology , Brain/physiology , Brain Mapping , Feedback , Female , Humans , Magnetic Resonance Imaging , Male