Results 1 - 20 of 34
1.
Hear Res ; 422: 108546, 2022 09 01.
Article in English | MEDLINE | ID: mdl-35660125

ABSTRACT

The gap transfer illusion is an auditory illusion in which a temporal gap in a long glide is perceived as if it had transferred to a physically continuous shorter glide. The illusion typically occurs when the long and the shorter glide cross each other at their temporal midpoints, where the long glide is physically divided by the gap. The occurrence of the gap transfer illusion was investigated in stimuli in which the duration and the slope of the long glide were 5000 ms and ∼0.8 oct/s, respectively. The shorter glide was given different frequency ranges and different temporal ranges, so that its time-frequency slope also varied. The overlap configuration of these crossing glides was varied as well. As control stimuli, we used stimuli in which a continuous long glide crossed a shorter glide with a gap, i.e., the configuration opposite to that of the gap-transfer stimuli above, as well as stimuli in which both crossing glides were continuous. The perception of two crossing tones tended to be facilitated when the glides differed in duration and/or slope. When the glides were relatively similar in duration and slope, however, bouncing percepts appeared more often. Similarity between the crossing tones thus promoted auditory bouncing, while dissimilarity between them facilitated the crossing percept. When the crossing percept dominated in gap-transfer stimuli, the gap transfer illusion took place in a typical manner, but the illusory transfer of the gap could occur even when the crossing percept was not dominant. When the shorter glide was as short as 500 ms, the crossing percept and the gap transfer illusion were robust. The mechanism of the illusion was examined in terms of factors that can influence the perceptual integration of auditory stimulus edges, i.e., onsets and offsets, of physically different sounds.
Much like the perceptual construction of speech units, we suggest that the auditory system uses a rough time window of several hundred milliseconds to construct an initial skeleton percept of auditory events. The present data indicated the importance of temporal proximity, rather than frequency proximity, between sound edges in the illusory tone construction.


Subject(s)
Illusions , Time Perception , Humans , Auditory Perception , Sound , Speech , Acoustic Stimulation
2.
Front Psychol ; 13: 778018, 2022.
Article in English | MEDLINE | ID: mdl-35222184

ABSTRACT

The purpose of this study was to investigate how the subjective impression of English speech changes when the pause duration at punctuation marks is varied. Two listening experiments were performed in which English speech segments read from written texts were rated on a variety of evaluation items by both native English speakers and non-native speakers (native Chinese speakers and native Japanese speakers). The ratings were then subjected to factor analysis. In the first experiment, the pauses in three segments were all set to the same duration, which varied from 0.075 to 4.8 s across conditions. Participants rated the segments on 23 evaluation items on a rating scale from 1 to 10. A varimax rotation after PCA (principal component analysis) yielded two factors related to speech style, which could be interpreted as representing speech naturalness and speech rate. Speech segments with a pause duration of 0.6 s received the highest naturalness evaluation, while perceived speech rate decreased as the physical pause duration increased, even though the utterance portions themselves were unchanged. In the second experiment, a full-factorial design of pause durations (0.15, 0.3, 0.6, 1.2, and 2.4 s) within and between sentences, i.e., for commas and for periods, was implemented in two speech segments. The original speech segments and speech segments without any pauses were also included as control conditions. From ratings on 12 evaluation items, two factors representing speech naturalness and speech rate were obtained, similar to Experiment 1. The results showed again that perceived speech rate decreased with an increase in pause duration alone. As for speech naturalness, the highest evaluations occurred when pause durations were 0.6 s within sentences, and either 0.6 or 1.2 s between sentences. These results suggest that fixing all pause durations at 0.6 s is a practical guideline when training non-native speakers to make their spoken English sound more natural.
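The analysis pipeline described above (item ratings reduced by PCA, then varimax-rotated into interpretable factors) can be sketched in NumPy. The rating matrix below is simulated and purely hypothetical (two latent dimensions standing in for naturalness and speech rate); the varimax routine is the standard iterative SVD algorithm, not the authors' code.

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Orthogonally rotate a loading matrix to maximize the varimax criterion."""
    p, k = loadings.shape
    R = np.eye(k)
    crit = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        u, s, vt = np.linalg.svd(
            loadings.T @ (L ** 3 - (gamma / p) * L @ np.diag((L ** 2).sum(axis=0))))
        R = u @ vt
        if s.sum() - crit < tol:   # stop when the criterion no longer improves
            break
        crit = s.sum()
    return loadings @ R

# Hypothetical data: 60 rated speech segments x 23 evaluation items, driven by
# two latent dimensions; items 1-12 track one dimension, items 13-23 the other.
rng = np.random.default_rng(0)
latent = rng.normal(size=(60, 2))
W = np.zeros((23, 2))
W[:12, 0] = 1.0
W[12:, 1] = 1.0
ratings = latent @ W.T + 0.3 * rng.normal(size=(60, 23))

# Standardize, extract two principal components, then varimax-rotate.
X = (ratings - ratings.mean(0)) / ratings.std(0)
u, s, vt = np.linalg.svd(X, full_matrices=False)
loadings = vt[:2].T * s[:2] / np.sqrt(len(X))  # loadings = eigvec * sqrt(eigval)
rotated = varimax(loadings)
```

After rotation each item should load mainly on a single factor (simple structure), which is what makes the factors interpretable as "naturalness" and "rate".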

3.
Sci Rep ; 12(1): 3002, 2022 02 22.
Article in English | MEDLINE | ID: mdl-35194098

ABSTRACT

The present investigation focused on how temporal degradation affected intelligibility in two types of languages, i.e., a tonal language (Mandarin Chinese) and a non-tonal language (Japanese). The temporal resolution of common daily-life sentences spoken by native speakers was systematically degraded with mosaicking (mosaicising), in which the power of the original speech in each regularly spaced time-frequency unit was averaged and the temporal fine structure was removed. The results showed very similar patterns of variation in intelligibility for these two languages over a wide range of temporal resolutions, implying that temporal degradation crucially affected speech cues other than tonal cues in degraded speech without temporal fine structure. Specifically, the intelligibility of both languages stayed at ceiling up to about a 40-ms segment duration; performance then gradually declined with increasing segment duration and reached a floor at segment durations of about 150 ms or longer. The same 40-ms limit on ceiling performance appeared for another method of degradation, i.e., local time-reversal, implying that a common temporal processing mechanism underlies both limits. The general tendency fitted a dual time-window model of speech processing, in which a short (~20-30 ms) and a long (~200 ms) time window run in parallel.
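The core mosaicking operation, averaging power within each regularly spaced time-frequency unit, can be sketched as follows. This is a simplified stand-in, not the study's processing chain: the band count, frame hop, and random "spectrogram" are hypothetical, and real mosaic speech would be resynthesized from the averaged powers.

```python
import numpy as np

def mosaicize(power, seg_frames, band_bins=1):
    """Average a power spectrogram within regularly spaced time-frequency tiles,
    discarding all temporal detail finer than one tile."""
    out = np.empty_like(power, dtype=float)
    n_f, n_t = power.shape
    for f0 in range(0, n_f, band_bins):
        for t0 in range(0, n_t, seg_frames):
            out[f0:f0 + band_bins, t0:t0 + seg_frames] = \
                power[f0:f0 + band_bins, t0:t0 + seg_frames].mean()
    return out

# Hypothetical power spectrogram: 20 frequency bands x 100 frames at a 5-ms hop,
# so seg_frames=8 corresponds to the 40-ms segment duration discussed above.
power = np.random.default_rng(0).random((20, 100))
mosaic = mosaicize(power, seg_frames=8)
```

With band_bins=1, averaging runs separately within each frequency band per segment; coarser frequency tiles can be simulated by raising band_bins. Note that the averaging preserves the total power while flattening everything within a tile.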

4.
Iperception ; 11(5): 2041669520958430, 2020.
Article in English | MEDLINE | ID: mdl-33149877

ABSTRACT

To create a self-motion (vection) situation in three-dimensional computer graphics (CG), there are mainly two ways: moving a camera toward an object ("camera moving") or moving the object and its surrounding environment toward the camera ("object moving"). As the two methods differ considerably in the amount of computation involved in generating CG, knowing how each method affects self-motion perception is important to CG creators and psychologists. Here, we simulated self-motion in a virtual three-dimensional CG world, without stereoscopic disparity, which correctly reflected lighting and glare. Self-motion was induced by "camera moving" or by "object moving," which in the present experiments was done by moving a tunnel surrounding the camera toward the camera. This produced two retinal images that were virtually identical in Experiment 1 and very similar in Experiments 2 and 3. The stimuli were presented on a large plasma display to 15 naive participants and induced substantial vection. Three experiments comparing vection strength between the two methods found weak but significant differences. The results suggest that when creating CG visual experiences, "camera moving" induces the stronger vection.

5.
Multisens Res ; : 1-21, 2020 Nov 26.
Article in English | MEDLINE | ID: mdl-33535165

ABSTRACT

Experiments that focus on how humans perceive temporal, spatial or synaesthetic congruency in audiovisual sensory information have often employed stimuli consisting of a Gabor patch and an amplitude-modulated (AM) or frequency-modulated (FM) sound. Introducing similarity between the static and dynamic features of the Gabor patch and the (carrier) frequency or modulation frequency of the sound is often assumed to be effective enough to induce congruency. However, comparative empirical data on the perceived congruency of various stimulus parameters are not readily available, and in particular with respect to sound modulation, it is still not clear which type (AM or FM) best induces perceived congruency in tandem with various patch parameters. In two experiments, we examined Gabor patches of various spatial frequencies with flickering (2, 3 and 4 flickers/s) or drifting (0.5, 1.0 and 1.5 degrees/s) gratings in combinations with AM or FM tones of 2-, 3- and 4-Hz modulation and 500-, 1000- and 2000-Hz carrier frequencies. Perceived congruency ratings were obtained by asking participants to rate stimulus (in)congruency from 1 (incongruent) to 7 (congruent). The data showed that varying the spatial frequency of the Gabor patch and the carrier frequency of the modulated tone had comparatively little impact on perceived congruency. In line with previous findings, similarity between the temporal frequency of the Gabor patch and the modulation frequency of the tone effectively promoted perceived congruency. Furthermore, direct comparisons convincingly showed that AM tones in combination with flickering Gabor patches received significantly higher audiovisual congruency ratings than FM tones.
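The auditory stimuli described above can be sketched numerically: an AM tone imposes a slow envelope on a fixed carrier, while an FM tone sweeps the instantaneous frequency and so requires integrating it to obtain the phase. Carrier (1000 Hz) and modulation (3 Hz) frequencies below are taken from the abstract; the FM frequency deviation and sample rate are assumptions for illustration only.

```python
import numpy as np

fs = 44100                       # assumed sample rate, Hz
t = np.arange(int(fs * 1.0)) / fs
fc, fm = 1000.0, 3.0             # carrier and modulation frequencies (from the abstract)

# AM tone: 3-Hz sinusoidal envelope on a 1000-Hz carrier (100% modulation depth).
am = 0.5 * (1.0 + np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# FM tone: instantaneous frequency fc + dev*sin(2*pi*fm*t); the phase is its
# time integral: 2*pi*fc*t - (dev/fm)*cos(2*pi*fm*t).
dev = 50.0                       # hypothetical frequency deviation, Hz
fm_tone = np.sin(2 * np.pi * fc * t - (dev / fm) * np.cos(2 * np.pi * fm * t))
```

Writing the FM phase as the integral of the instantaneous frequency (rather than naively multiplying the carrier phase by a modulator) is what keeps the sweep free of phase discontinuities.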

6.
Iperception ; 9(4): 2041669518777259, 2018.
Article in English | MEDLINE | ID: mdl-30090320

ABSTRACT

When the objects in a typical stream-bounce stimulus are made to rotate on a circular trajectory, not two but four percepts can be observed: streaming, bouncing, clockwise rotation, and counterclockwise rotation, often with spontaneous reversals between them. When streaming or bouncing is perceived, the objects seem to move on individual, opposite trajectories. When rotation is perceived, however, the objects seem to move in unison on the same circular trajectory, as if constituting the edges of a virtual pane that pivots around its axis. We called this stimulus the Polka Dance stimulus. Experiments showed that with some viewing experience, the viewer can "hold" the rotation percepts. Yet even when doing so, a short sound at the objects' point of coincidence can induce a bouncing percept. Besides this fast percept switching from rotation to bouncing, an external stimulus might also induce slower rotation direction switches, from clockwise to counterclockwise, or vice versa.

7.
Front Hum Neurosci ; 12: 149, 2018.
Article in English | MEDLINE | ID: mdl-29740295

ABSTRACT

The temporal resolution needed for Japanese speech communication was measured. A new experimental paradigm that can reflect the spectro-temporal resolution necessary for healthy listeners to perceive speech is introduced. As a first step, we report listeners' intelligibility scores for Japanese speech with systematically degraded temporal resolution, so-called "mosaic speech": speech mosaicized in the coordinates of time and frequency. The results of two experiments show that mosaic speech cut into short static segments was almost perfectly intelligible at a temporal resolution of 40 ms or finer. Intelligibility dropped at a temporal resolution of 80 ms, but was still around the 50%-correct level. The data are in line with previous results showing that speech signals separated into short temporal segments of <100 ms can be remarkably robust in terms of linguistic-content perception against drastic manipulations within each segment, such as partial signal omission or temporal reversal. The human perceptual system thus can extract meaning from unexpectedly rough temporal information in speech. The process resembles the visual system stringing together static movie frames of ~40 ms into vivid motion.

8.
Sci Rep ; 7(1): 17116, 2017 12 07.
Article in English | MEDLINE | ID: mdl-29215027

ABSTRACT

The inferior frontal and superior temporal areas in the left hemisphere are crucial for human language processing. In the present study, we investigated the magnetic mismatch field (MMF) evoked by voice stimuli in 3- to 5-year-old typically developing (TD) children and children with autism spectrum disorder (ASD) using child-customized magnetoencephalography (MEG). The children with ASD exhibited significantly decreased MMF amplitude in the left superior temporal gyrus compared with the TD children. When we classified the children with ASD according to the presence or absence of a speech onset delay (ASD-SOD and ASD-NoSOD, respectively) and compared them with the TD children, both ASD groups exhibited decreased activation in the left superior temporal gyrus compared with the TD children. In contrast, the ASD-SOD group exhibited increased activity in the left frontal cortex (i.e., pars orbitalis) compared with the other groups. For all children with ASD, there was a significant negative correlation between the MMF amplitude in the left pars orbitalis and language performance. This investigation is the first to show a significant difference in two distinct MMF regions in ASD-SOD children compared with TD children.


Subject(s)
Autism Spectrum Disorder/physiopathology , Frontal Lobe/physiopathology , Language Development Disorders/physiopathology , Language Development , Speech Perception , Autism Spectrum Disorder/complications , Child, Preschool , Evoked Potentials, Auditory , Female , Humans , Language Development Disorders/etiology , Magnetoencephalography , Male , Temporal Lobe/physiopathology
9.
J Speech Lang Hear Res ; 60(2): 465-470, 2017 02 01.
Article in English | MEDLINE | ID: mdl-28114676

ABSTRACT

Purpose: The purpose of this study was to assess cortical hemodynamic response patterns in 3- to 7-year-old children listening to two speech modes: normally vocalized and whispered speech. Understanding whispered speech requires processing of the relatively weak, noisy signal, as well as the cognitive ability to understand the speaker's reason for whispering. Method: Near-infrared spectroscopy (NIRS) was used to assess changes in cortical oxygenated hemoglobin from 16 typically developing children. Results: A profound difference in oxygenated hemoglobin levels between the speech modes was found over left ventral sensorimotor cortex. In particular, over areas that represent speech articulatory body parts and motion, such as the larynx, lips, and jaw, oxygenated hemoglobin was higher for whisper than for normal speech. The weaker stimulus, in terms of sound energy, thus induced the more profound hemodynamic response. This, moreover, occurred over areas involved in speech articulation, even though the children did not overtly articulate speech during measurements. Conclusion: Because whisper is a special form of communication not often used in daily life, we suggest that the hemodynamic response difference over left ventral sensorimotor cortex resulted from inner (covert) practice or imagination of the different articulatory actions necessary to produce whisper as opposed to normal speech.


Subject(s)
Cerebral Cortex/physiology , Cerebrovascular Circulation/physiology , Speech Perception/physiology , Child , Child, Preschool , Female , Hemoglobins/metabolism , Humans , Male , Oxygen/blood , Sound Spectrography , Spectroscopy, Near-Infrared , Speech Acoustics
10.
Neuroimage Clin ; 12: 300-5, 2016.
Article in English | MEDLINE | ID: mdl-27551667

ABSTRACT

The auditory-evoked P1m, recorded by magnetoencephalography (MEG), reflects central auditory processing ability in human children. One recent study revealed that asynchrony of P1m between the right and left hemispheres reflected a central auditory processing disorder (i.e., attention deficit hyperactivity disorder, ADHD) in children. However, to date, the relationship between auditory P1m right-left hemispheric synchronization and the comorbidity of hyperactivity in children with autism spectrum disorder (ASD) is unknown. In this study, building on the previous report of P1m asynchrony in children with ADHD, we investigated whether voice-evoked P1m right-left hemispheric synchronization is related to the symptom of hyperactivity in children with ASD. In addition to synchronization, we investigated right-left hemispheric lateralization. Our findings failed to demonstrate significant differences in these values between ASD children with and without the symptom of hyperactivity, which was evaluated using the Autism Diagnostic Observation Schedule, Generic (ADOS-G) subscale. However, there was a significant correlation between the degree of hemispheric synchronization and the ability to keep still during the 12-minute MEG recording periods. Our results also suggested that asynchrony in the bilateral auditory processing system is associated with ADHD-like symptoms in children with ASD.


Subject(s)
Attention Deficit Disorder with Hyperactivity/diagnosis , Attention Deficit Disorder with Hyperactivity/etiology , Autism Spectrum Disorder/complications , Electroencephalography Phase Synchronization/physiology , Evoked Potentials, Auditory/physiology , Acoustic Stimulation , Analysis of Variance , Child , Child, Preschool , Electroencephalography , Female , Functional Laterality/physiology , Humans , Magnetoencephalography , Male , Reaction Time/physiology , Statistics as Topic
11.
Front Psychol ; 7: 517, 2016.
Article in English | MEDLINE | ID: mdl-27199790

ABSTRACT

Factor analysis (principal component analysis followed by varimax rotation) had shown that 3 common factors appear across 20 critical-band power fluctuations derived from spoken sentences of eight different languages [Ueda et al. (2010). Fechner Day 2010, Padua]. The present study investigated the contributions of such power-fluctuation factors to speech intelligibility. The method of factor analysis was modified to obtain factors suitable for resynthesizing speech sounds as 20-critical-band noise-vocoded speech; the modification ensured that the resynthesized speech sounds, which were used for an intelligibility test, were not accompanied by a steady background noise caused by the data-reduction procedure. Spoken sentences of British English, Japanese, and Mandarin Chinese were subjected to this modified analysis. Confirming the earlier analysis, 3-4 factors were indeed common to these languages. The number of power-fluctuation factors needed to make noise-vocoded speech intelligible was then examined. Critical-band power fluctuations of the Japanese spoken sentences were resynthesized from the obtained factors, resulting in noise-vocoded-speech stimuli, and the intelligibility of these stimuli was tested by 12 native Japanese speakers. Japanese mora (a syllable-like phonological unit) identification performance was measured as the number of factors varied from 1 to 9. Statistically significant improvement in intelligibility was observed as the number of factors was increased stepwise up to 6. The 12 listeners identified 92.1% of the morae correctly on average in the 6-factor condition. Intelligibility improved sharply when the number of factors changed from 2 to 3: in this step, the cumulative contribution ratio of the factors improved by only 10.6%, from 37.3 to 47.9%, but average mora identification leaped from 6.9 to 69.2%. The results indicated that, if the number of factors is 3 or more, elementary linguistic information is preserved in such noise-vocoded speech.
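The basic noise-vocoding operation, keeping each band's slow power fluctuation while replacing its carrier with band-limited noise, can be sketched with SciPy. This is a minimal illustration, not the study's pipeline: it uses a single band with assumed edge frequencies and a synthetic input, whereas the study used 20 critical bands and resynthesized the envelopes from the extracted factors.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, edges):
    """Keep each band's envelope; replace its carrier with band-limited noise."""
    rng = np.random.default_rng(0)
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype='bandpass', fs=fs, output='sos')
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))                   # slow power fluctuation
        noise = sosfiltfilt(sos, rng.standard_normal(len(x)))
        out += env * noise                            # envelope-modulated noise
    return out

# Hypothetical input: a 4-Hz amplitude-modulated 1-kHz tone, 1 s at 16 kHz.
fs = 16000
t = np.arange(fs) / fs
x = (1.0 + np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 1000 * t)
y = noise_vocode(x, fs, edges=[300.0, 2000.0])
```

The output carries the original band envelopes on noise carriers, so spectral and temporal fine structure is destroyed while the between-band power fluctuations, the quantity factor-analyzed above, survive.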

12.
PLoS One ; 11(3): e0151374, 2016.
Article in English | MEDLINE | ID: mdl-27003807

ABSTRACT

Expectancy for an upcoming musical chord, harmonic expectancy, is supposedly based on automatic activation of tonal knowledge. Since previous studies implicitly relied on interpretations based on Western music theory, the underlying computational processes involved in harmonic expectancy and how it relates to tonality need further clarification. In particular, short chord sequences which cannot lead to unique keys are difficult to interpret in music theory. In this study, we examined effects of preceding chords on harmonic expectancy from a computational perspective, using stochastic modeling. We conducted a behavioral experiment, in which participants listened to short chord sequences and evaluated the subjective relatedness of the last chord to the preceding ones. Based on these judgments, we built stochastic models of the computational process underlying harmonic expectancy. Following this, we compared the explanatory power of the models. Our results imply that, even when listening to short chord sequences, internally constructed and updated tonal assumptions determine the expectancy of the upcoming chord.


Subject(s)
Auditory Perception/physiology , Pitch Perception/physiology , Acoustic Stimulation/methods , Adult , Female , Humans , Male , Music , Probability , Psychoacoustics , Reaction Time/physiology , Young Adult
13.
Autism Res ; 9(11): 1216-1226, 2016 11.
Article in English | MEDLINE | ID: mdl-26808455

ABSTRACT

The P1m component of the auditory evoked magnetic field is the earliest cortical response associated with language acquisition. However, the growth curve of the P1m component is unknown in both typically developing (TD) and atypically developing children. The aim of this study is to clarify the developmental pattern of this component when evoked by binaural human voice stimulation using child-customized magnetoencephalography. A total of 35 young TD children (32-121 months of age) and 35 children with autism spectrum disorder (ASD) (38-111 months of age) participated in this study. This is the first report to demonstrate an inverted U-shaped growth curve for the P1m dipole intensity in the left hemisphere in TD children. In addition, our results revealed a more diversified age-related distribution of auditory brain responses in 3- to 9-year-old children with ASD. These results demonstrate the diversified growth curve of the P1m component in ASD during young childhood, which is a crucial period for first language acquisition. Autism Res 2016, 9: 1216-1226. © 2016 International Society for Autism Research, Wiley Periodicals, Inc.


Subject(s)
Autism Spectrum Disorder/physiopathology , Brain/physiopathology , Child Development , Acoustic Stimulation/methods , Child , Child, Preschool , Cross-Sectional Studies , Female , Humans , Magnetoencephalography , Male
14.
Neuroimage ; 101: 440-7, 2014 Nov 01.
Article in English | MEDLINE | ID: mdl-25067819

ABSTRACT

The relationship between language development in early childhood and the maturation of brain functions related to the human voice remains unclear. Because the development of the auditory system likely correlates with language development in young children, we investigated the relationship between the auditory evoked field (AEF) and language development using non-invasive child-customized magnetoencephalography (MEG) in a longitudinal design. Twenty typically developing children were recruited (aged 36-75 months at the first measurement). These children were re-investigated 11-25 months after the first measurement. The AEF component P1m was examined to investigate the developmental changes in each participant's neural brain response to vocal stimuli. In addition, we examined the relationships between brain responses and language performance. P1m peak amplitude in response to vocal stimuli significantly increased in both hemispheres in the second measurement compared to the first measurement. However, no differences were observed in P1m latency. Notably, our results reveal that children with greater increases in P1m amplitude in the left hemisphere performed better on linguistic tests. Thus, our results indicate that P1m evoked by vocal stimuli is a neurophysiological marker for language development in young children. Additionally, MEG is a technique that can be used to investigate the maturation of the auditory cortex based on auditory evoked fields in young children. This study is the first to demonstrate a significant relationship between the development of the auditory processing system and the development of language abilities in young children.


Subject(s)
Auditory Cortex/physiology , Evoked Potentials, Auditory/physiology , Language Development , Magnetoencephalography/methods , Biomarkers , Child , Child, Preschool , Female , Functional Laterality/physiology , Humans , Longitudinal Studies , Magnetoencephalography/instrumentation , Male , Speech Perception/physiology
15.
Front Hum Neurosci ; 8: 170, 2014.
Article in English | MEDLINE | ID: mdl-24715860

ABSTRACT

A child-customized magnetoencephalography system was used to investigate the somatosensory evoked field (SEF) in 3- to 4-year-old children. Three stimulus conditions were used: the children received either tactile-only stimulation to their left index finger or one of two kinds of visuotactile stimulation. In the two visuotactile conditions, the children received tactile stimulation to their finger while they watched a video of tactile stimulation applied either to someone else's finger (the finger-touch condition) or to someone else's toe (the toe-touch condition). The latencies and source strengths of equivalent current dipoles (ECDs) over contralateral (right) somatosensory cortex were analyzed. In the preschoolers who provided valid ECDs, the stimulus conditions induced an early-latency ECD occurring between 60 and 68 ms, mainly with an anterior direction. We further identified a middle-latency ECD between 97 and 104 ms, which predominantly had a posterior direction. Finally, initial evidence was found for a late-latency ECD at about 139-151 ms, again more often with an anterior direction. Differences were found in the source strengths of the middle-latency ECDs among the stimulus conditions. For the paired comparisons that could be formed, ECD source strength was more pronounced in the finger-touch condition than in the tactile-only and the toe-touch conditions. Although more research is necessary to expand the data set, this suggests that visual information modulated the preschool SEF. The finding that ECD source strength was higher when seen and felt touch occurred to the same body part, as compared to a different body part, might further indicate that connectivity between visual and tactile information is indexed in preschool somatosensory cortical activity, already in a somatotopic way.

16.
PLoS One ; 8(11): e80126, 2013.
Article in English | MEDLINE | ID: mdl-24278247

ABSTRACT

Optimal brain sensitivity to the fundamental frequency (F0) contour changes in the human voice is important for understanding a speaker's intonation, and consequently, the speaker's attitude. However, whether sensitivity in the brain's response to a human voice F0 contour change varies with an interaction between an individual's traits (i.e., autistic traits) and a human voice element (i.e., presence or absence of communicative action such as calling) has not been investigated. In the present study, we investigated the neural processes involved in the perception of F0 contour changes in the Japanese monosyllables "ne" and "nu." "Ne" is an interjection that means "hi" or "hey" in English; pronunciation of "ne" with a high falling F0 contour is used when the speaker wants to attract a listener's attention (i.e., social intonation). Meanwhile, the Japanese concrete noun "nu" has no communicative meaning. We applied an adaptive spatial filtering method to the neuromagnetic time course recorded by whole-head magnetoencephalography (MEG) and estimated the spatiotemporal frequency dynamics of event-related cerebral oscillatory changes in beta band during the oddball paradigm. During the perception of the F0 contour change when "ne" was presented, there was event-related de-synchronization (ERD) in the right temporal lobe. In contrast, during the perception of the F0 contour change when "nu" was presented, ERD occurred in the left temporal lobe and in the bilateral occipital lobes. ERD that occurred during the social stimulus "ne" in the right hemisphere was significantly correlated with a greater number of autistic traits measured according to the Autism Spectrum Quotient (AQ), suggesting that the differences in human voice processing are associated with higher autistic traits, even in non-clinical subjects.


Subject(s)
Autistic Disorder/physiopathology , Brain/physiopathology , Voice , Adult , Female , Humans , Magnetoencephalography , Male , Young Adult
17.
Neuroimage Clin ; 2: 394-401, 2013.
Article in English | MEDLINE | ID: mdl-24179793

ABSTRACT

Autism spectrum disorder (ASD) is often described as a disorder of aberrant neural connectivity and/or aberrant hemispheric lateralization. Although it is important to study the pathophysiology of the developing ASD cortex, the physiological connectivity of the brain in young children with ASD under conscious conditions has not yet been described. Magnetoencephalography (MEG) is a noninvasive brain imaging technique that is practical for use in young children. MEG produces a reference-free signal and is, therefore, an ideal tool for computing the coherence between two distant cortical rhythms. Using a custom child-sized MEG, we recently reported that 5- to 7-year-old children with ASD (n = 26) have inherently different neural pathways than typically developing (TD) children that contribute to their relatively preserved performance of visual tasks. In this study, we performed non-invasive measurements of the brain activity of 70 young children (3-7 years old, of which 18 were aged 3-4 years), a sample consisting of 35 ASD children and 35 TD children. Physiological connectivity and the laterality of physiological connectivity were assessed using intrahemispheric coherence for 9 frequency bands. As a result, significant rightward connectivity between the parietotemporal areas, via gamma band oscillations, was found in the ASD group. As we obtained the non-invasive measurements using a custom child-sized MEG, this is the first study to demonstrate a rightward-lateralized neurophysiological network in conscious young children (including children aged 3-4 years) with ASD.

18.
Mol Autism ; 4(1): 38, 2013 Oct 08.
Article in English | MEDLINE | ID: mdl-24103585

ABSTRACT

BACKGROUND: Magnetoencephalography (MEG) is used to measure the auditory evoked magnetic field (AEF), which reflects language-related performance. In young children, however, simultaneously quantifying the bilateral auditory evoked response during binaural hearing is difficult with conventional adult-sized MEG systems. A child-customised MEG device has recently facilitated bi-hemispheric recordings, even in young children. Using this device, we previously reported that language-related performance was reflected in the strength of the early component (P50m) of the AEF in typically developing (TD) young children (2 to 5 years old) [Eur J Neurosci 2012, 35:644-650]. The aim of this study was to investigate how this neurophysiological index in each hemisphere correlates with language performance in children with autism spectrum disorder (ASD) and in TD children. METHODS: Using the child-customised MEG device, we investigated the P50m evoked bilaterally by voice stimuli (/ne/) in 33 young children (3 to 7 years old) with ASD and in 30 TD young children. The children were matched according to their age (in months) and gender. Most of the children with ASD were high-functioning. RESULTS: The children with ASD exhibited significantly less leftward lateralisation of P50m intensity than the TD children. Furthermore, a multiple regression analysis indicated that a shorter P50m latency in both hemispheres was specifically correlated with higher language-related performance in the TD children, whereas this latency was not correlated with non-verbal cognitive performance or chronological age. The children with ASD showed no correlation between P50m latency and language-related performance; instead, increasing chronological age was a significant predictor of shorter P50m latency in the right hemisphere. CONCLUSIONS: Using a child-customised MEG device, we studied the P50m component evoked by binaural human voice stimuli in young ASD and TD children to examine differences in auditory cortex function associated with language development. Our results suggest atypical auditory cortex function in young children with ASD, regardless of language development.
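Hemispheric lateralisation of an evoked-response amplitude, as contrasted between groups above, is often summarised with a laterality index. The formula and example values below are a common convention and hypothetical numbers, not the paper's stated method or data.

```python
def laterality_index(left_amp: float, right_amp: float) -> float:
    """LI = (L - R) / (L + R); LI > 0 indicates leftward lateralisation."""
    return (left_amp - right_amp) / (left_amp + right_amp)

# Hypothetical P50m intensities (arbitrary units) for illustration:
li_td = laterality_index(28.0, 20.0)    # clearly leftward
li_asd = laterality_index(22.0, 21.0)   # near-symmetric, less leftward
```

A group difference like the one reported ("less leftward lateralisation" in ASD) would appear here as `li_asd` being smaller than `li_td` while both remain bounded in [-1, 1].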

19.
PLoS One ; 8(2): e56087, 2013.
Article in English | MEDLINE | ID: mdl-23418517

ABSTRACT

Socio-communicative impairments are salient features of autism spectrum disorder (ASD) from a young age. The anterior prefrontal cortex (aPFC), or Brodmann area 10, is a key processing area for social function, and atypical development of this area is thought to play a role in the social deficits of ASD. Although it is important to understand aPFC function in developing children with ASD, it has not yet been well described under conscious conditions in young children. In the present study, we focused on hemodynamic functional connectivity between the right and left aPFC in children with ASD and typically developing (TD) children, and investigated whether this connectivity correlated with social ability. Brain hemodynamic fluctuations were measured non-invasively by near-infrared spectroscopy (NIRS) in 3- to 7-year-old children with ASD (n = 15) and gender- and age-matched TD children (n = 15). Functional connectivity between the right and left aPFC was assessed by measuring the coherence of low-frequency spontaneous fluctuations (0.01-0.10 Hz) during a narrated picture-card show. Coherence analysis demonstrated that children with ASD had significantly higher inter-hemispheric connectivity at 0.02 Hz, whereas a power analysis showed no significant group differences in low-frequency fluctuations (0.01-0.10 Hz). This aberrantly higher connectivity in children with ASD was positively correlated with the severity of social deficit, as scored with the Autism Diagnostic Observation Schedule. This is the first study to demonstrate aberrant functional connectivity between the right and left aPFC under conscious conditions in young children with ASD.


Subject(s)
Child Development Disorders, Pervasive/physiopathology, Child Development/physiology, Hemodynamics, Prefrontal Cortex/physiopathology, Child, Child, Preschool, Consciousness, Female, Functional Laterality/physiology, Humans, Male, Spectroscopy, Near-Infrared, Visual Perception/physiology
20.
Sci Rep ; 3: 1139, 2013.
Article in English | MEDLINE | ID: mdl-23355952

ABSTRACT

A subset of individuals with autism spectrum disorder (ASD) performs more proficiently on certain visual tasks than would be predicted by their general cognitive performance. In younger children with ASD (aged 5 to 7), however, preserved ability on these tasks and its neurophysiological correlates are not well documented. In the present study, we used a custom child-sized magnetoencephalography system and demonstrated that preserved ability on a visual reasoning task was associated with rightward lateralisation of neurophysiological connectivity between the parietal and temporal regions in children with ASD. We also demonstrated that higher reading/decoding ability was associated with the same lateralisation in children with ASD. These neurophysiological correlates of visual tasks differ considerably from those observed in typically developing children. These findings indicate that children with ASD have inherently different neural pathways that contribute to their relatively preserved ability in visual tasks.


Subject(s)
Autistic Disorder/physiopathology, Brain/physiology, Autistic Disorder/psychology, Brain Mapping, Child, Child, Preschool, Female, Humans, Magnetoencephalography, Male, Reading, Task Performance and Analysis