1.
Hum Brain Mapp ; 44(17): 6149-6172, 2023 12 01.
Article in English | MEDLINE | ID: mdl-37818940

ABSTRACT

The brain tracks and encodes multi-level speech features during spoken language processing. It is evident that this speech tracking is dominant at low frequencies (<8 Hz) including delta and theta bands. Recent research has demonstrated distinctions between delta- and theta-band tracking but has not elucidated how they differentially encode speech across linguistic levels. Here, we hypothesised that delta-band tracking encodes prediction errors (enhanced processing of unexpected features) while theta-band tracking encodes neural sharpening (enhanced processing of expected features) when people perceive speech with different linguistic contents. EEG responses were recorded when normal-hearing participants attended to continuous auditory stimuli that contained different phonological/morphological and semantic contents: (1) real-words, (2) pseudo-words and (3) time-reversed speech. We employed multivariate temporal response functions to measure EEG reconstruction accuracies in response to acoustic (spectrogram), phonetic and phonemic features with the partialling procedure that singles out unique contributions of individual features. We found higher delta-band accuracies for pseudo-words than real-words and time-reversed speech, especially during encoding of phonetic features. Notably, individual time-lag analyses showed that significantly higher accuracies for pseudo-words than real-words started at early processing stages for phonetic encoding (<100 ms post-feature) and later stages for acoustic and phonemic encoding (>200 and 400 ms post-feature, respectively). Theta-band accuracies, on the other hand, were higher when stimuli had richer linguistic content (real-words > pseudo-words > time-reversed speech). Such effects also started at early stages (<100 ms post-feature) during encoding of all individual features or when all features were combined. 
We argue these results indicate that delta-band tracking may play a role in predictive coding leading to greater tracking of pseudo-words due to the presence of unexpected/unpredicted semantic information, while theta-band tracking encodes sharpened signals caused by more expected phonological/morphological and semantic contents. Early presence of these effects reflects rapid computations of sharpening and prediction errors. Moreover, by measuring changes in EEG alpha power, we did not find evidence that the observed effects can be solitarily explained by attentional demands or listening efforts. Finally, we used directed information analyses to illustrate feedforward and feedback information transfers between prediction errors and sharpening across linguistic levels, showcasing how our results fit with the hierarchical Predictive Coding framework. Together, we suggest the distinct roles of delta and theta neural tracking for sharpening and predictive coding of multi-level speech features during spoken language processing.
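The multivariate temporal response function (mTRF) analysis above reconstructs stimulus features from time-lagged EEG and scores the reconstruction by correlation. A minimal backward-model sketch on synthetic data (toy signals and hypothetical lag/ridge settings, not the authors' pipeline) illustrates the idea:

```python
import numpy as np

def lagged_design(eeg, lags):
    """Stack time-lagged copies of each EEG channel into one design matrix."""
    n_t, n_ch = eeg.shape
    X = np.zeros((n_t, n_ch * len(lags)))
    for j, lag in enumerate(lags):
        shifted = np.roll(eeg, lag, axis=0)
        if lag > 0:
            shifted[:lag] = 0          # zero the wrapped-around samples
        elif lag < 0:
            shifted[lag:] = 0
        X[:, j * n_ch:(j + 1) * n_ch] = shifted
    return X

def reconstruction_accuracy(eeg, feature, lags, ridge=1.0):
    """Backward model: reconstruct the stimulus feature from lagged EEG and
    return the Pearson r between reconstruction and the true feature."""
    X = lagged_design(eeg, lags)
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ feature)
    return np.corrcoef(X @ w, feature)[0, 1]

rng = np.random.default_rng(0)
feature = rng.standard_normal(2000)                       # toy acoustic feature
eeg = np.stack([np.roll(feature, k) for k in (2, 5)], 1)  # EEG lags the stimulus
eeg = eeg + 0.5 * rng.standard_normal(eeg.shape)          # add measurement noise
r = reconstruction_accuracy(eeg, feature, lags=range(-8, 1))
```

In the actual analysis, the partialling procedure would additionally regress out the contributions of competing features before computing each feature's unique reconstruction accuracy.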


Subject(s)
Auditory Cortex , Speech Perception , Humans , Speech/physiology , Electroencephalography/methods , Acoustic Stimulation/methods , Speech Perception/physiology , Auditory Cortex/physiology
3.
Behav Brain Sci ; 44: e95, 2021 09 30.
Article in English | MEDLINE | ID: mdl-34588046

ABSTRACT

We extend Savage et al.'s music and social bonding hypothesis by examining it in the context of Chinese music. First, top-down functions, such as the use of music as a political instrument, should receive more attention. Second, solo performance can serve as an important cue for social identity. Third, a proper match between the tones in the lyrics and the music also contributes to social bonding.


Subject(s)
Music , Cross-Cultural Comparison , Humans , Social Identification
4.
Front Psychol ; 9: 1982, 2018.
Article in English | MEDLINE | ID: mdl-30405478

ABSTRACT

Speech variability facilitates non-tonal language speakers' lexical tone learning. However, it remains unknown whether tonal language speakers can also benefit from speech variability when learning second language (L2) lexical tones. Previous research also reported that the benefit of speech variability is evident only for learning new items. Considering that the first language (L1) and L2 probably share similar tonal categories, the present study hypothesizes that speech variability only promotes tonal language speakers' acquisition of L2 tones that differ from the tones in their L1. To test this hypothesis, the present study trained native Mandarin (a tonal language) speakers to learn Cantonese tones with either high-variability (HV) or low-variability (LV) speech materials, and then compared their learning performance. The results partially supported this hypothesis: only Mandarin subjects' productions of the Cantonese low level and mid level tones benefited from speech variability. The subjects probably relied on L1 mental representations to learn the Cantonese tones that had similar Mandarin counterparts, a strategy that limited the impact of speech variability. Furthermore, the results revealed a discrepancy between L2 perception and production: improvement in perception does not necessarily lead to improvement in production.

6.
Neuropsychologia ; 97: 18-28, 2017 03.
Article in English | MEDLINE | ID: mdl-28153640

ABSTRACT

Congenital amusia is a lifelong neurodevelopmental disorder of fine-grained pitch processing. In this fMRI study, we examined the neural bases of congenital amusia in speakers of a tonal language - Cantonese. Previous studies on non-tonal language speakers suggest that the neural deficits of congenital amusia lie in the music-selective neural circuitry in the right inferior frontal gyrus (IFG). However, it is unclear whether this finding can generalize to congenital amusics in tonal languages. Tonal language experience has been reported to shape the neural processing of pitch, which raises the question of how tonal language experience affects the neural bases of congenital amusia. To investigate this question, we examined the neural circuitries subserving the processing of relative pitch interval in pitch-matched Cantonese level tone and musical stimuli in 11 Cantonese-speaking amusics and 11 musically intact controls. Cantonese-speaking amusics exhibited abnormal brain activities in a widely distributed neural network during the processing of lexical tone and musical stimuli. Whereas the controls exhibited significant activation in the right superior temporal gyrus (STG) in the lexical tone condition and in the cerebellum regardless of the lexical tone and music conditions, no activation was found in the amusics in those regions, which likely reflects a dysfunctional neural mechanism of relative pitch processing in the amusics. Furthermore, the amusics showed abnormally strong activation of the right middle frontal gyrus and precuneus when the pitch stimuli were repeated, which presumably reflects deficits in attending to repeated pitch stimuli or in encoding them into working memory. No significant group difference was found in the right IFG in either the whole-brain analysis or region-of-interest analysis. 
These findings imply that the neural deficits in tonal language speakers might differ from those in non-tonal language speakers, and overlap partly with the neural circuitries of lexical tone processing (e.g. right STG).


Subject(s)
Auditory Perceptual Disorders/physiopathology , Brain/physiopathology , Language , Speech Perception/physiology , Adolescent , Adult , Female , Hong Kong , Humans , Magnetic Resonance Imaging , Male , Young Adult
7.
Neuroimage ; 133: 516-528, 2016 06.
Article in English | MEDLINE | ID: mdl-26931813

ABSTRACT

A growing number of studies indicate that multiple ranges of brain oscillations, especially the delta (δ, <4 Hz), theta (θ, 4-8 Hz), beta (β, 13-30 Hz), and gamma (γ, 30-50 Hz) bands, are engaged in speech and language processing. It is not clear, however, how these oscillations relate to functional processing at different linguistic hierarchical levels. Using scalp electroencephalography (EEG), the current study tested the hypothesis that phonological and the higher-level linguistic (semantic/syntactic) organizations during auditory sentence processing are indexed by distinct EEG signatures derived from the δ, θ, β, and γ oscillations. We analyzed specific EEG signatures while subjects listened to Mandarin speech stimuli in three different conditions in order to dissociate phonological and semantic/syntactic processing: (1) sentences comprising valid disyllabic words assembled in a valid syntactic structure (real-word condition); (2) utterances with morphologically valid syllables, but not constituting valid disyllabic words (pseudo-word condition); and (3) backward versions of the real-word and pseudo-word conditions. We tested four signatures: band power, EEG-acoustic entrainment (EAE), cross-frequency coupling (CFC), and inter-electrode renormalized partial directed coherence (rPDC). The results show significant effects of band power and EAE of δ and θ oscillations for phonological, rather than semantic/syntactic processing, indicating the importance of tracking δ- and θ-rate phonetic patterns during phonological analysis. We also found significant β-related effects, suggesting entrainment of EEG to the acoustic stimulus (high-β EAE), memory processing (θ-low-β CFC), and auditory-motor interactions (20-Hz rPDC) during phonological analysis. For semantic/syntactic processing, we obtained a significant effect of γ power, suggesting lexical memory retrieval or processing of grammatical word categories. 
Based on these findings, we confirm that scalp EEG signatures relevant to δ, θ, β, and γ oscillations can index phonological and semantic/syntactic organizations separately in auditory sentence processing, compatible with the view that phonological and higher-level linguistic processing engage distinct neural networks.
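Two of the four signatures, band power and phase-amplitude cross-frequency coupling, can be sketched on synthetic data. The brick-wall FFT filters and the mean-vector-length coupling measure below are illustrative choices, not necessarily the estimators used in the study:

```python
import numpy as np

fs = 200.0
t = np.arange(0, 20, 1 / fs)                    # 20 s of synthetic "EEG"
theta_phase = 2 * np.pi * 6 * t                 # 6 Hz theta carrier
# theta phase modulates beta amplitude in the coupled signal only
coupled = np.sin(theta_phase) + (1 + 0.8 * np.sin(theta_phase)) * np.sin(2 * np.pi * 20 * t)
uncoupled = np.sin(theta_phase) + np.sin(2 * np.pi * 20 * t)

def bandpass(x, lo, hi):
    """Ideal (brick-wall) FFT band-pass filter -- fine for a toy example."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < lo) | (f > hi)] = 0
    return np.fft.irfft(X, len(x))

def analytic(x):
    """Analytic signal via the frequency-domain Hilbert construction."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    return np.fft.ifft(X * h)

def band_power(x, lo, hi):
    """Fraction of total spectral power inside the [lo, hi] Hz band."""
    P = np.abs(np.fft.rfft(x)) ** 2
    f = np.fft.rfftfreq(len(x), 1 / fs)
    return P[(f >= lo) & (f <= hi)].sum() / P.sum()

def modulation_index(x, ph_band=(4, 8), amp_band=(13, 30)):
    """Mean-vector-length estimate of theta-phase / beta-amplitude coupling."""
    phase = np.angle(analytic(bandpass(x, *ph_band)))
    amp = np.abs(analytic(bandpass(x, *amp_band)))
    return np.abs(np.mean(amp * np.exp(1j * phase))) / amp.mean()

mi_c, mi_u = modulation_index(coupled), modulation_index(uncoupled)
```

With this construction the coupled signal shows a clearly larger modulation index than the uncoupled one, while both carry the same theta-band power.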


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Biological Clocks/physiology , Brain Waves/physiology , Electroencephalography/methods , Nerve Net/physiology , Phonetics , Adult , Female , Humans , Male , Reproducibility of Results , Semantics , Sensitivity and Specificity , Young Adult
8.
Neuroimage ; 124(Pt A): 536-549, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26343322

ABSTRACT

Speech signals contain information of both linguistic content and a talker's voice. Conventionally, linguistic and talker processing are thought to be mediated by distinct neural systems in the left and right hemispheres respectively, but there is growing evidence that linguistic and talker processing interact in many ways. Previous studies suggest that talker-related vocal tract changes are processed integrally with phonetic changes in the bilateral posterior superior temporal gyrus/superior temporal sulcus (STG/STS), because the vocal tract parameter influences the perception of phonetic information. It is yet unclear whether the bilateral STG is also activated by the integral processing of another parameter - pitch, which influences the perception of lexical tone information and is related to talker differences in tone languages. In this study, we conducted separate functional magnetic resonance imaging (fMRI) and event-related potential (ERP) experiments to examine the spatial and temporal loci of interactions of lexical tone and talker-related pitch processing in Cantonese. We found that the STG was activated bilaterally during the processing of talker changes when listeners attended to lexical tone changes in the stimuli and during the processing of lexical tone changes when listeners attended to talker changes, suggesting that lexical tone and talker processing are functionally integrated in the bilateral STG. This extends the previous work, providing evidence for a general neural mechanism of integral phonetic and talker processing in the bilateral STG. The ERP results show interactions of lexical tone and talker processing 500-800 ms after auditory word onset (a simultaneous posterior P3b and a frontal negativity). 
Moreover, there is some asymmetry in the interaction, such that unattended talker changes affect linguistic processing more than vice versa, which may be related to the ambiguity that talker changes cause in speech perception and/or attention bias to talker changes. Our findings have implications for understanding the neural encoding of linguistic and talker information.


Subject(s)
Phonetics , Pitch Perception/physiology , Speech Perception/physiology , Adult , Brain Mapping , Evoked Potentials, Auditory , Female , Humans , Magnetic Resonance Imaging , Male , Young Adult
9.
Annu Int Conf IEEE Eng Med Biol Soc ; 2016: 859-862, 2016 Aug.
Article in English | MEDLINE | ID: mdl-28268459

ABSTRACT

The question of how to separate individual brain and non-brain signals, mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings, is a significant problem in contemporary neuroscience. This study proposes and evaluates a novel EEG Blind Source Separation (BSS) algorithm based on a weak exclusion principle (WEP). The chief point in which it differs from most previous EEG BSS algorithms is that the proposed algorithm is not based upon the hypothesis that the sources are statistically independent. Our first step was to investigate algorithm performance on simulated signals which have ground truth. The purpose of this simulation is to illustrate the proposed algorithm's efficacy. The results show that the proposed algorithm has good separation performance. Then, we used the proposed algorithm to separate real EEG signals from a memory study using a revised version of Sternberg Task. The results show that the proposed algorithm can effectively separate the non-brain and brain sources.
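The validation protocol, mixing known sources and scoring recovery against ground truth, can be sketched as follows. The WEP algorithm itself is not specified in the abstract, so a standard FastICA-style separator stands in here; the sign- and order-invariant scoring step is the part being illustrated:

```python
import numpy as np

# Simulate two sources mixed by "volume conduction" -- ground truth is known.
n = 5000
t = np.arange(n) / 250.0
sources = np.vstack([np.sin(2 * np.pi * 3 * t),              # brain-like rhythm
                     np.sign(np.sin(2 * np.pi * 0.7 * t))])  # artifact-like square wave
A = np.array([[0.8, 0.4], [0.3, 0.9]])                       # mixing matrix
X = A @ sources

def whiten(X):
    """Zero-mean, decorrelate, and scale the channels to unit variance."""
    Xc = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(Xc))
    return (E / np.sqrt(d)).T @ Xc

def separate(X, n_iter=200, seed=0):
    """Stand-in separator: symmetric FastICA with a tanh nonlinearity."""
    Z = whiten(X)
    k = Z.shape[0]
    W = np.random.default_rng(seed).standard_normal((k, k))
    for _ in range(n_iter):
        Y = np.tanh(W @ Z)
        W = (Y @ Z.T) / Z.shape[1] - np.diag((1 - Y ** 2).mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W)
        W = U @ Vt                                           # symmetric decorrelation
    return W @ Z

def recovery_scores(est, true):
    """Best |Pearson r| for each ground-truth source (order/sign invariant)."""
    C = np.abs(np.corrcoef(np.vstack([est, true])))[:len(est), len(est):]
    return C.max(axis=0)

scores = recovery_scores(separate(X), sources)
```

Because blind separation recovers sources only up to permutation and sign, the score takes the best absolute correlation per ground-truth source rather than comparing rows directly.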


Subject(s)
Algorithms , Electroencephalography/methods , Signal Processing, Computer-Assisted , Brain/physiology , Computer Simulation , Female , Humans , Middle Aged , Models, Theoretical , Neuropsychological Tests
10.
Annu Int Conf IEEE Eng Med Biol Soc ; 2015: 2848-51, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26736885

ABSTRACT

Biometrics is a growing field, which permits identification of individuals by means of unique physical features. Electroencephalography (EEG)-based biometrics utilizes the small intra-personal differences and large inter-personal differences between individuals' brainwave patterns. In the past, such methods have used features derived from manually-designed procedures for this purpose. Another possibility is to use convolutional neural networks (CNN) to automatically extract an individual's best and most unique neural features and conduct classification, using EEG data derived from both Resting State with Open Eyes (REO) and Resting State with Closed Eyes (REC). Results indicate that this CNN-based joint-optimized EEG-based biometric system yields a high degree of identification accuracy (88%) for 10-class classification. Furthermore, rich inter-personal differences can be found in a very low frequency band (0-2 Hz). Additionally, results suggest that the temporal window over which subjects can be individually identified is less than 200 ms.
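The CNN itself is not reproduced here, but the headline finding, that identity information is concentrated below 2 Hz, can be illustrated with a much simpler template-matching identifier on synthetic data (all signal parameters below are invented for illustration):

```python
import numpy as np

fs, n_t, n_sub = 100, 1000, 10
rng = np.random.default_rng(3)
freqs = np.fft.rfftfreq(n_t, 1 / fs)
time = np.arange(n_t) / fs

# Each simulated "subject" gets a fixed very-low-frequency signature (< 2 Hz),
# standing in for the stable individual differences a CNN would learn.
signatures = []
for _ in range(n_sub):
    f = rng.uniform(0.3, 1.8, 3)
    p = rng.uniform(0, 2 * np.pi, 3)
    signatures.append(sum(np.sin(2 * np.pi * fi * time + pi) for fi, pi in zip(f, p)))

def lowpass_2hz(x):
    """Keep only the 0-2 Hz band (brick-wall FFT filter, toy choice)."""
    X = np.fft.rfft(x)
    X[freqs > 2.0] = 0
    return np.fft.irfft(X, n_t)

def trial(sub):
    """One noisy resting-state trial for a given subject."""
    return signatures[sub] + 0.8 * rng.standard_normal(n_t)

# Enrollment: average ten filtered trials per subject into a template.
templates = [np.mean([lowpass_2hz(trial(s)) for _ in range(10)], axis=0)
             for s in range(n_sub)]

def identify(x):
    """10-class identification by correlation to the enrolled templates."""
    x = lowpass_2hz(x)
    return int(np.argmax([np.corrcoef(x, tpl)[0, 1] for tpl in templates]))

tests = [(s, identify(trial(s))) for s in range(n_sub) for _ in range(5)]
accuracy = sum(truth == pred for truth, pred in tests) / len(tests)
```

Even this crude correlation classifier identifies the simulated subjects well above the 10% chance level, which is the qualitative point: the sub-2 Hz band alone can carry identity information.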


Subject(s)
Electroencephalography , Biometry , Brain Waves , Neural Networks, Computer
11.
Comput Math Methods Med ; 2014: 961563, 2014.
Article in English | MEDLINE | ID: mdl-25254067

ABSTRACT

This study investigates the effect of tone inventories on brain activities underlying pitch without focal attention. We find that the electrophysiological responses to across-category stimuli are larger than those to within-category stimuli when the pitch contours are superimposed on nonspeech stimuli; however, there is no electrophysiological response difference associated with category status in speech stimuli. Moreover, this category effect in nonspeech stimuli is stronger for Cantonese speakers. Results of previous and present studies lead us to conclude that brain activities to the same native lexical tone contrasts are modulated by speakers' language experiences not only in active phonological processing but also in automatic feature detection without focal attention. In contrast to the condition with focal attention, where phonological processing is stronger for speech stimuli, the feature detection (pitch contours in this study) without focal attention as shaped by language background is superior in relatively regular stimuli, that is, the nonspeech stimuli. The results suggest that Cantonese listeners outperform Mandarin listeners in automatic detection of pitch features because of the denser Cantonese tone system.


Subject(s)
Evoked Potentials , Language , Pitch Perception , Speech Perception/physiology , Acoustic Stimulation/methods , Adult , Attention , Behavior , China , Electronic Data Processing , Electrophysiology , Female , Humans , Male , Reproducibility of Results , Signal Processing, Computer-Assisted , Speech , Speech Acoustics , Young Adult
13.
Brain Lang ; 126(2): 193-202, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23792769

ABSTRACT

This event-related potential (ERP) study examines the time course of context-dependent talker normalization in spoken word identification. We found three ERP components, the N1 (100-220 ms), the N400 (250-500 ms) and the Late Positive Component (500-800 ms), which are conjectured to involve (a) auditory processing, (b) talker normalization and lexical retrieval, and (c) decisional process/lexical selection respectively. Talker normalization likely occurs in the time window of the N400 and overlaps with the lexical retrieval process. Compared with the nonspeech context, the speech contexts, no matter whether they have semantic content or not, enable listeners to tune to a talker's pitch range. In this way, speech contexts induce more efficient talker normalization during the activation of potential lexical candidates and lead to more accurate selection of the intended word in spoken word identification.
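Computationally, component analyses like this reduce to averaging epochs and taking mean amplitudes inside the named time windows. A toy sketch with a synthetic ERP (component shapes and amplitudes are invented for illustration):

```python
import numpy as np

fs = 250
t = np.arange(-0.2, 0.9, 1 / fs)     # epoch: -200 ms to 900 ms post-onset

def window_mean(erp, t, lo, hi):
    """Mean amplitude of an ERP inside the [lo, hi] second window."""
    m = (t >= lo) & (t <= hi)
    return erp[m].mean()

# Toy ERP: an N1-like negativity plus a late positivity; the window bounds
# are the 100-220, 250-500, and 500-800 ms spans named in the abstract.
erp = -2.0 * np.exp(-((t - 0.16) / 0.04) ** 2) \
      + 3.0 * np.exp(-((t - 0.65) / 0.12) ** 2)

n1   = window_mean(erp, t, 0.10, 0.22)
n400 = window_mean(erp, t, 0.25, 0.50)
lpc  = window_mean(erp, t, 0.50, 0.80)
```

Condition effects (e.g., speech vs. nonspeech context) would then be statistical comparisons of these per-window means across trials and subjects.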


Subject(s)
Brain/physiology , Evoked Potentials/physiology , Speech Perception/physiology , Electroencephalography , Female , Humans , Male , Signal Processing, Computer-Assisted , Time Factors , Young Adult
14.
J Acoust Soc Am ; 132(2): 1088-99, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22894228

ABSTRACT

Context is important for recovering language information from talker-induced variability in acoustic signals. In tone perception, previous studies reported similar effects of speech and nonspeech contexts in Mandarin, supporting a general perceptual mechanism underlying tone normalization. However, no supportive evidence was obtained in Cantonese, also a tone language. Moreover, no study has compared speech and nonspeech contexts in the multi-talker condition, which is essential for exploring the normalization mechanism of inter-talker variability in speaking F0. A further question is whether a talker's full F0 range and mean F0 equally facilitate normalization. To answer these questions, this study examines the effects of four context conditions (speech/nonspeech × F0 contour/mean F0) in the multi-talker condition in Cantonese. Results show that raising and lowering the F0 of speech contexts change the perception of identical stimuli from mid level tone to low and high level tone, whereas nonspeech contexts only mildly increase the identification preference. This supports a speech-specific mechanism of tone normalization. Moreover, speech context with a flattened F0 trajectory, which neutralizes cues to a talker's full F0 range, fails to facilitate normalization in some conditions, implying that a talker's mean F0 is less efficient for minimizing talker-induced lexical ambiguity in tone perception.
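The normalization account can be caricatured in a few lines: the same target F0 is assigned to different tone categories depending on the talker's mean F0 estimated from the context. The one-semitone thresholds below are invented for illustration:

```python
SEMITONE = 2 ** (1 / 12)   # ~1.06 frequency ratio between adjacent semitones

def classify_level_tone(target_hz, context_hz):
    """Toy normalization: place the target F0 relative to the talker's mean F0
    estimated from the context syllables. Thresholds are hypothetical."""
    mean = sum(context_hz) / len(context_hz)
    if target_hz > mean * SEMITONE:
        return "high"
    if target_hz < mean / SEMITONE:
        return "low"
    return "mid"

# The same 200 Hz target flips category as the context F0 is raised or
# lowered, mirroring the speech-context effect described in the abstract.
neutral = classify_level_tone(200, [195, 200, 205])   # -> "mid"
lowered = classify_level_tone(200, [170, 175, 180])   # -> "high"
raised  = classify_level_tone(200, [225, 230, 235])   # -> "low"
```

A mean-F0-only listener like this one also shows why a flattened context can fail: it conveys the talker's average pitch but nothing about the talker's full F0 range.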


Subject(s)
Cues , Language , Pitch Perception , Speech Acoustics , Speech Perception , Voice Quality , Acoustic Stimulation , Adult , Audiometry, Speech , Female , Humans , Male , Phonetics , Signal Detection, Psychological , Sound Spectrography , Time Factors , Young Adult
15.
J Speech Lang Hear Res ; 55(2): 579-95, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22207701

ABSTRACT

PURPOSE: This study investigates the impact of intertalker variations on the process of mapping acoustic variations on tone categories in two different tone languages. METHOD: Pitch stimuli manipulated from four voice ranges were presented in isolation through a blocked-talker design. Listeners were instructed to identify the stimuli that they heard as lexical tones in their native language. RESULTS: Tone identification of Mandarin listeners exhibited relatively stable normalization regardless of the voice, whereas tone identification of Cantonese listeners was unstable and susceptible to the influence of intertalker variations. In the case of Cantonese listeners, intertalker variations had a larger effect on the perception of F0 height dimension than of F0 slope dimension. CONCLUSION: The comparison between Cantonese and Mandarin listeners' performances reveals an interaction of intertalker variations and the types of tone contrasts in each language. For Cantonese tones, which depend heavily on F0 height distinctions, intertalker variations result in F0 overlapping and, consequently, ambiguities among them in isolated tone perception. For Mandarin tones, which are distinctive in terms of their F0 contours, the differences in F0 contours alone seem sufficient to elicit reliable tone identification. Intertalker variations therefore have relatively limited effect on Mandarin tone perception.


Subject(s)
Asian People/psychology , Language , Pitch Perception , Psychoacoustics , Speech Perception , Acoustic Stimulation/methods , Adolescent , Female , Humans , Male , Models, Psychological , Phonetics , Speech , Speech Production Measurement , Young Adult
16.
Neuropsychologia ; 49(7): 1981-6, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21439988

ABSTRACT

It has been generally accepted that the left hemisphere is more functionally specialized for language than the right hemisphere in right-handed monolinguals, but a growing number of studies have demonstrated a right hemisphere advantage for some language tasks in certain participants. A recent comprehensive survey has shown that hemisphere lateralization of language depends on the bilingual status of the participants: bilateral hemispheric involvement for both languages of early bilinguals, who acquired both languages by the age of 6; left hemisphere dominance for the language of monolinguals; and left hemisphere dominance for both languages of late bilinguals, who acquired the second language after the age of 6. We propose a preliminary model which takes into account both the composition of stimulus words and the bilingual status of participants (early bilingual, late bilingual, or monolingual) to resolve the apparent controversies regarding hemisphere lateralization across reading experiments in the literature, with a focus on Chinese characters, and to predict lateralization patterns for future experiments in Chinese word reading. Although the model is intended to disentangle the controversies surrounding lateralization in Chinese character reading, in this paper we tested it only with late Chinese-English bilingual participants, using a Stroop paradigm. We show here with stimuli written as single Chinese characters that the Stroop effect was stronger when the stimuli were presented to the right than to the left visual field, implying that language information and color identification/naming may interact more strongly in the left hemisphere. Our experimental results therefore indicate left hemisphere dominance for Chinese character processing, providing evidence for one part of our model.


Subject(s)
Functional Laterality/physiology , Multilingualism , Aging/psychology , China , England , Female , Humans , Language , Male , Models, Neurological , Photic Stimulation , Reading , Stroop Test , Visual Fields/physiology , Visual Perception , Young Adult
17.
Proc Natl Acad Sci U S A ; 106(20): 8140-5, 2009 May 19.
Article in English | MEDLINE | ID: mdl-19416812

ABSTRACT

The effect of language on the categorical perception of color is stronger for stimuli in the right visual field (RVF) than in the left visual field, but the neural correlates of the behavioral RVF advantage are unknown. Here we present brain activation maps revealing how language is differentially engaged in the discrimination of colored stimuli presented in either visual hemifield. In a rapid, event-related functional MRI study, we measured subjects' brain activity while they performed a visual search task. Compared with colors from the same lexical category, discrimination of colors from different linguistic categories provoked stronger and faster responses in the left hemisphere language regions, particularly when the colors were presented in the RVF. In addition, activation of visual areas 2/3, responsible for color perception, was much stronger for RVF stimuli from different linguistic categories than for stimuli from the same linguistic category. Notably, the enhanced activity of visual areas 2/3 coincided with the enhanced activity of the left posterior temporoparietal language region, suggesting that this language region may serve as a top-down control source that modulates the activation of the visual cortex. These findings shed light on the brain mechanisms that underlie the hemifield-dependent effect of language on visual perception.


Subject(s)
Brain Mapping , Color Perception/physiology , Functional Laterality , Language , Female , Humans , Magnetic Resonance Imaging , Male , Photic Stimulation , Visual Cortex/physiology , Visual Fields , Young Adult
18.
Trends Ecol Evol ; 20(5): 263-9, 2005 May.
Article in English | MEDLINE | ID: mdl-16701378

ABSTRACT

Research into the emergence and evolution of human language has received unprecedented attention during the past 15 years. Efforts to better understand the processes of language emergence and evolution have proceeded in two main directions: from the top-down (linguists) and from the bottom-up (cognitive scientists). Language can be viewed as an invading process that has had profound impact on the human phenotype at all levels, from the structure of the brain to modes of cultural interaction. In our view, the most effective way to form a connection between the two efforts (essential if theories for language evolution are to reflect the constraints imposed on language by the brain) lies in computational modelling, an approach that enables numerous hypotheses to be explored and tested against objective criteria, and that suggests productive paths for empirical researchers to follow. Here, with the aim of promoting the cross-fertilization of ideas across disciplines, we review some of the recent research that has made use of computational methods in three principal areas of research into language evolution: language emergence, language change, and language death.

19.
Proc Natl Acad Sci U S A ; 101(15): 5692-5, 2004 Apr 13.
Article in English | MEDLINE | ID: mdl-15056764

ABSTRACT

The Kusunda people of central Nepal have long been regarded as a relic tribe of South Asia. They are, or were until recently, seminomadic hunter-gatherers, living in jungles and forests, with a language that shows no similarities to surrounding languages. They are often described as shorter and darker than neighboring tribes. Our research indicates that the Kusunda language is a member of the Indo-Pacific family. This is a surprising finding inasmuch as the Indo-Pacific family is located on New Guinea and surrounding islands. The possibility that Kusunda is a remnant of the migration that led to the initial peopling of New Guinea and Australia warrants additional investigation from both a linguistic and genetic perspective.


Subject(s)
Language , Native Hawaiian or Other Pacific Islander , Population Groups , Evolution, Molecular , Genetics, Population , Humans , Indian Ocean , Native Hawaiian or Other Pacific Islander/ethnology , Native Hawaiian or Other Pacific Islander/genetics , Nepal , Pacific Ocean , Population Groups/ethnology , Population Groups/genetics