Results 1 - 9 of 9
1.
Acta Psychol (Amst) ; 241: 104061, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37924575

ABSTRACT

Fluent reading and writing rely on well-developed orthographic representations stored in memory. According to the self-teaching hypothesis (Share, D. L. (1995). Phonological recoding and self-teaching: Sine qua non of reading acquisition. Cognition, 55(2), 151-218), children acquire orthographic representations through phonological decoding. However, it is not clear to what extent phonological decoding facilitates orthographic learning in adult readers. Across two experiments, we manipulated access to phonology during overt (aloud) and covert (silent) reading of monosyllabic and multisyllabic pseudowords by English-speaking undergraduate students. Additionally, Experiment 2 tested whether concurrent articulation during covert reading leads to poorer learning due to the suppression of subvocalization. The amount of incidental orthographic learning through reading exposure was measured a week later with a choice task, a spelling task, and a naming task. Overt reading, which leveraged phonological decoding, led to better recognition and recall of pseudowords than covert reading did. Unlike in previous reports of child orthographic learning, concurrent articulation during covert reading did not reduce learning outcomes in adults, suggesting that adult readers may rely upon other processing strategies during covert reading, e.g., direct orthographic processing or lexicalized phonological decoding. This is consistent with claims that, as orthographic knowledge increases, reading mechanisms shift from being more phonologically based to more visually based.


Subject(s)
Phonetics , Reading , Adult , Humans , Learning , Mental Recall , Recognition, Psychology
2.
Neurobiol Lang (Camb) ; 4(1): 53-80, 2023.
Article in English | MEDLINE | ID: mdl-37229140

ABSTRACT

Speech requires successful information transfer within cortical-basal ganglia loop circuits to produce the desired acoustic output. For this reason, up to 90% of Parkinson's disease patients experience impairments of speech articulation. Deep brain stimulation (DBS) is highly effective in controlling the symptoms of Parkinson's disease, sometimes alongside speech improvement, but subthalamic nucleus (STN) DBS can also lead to decreases in semantic and phonological fluency. This paradox demands better understanding of the interactions between the cortical speech network and the STN, which can be investigated with intracranial EEG recordings collected during DBS implantation surgery. We analyzed the propagation of high-gamma activity between STN, superior temporal gyrus (STG), and ventral sensorimotor cortices during reading aloud via event-related causality, a method that estimates strengths and directionalities of neural activity propagation. We employed a newly developed bivariate smoothing model based on a two-dimensional moving average, which is optimal for reducing random noise while retaining a sharp step response, to ensure precise embedding of statistical significance in the time-frequency space. Sustained and reciprocal neural interactions between STN and ventral sensorimotor cortex were observed. Moreover, high-gamma activity propagated from the STG to the STN prior to speech onset. The strength of this influence was affected by the lexical status of the utterance, with increased activity propagation during word versus pseudoword reading. These unique data suggest a potential role for the STN in the feedforward control of speech.
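The two-dimensional moving-average smoothing described above can be sketched as a box filter over a time-frequency map, applied before thresholding for significance. This is an illustrative NumPy sketch under assumed window sizes and synthetic data, not the authors' implementation:

```python
import numpy as np

def moving_average_2d(tf_map, win_t=5, win_f=3):
    """Smooth a (freq x time) array with a win_f x win_t box filter.

    Near the edges, the average is taken over the part of the window
    that falls inside the array, so the output keeps the input shape.
    """
    f_bins, t_bins = tf_map.shape
    out = np.empty_like(tf_map, dtype=float)
    for i in range(f_bins):
        f0, f1 = max(0, i - win_f // 2), min(f_bins, i + win_f // 2 + 1)
        for j in range(t_bins):
            t0, t1 = max(0, j - win_t // 2), min(t_bins, j + win_t // 2 + 1)
            out[i, j] = tf_map[f0:f1, t0:t1].mean()
    return out
```

A box average of this kind reduces random noise variance while, unlike a wide Gaussian kernel, keeping step responses relatively sharp, which is the property the abstract highlights.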

3.
J Neurosci ; 42(15): 3228-3240, 2022 04 13.
Article in English | MEDLINE | ID: mdl-35232766

ABSTRACT

To explore whether the thalamus participates in lexical status (word vs nonword) processing during spoken word production, we recorded local field potentials from the ventral lateral thalamus in 11 essential tremor patients (three females) undergoing thalamic deep-brain stimulation lead implantation during a visually cued word and nonword reading-aloud task. We observed task-related beta (12-30 Hz) activity decreases that were preferentially time locked to stimulus presentation, and broadband gamma (70-150 Hz) activity increases, which are thought to index increased multiunit spiking activity, occurring shortly before and predominantly time locked to speech onset. We further found that thalamic beta activity decreases bilaterally were greater when nonwords were read, demonstrating bilateral sensitivity to lexical status that likely reflects the tracking of task effort; in contrast, greater nonword-related increases in broadband gamma activity were observed only on the left, demonstrating lateralization of thalamic broadband gamma selectivity for lexical status. In addition, this lateralized lexicality effect on broadband gamma activity was strongest in more anterior thalamic locations, regions which are more likely to receive basal ganglia than cerebellar afferents and have extensive connections with prefrontal cortex including Brodmann's areas 44 and 45, regions consistently associated with grapheme-to-phoneme conversions. These results demonstrate active thalamic participation in reading aloud and provide direct evidence from intracranial thalamic recordings for the lateralization and topography of subcortical lexical status processing.

SIGNIFICANCE STATEMENT

Despite the corticocentric focus of most experimental work and accompanying models, there is increasing recognition of the role of subcortical structures in speech and language.
Using local field potential recordings in neurosurgical patients, we demonstrated that the thalamus participates in lexical status (word vs nonword) processing during spoken word production, in a lateralized and region-specific manner. These results provide direct evidence from intracranial thalamic recordings for the lateralization and topography of subcortical lexical status processing.
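The beta and broadband gamma measures used in studies like this one reduce to band-limited power estimates from the recorded signal. A minimal sketch, assuming a hypothetical 1 kHz sampling rate and a synthetic signal (not the study's recordings or analysis pipeline):

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean power spectral density of `signal` between f_lo and f_hi (Hz)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

fs = 1000  # Hz, assumed sampling rate
t = np.arange(0, 2, 1 / fs)
# Synthetic "LFP": a 20 Hz beta oscillation plus a weaker 100 Hz gamma component
lfp = np.sin(2 * np.pi * 20 * t) + 0.3 * np.sin(2 * np.pi * 100 * t)
beta = band_power(lfp, fs, 12, 30)    # beta band: 12-30 Hz
gamma = band_power(lfp, fs, 70, 150)  # broadband gamma: 70-150 Hz
```

In practice, time-resolved versions of such estimates (e.g., filtered envelopes over sliding windows) are what allow the event-locked beta decreases and gamma increases described above.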


Subject(s)
Essential Tremor , Reading , Female , Humans , Language , Speech/physiology , Thalamus
4.
Neuroimage ; 250: 118962, 2022 04 15.
Article in English | MEDLINE | ID: mdl-35121181

ABSTRACT

There is great interest in identifying the neurophysiological underpinnings of speech production. Deep brain stimulation (DBS) surgery is unique in that it allows intracranial recordings from both cortical and subcortical regions in patients who are awake and speaking. The quality of these recordings, however, may be affected to various degrees by mechanical forces resulting from speech itself. Here we describe the presence of speech-induced artifacts in local-field potential (LFP) recordings obtained from mapping electrodes, DBS leads, and cortical electrodes. In addition to expected physiological increases in high gamma (60-200 Hz) activity during speech production, time-frequency analysis in many channels revealed a narrowband gamma component that exhibited a pattern similar to that observed in the speech audio spectrogram. This component was present to different degrees in multiple types of neural recordings. We show that this component tracks the fundamental frequency of the participant's voice, correlates with the power spectrum of speech and has coherence with the produced speech audio. A vibration sensor attached to the stereotactic frame recorded speech-induced vibrations with the same pattern observed in the LFPs. No corresponding component was identified in any neural channel during the listening epoch of a syllable repetition task. These observations demonstrate how speech-induced vibrations can create artifacts in the primary frequency band of interest. Identifying and accounting for these artifacts is crucial for establishing the validity and reproducibility of speech-related data obtained from intracranial recordings during DBS surgery.
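One simple way to screen a channel for the kind of artifact described above, sketched here with synthetic signals and assumed names rather than the authors' actual coherence analysis, is to correlate the channel's power spectrum with that of the produced audio; a contaminated channel inherits the audio's spectral peaks:

```python
import numpy as np

def spectrum(x):
    """Power spectrum via the real FFT."""
    return np.abs(np.fft.rfft(x)) ** 2

def spectral_correlation(audio, channel):
    """Pearson correlation between the two power spectra."""
    a, c = spectrum(audio), spectrum(channel)
    return np.corrcoef(a, c)[0, 1]

rng = np.random.default_rng(1)
fs = 1000  # Hz, assumed sampling rate
t = np.arange(0, 1, 1 / fs)
audio = np.sin(2 * np.pi * 120 * t)      # voice fundamental near 120 Hz
clean = rng.normal(size=t.size)          # channel without artifact (noise)
contaminated = clean + 0.5 * audio       # channel with vibration leakage
```

A high spectral correlation with the audio in the band of interest would flag the channel for the kind of artifact rejection the abstract argues is crucial.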


Subject(s)
Artifacts , Deep Brain Stimulation , Electrocorticography , Speech , Aged , Auditory Perception , Female , Humans , Intraoperative Period , Male , Parkinson Disease/surgery
5.
PLoS One ; 16(11): e0258946, 2021.
Article in English | MEDLINE | ID: mdl-34793469

ABSTRACT

The lack of standardized language assessment tools in Russian impedes clinical work, evidence-based practice, and research in Russian-speaking clinical populations. To address this gap in the assessment of neurogenic language disorders, we developed and standardized a new comprehensive assessment instrument, the Russian Aphasia Test (RAT). The principal novelty of the RAT is that each subtest corresponds to a specific level of linguistic processing (phonological, lexical-semantic, syntactic, and discourse) in different domains: auditory comprehension, repetition, and oral production. In designing the test, we took into consideration various (psycho)linguistic factors known to influence language performance, as well as specific properties of Russian. The current paper describes the development of the RAT and reports its psychometric properties. A tablet-based version of the RAT was administered to 85 patients with different types and severity of aphasia and to 106 age-matched neurologically healthy controls. We established cutoff values for each subtest indicating deficit in a given task, as well as cutoff values for aphasia based on a Receiver Operating Characteristic (ROC) curve analysis of the composite score. The RAT showed very high sensitivity (> .93) and specificity (> .96), substantiating its validity for determining the presence of aphasia. The test's high construct validity was evidenced by strong correlations between subtests measuring similar linguistic processes. The concurrent validity of the test was also strong, as demonstrated by a high correlation with an existing aphasia battery. High internal, inter-rater, and test-retest reliability were also obtained. The RAT is the first comprehensive aphasia language battery in Russian with properly established psychometric properties. It is sensitive to a wide range of language deficits in aphasia and can reliably characterize individual profiles of language impairments.
Notably, the RAT is the first comprehensive aphasia test in any language to be fully automatized for administration on a tablet, maximizing further standardization of presentation and scoring procedures.
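The cutoff-selection step can be illustrated with a minimal ROC-style sweep that maximizes Youden's J (sensitivity + specificity - 1). The scores below are fabricated and the procedure is a sketch, not the RAT's actual analysis:

```python
def best_cutoff(patient_scores, control_scores):
    """Return (cutoff, sensitivity, specificity); lower scores = impaired.

    Sweeps every observed score as a candidate cutoff and keeps the one
    with the highest Youden's J = sensitivity + specificity - 1.
    """
    best = (None, 0.0, 0.0, -1.0)
    for c in sorted(set(patient_scores) | set(control_scores)):
        sens = sum(s <= c for s in patient_scores) / len(patient_scores)
        spec = sum(s > c for s in control_scores) / len(control_scores)
        j = sens + spec - 1
        if j > best[3]:
            best = (c, sens, spec, j)
    return best[:3]

patients = [41, 55, 58, 60, 62, 65]   # fabricated low composite scores
controls = [70, 72, 75, 78, 80, 83]   # fabricated healthy range
cutoff, sens, spec = best_cutoff(patients, controls)
```

With real, overlapping score distributions the chosen cutoff trades sensitivity against specificity; the very high values reported in the abstract (> .93 and > .96) indicate the patient and control composite scores were well separated.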


Subject(s)
Aphasia/diagnosis , Language Tests/standards , Language , Psychometrics , Adolescent , Adult , Aphasia/epidemiology , Aphasia/pathology , Aphasia/psychology , Comprehension/physiology , Computers , Female , Humans , Male , Middle Aged , Reference Standards , Russia/epidemiology , Semantics , Young Adult
6.
Front Psychol ; 12: 732030, 2021.
Article in English | MEDLINE | ID: mdl-35027898

ABSTRACT

We propose the fuzzy lexical representations (FLRs) hypothesis that regards fuzziness as a core property of nonnative (L2) lexical representations (LRs). Fuzziness refers to imprecise encoding at different levels of LRs and interacts with input frequency during lexical processing and learning in adult L2 speakers. The FLR hypothesis primarily focuses on the encoding of spoken L2 words. We discuss the causes of fuzzy encoding of phonological form and meaning as well as fuzzy form-meaning mappings and the consequences of fuzzy encoding for word storage and retrieval. A central factor contributing to the fuzziness of L2 LRs is the fact that the L2 lexicon is acquired when the L1 lexicon is already in place. There are two immediate consequences of such sequential learning. First, L2 phonological categorization difficulties lead to fuzzy phonological form encoding. Second, the acquisition of L2 word forms subsequently to their meanings, which had already been acquired together with the L1 word forms, leads to weak L2 form-meaning mappings. The FLR hypothesis accounts for a range of phenomena observed in L2 lexical processing, including lexical confusions, slow lexical access, retrieval of incorrect lexical entries, weak lexical competition, reliance on sublexical rather than lexical heuristics in word recognition, the precedence of word form over meaning, and the prominence of detailed, even if imprecisely encoded, information about LRs in episodic memory. The main claim of the FLR hypothesis - that the quality of lexical encoding is a product of a complex interplay between fuzziness and input frequency - can contribute to increasing the efficiency of the existing models of LRs and lexical access.

7.
Q J Exp Psychol (Hove) ; 73(8): 1173-1188, 2020 Aug.
Article in English | MEDLINE | ID: mdl-31931667

ABSTRACT

We report results from a self-paced silent-reading study and a self-paced reading-aloud study examining ambiguous forms (heteronyms) of Russian animate and inanimate nouns that are differentiated in speech through word stress, for example, uCHItelja 'teacher' (GEN/ACC.SG) and uchiteLJA 'teachers' (NOM.PL). During reading, the absence of the auditory cue (word stress) to word identification results in morphologically ambiguous forms, since both words have the same inflectional marking, -ja. Because word inflection is a reliable cue to syntactic role assignment, the ambiguity affects the levels of both morphology and syntactic structure. However, word order constraints and the frequency advantage of the GEN over both the NOM and the ACC noun forms with the -a/-ja inflection should pre-empt two different syntactic parses (OVS vs. SVO) when the heteronym is sentence-initial. We inquired into whether the parser is aware of the multi-level ambiguity and whether selected conflicting cues (case, word order, animacy) can prime parallel access to several structural parses. We found that animate and inanimate nouns patterned differently, and the difference was consistent across the experiments. Against the backdrop of classical sentence processing dichotomies, the emergent pattern fits with either a serial interactive or a parallel modular parser hypothesis.


Subject(s)
Conflict, Psychological , Cues , Psycholinguistics , Reading , Speech/physiology , Adult , Female , Humans , Male , Russia , Young Adult
8.
J Neurosci ; 39(14): 2698-2708, 2019 04 03.
Article in English | MEDLINE | ID: mdl-30700532

ABSTRACT

The sensorimotor cortex is somatotopically organized to represent the vocal tract articulators such as lips, tongue, larynx, and jaw. How speech and articulatory features are encoded at the subcortical level, however, remains largely unknown. We analyzed LFP recordings from the subthalamic nucleus (STN) and simultaneous electrocorticography recordings from the sensorimotor cortex of 11 human subjects (1 female) with Parkinson's disease during implantation of deep-brain stimulation (DBS) electrodes while they read aloud three-phoneme words. The initial phonemes involved either articulation primarily with the tongue (coronal consonants) or the lips (labial consonants). We observed significant increases in high-gamma (60-150 Hz) power in both the STN and the sensorimotor cortex that began before speech onset and persisted for the duration of speech articulation. As expected from previous reports, in the sensorimotor cortex, the primary articulators involved in the production of the initial consonants were topographically represented by high-gamma activity. We found that STN high-gamma activity also demonstrated specificity for the primary articulator, although no clear topography was observed. In general, subthalamic high-gamma activity varied along the ventral-dorsal trajectory of the electrodes, with greater high-gamma power recorded in the dorsal locations of the STN. Interestingly, the majority of significant articulator-discriminative activity in the STN occurred before that in sensorimotor cortex. These results demonstrate that articulator-specific speech information is contained within high-gamma activity of the STN, but with different spatial and temporal organization compared with similar information encoded in the sensorimotor cortex.

SIGNIFICANCE STATEMENT

Clinical and electrophysiological evidence suggest that the subthalamic nucleus (STN) is involved in speech; however, this important basal ganglia node is ignored in current models of speech production.
We previously showed that STN neurons differentially encode early and late aspects of speech production, but no previous studies have examined subthalamic functional organization for speech articulators. Using simultaneous LFP recordings from the sensorimotor cortex and the STN in patients with Parkinson's disease undergoing deep-brain stimulation surgery, we discovered that STN high-gamma activity tracks speech production at the level of vocal tract articulators before the onset of vocalization and often before related cortical encoding.


Subject(s)
Brain Mapping/methods , Electrocorticography/methods , Photic Stimulation/methods , Sensorimotor Cortex/physiology , Speech/physiology , Subthalamic Nucleus/physiology , Aged , Female , Humans , Male , Middle Aged
9.
J Speech Lang Hear Res ; 57(4): 1468-79, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24686836

ABSTRACT

PURPOSE: This study investigated how listeners' native language affects their weighting of acoustic cues (such as vowel quality, pitch, duration, and intensity) in the perception of contrastive word stress.
METHOD: Native speakers (N = 45) of typologically diverse languages (English, Russian, and Mandarin) performed a stress identification task on nonce disyllabic words with fully crossed combinations of each of the 4 cues in both syllables.
RESULTS: Although vowel quality was the strongest cue for all groups of listeners, pitch was the second strongest cue for the English and Mandarin listeners but was virtually disregarded by the Russian listeners. Duration and intensity cues were used by the Russian listeners to a significantly greater extent than by the English and Mandarin participants. Relative to noncontrastive cue combinations across syllables, cues were stronger in the iambic contour than in the trochaic contour.
CONCLUSIONS: Although both English and Russian are stress languages and Mandarin is a tonal language, the stress perception performance of the Mandarin listeners, but not of the Russian listeners, was more similar to that of the native English listeners, both in terms of the weighting of the acoustic cues and the cues' relative strength in different word positions. The findings suggest that the tuning of second-language prosodic perception is not entirely predictable from prosodic similarities across languages.
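Cue weighting of this kind can be illustrated with a toy estimate: a cue's weight is the change in the probability of a "stress on syllable 1" response when that cue favors syllable 1 versus syllable 2. The trials below are fabricated for illustration; this is not the study's statistical model:

```python
def cue_weight(trials, cue):
    """Estimate the weight of `cue` from identification responses.

    trials: list of (cues, response) pairs, where cues[cue] is +1 if the
    cue favors stress on syllable 1 and -1 if it favors syllable 2, and
    response is 1 or 2 (the syllable the listener judged as stressed).
    """
    favoring = [t for t in trials if t[0][cue] == +1]
    opposing = [t for t in trials if t[0][cue] == -1]
    p_syll1 = lambda ts: sum(r == 1 for _, r in ts) / len(ts)
    return p_syll1(favoring) - p_syll1(opposing)

# Fabricated listener who follows vowel quality and ignores pitch
trials = [
    ({"vowel": +1, "pitch": +1}, 1),
    ({"vowel": +1, "pitch": -1}, 1),
    ({"vowel": -1, "pitch": +1}, 2),
    ({"vowel": -1, "pitch": -1}, 2),
]
```

A listener who tracks a cue perfectly gets weight 1.0 for it, and 0.0 for a cue they disregard, mirroring the cross-linguistic contrasts reported above (e.g., pitch being near-disregarded by the Russian listeners).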


Subject(s)
Language , Phonetics , Psychoacoustics , Speech Acoustics , Speech Perception , Adult , Cues , Female , Humans , Male , Middle Aged , Signal Detection, Psychological , Young Adult