Results 1 - 20 of 114
1.
Neuropsychologia ; 204: 108973, 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39151687

ABSTRACT

The goal of this study was to investigate the impact of the age of acquisition (AoA) on functional brain representations of sign language in two exceptional groups of hearing bimodal bilinguals: native signers (simultaneous bilinguals since early childhood) and late signers (proficient sequential bilinguals who learnt a sign language after puberty). We asked whether effects of AoA would be present across languages - signed and audiovisual spoken - and thus observed only in late signers, as they acquired each language at a different life stage, and whether effects of AoA would be present during sign language processing across groups. Moreover, we aimed to carefully control participants' level of sign language proficiency by implementing a battery of language tests developed for the purpose of the project, which confirmed that participants had a high level of competence in sign language. Between-group analyses revealed the hypothesized modulatory effect of AoA in the right inferior parietal lobule (IPL) in native signers compared to late signers. With respect to within-group differences across languages, we observed greater involvement of the left IPL in response to sign language than to spoken language in both native and late signers, indicating language modality effects. Overall, our results suggest that the neural underpinnings of language are molded by the linguistic characteristics of the language as well as by when in life the language is learnt.

2.
J Affect Disord ; 366: 290-299, 2024 Aug 24.
Article in English | MEDLINE | ID: mdl-39187178

ABSTRACT

BACKGROUND: Approximately 10% of mothers experience depression each year, which increases the risk of depression in their offspring. To date, no research has analysed the linguistic features of depressed mothers and their adolescent offspring during dyadic interactions. We examined the extent to which linguistic features of mothers' and adolescents' speech during dyadic interactional tasks could discriminate depressed from non-depressed mothers. METHODS: Computer-assisted linguistic analysis (Linguistic Inquiry and Word Count; LIWC) was applied to transcripts of low-income mother-adolescent dyads (N = 151) performing a lab-based problem-solving interaction task. One-way multivariate analyses were conducted to identify linguistic features, hypothesized to be related to maternal depressive status, that differed significantly in frequency between depressed and non-depressed mothers and between their higher- and lower-risk offspring. Logistic regression analyses were then performed to classify dyads into the two groups. RESULTS: Linguistic features in mothers' and their adolescent offspring's speech during problem-solving interactions discriminated between depressed and non-depressed mothers. Many, but not all, effects were consistent with those identified in previous research using primarily written text, highlighting the validity and reliability of language behaviour associated with depressive symptomatology across lab-based and naturalistic contexts. LIMITATIONS: Our analyses do not enable us to ascertain how mothers' language behaviour may have influenced their offspring's communication patterns. We also cannot say how or whether these findings generalize to other contexts or populations. CONCLUSION: The findings extend the existing literature on linguistic features of depression by indicating that maternal depression is associated with linguistic behaviour during mother-adolescent interaction.
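
As a rough illustration of the analysis type described in this abstract (not the authors' actual pipeline), the sketch below fits a logistic regression on per-dyad LIWC category scores to classify maternal depression status; the file name and LIWC column names are hypothetical.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical table: one row per dyad, LIWC category frequencies plus a 0/1 label.
df = pd.read_csv("liwc_dyad_features.csv")
feature_cols = ["i", "negemo", "posemo", "sad", "certain"]  # example LIWC categories
X = df[feature_cols].to_numpy()
y = df["mother_depressed"].to_numpy()  # 1 = depressed, 0 = non-depressed

# Standardize features and evaluate a logistic regression classifier with 5-fold CV.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"Mean cross-validated AUC: {scores.mean():.2f}")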

3.
J Integr Neurosci ; 23(7): 139, 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39082290

ABSTRACT

BACKGROUND: Segments and tone are important sub-syllabic units that play large roles in lexical processing in tonal languages. However, their respective roles remain unclear, and the event-related potential (ERP) technique can shed light on the cognitive mechanisms underlying lexical processing. METHODS: The high temporal resolution of ERPs makes the technique well suited to tracking rapidly changing spoken language input. The present ERP study examined the different roles of segments and tone in Mandarin Chinese lexical processing. An auditory priming experiment was designed that included five types of priming stimuli: consonant mismatch, vowel mismatch, tone mismatch, unrelated mismatch, and identity. Participants were asked to judge whether the target of each prime-target pair was a real Mandarin disyllabic word. RESULTS: Behavioral measures (reaction time and response accuracy) and ERP data were collected. The results differed from those of previous studies, conducted mainly on non-tonal languages such as English, which showed a dominant role of consonants in lexical access. Our results showed that consonants and vowels play comparable roles, whereas tone plays a less important role than consonants and vowels in Mandarin lexical processing. CONCLUSIONS: These results have implications for understanding the brain mechanisms underlying lexical processing in tonal languages.


Subjects
Electroencephalography, Evoked Potentials, Speech Perception, Humans, Male, Female, Young Adult, Speech Perception/physiology, Adult, Evoked Potentials/physiology, Reaction Time/physiology, Brain/physiology, Auditory Evoked Potentials/physiology, Psycholinguistics, Language
4.
JMIR Cancer ; 10: e43070, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39037754

ABSTRACT

BACKGROUND: Commonly offered as supportive care, therapist-led online support groups (OSGs) are a cost-effective way to provide support to individuals affected by cancer. One important indicator of a successful OSG session is group cohesion; however, monitoring group cohesion can be challenging due to the lack of nonverbal cues and in-person interactions in text-based OSGs. The Artificial Intelligence-based Co-Facilitator (AICF) was designed to contextually identify therapeutic outcomes from conversations and produce real-time analytics. OBJECTIVE: The aim of this study was to develop a method to train and evaluate AICF's capacity to monitor group cohesion. METHODS: AICF used a text classification approach to extract mentions of group cohesion within conversations. A sample of data was annotated by human scorers and used as training data to build the classification model. The annotations were further supported by identifying contextually similar group cohesion expressions using word embedding models. AICF's performance was also compared against the natural language processing software Linguistic Inquiry and Word Count (LIWC). RESULTS: AICF was trained on 80,000 messages obtained from Cancer Chat Canada. We tested AICF on 34,048 messages. Human experts scored 6797 (20%) of the messages to evaluate the ability of AICF to classify group cohesion. Results showed that machine learning algorithms combined with human input could detect group cohesion, a clinically meaningful indicator of effective OSGs. After retraining with human input, AICF reached an F1-score of 0.82. AICF performed slightly better at identifying group cohesion compared to LIWC. CONCLUSIONS: AICF has the potential to assist therapists by detecting group discord that is amenable to real-time intervention. Overall, AICF presents a unique opportunity to strengthen patient-centered care in web-based settings by attending to individual needs. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): RR2-10.2196/21453.
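
A minimal sketch of the evaluation pattern described here: train a message-level classifier on human-annotated examples of group cohesion and report its F1-score. The TF-IDF plus logistic regression pipeline and the toy messages are stand-ins, not the actual AICF model or the Cancer Chat Canada data.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# Toy, hand-labelled messages (1 = expresses group cohesion, 0 = does not).
train_messages = [
    "i feel like we really understand each other in this group",
    "thank you all, this space means so much to me",
    "what time does the next session start",
    "my scan results come back next week",
]
train_labels = [1, 1, 0, 0]
test_messages = ["we are all in this together", "has anyone tried the new clinic"]
test_labels = [1, 0]

# Fit a simple text classifier and score it the same way AICF was evaluated (F1).
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
model.fit(train_messages, train_labels)
print("F1:", f1_score(test_labels, model.predict(test_messages)))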

5.
Neuroimage ; : 120720, 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38971484

ABSTRACT

This meta-analysis summarizes evidence from 44 neuroimaging experiments and characterizes the general linguistic network in early deaf individuals. Meta-analytic comparisons with hearing individuals found that a specific set of regions (in particular the left inferior frontal gyrus and posterior middle temporal gyrus) participates in supramodal language processing. In addition to previously described modality-specific differences, the present study showed that the left calcarine gyrus and the right caudate were also recruited in deaf compared with hearing individuals. The study further showed that the bilateral posterior superior temporal gyrus is shaped by cross-modal plasticity, whereas left frontotemporal areas are shaped by early language experience. Although an overall left-lateralized pattern for language processing was observed in the early deaf individuals, regional lateralization was altered in the inferior frontal gyrus and anterior temporal lobe. These findings indicate that the core language network functions in a modality-independent manner, and provide a foundation for determining the contributions of sensory and linguistic experiences in shaping the neural bases of language processing.

6.
Neurobiol Lang (Camb) ; 5(2): 553-588, 2024.
Article in English | MEDLINE | ID: mdl-38939730

ABSTRACT

We examined the impact of exposure to a signed language (American Sign Language, or ASL) at different ages on the neural systems that support spoken language phonemic discrimination in deaf individuals with cochlear implants (CIs). Deaf CI users (N = 18, age = 18-24 yrs) who were exposed to a signed language at different ages and hearing individuals (N = 18, age = 18-21 yrs) completed a phonemic discrimination task in a spoken native (English) and non-native (Hindi) language while undergoing functional near-infrared spectroscopy neuroimaging. Behaviorally, deaf CI users who received a CI early versus later in life showed better English phonemic discrimination, albeit phonemic discrimination was poor relative to hearing individuals. Importantly, the age of exposure to ASL was not related to phonemic discrimination. Neurally, early-life language exposure, irrespective of modality, was associated with greater neural activation of left-hemisphere language areas critically involved in phonological processing during the phonemic discrimination task in deaf CI users. In particular, early exposure to ASL was associated with increased activation in the left hemisphere's classic language regions for native versus non-native language phonemic contrasts for deaf CI users who received a CI later in life. For deaf CI users who received a CI early in life, the age of exposure to ASL was not related to neural activation during phonemic discrimination. Together, the findings suggest that early signed language exposure does not negatively impact spoken language processing in deaf CI users, but may instead potentially offset the negative effects of language deprivation that deaf children without any signed language exposure experience prior to implantation. This empirical evidence aligns with and lends support to recent perspectives regarding the impact of ASL exposure in the context of CI usage.

7.
Orthop J Sports Med ; 12(6): 23259671241252936, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38881856

ABSTRACT

Background: Anterior cruciate ligament (ACL) injuries are among the most common knee injuries in pediatric patients in the United States. The patient's primary spoken language may affect outcomes after ACL reconstruction (ACLR). Purpose/Hypothesis: The purpose of this study was to identify differences in ACLR outcomes between patients whose primary, preferred spoken language was either English or Spanish. It was hypothesized that there would be a difference in retear rates between patients preferring English versus Spanish. Study Design: Cohort study; Level of evidence, 3. Methods: A retrospective cohort study was performed on pediatric and adolescent patients who underwent ACLR at a single institution. Patients were divided into 2 cohorts based on their preferred spoken language: English or Spanish. All patients underwent either hamstring tendon or bone-patellar tendon-bone autograft ACLR performed by the same surgeon with the same postoperative rehabilitation protocols. Linear regression, chi-square tests, and multivariate logistic regression were used to determine whether outcomes (graft tear, revision surgery, and contralateral injury) differed between groups. Results: A total of 68 patients were identified: 33 patients whose preferred language was English and 35 patients whose preferred language was Spanish. The overall mean age of the patients was 16.4 ± 1.4 years (range, 13.2-20.5 years), and the mean follow-up time was 3.26 ± 1.98 years (range, 0.53-8.13 years). Patients who preferred Spanish were more likely than those who preferred English to experience graft tears requiring revision surgery after ACLR (P = .02; odds ratio [OR] = 5.81; adjusted OR = 1.94), at a tear rate of 14.3%. Conclusion: Patients who preferred to speak Spanish experienced higher graft tear rates when compared with patients who preferred speaking English, even after adjusting for sex, sport played, graft type, type of insurance, and time to surgery.
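
For readers unfamiliar with how an adjusted odds ratio like the one reported above is typically obtained, the sketch below fits a multivariate logistic regression and exponentiates the coefficient of interest; the file and variable names are hypothetical and do not reflect the study's actual dataset.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical patient-level table: binary outcome plus covariates to adjust for.
# prefers_spanish is assumed to be coded 0/1; the other covariates may be categorical.
df = pd.read_csv("aclr_outcomes.csv")
model = smf.logit(
    "graft_tear ~ prefers_spanish + sex + sport + graft_type + insurance + days_to_surgery",
    data=df,
).fit()

# Exponentiating the coefficient of the language indicator gives the adjusted OR.
adjusted_or = np.exp(model.params["prefers_spanish"])
print(f"Adjusted OR, Spanish- vs English-preferring patients: {adjusted_or:.2f}")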

8.
Brain ; 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38889230

ABSTRACT

There is a rich tradition of research on the neuroanatomical correlates of spoken language production in aphasia using constrained tasks (e.g., picture naming), which offer controlled insights into the distinct processes that govern speech and language (i.e., lexical-semantic access, morphosyntactic construction, phonological encoding, speech motor programming/execution). Yet these tasks do not necessarily reflect everyday language use. In contrast, naturalistic language production (also referred to as connected speech or discourse) more closely approximates typical processing demands, requiring the dynamic integration of all aspects of speech and language. The brain bases of naturalistic language production remain relatively unknown, however, in part because of the difficulty in deriving features that are salient, quantifiable, and interpretable relative to both speech-language processes and the extant literature. The present cross-sectional observational study seeks to address these challenges by leveraging a validated and comprehensive auditory-perceptual measurement system that yields four explanatory dimensions of performance: Paraphasia (misselection of words and sounds), Logopenia (paucity of words), Agrammatism (grammatical omissions), and Motor speech (impaired speech motor programming/execution). We used this system to characterize naturalistic language production in a large and representative sample of individuals with acute post-stroke aphasia (n = 118). Scores on each of the four dimensions were correlated with lesion metrics, and multivariate associations among the dimensions and brain regions were then explored. Our findings revealed distinct yet overlapping neuroanatomical correlates throughout the left-hemisphere language network. Paraphasia and Logopenia were associated primarily with posterior regions, spanning both dorsal and ventral streams, which are critical for lexical-semantic access and phonological encoding. In contrast, Agrammatism and Motor speech were associated primarily with anterior regions of the dorsal stream that are involved in morphosyntactic construction and speech motor planning/execution, respectively. Collectively, we view these results as constituting a brain-behavior model of naturalistic language production in aphasia, aligning with both historical and contemporary accounts of the neurobiology of spoken language production.

9.
J Child Lang ; : 1-22, 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38362892

ABSTRACT

Children who receive cochlear implants develop spoken language on a protracted timescale. The home environment facilitates speech-language development, yet it is relatively unknown how the environment differs between children with cochlear implants and typical hearing. We matched eighteen preschoolers with implants (31-65 months) to two groups of children with typical hearing: by chronological age and hearing age. Each child completed a long-form, naturalistic audio recording of their home environment (appx. 16 hours/child; >730 hours of observation) to measure adult speech input, child vocal productivity, and caregiver-child interaction. Results showed that children with cochlear implants and typical hearing were exposed to and engaged in similar amounts of spoken language with caregivers. However, the home environment did not reflect developmental stages as closely for children with implants, or predict their speech outcomes as strongly. Home-based speech-language interventions should focus on the unique input-outcome relationships for this group of children with hearing loss.

10.
Autism Res ; 17(2): 419-431, 2024 02.
Article in English | MEDLINE | ID: mdl-38348589

ABSTRACT

Speech ability may limit spoken language development in some minimally verbal autistic children. In this study, we aimed to determine whether an acoustic measure of speech production, vowel distinctiveness, is concurrently related to expressive language (EL) for autistic children. Syllables containing the vowels [i] and [a] were recorded remotely from 27 autistic children (4;1-7;11) with a range of spoken language abilities. Vowel distinctiveness was calculated using automatic formant tracking software. Robust hierarchical regressions were conducted with receptive language (RL) and vowel distinctiveness as predictors of EL. Hierarchical regressions were also conducted within a High EL and a Low EL subgroup. Vowel distinctiveness accounted for 29% of the variance in EL for the entire group, and RL for 38%. For the Low EL group, only vowel distinctiveness was significant, accounting for 38% of the variance in EL. Conversely, in the High EL group, only RL was significant and accounted for 26% of the variance in EL. Replicating previous results, speech production and RL significantly predicted concurrent EL in autistic children, with speech production being the sole significant predictor for the Low EL group and RL the sole significant predictor for the High EL group. Further work is needed to determine whether vowel distinctiveness longitudinally, as well as concurrently, predicts EL. Findings have important implications for the early identification of language impairment and for developing language interventions for autistic children.
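
One common way to quantify vowel distinctiveness is the distance between vowel categories in F1 x F2 space; the sketch below illustrates that idea with made-up formant values. The exact formula and formant-tracking software used in the study may differ.

import numpy as np

# (F1, F2) per token in Hz, e.g., extracted with a formant tracker; values are invented.
formants_i = np.array([[310, 2800], [330, 2750], [295, 2900]], dtype=float)
formants_a = np.array([[850, 1400], [820, 1350], [880, 1450]], dtype=float)

# Distinctiveness as the Euclidean distance between the two vowel centroids.
centroid_i = formants_i.mean(axis=0)
centroid_a = formants_a.mean(axis=0)
distinctiveness = np.linalg.norm(centroid_i - centroid_a)
print(f"Vowel distinctiveness ([i]-[a] distance in F1xF2 space): {distinctiveness:.0f} Hz")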


Subjects
Autism Spectrum Disorder, Autistic Disorder, Language Disorders, Child, Humans, Autistic Disorder/complications, Autism Spectrum Disorder/complications, Language, Speech, Phonetics
11.
Atten Percept Psychophys ; 86(1): 339-353, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37872434

ABSTRACT

Listeners readily adapt to variation in non-native-accented speech, learning to disambiguate between talker-specific and accent-based variation. We asked (1) which linguistic and indexical features of the spoken utterance are relevant for this learning to occur and (2) whether task-driven attention to these features affects the extent to which learning generalizes to novel utterances and voices. In two experiments, listeners heard English sentences (Experiment 1) or words (Experiment 2) produced by Spanish-accented talkers during an exposure phase. Listeners' attention was directed to lexical content (transcription), indexical cues (talker identification), or both (transcription + talker identification). In Experiment 1, listeners' test transcription of novel English sentences spoken by Spanish-accented talkers showed generalized perceptual learning to previously unheard voices and utterances for all training conditions. In Experiment 2, generalized learning occurred only in the transcription + talker identification condition, suggesting that attention to both linguistic and indexical cues optimizes listeners' ability to distinguish between individual talker- and group-based variation, especially with the reduced availability of sentence-length prosodic information. Collectively, these findings highlight the role of attentional processes in the encoding of speech input and underscore the interdependency of indexical and lexical characteristics in spoken language processing.


Subjects
Speech Perception, Speech, Humans, Learning, Language, Linguistics
12.
Neural Netw ; 169: 191-204, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37898051

ABSTRACT

This paper analyzes diverse features extracted from spoken language to select the most discriminative ones for dementia detection. We present a two-step feature selection (FS) approach: Step 1 utilizes filter methods to pre-screen features, and Step 2 uses a novel feature ranking (FR) method, referred to as dual dropout ranking (DDR), to rank the screened features and select spoken language biomarkers. The proposed DDR is based on a dual-net architecture that separates FS and dementia detection into two neural networks (namely, the operator and selector). The operator is trained on features obtained from the selector to reduce classification or regression loss. The selector is optimized to predict the operator's performance based on automatic regularization. Results show that the approach significantly reduces feature dimensionality while identifying small feature subsets that achieve comparable or superior performance compared with the full, default feature set. The Python codes are available at https://github.com/kexquan/dual-dropout-ranking.
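
A much-simplified sketch of the two-step idea, assuming a feature matrix X and labels y: a filter method pre-screens the features, then per-feature gates are learned jointly with a small classifier and used to rank the screened features. This deterministic soft-gate stand-in is not the dual-net DDR itself; see the linked repository for the authors' implementation.

import torch
import torch.nn as nn
from sklearn.feature_selection import SelectKBest, mutual_info_classif

def two_step_feature_ranking(X, y, k_prescreen=64, epochs=200, lam=1e-2):
    # Step 1: filter pre-screening with mutual information.
    screen = SelectKBest(mutual_info_classif, k=min(k_prescreen, X.shape[1])).fit(X, y)
    Xs = torch.tensor(screen.transform(X), dtype=torch.float32)
    yt = torch.tensor(y, dtype=torch.float32)

    # Step 2: learn per-feature soft gates jointly with a linear classifier.
    gate_logits = nn.Parameter(torch.zeros(Xs.shape[1]))
    clf = nn.Linear(Xs.shape[1], 1)
    opt = torch.optim.Adam([gate_logits, *clf.parameters()], lr=1e-2)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        gates = torch.sigmoid(gate_logits)
        # Classification loss plus a sparsity penalty on the gates.
        loss = bce(clf(Xs * gates).squeeze(-1), yt) + lam * gates.sum()
        loss.backward()
        opt.step()

    # Rank the screened features by learned gate weight; return original indices.
    order = torch.argsort(torch.sigmoid(gate_logits), descending=True)
    return screen.get_support(indices=True)[order.numpy()]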


Subjects
Dementia, Neural Networks (Computer), Humans, Biomarkers, Dementia/diagnosis, Language
14.
Brain Sci ; 13(7)2023 Jun 29.
Article in English | MEDLINE | ID: mdl-37508940

ABSTRACT

Traditionally, speech perception training paradigms have not adequately taken into account the possibility that there may be modality-specific requirements for perceptual learning with auditory-only (AO) versus visual-only (VO) speech stimuli. The study reported here investigated the hypothesis that there are modality-specific differences in how prior information is used by normal-hearing participants during vocoded versus VO speech training. Two different experiments, one with vocoded AO speech (Experiment 1) and one with VO, lipread, speech (Experiment 2), investigated the effects of giving different types of prior information to trainees on each trial during training. The training was for four ~20 min sessions, during which participants learned to label novel visual images using novel spoken words. Participants were assigned to different types of prior information during training: Word Group trainees saw a printed version of each training word (e.g., "tethon"), and Consonant Group trainees saw only its consonants (e.g., "t_th_n"). Additional groups received no prior information (i.e., Experiment 1, AO Group; Experiment 2, VO Group) or a spoken version of the stimulus in a different modality from the training stimuli (Experiment 1, Lipread Group; Experiment 2, Vocoder Group). That is, in each experiment, there was a group that received prior information in the modality of the training stimuli from the other experiment. In both experiments, the Word Groups had difficulty retaining the novel words they attempted to learn during training. However, when the training stimuli were vocoded, the Word Group improved their phoneme identification. When the training stimuli were visual speech, the Consonant Group improved their phoneme identification and their open-set sentence lipreading. The results are considered in light of theoretical accounts of perceptual learning in relationship to perceptual modality.

15.
J Neurosci ; 43(20): 3718-3732, 2023 05 17.
Article in English | MEDLINE | ID: mdl-37059462

ABSTRACT

Brain oscillations are prevalent in all species and are involved in numerous perceptual operations. α oscillations are thought to facilitate processing through the inhibition of task-irrelevant networks, while β oscillations are linked to the putative reactivation of content representations. Can the proposed functional role of α and β oscillations be generalized from low-level operations to higher-level cognitive processes? Here we address this question focusing on naturalistic spoken language comprehension. Twenty-two (18 female) Dutch native speakers listened to stories in Dutch and French while MEG was recorded. We used dependency parsing to identify three dependency states at each word: the number of (1) newly opened dependencies, (2) dependencies that remained open, and (3) resolved dependencies. We then constructed forward models to predict α and β power from the dependency features. Results showed that dependency features predict α and β power in language-related regions beyond low-level linguistic features. Left temporal, fundamental language regions are involved in language comprehension in α, while frontal and parietal, higher-order language regions, and motor regions are involved in β. Critically, α- and β-band dynamics seem to subserve language comprehension tapping into syntactic structure building and semantic composition by providing low-level mechanistic operations for inhibition and reactivation processes. Because of the temporal similarity of the α-β responses, their potential functional dissociation remains to be elucidated. Overall, this study sheds light on the role of α and β oscillations during naturalistic spoken language comprehension, providing evidence for the generalizability of these dynamics from perceptual to complex linguistic processes. SIGNIFICANCE STATEMENT: It remains unclear whether the proposed functional role of α and β oscillations in perceptual and motor function is generalizable to higher-level cognitive processes, such as spoken language comprehension. We found that syntactic features predict α and β power in language-related regions beyond low-level linguistic features when listening to naturalistic speech in a known language. We offer experimental findings that integrate a neuroscientific framework on the role of brain oscillations as "building blocks" with spoken language comprehension. This supports the view of a domain-general role of oscillations across the hierarchy of cognitive functions, from low-level sensory operations to abstract linguistic processes.
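
As a concrete illustration of how per-word dependency-state counts of this kind can be derived from an automatic parse (the paper's exact operationalisation may differ), the sketch below treats each arc as opening at its earlier word and resolving at its later word; spaCy and its small Dutch model are assumptions used purely for illustration.

import spacy

nlp = spacy.load("nl_core_news_sm")  # assumes the Dutch model is installed

def dependency_states(sentence):
    doc = nlp(sentence)
    # One arc per non-root token, stored as (earlier position, later position).
    arcs = [(min(t.i, t.head.i), max(t.i, t.head.i)) for t in doc if t.head.i != t.i]
    features = []
    for i, tok in enumerate(doc):
        opened = sum(1 for l, r in arcs if l == i)        # newly opened at this word
        resolved = sum(1 for l, r in arcs if r == i)      # resolved at this word
        still_open = sum(1 for l, r in arcs if l < i < r) # opened earlier, not yet closed
        features.append((tok.text, opened, still_open, resolved))
    return features

for row in dependency_states("De kinderen lazen gisteren een spannend boek"):
    print(row)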


Subjects
Comprehension, Speech Perception, Female, Humans, Comprehension/physiology, Magnetoencephalography, Brain/physiology, Language, Linguistics, Brain Mapping/methods, Speech Perception/physiology
16.
Sensors (Basel) ; 23(5)2023 Mar 06.
Article in English | MEDLINE | ID: mdl-36905052

ABSTRACT

The comprehension of spoken language is a crucial aspect of dialogue systems, encompassing two fundamental tasks: intent classification and slot filling. Currently, the joint modeling approach for these two tasks has emerged as the dominant method in spoken language understanding modeling. However, existing joint models make limited use of the relevance between the two tasks and of contextual semantic features. To address these limitations, a joint model based on BERT and semantic fusion (JMBSF) is proposed. The model employs pre-trained BERT to extract semantic features and uses semantic fusion to associate and integrate this information. Experiments on two spoken language understanding benchmark datasets, ATIS and Snips, demonstrate that the proposed JMBSF model attains 98.80% and 99.71% intent classification accuracy, 98.25% and 97.24% slot-filling F1-score, and 93.40% and 93.57% sentence accuracy, respectively. These results represent a significant improvement over other joint models. Furthermore, comprehensive ablation studies confirm the effectiveness of each component in the JMBSF design.
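
The sketch below shows the generic form of a BERT-based joint model for these two tasks: a shared encoder with an intent head on the pooled [CLS] representation and a slot-tagging head on the token representations. It is not the JMBSF semantic-fusion model itself, and the label counts and model name are placeholders.

import torch
import torch.nn as nn
from transformers import BertModel

class JointIntentSlotModel(nn.Module):
    def __init__(self, num_intents, num_slot_tags, model_name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        hidden = self.bert.config.hidden_size
        self.intent_head = nn.Linear(hidden, num_intents)   # utterance-level intent
        self.slot_head = nn.Linear(hidden, num_slot_tags)   # per-token slot tags

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        intent_logits = self.intent_head(out.pooler_output)     # (batch, num_intents)
        slot_logits = self.slot_head(out.last_hidden_state)     # (batch, seq_len, num_slot_tags)
        return intent_logits, slot_logits

# Training would sum a cross-entropy loss over intents and a token-level
# cross-entropy loss over slot tags, e.g., on ATIS or Snips.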


Subjects
Language, Semantics, Natural Language Processing, Intention, Acoustic Stimulation
17.
Brain Sci ; 13(3)2023 Mar 19.
Article in English | MEDLINE | ID: mdl-36979322

ABSTRACT

Recent studies have questioned past conclusions regarding the mechanisms of the McGurk illusion, especially how McGurk susceptibility might inform our understanding of audiovisual (AV) integration. We previously proposed that the McGurk illusion is likely attributable to a default mechanism, whereby either the visual system, auditory system, or both default to specific phonemes: those implicated in the McGurk illusion. We hypothesized that the default mechanism occurs because visual stimuli with an indiscernible place of articulation (like those traditionally used in the McGurk illusion) lead to an ambiguous perceptual environment and thus a failure in AV integration. In the current study, we tested the default hypothesis as it pertains to the auditory system. Participants performed two tasks. One task was a typical McGurk illusion task, in which individuals listened to auditory-/ba/ paired with visual-/ga/ and judged what they heard. The second task was an auditory-only task, in which individuals transcribed trisyllabic words with a phoneme replaced by silence. We found that individuals' transcription of missing phonemes often defaulted to '/d/t/th/', the same phonemes often experienced during the McGurk illusion. Importantly, individuals' default rate was positively correlated with their McGurk rate. We conclude that the McGurk illusion arises when people fail to integrate visual percepts with auditory percepts, due to visual ambiguity, thus leading the auditory system to default to phonemes often implicated in the McGurk illusion.

18.
Lang Resour Eval ; : 1-38, 2023 Feb 22.
Article in English | MEDLINE | ID: mdl-36845276

ABSTRACT

In this paper, we present a corpus for heritage Bosnian/Croatian/Montenegrin/Serbian (BCMS) spoken in German-speaking Switzerland. The corpus consists of elicited conversations between 29 second-generation speakers originating from different regions of former Yugoslavia. In total, the corpus contains 30 turn-aligned transcripts with an average length of 6 min. It is enriched with extensive speakers' metadata, annotations, and pre-calculated corpus counts. The corpus can be accessed through an interactive corpus platform that allows for browsing, querying, and filtering, but also for creating and sharing custom annotations. Principal user groups we address with this corpus are researchers of heritage BCMS, as well as students and teachers of BCMS living in diaspora. In addition to introducing the corpus platform and the workflows we adopted to create it, we also present a case study on BCMS spoken by a pair of siblings who participated in the map task, and discuss advantages and challenges of using this corpus platform for linguistic research.

19.
Audiol Res ; 13(1): 151-159, 2023 Feb 14.
Article in English | MEDLINE | ID: mdl-36825953

ABSTRACT

Interest in the impact of language input on children's speech, language, and brain development grew out of Hart and Risley's famous "30-million-word gap". A perspective bolstered by many studies in the last decade relates higher socio-economic status (SES) to qualitative and quantitative advantages in children's speech. The chain of reasoning in these studies is that literacy development depends on language and brain development, that the brain is built through environmental experience, and that language input depends on the brain's perception of auditory information. This essay uses the latest published peer-reviewed research to outline the current landscape of the role of SES in the development of speech and language skills among children with hearing loss (HL) who are enrolled in auditory-driven habilitation programs. It argues that low-SES families may provide sufficient input for their children: the outcomes of auditory-driven programs implemented by speech-language pathologists (SLPs) appear to be independent of SES. The role of SES in this developmental trajectory remains unclear, and clinical practice may be better guided by other validated and robust parameters related to hearing loss.

20.
Front Hum Neurosci ; 17: 1079493, 2023.
Article in English | MEDLINE | ID: mdl-36742356

ABSTRACT

Negation is frequently used in natural language, yet relatively little is known about its processing. More importantly, what is known regarding the neurophysiological processing of negation is mostly based on results of studies using written stimuli (the word-by-word paradigm). While the results of these studies have suggested processing costs in connection to negation (increased negativities in brain responses), it is difficult to know how this translates into processing of spoken language. We therefore developed an auditory paradigm based on a previous visual study investigating processing of affirmatives, sentential negation (not), and prefixal negation (un-). The findings of processing costs were replicated but differed in the details. Importantly, the pattern of ERP effects suggested less effortful processing for auditorily presented negated forms (restricted to increased anterior and posterior positivities) in comparison to visually presented negated forms. We suggest that the natural flow of spoken language reduces variability in processing and therefore results in clearer ERP patterns.
