Results 1 - 20 of 50
1.
Infant Behav Dev ; 76: 101959, 2024 May 22.
Article in English | MEDLINE | ID: mdl-38781790

ABSTRACT

Werker and Tees (1984) prompted decades of research attempting to detail the paths infants take towards specialisation for the sounds of their native language(s). Most of this research has examined the trajectories of monolingual children. However, it has also been proposed that bilinguals, who are exposed to greater phonetic variability than monolinguals and must learn the rules of two languages, may remain perceptually open to non-native language sounds later into life than monolinguals. Using a visual habituation paradigm, the current study tests this proposal by comparing 15- to 18-month-old monolingual and bilingual children's developmental trajectories for discrimination of a non-native phonetic consonant contrast. A novel approach to the integration of stimulus presentation software with eye-tracking software was validated for objective measurement of infant looking time. The results did not support the hypothesis of a protracted period of sensitivity to non-native phonetic contrasts in bilingual compared to monolingual infants. Implications for diversification of perceptual narrowing research and implementation of increasingly sensitive measures are discussed.
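The looking-time measure this paradigm depends on is easy to make concrete. The following Python sketch shows one way looking time could be accumulated from eye-tracker samples and a habituation criterion applied; the function names, the minimum-look threshold, and the window/criterion values are illustrative assumptions, not the authors' validated implementation.

import numpy as np

def looking_time(timestamps, on_face, min_look=0.1):
    """Total looking time in seconds from eye-tracker samples.

    timestamps: sample times in seconds; on_face: booleans marking
    samples where gaze fell on the stimulus. Runs shorter than
    min_look seconds are discarded as blinks/noise (an assumed
    cleaning step, not from the paper).
    """
    total, run_start = 0.0, None
    for t, on in zip(timestamps, on_face):
        if on and run_start is None:
            run_start = t
        elif not on and run_start is not None:
            if t - run_start >= min_look:
                total += t - run_start
            run_start = None
    if run_start is not None and timestamps[-1] - run_start >= min_look:
        total += timestamps[-1] - run_start
    return total

def habituated(trial_looks, window=3, criterion=0.5):
    """Common habituation rule: mean looking time over the last
    `window` trials falls below `criterion` times the mean of the
    first `window` trials."""
    if len(trial_looks) < 2 * window:
        return False
    return np.mean(trial_looks[-window:]) < criterion * np.mean(trial_looks[:window])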

2.
Biling (Camb Engl) ; 26(4): 835-844, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37636491

ABSTRACT

Bilingual infants rely on facial information, such as lip patterns, differently than monolinguals do when differentiating their native languages. This may explain, at least in part, why young monolinguals and bilinguals show differences in social attention. For example, in the first year, bilinguals attend faster and more often to static faces over non-faces than do monolinguals (Mercure et al., 2018). However, the developmental trajectories of these differences are unknown. In this pre-registered study, data were collected from 15- to 18-month-old monolinguals (English) and bilinguals (English and another language) to test whether group differences in face-looking behaviour persist into the second year. We predicted that bilinguals would orient more rapidly and more often to static faces than monolinguals. Results supported the first but not the second hypothesis. This suggests that, even into the second year of life, toddlers' rapid visual orientation to static social stimuli is sensitive to early language experience.

3.
Cortex ; 154: 105-134, 2022 09.
Article in English | MEDLINE | ID: mdl-35777191

ABSTRACT

BACKGROUND: Most people have strong left-brain lateralisation for language, with a minority showing right- or bilateral language representation. On some receptive language tasks, however, lateralisation appears to be reduced or absent. This contrasting pattern raises the question of whether and how language laterality may fractionate within individuals. Building on our prior work, we postulated (a) that there can be dissociations in lateralisation of different components of language, and (b) that these would be more common in left-handers. A subsidiary hypothesis was that laterality indices would cluster according to two underlying factors corresponding to whether they involve generation of words or sentences, versus receptive language. METHODS: We tested these predictions in two stages: at Step 1, an online laterality battery (Dichotic Listening, Rhyme Decision and Word Comprehension) was given to 621 individuals (56% left-handers); at Step 2, functional transcranial Doppler ultrasound (fTCD) was used with 230 of these individuals (51% left-handers). 108 left-handers and 101 right-handers gave usable data on a battery of three language generation and three receptive language tasks. RESULTS: Neither the online nor the fTCD measures supported the notion of a single language laterality factor. In general, for both online and fTCD measures, tests of language generation were left-lateralised. In contrast, the receptive tasks were at best weakly left-lateralised or, in the case of Word Comprehension, slightly right-lateralised. The online measures were only weakly correlated, if at all, with fTCD measures. Most of the fTCD measures had split-half reliabilities of at least .7, and showed a distinctive pattern of intercorrelation, supporting a modified two-factor model in which Phonological Decision (generation) and Sentence Decision (reception) loaded on both factors. The same factor structure fitted data from left- and right-handers, but mean scores on the two factors were lower (less left-lateralised) in left-handers. CONCLUSIONS: There are at least two factors influencing language lateralisation in individuals, but they do not correspond neatly to language generation and comprehension. Future fMRI studies could help clarify how far they reflect activity in specific brain regions.
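For readers unfamiliar with fTCD laterality indices, the two quantities this abstract leans on can be sketched in a few lines of Python. This is a simplified difference-based illustration under stated assumptions (percent signal change relative to a baseline window; an odd/even trial split across participants); published fTCD pipelines, such as peak-window methods, are more involved, and none of the names below come from the paper.

import numpy as np

def laterality_index(left_v, right_v, baseline, active):
    """Difference-based LI: mean left-minus-right percent signal change
    within an activation window. Positive values indicate left
    lateralisation.

    left_v, right_v: trial-averaged velocity time courses (cm/s);
    baseline, active: slices indexing the two windows.
    """
    dl = 100 * (left_v[active] - left_v[baseline].mean()) / left_v[baseline].mean()
    dr = 100 * (right_v[active] - right_v[baseline].mean()) / right_v[baseline].mean()
    return float(np.mean(dl - dr))

def split_half_reliability(odd_li, even_li):
    """Split-half reliability: correlate each participant's odd-trial LI
    with their even-trial LI across participants, then apply the
    Spearman-Brown correction."""
    r = np.corrcoef(odd_li, even_li)[0, 1]
    return 2 * r / (1 + r)

A corrected reliability of .7, as reported for most fTCD measures here, would correspond to a raw half-to-half correlation of roughly r = .54.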


Subjects
Functional Laterality , Language , Brain , Cerebrovascular Circulation , Humans , Ultrasonography, Doppler, Transcranial
4.
Dev Sci ; 24(6): e13124, 2021 11.
Article in English | MEDLINE | ID: mdl-34060185

ABSTRACT

Visual information conveyed by a speaking face aids speech perception. In addition, children's ability to comprehend visual-only speech (speechreading ability) is related to phonological awareness and reading skills in both deaf and hearing children. We tested whether training speechreading would improve speechreading, phoneme blending, and reading ability in hearing children. Ninety-two hearing 4- to 5-year-old children were randomised into two groups: business-as-usual controls and an intervention group, who completed three weeks of computerised speechreading training. The intervention group showed greater improvements in speechreading than the control group, both immediately after training and 3 months later. This was the case for both trained and untrained words. There were no group effects on the phonological awareness or single-word reading tasks, although those with the lowest phoneme blending scores did show greater improvements in blending as a result of training. The improvement in speechreading in hearing children following brief training is encouraging. The results are also important in suggesting a hypothesis for future investigation: that a focus on visual speech information may contribute to phonological skills, not only in deaf children but also in hearing children who are at risk of reading difficulties. A video abstract of this article can be viewed at https://www.youtube.com/watch?v=bBdpliGkbkY.


Subjects
Deafness , Lipreading , Child, Preschool , Hearing , Humans , Phonetics , Reading
5.
iScience ; 23(11): 101650, 2020 Nov 20.
Article in English | MEDLINE | ID: mdl-33103087

ABSTRACT

When people talk, they move their hands to enhance meaning. Using accelerometry, we measured whether people spontaneously use their artificial limbs (prostheses) to gesture, and whether this behavior relates to everyday prosthesis use and perceived embodiment. Perhaps surprisingly, one- and two-handed participants did not differ in the number of gestures they produced in gesture-facilitating tasks. However, they did differ in their gesture profile. One-handers performed more, and bigger, gesture movements with their intact hand relative to their prosthesis. Importantly, one-handers who gestured more similarly to their two-handed counterparts also used their prosthesis more in everyday life. Although collectively one-handers only marginally agreed that their prosthesis feels like a body part, one-handers who reported they embody their prosthesis also showed greater prosthesis use for communication and daily function. Our findings provide the first empirical link between everyday prosthesis use habits and perceived embodiment and a novel means for implicitly indexing embodiment.

6.
J Speech Lang Hear Res ; 63(11): 3775-3785, 2020 11 13.
Article in English | MEDLINE | ID: mdl-33108258

ABSTRACT

Purpose: Speechreading (lipreading) is a correlate of reading ability in both deaf and hearing children. We investigated whether the relationship between speechreading and single-word reading is mediated by phonological awareness in deaf and hearing children. Method: In two separate studies, 66 deaf children and 138 hearing children, aged 5-8 years, were assessed on measures of speechreading, phonological awareness, and single-word reading. We assessed the concurrent relationships between latent variables measuring speechreading, phonological awareness, and single-word reading. Results: In both deaf and hearing children, there was a strong relationship between speechreading and single-word reading, which was fully mediated by phonological awareness. Conclusions: These results are consistent with ideas from previous studies that visual speech information contributes to the development of phonological representations in both deaf and hearing children, which, in turn, support learning to read. Future longitudinal and training studies are required to establish whether these relationships reflect causal effects.
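The full mediation reported here has a standard computational form: the indirect effect a x b, where a is the speechreading-to-phonological-awareness path and b is the phonological-awareness-to-reading path controlling for speechreading. Below is a minimal observed-variable bootstrap sketch in Python; the study itself modelled latent variables, so the simple OLS paths and variable names are illustrative assumptions rather than the authors' model.

import numpy as np

def indirect_effect(x, m, y, n_boot=5000, seed=0):
    """Bootstrap the indirect effect of x on y through mediator m.

    Returns the point estimate of a*b and a 95% percentile bootstrap
    confidence interval; an interval excluding zero is evidence of
    mediation.
    """
    rng = np.random.default_rng(seed)
    x, m, y = map(np.asarray, (x, m, y))

    def ab(idx):
        xi, mi, yi = x[idx], m[idx], y[idx]
        a = np.polyfit(xi, mi, 1)[0]  # a path: regress m on x
        design = np.column_stack([np.ones_like(xi), xi, mi])
        b = np.linalg.lstsq(design, yi, rcond=None)[0][2]  # b path: y on m, controlling x
        return a * b

    n = len(x)
    estimate = ab(np.arange(n))
    boots = [ab(rng.integers(0, n, n)) for _ in range(n_boot)]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return estimate, (lo, hi)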


Subjects
Deafness , Lipreading , Child , Child, Preschool , Hearing , Humans , Phonetics , Reading , Vocabulary
7.
Neurobiol Lang (Camb) ; 1(1): 9-32, 2020.
Article in English | MEDLINE | ID: mdl-32274469

ABSTRACT

Recent neuroimaging studies suggest that monolingual infants activate a left-lateralized frontotemporal brain network in response to spoken language, which is similar to the network involved in processing spoken and signed language in adulthood. However, it is unclear how brain activation to language is influenced by early experience in infancy. To address this question, we present functional near-infrared spectroscopy (fNIRS) data from 60 hearing infants (4 to 8 months of age): 19 monolingual infants exposed to English, 20 unimodal bilingual infants exposed to two spoken languages, and 21 bimodal bilingual infants exposed to English and British Sign Language (BSL). Across all infants, spoken language elicited activation in a bilateral brain network including the inferior frontal and posterior temporal areas, whereas sign language elicited activation in the right temporoparietal area. A significant difference in brain lateralization was observed between groups. Activation in the posterior temporal region was not lateralized in monolinguals and bimodal bilinguals, but right lateralized in response to both language modalities in unimodal bilinguals. This suggests that the experience of two spoken languages influences brain activation for sign language when experienced for the first time. Multivariate pattern analyses (MVPAs) could classify distributed patterns of activation within the left hemisphere for spoken and signed language in monolinguals (proportion correct = 0.68; p = 0.039) but not in unimodal or bimodal bilinguals. These results suggest that bilingual experience in infancy influences brain activation for language and that unimodal bilingual experience has greater impact on early brain lateralization than bimodal bilingual experience.
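The MVPA result (classifying spoken versus signed language from distributed left-hemisphere activation, proportion correct = 0.68, p = 0.039) follows the usual decode-and-permute recipe. A minimal sketch, assuming a linear SVM, leave-one-out cross-validation and a label-permutation null distribution; the paper's actual pipeline may differ in classifier, channel selection and cross-validation scheme.

import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.svm import SVC

def mvpa_decode(patterns, labels, n_perm=1000, seed=0):
    """Decode condition from activation patterns.

    patterns: (n_trials, n_channels) array of activation estimates;
    labels: (n_trials,) condition codes (e.g., 0 = speech, 1 = sign).
    Returns (accuracy, permutation p-value). Note: n_perm full
    cross-validations make this slow on large data.
    """
    rng = np.random.default_rng(seed)
    clf = SVC(kernel="linear")
    acc = cross_val_score(clf, patterns, labels, cv=LeaveOneOut()).mean()
    null = np.array([
        cross_val_score(clf, patterns, rng.permutation(labels),
                        cv=LeaveOneOut()).mean()
        for _ in range(n_perm)
    ])
    p = (np.sum(null >= acc) + 1) / (n_perm + 1)
    return acc, p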

8.
Neuroimage ; 209: 116411, 2020 04 01.
Article in English | MEDLINE | ID: mdl-31857205

ABSTRACT

Deaf late signers provide a unique perspective on the impact of impoverished early language exposure on the neurobiology of language: insights that cannot be gained from research with hearing people alone. Here we contrast the effect of age of sign language acquisition in hearing and congenitally deaf adults to examine the potential impact of impoverished early language exposure on the neural systems supporting a language learnt later in life. We collected fMRI data from deaf and hearing proficient users (N = 52) of British Sign Language (BSL), who learnt BSL either early (native) or late (after the age of 15 years) whilst they watched BSL sentences or strings of meaningless nonsense signs. There was a main effect of age of sign language acquisition (late > early) across deaf and hearing signers in the occipital segment of the left intraparietal sulcus. This finding suggests that late learners of sign language may rely on visual processing more than early learners, when processing both linguistic and nonsense sign input - regardless of hearing status. Region-of-interest analyses in the posterior superior temporal cortices (STC) showed an effect of age of sign language acquisition that was specific to deaf signers. In the left posterior STC, activation in response to signed sentences was greater in deaf early signers than deaf late signers. Importantly, responses in the left posterior STC in hearing early and late signers did not differ, and were similar to those observed in deaf early signers. These data lend further support to the argument that robust early language experience, whether signed or spoken, is necessary for left posterior STC to show a 'native-like' response to a later learnt language.


Subjects
Brain Mapping , Deafness/physiopathology , Language Development , Language , Neuronal Plasticity/physiology , Sign Language , Temporal Lobe/physiology , Adult , Age Factors , Deafness/congenital , Humans , Magnetic Resonance Imaging , Middle Aged , Pattern Recognition, Visual/physiology , Temporal Lobe/diagnostic imaging , Temporal Lobe/physiopathology , Young Adult
9.
Curr Biol ; 29(21): 3739-3747.e5, 2019 11 04.
Article in English | MEDLINE | ID: mdl-31668623

ABSTRACT

Conceptual knowledge is fundamental to human cognition. Yet, the extent to which it is influenced by language is unclear. Studies of semantic processing show that similar neural patterns are evoked by the same concepts presented in different modalities (e.g., spoken words and pictures or text) [1-3]. This suggests that conceptual representations are "modality independent." However, an alternative possibility is that the similarity reflects retrieval of common spoken language representations. Indeed, in hearing spoken language users, text and spoken language are co-dependent [4, 5], and pictures are encoded via visual and verbal routes [6]. A parallel approach investigating semantic cognition shows that bilinguals activate similar patterns for the same words in their different languages [7, 8]. This suggests that conceptual representations are "language independent." However, this has only been tested in spoken language bilinguals. If different languages evoke different conceptual representations, this should be most apparent comparing languages that differ greatly in structure. Hearing people with signing deaf parents are bilingual in sign and speech: languages conveyed in different modalities. Here, we test the influence of modality and bilingualism on conceptual representation by comparing semantic representations elicited by spoken British English and British Sign Language in hearing early, sign-speech bilinguals. We show that representations of semantic categories are shared for sign and speech, but not for individual spoken words and signs. This provides evidence for partially shared representations for sign and speech and shows that language acts as a subtle filter through which we understand and interact with the world.


Subjects
Multilingualism , Semantics , Sign Language , Speech , Adult , England , Female , Humans , Male , Middle Aged , Young Adult
10.
J Speech Lang Hear Res ; 62(8): 2882-2894, 2019 08 15.
Article in English | MEDLINE | ID: mdl-31336055

ABSTRACT

Purpose: We developed and evaluated in a randomized controlled trial a computerized speechreading training program to determine (a) whether it is possible to train speechreading in deaf children and (b) whether speechreading training results in improvements in phonological and reading skills. Previous studies indicate a relationship between speechreading and reading skill and further suggest this relationship may be mediated by improved phonological representations. This is important since many deaf children find learning to read to be very challenging. Method: Sixty-six deaf 5- to 7-year-olds were randomized into speechreading and maths training arms. Each training program was composed of one 10-min session a day, 4 days a week, for 12 weeks. Children were assessed on a battery of language and literacy measures before training, immediately after training, and 3 months and 11 months after training. Results: We found no significant benefits for participants who completed the speechreading training, compared to those who completed the maths training, on the speechreading primary outcome measure. However, significantly greater gains were observed in the speechreading training group on one of the secondary measures of speechreading. There was also some evidence of beneficial effects of the speechreading training on phonological representations; however, these effects were weaker. No benefits were seen to word reading. Conclusions: Speechreading skill is trainable in deaf children. However, to support early reading, training may need to be longer or embedded in a broader literacy program. Nevertheless, a training tool that can improve speechreading is likely to be of great interest to professionals working with deaf children. Supplemental Material: https://doi.org/10.23641/asha.8856356.


Subjects
Child Language , Computer-Assisted Instruction/methods , Deafness/rehabilitation , Lipreading , Patient Education as Topic/methods , Child , Child, Preschool , Communication Aids for Disabled , Deafness/psychology , Female , Humans , Language Tests , Literacy , Male , Phonetics , Reading
11.
Dev Cogn Neurosci ; 36: 100619, 2019 04.
Article in English | MEDLINE | ID: mdl-30711882

ABSTRACT

The effect of sensory experience on hemispheric specialisation for language production is not well understood. Children born deaf, including those who have cochlear implants, have drastically different perceptual experiences of language than their hearing peers. Using functional transcranial Doppler sonography (fTCD), we measured lateralisation during language production in a heterogeneous group of 19 deaf children and in 19 hearing children, matched on language ability. In children born deaf, we observed significant left lateralisation during language production (British Sign Language, spoken English, or a combination of languages). There was no difference in the strength of lateralisation between deaf and hearing groups. Comparable proportions of children were categorised as left-lateralised, right-lateralised, or not significantly lateralised in each group. Moreover, an exploratory subgroup analysis showed no significant difference in lateralisation between deaf children with cochlear implants and those without. These data suggest that the processes underpinning language production remain robustly left lateralised regardless of sensory language experience.


Subjects
Deafness/physiopathology , Dominance, Cerebral/physiology , Child , Female , Humans , Language , Male
12.
Dev Sci ; 22(1): e12701, 2019 01.
Article in English | MEDLINE | ID: mdl-30014580

ABSTRACT

Infants as young as 2 months can integrate audio and visual aspects of speech articulation. A shift of attention from the eyes towards the mouth of talking faces occurs around 6 months of age in monolingual infants. However, it is unknown whether this pattern of attention during audiovisual speech processing is influenced by speech and language experience in infancy. The present study investigated this question by analysing audiovisual speech processing in three groups of 4- to 8-month-old infants who differed in their language experience: monolinguals, unimodal bilinguals (infants exposed to two or more spoken languages) and bimodal bilinguals (hearing infants with Deaf mothers). Eye-tracking was used to study patterns of face scanning while infants were viewing faces articulating syllables with congruent, incongruent and silent auditory tracks. Monolinguals and unimodal bilinguals increased their attention to the mouth of talking faces between 4 and 8 months, while bimodal bilinguals did not show any age difference in their scanning patterns. Moreover, older monolinguals (6.6 to 8 months), but not younger monolinguals (4 to 6.5 months), showed increased visual attention to the mouth of faces articulating audiovisually incongruent rather than congruent syllables, indicating surprise or novelty. In contrast, no audiovisual congruency effect was found in unimodal or bimodal bilinguals. Results suggest that speech and language experience influences audiovisual integration in infancy. Specifically, reduced or more variable experience of audiovisual speech from the primary caregiver may lead to less sensitivity to the integration of audio and visual cues of speech articulation.


Subjects
Multilingualism , Speech Perception/physiology , Visual Perception , Adult , Attention , Cues , Eye Movements , Face , Female , Humans , Infant , Male , Mouth
13.
Front Psychol ; 9: 1943, 2018.
Article in English | MEDLINE | ID: mdl-30459671

ABSTRACT

Faces capture and maintain infants' attention more than other visual stimuli. The present study addresses the impact of early language experience on attention to faces in infancy. It was hypothesized that infants learning two spoken languages (unimodal bilinguals) and hearing infants of Deaf mothers learning British Sign Language and spoken English (bimodal bilinguals) would show enhanced attention to faces compared to monolinguals. The comparison between unimodal and bimodal bilinguals allowed differentiation of the effects of learning two languages from the effects of increased visual communication in hearing infants of Deaf mothers. Data are presented for two independent samples of infants: Sample 1 included 49 infants between 7 and 10 months (26 monolinguals and 23 unimodal bilinguals), and Sample 2 included 87 infants between 4 and 8 months (32 monolinguals, 25 unimodal bilinguals, and 30 bimodal bilingual infants with a Deaf mother). Eye-tracking was used to analyze infants' visual scanning of complex arrays including a face and four other stimulus categories. Infants from 4 to 10 months (all groups combined) directed their attention to faces faster than to non-face stimuli (i.e., attention capture), and directed more fixations to, and looked longer at, faces than non-face stimuli (i.e., attention maintenance). Unimodal bilinguals demonstrated increased attention capture and attention maintenance by faces compared to monolinguals. Contrary to predictions, bimodal bilinguals did not differ from monolinguals in attention capture and maintenance by face stimuli. These results are discussed in relation to the language experience of each group and the close association between face processing and language development in social communication.

14.
Lang Learn ; 68(Suppl Suppl 1): 159-179, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29937576

ABSTRACT

For children who are born deaf, lipreading (speechreading) is an important source of access to spoken language. We used eye tracking to investigate the strategies used by deaf (n = 33) and hearing (n = 59) 5- to 8-year-olds during a sentence speechreading task. The proportion of time spent looking at the mouth during speech correlated positively with speechreading accuracy. In addition, all children showed a tendency to watch the mouth during speech and watch the eyes when the model was not speaking. The extent to which the children used this communicative pattern, which we refer to as social-tuning, positively predicted their speechreading performance, with the deaf children showing a stronger relationship than the hearing children. These data suggest that better speechreading skills are seen in those children, both deaf and hearing, who are able to guide their visual attention to the appropriate part of the image and in those who have a good understanding of conversational turn-taking.

15.
J Neurosci ; 37(39): 9564-9573, 2017 09 27.
Article in English | MEDLINE | ID: mdl-28821674

ABSTRACT

To investigate how hearing status, sign language experience, and task demands influence functional responses in the human superior temporal cortices (STC), we collected fMRI data from deaf and hearing participants (male and female), who either acquired sign language early or late in life. Our stimuli in all tasks were pictures of objects. We varied the linguistic and visuospatial processing demands in three different tasks that involved decisions about (1) the sublexical (phonological) structure of the British Sign Language (BSL) signs for the objects, (2) the semantic category of the objects, and (3) the physical features of the objects. Neuroimaging data revealed that in participants who were deaf from birth, STC showed increased activation during visual processing tasks. Importantly, this differed across hemispheres. Right STC was consistently activated regardless of the task whereas left STC was sensitive to task demands. Significant activation was detected in the left STC only for the BSL phonological task. This task, we argue, placed greater demands on visuospatial processing than the other two tasks. In hearing signers, enhanced activation was absent in both left and right STC during all three tasks. Lateralization analyses demonstrated that the effect of deafness was more task-dependent in the left than the right STC whereas it was more task-independent in the right than the left STC. These findings indicate how the absence of auditory input from birth leads to dissociable and altered functions of left and right STC in deaf participants. SIGNIFICANCE STATEMENT: Those born deaf can offer unique insights into neuroplasticity, in particular in regions of superior temporal cortex (STC) that primarily respond to auditory input in hearing people. Here we demonstrate that in those deaf from birth the left and the right STC have altered and dissociable functions. The right STC was activated regardless of demands on visual processing. In contrast, the left STC was sensitive to the demands of visuospatial processing. Furthermore, hearing signers, with the same sign language experience as the deaf participants, did not activate the STCs. Our data advance current understanding of neural plasticity by determining the differential effects that hearing status and task demands can have on left and right STC function.


Subjects
Auditory Perception , Deafness/physiopathology , Functional Laterality , Memory, Short-Term , Sign Language , Temporal Lobe/physiology , Adult , Brain Mapping , Case-Control Studies , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Semantics , Temporal Lobe/physiopathology , Visual Perception
16.
Front Psychol ; 8: 106, 2017.
Article in English | MEDLINE | ID: mdl-28223951

ABSTRACT

Previous research has provided evidence for a speechreading advantage in congenitally deaf adults compared to hearing adults. A 'perceptual compensation' account of this finding proposes that prolonged early onset deafness leads to a greater reliance on visual, as opposed to auditory, information when perceiving speech, which in turn results in superior visual speech perception skills in deaf adults. In the current study we tested whether previous demonstrations of a speechreading advantage for profoundly congenitally deaf adults with hearing aids, or no amplification, were also apparent in adults with the same deafness profile but who have experienced greater access to the auditory elements of speech via a cochlear implant (CI). We also tested the prediction that, in line with the perceptual compensation account, receiving a CI at a later age is associated with superior speechreading skills, due to later implanted individuals having experienced greater dependence on visual speech information. We designed a speechreading task in which participants viewed silent videos of 123 single words spoken by a model and were required to indicate which word they thought had been said via a free text response. We compared congenitally deaf adults who had received CIs in childhood or adolescence (N = 15) with a comparison group of hearing adults (N = 15) matched on age and education level. The adults with CIs showed significantly better scores on the speechreading task than the hearing comparison group. Furthermore, within the group of adults with CIs, there was a significant positive correlation between age at implantation and speechreading performance; earlier implantation was associated with lower speechreading scores. These results are both consistent with the hypothesis of perceptual compensation in the domain of speech perception, indicating that more prolonged dependence on visual speech information in speech perception may lead to improvements in the perception of visual speech. In addition, our study provides metrics of the 'speechreadability' of 123 words produced in British English: one derived from hearing adults (N = 61) and one from deaf adults with CIs (N = 15). Evidence for the validity of these 'speechreadability' metrics comes from correlations with visual lexical competition data.

17.
Brain Lang ; 159: 109-17, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27388786

ABSTRACT

The neural systems supporting speech and sign processing are very similar, although not identical. In a previous fTCD study of hearing native signers (Gutierrez-Sigut, Daws, et al., 2015) we found stronger left lateralization for sign than speech. Given that this increased lateralization could not be explained by hand movement alone, the contribution of motor movement versus 'linguistic' processes to the strength of hemispheric lateralization during sign production remains unclear. Here we directly contrast lateralization strength of covert versus overt signing during phonological and semantic fluency tasks. To address the possibility that hearing native signers' elevated lateralization indices (LIs) were due to performing a task in their less dominant language, here we test deaf native signers, whose dominant language is British Sign Language (BSL). Signers were more strongly left lateralized for overt than covert sign generation. However, the strength of lateralization was not correlated with the amount of time producing movements of the right hand. Comparisons with previous data from hearing native English speakers suggest stronger laterality indices for sign than speech in both covert and overt tasks. This increased left lateralization may be driven by specific properties of sign production such as the increased use of self-monitoring mechanisms or the nature of phonological encoding of signs.


Subjects
Functional Laterality/physiology , Language , Movement/physiology , Sign Language , Adolescent , Adult , Deafness/physiopathology , Female , Hand/physiology , Hearing/physiology , Humans , Linguistics , Male , Semantics , Speech/physiology , Time Factors , Young Adult
18.
19.
Res Dev Disabil ; 48: 13-24, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26524726

ABSTRACT

BACKGROUND: Vocabulary knowledge and speechreading are important for deaf children's reading development but it is unknown whether they are independent predictors of reading ability. AIMS: This study investigated the relationships between reading, speechreading and vocabulary in a large cohort of deaf and hearing children aged 5 to 14 years. METHODS AND PROCEDURES: 86 severely and profoundly deaf children and 91 hearing children participated in this study. All children completed assessments of reading comprehension, word reading accuracy, speechreading and vocabulary. OUTCOMES AND RESULTS: Regression analyses showed that vocabulary and speechreading accounted for unique variance in both reading accuracy and comprehension for deaf children. For hearing children, vocabulary was an independent predictor of both reading accuracy and comprehension skills but speechreading only accounted for unique variance in reading accuracy. CONCLUSIONS AND IMPLICATIONS: Speechreading and vocabulary are important for reading development in deaf children. The results are interpreted within the Simple View of Reading framework and the theoretical implications for deaf children's reading are discussed.
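'Unique variance' in these regression analyses corresponds to the R-squared increment when a predictor is added to a model that already contains the others. A minimal Python sketch under that reading; the variable names are placeholders, and the study's actual models may have included further covariates such as age.

import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least squares fit of y on the columns of X
    (intercept added automatically)."""
    y = np.asarray(y, dtype=float)
    design = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    return 1 - resid.var() / y.var()

def unique_variance(predictor, others, y):
    """R^2 increment for `predictor` over a model containing `others`."""
    full = r_squared(np.column_stack([others, predictor]), y)
    return full - r_squared(others, y)

For example, with hypothetical score arrays, unique_variance(speechreading, vocabulary, reading_accuracy) would give the speechreading increment of the kind reported for the deaf group.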


Subjects
Correction of Hearing Impairment , Deafness , Lipreading , Reading , Vocabulary , Adolescent , Child , Child, Preschool , Correction of Hearing Impairment/methods , Correction of Hearing Impairment/psychology , Deafness/diagnosis , Deafness/psychology , Deafness/rehabilitation , Education of Hearing Disabled/methods , Female , Hearing Aids , Hearing Tests/methods , Humans , Language Tests , Male , Persons With Hearing Impairments/psychology , Phonetics , Task Performance and Analysis
20.
Wellcome Open Res ; 1: 15, 2016.
Article in English | MEDLINE | ID: mdl-34405116

ABSTRACT

Background: Lateralised representation of language in monolinguals is a well-established finding, but the situation is much less clear when there is more than one language. Studies to date have identified a number of factors that might influence the brain organisation of language in bilinguals. These include proficiency, age of acquisition and exposure to the second language. The question as to whether the cerebral lateralisation of first and second languages is the same or different is as yet unresolved. Methods: We used functional transcranial Doppler sonography (fTCD) to measure cerebral lateralisation in the first and second languages in 26 high-proficiency bilinguals with German or French as their first language (L1) and English as their second language (L2). fTCD was used to measure task-dependent blood flow velocity changes in the left and right middle cerebral arteries during word generation cued by single letters. Language history measures and handedness were assessed through self-report questionnaires. Results: The majority of participants were significantly left lateralised for both L1 and L2, with no significant difference in the size of asymmetry indices between L1 and L2. Asymmetry indices for L1 and L2 were not related to language history, such as proficiency of the L2. Conclusion: In highly proficient bilinguals, there is strong concordance for cerebral lateralisation of first and second languages.
