Results 1 - 20 of 23
1.
J Speech Lang Hear Res ; 67(3): 870-885, 2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38394239

ABSTRACT

PURPOSE: Children are assumed to acquire orthographic representations during autonomous reading by decoding new written words. The present study investigates how deaf and hard of hearing (DHH) children build new orthographic representations compared to typically hearing (TH) children. METHOD: Twenty-nine DHH children, from 7.8 to 13.5 years old, with moderate-to-profound hearing loss, matched to TH controls for reading level and chronological age, were exposed to 10 pseudowords (novel words) in written stories. They then performed a spelling task and an orthographic recognition task on these new words. RESULTS: In the spelling task, we found no difference in accuracy, but a difference in errors emerged between the two groups: phonologically plausible errors were less common in DHH children than in TH children. In the recognition task, DHH children were better than TH children at recognizing target pseudowords. Phonological strategies seemed to be used less by DHH children than by TH children, who very often chose phonological distractors. CONCLUSIONS: Both groups created sufficiently detailed orthographic representations to complete the tasks, which supports the self-teaching hypothesis. DHH children used phonological information in both tasks but could rely on more orthographic cues than TH children to build up orthographic representations. Combining a spelling task with a recognition task, and analyzing the nature of errors, offers a methodological route to further understanding of the underlying cognitive processes.


Subject(s)
Persons With Hearing Impairments , Child , Humans , Adolescent , Phonetics , Learning , Language , Hearing
2.
Brain Sci ; 13(7), 2023 Jul 07.
Article in English | MEDLINE | ID: mdl-37508968

ABSTRACT

Cued Speech (CS) is a communication system that uses manual gestures to facilitate lipreading. In this study, we investigated how CS information interacts with natural speech using Event-Related Potential (ERP) analyses in French-speaking, typically hearing adults (TH) who were either naïve or experienced CS producers. The audiovisual (AV) presentation of lipreading information elicited an amplitude attenuation of the entire N1 and P2 complex in both groups, accompanied by N1 latency facilitation in the group of CS producers. Adding CS gestures to lipread information increased the magnitude of effects observed at the N1 time window, but did not enhance P2 amplitude attenuation. Interestingly, presenting CS gestures without lipreading information yielded distinct response patterns depending on participants' experience with the system. In the group of CS producers, AV perception of CS gestures facilitated the early stage of speech processing, while in the group of naïve participants, it elicited a latency delay at the P2 time window. These results suggest that, for experienced CS users, the perception of gestures facilitates early stages of speech processing, but when people are not familiar with the system, the perception of gestures impacts the efficiency of phonological decoding.

3.
PLoS Biol ; 18(8): e3000840, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32845876

ABSTRACT

Humans' propensity to acquire literacy relates to several factors, including the ability to understand speech in noise (SiN). Still, the nature of the relation between reading and SiN perception abilities remains poorly understood. Here, we dissect the interplay between (1) reading abilities, (2) classical behavioral predictors of reading (phonological awareness, phonological memory, and rapid automatized naming), and (3) electrophysiological markers of SiN perception in 99 elementary school children (26 with dyslexia). We demonstrate that, in typical readers, cortical representation of the phrasal content of SiN relates to the degree of development of the lexical (but not sublexical) reading strategy. In contrast, classical behavioral predictors of reading abilities and the ability to benefit from visual speech to represent the syllabic content of SiN account for global reading performance (i.e., speed and accuracy of lexical and sublexical reading). In individuals with dyslexia, we found preserved integration of visual speech information to optimize processing of syntactic information but not to sustain acoustic/phonemic processing. Finally, within children with dyslexia, measures of cortical representation of the phrasal content of SiN were negatively related to reading speed and positively related to the compromise between reading precision and reading speed, potentially owing to compensatory attentional mechanisms. These results clarify the nature of the relation between SiN perception and reading abilities in typical child readers and children with dyslexia and identify novel electrophysiological markers of emergent literacy.
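The abstract above does not name its electrophysiological metric; cortical representation ("tracking") of speech in noise is commonly quantified as coherence between the neural signal and the speech temporal envelope in the band of interest, roughly below 2 Hz for phrasal content. A minimal sketch under that assumption, with all names hypothetical:

```python
import numpy as np
from scipy.signal import coherence, hilbert

def phrasal_tracking(eeg, speech, fs, band=(0.25, 1.5)):
    """Mean EEG-envelope coherence in a low (phrasal-rate) band.
    eeg and speech: same-length 1-D arrays sampled at fs (Hz)."""
    envelope = np.abs(hilbert(speech))              # speech amplitude envelope
    f, cxy = coherence(eeg, envelope, fs=fs, nperseg=int(8 * fs))
    mask = (f >= band[0]) & (f <= band[1])          # keep phrasal-rate bins
    return cxy[mask].mean()
```

Long analysis windows (here 8 s) are needed to resolve such low frequencies; the paper's actual pipeline may differ.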


Subject(s)
Cerebral Cortex/physiology , Noise , Reading , Speech/physiology , Behavior , Child , Dyslexia/physiopathology , Humans , Linear Models , Neuroimaging , Phonetics
4.
Front Psychol ; 8: 426, 2017.
Article in English | MEDLINE | ID: mdl-28424636

ABSTRACT

We present here the first neuroimaging data for perception of Cued Speech (CS) by deaf adults who are native users of CS. CS is a visual mode of communicating a spoken language through a set of manual cues which accompany lipreading and disambiguate it. With CS, sublexical units of the oral language are conveyed clearly and completely through the visual modality without requiring hearing. The comparison of neural processing of CS in deaf individuals with processing of audiovisual (AV) speech in normally hearing individuals represents a unique opportunity to explore the similarities and differences in neural processing of an oral language delivered in a visuo-manual vs. an AV modality. The study included deaf adults who were early CS users and hearing adults who were native users of French and process speech audiovisually. Words were presented in an event-related fMRI design, with three conditions per group. The deaf participants saw CS words (manual + lipread), words presented as manual cues alone, and words presented to be lipread without manual cues. The hearing group saw AV spoken words, audio-alone words, and lipread-alone words. Three findings are highlighted. First, the middle and superior temporal gyri (excluding Heschl's gyrus) and the left inferior frontal gyrus pars triangularis constituted a common, amodal neural basis for AV and CS perception. Second, integration was inferred in posterior parts of the superior temporal sulcus for audio and lipread information in AV speech, but in the occipito-temporal junction, including MT/V5, for the manual cues and lipreading in CS. Third, the perception of manual cues showed a much greater overlap with the regions activated by CS (manual + lipreading) than lipreading alone did, supporting the notion that manual cues play a larger role than lipreading in CS processing. The present study contributes to a better understanding of the role of manual cues in supporting visual speech perception, within the framework of the multimodal nature of human communication.

5.
PLoS One ; 10(11): e0142191, 2015.
Article in English | MEDLINE | ID: mdl-26551648

ABSTRACT

Children with Autism Spectrum Disorder are often said to present a global pragmatic impairment. However, there is some observational evidence that context-based comprehension of indirect requests may be preserved in autism. In order to provide experimental confirmation of this hypothesis, indirect speech act comprehension was tested in a group of 15 children with autism between 7 and 12 years and a group of 20 typically developing (TD) children between 2;7 and 3;6 (years;months). The aim of the study was to determine whether children with autism can display genuinely contextual understanding of indirect requests. The experiment consisted of a three-phase semi-structured task involving Mr Potato Head. In the first phase, a declarative sentence was uttered by one adult as an instruction to put a garment on a Mr Potato Head toy; in the second phase, the same sentence was uttered as a comment on a picture by another speaker; in the third phase, the same sentence was uttered as a comment on a picture by the first speaker. Children with autism complied with the indirect request in the first phase and demonstrated the capacity to inhibit the directive interpretation in phases 2 and 3. TD children had some difficulty understanding the indirect instruction in phase 1. These results call for a more nuanced view of pragmatic dysfunction in autism.


Subject(s)
Autistic Disorder/psychology , Comprehension , Speech , Case-Control Studies , Child , Child, Preschool , Communication , Female , Humans , Language , Male , Speech Perception
6.
J Acoust Soc Am ; 136(4): 1918-31, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25324091

ABSTRACT

This study investigated the impact of aging on audio-visual speech integration. A syllable identification task was presented in auditory-only, visual-only, and audio-visual congruent and incongruent conditions. Visual cues were either degraded or unmodified. Stimuli were embedded in stationary noise alternating with modulated noise. Fifteen young adults and 15 older adults participated in this study. Results showed that older adults had preserved lipreading abilities when the visual input was clear but not when it was degraded. The impact of aging on audio-visual integration also depended on the quality of the visual cues. In the visual clear condition, the audio-visual gain was similar in both groups and analyses in the framework of the fuzzy-logical model of perception confirmed that older adults did not differ from younger adults in their audio-visual integration abilities. In the visual reduction condition, the audio-visual gain was reduced in the older group, but only when the noise was stationary, suggesting that older participants could compensate for the loss of lipreading abilities by using the auditory information available in the valleys of the noise. The fuzzy-logical model of perception confirmed the significant impact of aging on audio-visual integration by showing an increased weight of audition in the older group.
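The fuzzy-logical model of perception invoked in the abstract above is conventionally formalized (in Massaro's formulation) as a multiplicative integration of independent auditory and visual sources of support. As a sketch, with a_i and v_j denoting the auditory and visual degrees of support for a given response alternative:

```latex
P(r \mid A_i, V_j) = \frac{a_i \, v_j}{a_i \, v_j + (1 - a_i)(1 - v_j)}
```

In such fits, a greater weight of audition surfaces as fitted auditory support values that are more extreme (closer to 0 or 1), so the auditory term dominates the integration, which is how the increased auditory weight in the older group would be read off the model.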


Subject(s)
Aging/psychology , Speech Perception , Visual Perception , Acoustic Stimulation , Age Factors , Aged , Audiometry, Pure-Tone , Audiometry, Speech , Auditory Threshold , Fuzzy Logic , Humans , Models, Psychological , Noise/adverse effects , Perceptual Masking , Photic Stimulation , Video Recording , Young Adult
7.
Front Psychol ; 5: 416, 2014.
Article in English | MEDLINE | ID: mdl-24904451

ABSTRACT

Speech perception for both hearing and deaf people involves an integrative process between auditory and lip-reading information. In order to disambiguate information from the lips, manual cues from Cued Speech may be added. Cued Speech (CS) is a system of manual aids developed to help deaf people clearly and completely understand speech visually (Cornett, 1967). Within this system, both labial and manual information, as lone input sources, remain ambiguous. Perceivers therefore have to combine both types of information in order to derive one coherent percept. In this study, we examined how audio-visual (AV) integration is affected by the presence of manual cues and on which form of information (auditory, labial or manual) CS perceivers primarily rely. To address this issue, we designed an experiment using AV McGurk stimuli (audio /pa/ and lip-reading /ka/) produced with or without manual cues. The manual cue was congruent with either the auditory information, the lip information or the expected fusion. Participants were asked to repeat the perceived syllable aloud. Their responses were then classified into four categories: audio (when the response was /pa/), lip-reading (when the response was /ka/), fusion (when the response was /ta/) and other (when the response was something other than /pa/, /ka/ or /ta/). Data were collected from hearing-impaired individuals who were experts in CS (all of whom had either cochlear implants or binaural hearing aids; N = 8), hearing individuals who were experts in CS (N = 14), and hearing individuals who were completely naïve to CS (N = 15). Results confirmed that, like hearing people, deaf people can merge auditory and lip-reading information into a single unified percept. Without manual cues, McGurk stimuli induced the same percentage of fusion responses in both groups. Results also suggest that manual cues can modify AV integration and that their impact differs between hearing and deaf people.

8.
Front Psychol ; 5: 422, 2014.
Article in English | MEDLINE | ID: mdl-24904454

ABSTRACT

Audiovisual speech perception by children with specific language impairment (SLI) and children with typical language development (TLD) was compared in two experiments using /aCa/ syllables presented in the context of a masking release paradigm. Children had to repeat syllables presented in auditory-alone, visual-alone (speechreading), audiovisual congruent and incongruent (McGurk) conditions. Stimuli were masked by either stationary (ST) or amplitude-modulated (AM) noise. Although children with SLI were less accurate in auditory and audiovisual speech perception, they showed an auditory masking release effect similar to that of children with TLD. Children with SLI also gave fewer correct responses in speechreading than children with TLD, indicating an impairment in the phonemic processing of visual speech information. In response to McGurk stimuli, children with TLD showed more fusions in AM noise than in ST noise, a consequence of the auditory masking release effect and of the influence of visual information. Children with SLI did not show this effect systematically, suggesting that they were less influenced by visual speech. However, when the visual cues were easily identified, the profile of responses to McGurk stimuli was similar in both groups, suggesting that children with SLI do not suffer from an impairment of audiovisual integration. An analysis of the percentage of information transmitted revealed a deficit in the children with SLI, particularly for the place-of-articulation feature. Taken together, the data support the hypothesis of intact peripheral processing of auditory speech information, coupled with a supramodal deficit of phonemic categorization in children with SLI. Clinical implications are discussed.
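The percentage-of-information-transmitted measure mentioned above is, in all likelihood, the classical feature-transmission analysis of confusion matrices (Miller & Nicely, 1955). A minimal Python sketch under that assumption, taking a stimulus-by-response count matrix for a single feature such as place of articulation:

```python
import numpy as np

def relative_transmitted_information(confusion):
    """Mutual information between stimulus and response as a fraction
    of stimulus entropy (Miller & Nicely, 1955). `confusion` holds
    stimulus x response counts for one phonetic feature."""
    p = np.asarray(confusion, dtype=float)
    p /= p.sum()                                      # joint p(s, r)
    ps = p.sum(axis=1, keepdims=True)                 # stimulus marginal
    pr = p.sum(axis=0, keepdims=True)                 # response marginal
    outer = ps * pr                                   # independence baseline
    nz = p > 0                                        # avoid log(0)
    mi = np.sum(p[nz] * np.log2(p[nz] / outer[nz]))   # I(S;R) in bits
    hs = -np.sum(ps[ps > 0] * np.log2(ps[ps > 0]))    # H(S) in bits
    return mi / hs                                    # 1 = perfect, 0 = chance
```

A value of 1.0 means the feature is perfectly transmitted; 0.0 means responses are independent of the stimulus.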

9.
PLoS One ; 9(3): e91839, 2014.
Article in English | MEDLINE | ID: mdl-24637785

ABSTRACT

An ongoing debate in research on numerical cognition concerns the extent to which the approximate number system and symbolic number knowledge influence each other during development. The current study aims at establishing the direction of the developmental association between these two kinds of abilities at an early age. Fifty-seven 3- to 4-year-old children performed two assessments at a 7-month interval. In each assessment, children's precision in discriminating numerosities and their capacity to manipulate number words and Arabic digits were measured. By comparing relationships between pairs of measures across the two time points, we were able to assess the predictive direction of the link. Our data indicate that both cardinality proficiency and symbolic number knowledge predict later accuracy in numerosity comparison, whereas the reverse links are not significant. The present findings are the first to provide longitudinal evidence that the early acquisition of symbolic numbers is an important precursor in the developmental refinement of the approximate number representation system.
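The predictive-direction logic described above is that of a two-wave cross-lagged panel analysis. A minimal sketch, with all array names hypothetical, of the standardized cross-lagged path, controlling for the outcome's own time-1 level:

```python
import numpy as np

def cross_lagged_beta(predictor_t1, outcome_t1, outcome_t2):
    """Standardized cross-lagged path of a two-wave panel model:
    effect of the predictor at time 1 on the outcome at time 2,
    controlling for the outcome's time-1 level (plain OLS)."""
    z = lambda x: (np.asarray(x, float) - np.mean(x)) / np.std(x)
    X = np.column_stack([np.ones(len(outcome_t2)),
                         z(outcome_t1), z(predictor_t1)])
    beta, *_ = np.linalg.lstsq(X, z(outcome_t2), rcond=None)
    return beta[2]                                   # cross-lagged coefficient

# The reported asymmetry amounts to
#   cross_lagged_beta(symbolic_t1, acuity_t1, acuity_t2)   being reliable, while
#   cross_lagged_beta(acuity_t1, symbolic_t1, symbolic_t2) is not.
```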


Subject(s)
Achievement , Cognition , Mathematical Concepts , Age Factors , Child , Child, Preschool , Female , Humans , Male , Models, Theoretical
10.
Cognition ; 127(3): 398-419, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23545388

ABSTRACT

Phonological development was assessed in six alphabetic orthographies (English, French, Greek, Icelandic, Portuguese and Spanish) at the beginning and end of the first year of reading instruction. The aim was to explore contrasting theoretical views regarding: the availability of phonology at the outset of learning to read (Study 1); the influence of orthographic depth on the pace of phonological development during the transition to literacy (Study 2); and the impact of literacy instruction (Study 3). Results from 242 children did not reveal a consistent sequence of development, as performance varied according to task demands and language. Phonics instruction appeared more influential than orthographic depth in the emergence of an early meta-phonological capacity to manipulate phonemes, and preliminary indications were that cross-linguistic variation was associated with speech rhythm more than with factors such as syllable complexity. The implications of the outcome for current models of phonological development are discussed.


Subject(s)
Language Development , Language , Reading , Aging/psychology , Analysis of Variance , Child , Child, Preschool , Female , Humans , Male , Phonetics , Psycholinguistics , Psychomotor Performance/physiology , Teaching , Vocabulary
11.
J Speech Lang Hear Res ; 56(3): 956-70, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23275399

ABSTRACT

PURPOSE: Counting and exact arithmetic rely on language-based representations, whereas number comparison and approximate arithmetic involve approximate quantity-based representations that are available early in life, before the first stages of language acquisition. The objective of this study was to examine the impact of language abilities on the later development of exact and approximate number skills. METHOD: Twenty-eight 7- to 14-year-old children with specific language impairment (SLI) completed exact and approximate number tasks involving quantities presented symbolically and nonsymbolically. They were compared with age-matched (AM) and vocabulary-matched (VM) children. RESULTS: In the exact arithmetic task, the accuracy of children with SLI was lower than that of AM and VM controls and related to phonological measures. In the symbolic approximate tasks, children with SLI were less accurate than AM controls, but the difference vanished when their cognitive skills were considered or when they were compared with younger VM controls. In the nonsymbolic approximate tasks, children with SLI did not differ significantly from controls. Further, accuracy in the approximate number tasks was unrelated to language measures. CONCLUSIONS: Language impairment is related to reduced exact arithmetic skills, whereas it does not intrinsically affect the development of approximate number skills in children with SLI.


Subject(s)
Child Language , Dyscalculia/complications , Language Development Disorders/complications , Language Development , Learning Disabilities/complications , Mathematics/education , Adolescent , Child , Cognition , Dyscalculia/diagnosis , Executive Function , Female , Humans , Language Development Disorders/diagnosis , Language Tests , Learning Disabilities/diagnosis , Male , Memory, Short-Term
12.
Ear Hear ; 34(1): 110-21, 2013.
Article in English | MEDLINE | ID: mdl-23059850

ABSTRACT

OBJECTIVE: The aim of the present study was to examine audiovisual speech integration in cochlear-implanted children and in normally hearing children exposed to degraded auditory stimuli. Previous studies have shown that speech perception in cochlear-implant users is biased toward the visual modality when audition and vision provide conflicting information. Our main question was whether an experimentally designed degradation of the visual speech cue would increase the importance of audition in the response pattern. The impact of auditory proficiency was also investigated. DESIGN: A group of 31 children with cochlear implants and a group of 31 normally hearing children matched for chronological age were recruited. All children with cochlear implants had profound congenital deafness and had used their implants for at least 2 years. Participants had to perform an /aCa/ consonant-identification task in which stimuli were presented randomly in three conditions: auditory only, visual only, and audiovisual (congruent and incongruent McGurk stimuli). In half of the experiment, the visual speech cue was normal; in the other half (visual reduction), a degraded visual signal designed to prevent high-quality lipreading was presented. The normally hearing children received a spectrally reduced speech signal (simulating the input delivered by the cochlear implant). RESULTS: First, performance in the visual-only and congruent audiovisual modalities was decreased, showing that the visual reduction technique used here was efficient at degrading lipreading. Second, in the incongruent audiovisual trials, visual reduction led to a major increase in the number of auditory-based responses in both groups. Differences between proficient and nonproficient children were found in both groups, with nonproficient children's responses being more visual and less auditory than those of proficient children. Further analysis revealed that differences between visually clear and visually reduced conditions, and between groups, were due not only to differences in unisensory perception but also to differences in the process of audiovisual integration per se. CONCLUSION: Visual reduction led to an increase in the weight of audition, even in cochlear-implanted children, whose perception is generally dominated by vision. This result suggests that the natural bias in favor of vision is not immutable. Audiovisual speech integration partly depends on the experimental situation, which modulates the informational content of the sensory channels and the weight awarded to each of them. Consequently, participants, whether deaf with cochlear implants or normally hearing, not only base their perception on the most reliable modality but also award it additional weight.
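The abstract above does not detail how the speech signal delivered to the normally hearing children was spectrally reduced; a common cochlear-implant simulation is noise-band vocoding, sketched below with illustrative parameters that are assumptions, not the study's:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_channels=8, lo=100.0, hi=8000.0):
    """Crude noise-band vocoder: split speech into log-spaced bands,
    extract each band's amplitude envelope, and use it to modulate
    band-limited noise; hi must stay below fs / 2."""
    edges = np.geomspace(lo, hi, n_channels + 1)       # log-spaced band edges
    noise = np.random.randn(len(speech))
    out = np.zeros(len(speech))
    for low, high in zip(edges[:-1], edges[1:]):
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        env = np.abs(hilbert(band))                    # amplitude envelope
        out += env * sosfiltfilt(sos, noise)           # envelope-modulated noise
    return out / (np.abs(out).max() + 1e-12)           # peak-normalize
```

Fewer channels yield a more degraded signal, which is how such simulations approximate the spectral resolution of an implant.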


Subject(s)
Auditory Perception/physiology , Deafness , Lipreading , Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Adolescent , Case-Control Studies , Child , Cochlear Implantation , Cochlear Implants , Cues , Deafness/congenital , Deafness/surgery , Female , Humans , Male , Photic Stimulation
13.
Front Neurol ; 3: 97, 2012.
Article in English | MEDLINE | ID: mdl-22723789

ABSTRACT

It is known that sleep participates in memory consolidation processes. However, results obtained in the auditory domain are inconsistent. Here we aimed at investigating the role of post-training sleep in auditory training and in learning new phonological categories, a fundamental process in speech processing. Adult French speakers were trained to identify two synthetic speech variants of the syllable /də/ during two 1-h training sessions. The 12-h interval between the two sessions either did (8 p.m. to 8 a.m. ± 1 h) or did not (8 a.m. to 8 p.m. ± 1 h) include a sleep period. In both groups, identification performance improved dramatically over the first training session and then decreased slightly over the 12-h offline interval, although remaining above chance levels. Still, reaction times (RTs) were slower after sleep, suggesting greater attention devoted to the learned, novel phonological contrast. Notwithstanding, our results essentially suggest that post-training sleep does not benefit the consolidation or stabilization of new phonological categories more than wakefulness does.

14.
PLoS One ; 7(3): e33113, 2012.
Article in English | MEDLINE | ID: mdl-22427963

ABSTRACT

It has traditionally been assumed that cochlear implant users de facto perform atypically in audiovisual tasks. However, a recent study that combined an auditory task with visual distractors suggests that only those cochlear implant users who are not proficient at recognizing speech sounds might show abnormal audiovisual interactions. The present study aims at reinforcing this notion by investigating the audiovisual segregation abilities of cochlear implant users in a visual task with auditory distractors. Speechreading was assessed in two groups of cochlear implant users (proficient and non-proficient at sound recognition), as well as in normal controls. A visual speech recognition task (i.e., speechreading) was administered either in silence or in combination with three types of auditory distractors: (i) noise, (ii) reversed speech, and (iii) unaltered speech. Cochlear implant users proficient at speech recognition performed like normal controls in all conditions, whereas non-proficient users showed significantly different audiovisual segregation patterns in both speech conditions. These results confirm that normal-like audiovisual segregation is possible in highly skilled cochlear implant users and, consequently, that proficient and non-proficient CI users cannot be lumped into a single group. This important feature must be taken into account in further studies of audiovisual interactions in cochlear implant users.


Subject(s)
Cochlear Implants , Perceptual Masking/physiology , Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Analysis of Variance , Humans , Noise , Speech Articulation Tests
15.
Autism ; 16(5): 523-31, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22399448

ABSTRACT

This study assesses the extent to which children with autism understand requests performed with grammatically non-imperative sentence types. Ten children with autism were videotaped in naturalistic conditions. Four grammatical sentence types were distinguished: imperative, declarative, interrogative and sub-sentential. For each category, the proportion of requests complied with significantly exceeded the proportion of requests not complied with, and no difference across categories was found. These results show that children with autism do not rely exclusively on the linguistic form to interpret an utterance as a request.


Subject(s)
Autistic Disorder/psychology , Comprehension , Speech , Child , Child, Preschool , Cooperative Behavior , Humans , Male
16.
Scand J Psychol ; 53(1): 41-6, 2012 Feb.
Article in English | MEDLINE | ID: mdl-21995589

ABSTRACT

It is known that deaf individuals usually outperform normally hearing subjects in speechreading; however, the underlying reasons remain unclear. In the present study, speechreading performance was assessed in normally hearing participants (NH), deaf participants who had been exposed to the Cued Speech (CS) system early and intensively, and deaf participants exposed to oral language without Cued Speech (NCS). Results show a gradation in performance, with the highest performance in the CS group, followed by the NCS group and finally the NH participants. Moreover, error analysis suggests that speechreading processing is more accurate in the CS group than in the other groups. Given that early and intensive CS exposure has been shown to promote the development of accurate phonological processing, we propose that the superior speechreading performance of Cued Speech users is linked to a better capacity for phonological decoding of the visual articulators.


Subject(s)
Cues , Early Intervention, Educational/methods , Learning/physiology , Lipreading , Persons With Hearing Impairments/psychology , Adolescent , Adult , Case-Control Studies , Deafness , Humans , Middle Aged , Speech
17.
Trends Amplif ; 14(2): 96-112, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20724357

ABSTRACT

Nearly 300 million people worldwide have moderate to profound hearing loss. Hearing impairment, if not adequately managed, has a strong socioeconomic and affective impact on individuals. Cochlear implants have become the most effective vehicle for helping profoundly deaf children and adults to understand spoken language, to be sensitive to environmental sounds, and, to some extent, to listen to music. The auditory information delivered by the cochlear implant remains non-optimal for speech perception, however, because the signal is spectrally degraded and lacks some of the fine temporal acoustic structure. In this article, we discuss research revealing the multimodal nature of speech perception in normally hearing individuals, with important inter-subject variability in the weighting of auditory and visual information. We also discuss how audio-visual training, via Cued Speech, can improve speech perception in cochlear implantees, particularly in noisy contexts. Cued Speech is a system that combines visual information from speechreading with hand shapes positioned in different places around the face in order to deliver completely unambiguous information about the syllables and phonemes of spoken language. We argue that exposure to Cued Speech before or after implantation could be important in the aural rehabilitation of cochlear implantees, and we describe five converging lines of research supporting the view that Cued Speech can enhance speech perception in individuals with cochlear implants.


Subject(s)
Child Behavior , Cochlear Implants , Correction of Hearing Impairment , Cues , Language Development , Persons With Hearing Impairments/rehabilitation , Sign Language , Speech Perception , Acoustic Stimulation , Auditory Pathways/physiopathology , Child , Child, Preschool , Comprehension , Humans , Infant , Neuronal Plasticity , Noise , Perceptual Masking , Persons With Hearing Impairments/psychology , Photic Stimulation , Signal Processing, Computer-Assisted , Speech Intelligibility
18.
Rev. logop. foniatr. audiol. (Ed. impr.) ; 29(3): 174-185, 2009 Sep.
Article in English | IBECS | ID: ibc-61976

ABSTRACT

Phonological short-term memory (pSTM), or the ability to hold information in mind for a few seconds, was investigated in deaf children fitted with a cochlear implant (CI) before the age of 3 years, within the framework of Baddeley's model. Results show that, compared with age-matched hearing controls, CI children are delayed in the development of their pSTM capacity and exhibit reduced effects of phonological similarity (PSE) and word length (WLE). However, when CI children are matched for pSTM capacity with younger normally hearing (NH) children, the differences in PSE and WLE disappear. CI children do not produce more order errors than NH children. Taken together, these results indicate normally functioning pSTM resources. The reasons for the shorter pSTM span in CI children are discussed.


Subject(s)
Humans , Male , Female , Infant , Memory, Short-Term , Deafness/surgery , Cochlear Implants , Lipreading , Articulation Disorders/diagnosis
19.
Dev Neuropsychol ; 34(3): 296-311, 2009.
Article in English | MEDLINE | ID: mdl-19437205

ABSTRACT

Children with specific language impairment (SLI) who show impaired phonological processing are at risk of developing reading disabilities, which raises the question of whether the phonological impairment is common to developmental dyslexia (DD) and SLI. In order to distinguish the failing phonological processes in SLI and DD, we investigated the successive steps involved in speech processing, from perceptual discrimination through various aspects of phonological memory. Our results show that whereas memory for sequences is impaired in both disorders, children with SLI face additional impairments in phonological discrimination and short-term memory, which may account for their phonological awareness being even poorer than that of dyslexic children.


Subject(s)
Articulation Disorders/complications , Developmental Disabilities/physiopathology , Dyslexia/classification , Dyslexia/etiology , Adolescent , Analysis of Variance , Auditory Perception , Child , Female , Humans , Language Development Disorders , Language Tests , Male , Mathematics , Memory, Short-Term/physiology , Mental Recall/physiology , Neuropsychological Tests , Psycholinguistics , Risk Factors , Speech Perception , Verbal Behavior/physiology
20.
Brain Lang ; 87(2): 227-40, 2003 Nov.
Article in English | MEDLINE | ID: mdl-14585292

ABSTRACT

A visual hemifield experiment investigated hemispheric specialization in hearing children and adults and in prelingually, profoundly deaf youngsters who had been exposed intensively to Cued Speech (CS). Of interest was whether deaf CS users, whose development of the phonology and grammar of the spoken language is similar to that of hearing youngsters, would display similar laterality patterns in the processing of written language. Semantic, rhyme, and visual judgement tasks were used. In the visual task, no visual field (VF) advantage was observed. A right visual field (RVF; left hemisphere) advantage was obtained for both the deaf and the hearing subjects in the semantic task, supporting Neville's claim that the acquisition of competence in the grammar of a language is critical in establishing the specialization of the left hemisphere for language. For the rhyme task, however, an RVF advantage was obtained for the hearing subjects but not for the deaf ones, suggesting that different neural resources are recruited by deaf and hearing subjects. Hearing the sounds of language may be necessary to develop left-lateralised processing of rhymes.


Subject(s)
Brain/physiology , Deafness/physiopathology , Functional Laterality/physiology , Hearing/physiology , Judgment , Semantics , Speech Perception , Adolescent , Child , Child, Preschool , Cues , Humans , Language , Phonetics , Speech/physiology , Surveys and Questionnaires , Verbal Learning , Visual Fields/physiology