1.
Res Dev Disabil ; 111: 103882, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33548744

ABSTRACT

BACKGROUND: In recent years, a number of studies have begun to explore the nature of Attention-Deficit/Hyperactivity Disorder (ADHD) in children with Autism Spectrum Disorder (ASD). In this study, we examined the relationship of both ADHD symptoms and ASD symptoms to cognitive task performance in a sample of higher-functioning children and adolescents with ASD. Participants completed cognitive tasks tapping aspects of attention, impulsivity/inhibition, and immediate memory. AIMS: We hypothesized that children with ASD who had higher levels of ADHD symptom severity would be at higher risk for poorer sustained attention and selective attention, greater impulsivity/disinhibition, and weaker memory. METHODS AND PROCEDURES: The sample included 92 children (73 males) diagnosed with ASD (Mean Age = 9.41 years; Mean Full Scale IQ = 84.2). OUTCOMES AND RESULTS: Using regression analyses, more severe ADHD symptomatology was found to be significantly related to weaker performance on tasks measuring attention, immediate memory, and response inhibition. In contrast, increasing severity of ASD symptomatology was not associated with higher risk of poorer performance on any of the cognitive tasks assessed. CONCLUSIONS AND IMPLICATIONS: These results suggest that children with ASD who have more severe ADHD symptoms are at higher risk for impairments on tasks assessing attention, immediate memory, and response inhibition, similar to ADHD-related impairments seen in the general pediatric population. As such, clinicians should assess various aspects of cognition in pediatric patients with ASD in order to facilitate optimal interventional and educational planning.


Subject(s)
Attention Deficit Disorder with Hyperactivity , Autism Spectrum Disorder , Adolescent , Attention Deficit Disorder with Hyperactivity/epidemiology , Child , Cognition , Humans , Male , Memory, Short-Term , Task Performance and Analysis
2.
J Child Adolesc Psychopharmacol ; 30(7): 414-426, 2020 09.
Article in English | MEDLINE | ID: mdl-32644833

ABSTRACT

Objective: To examine the effectiveness of four doses of psychostimulant medication, combining extended-release methylphenidate (ER-MPH) in the morning with immediate-release MPH (IR-MPH) in the afternoon, on cognitive task performance. Method: The sample comprised 24 children (19 boys and 5 girls) who met the Diagnostic and Statistical Manual of Mental Disorders, 4th Edition Text Revision (DSM-IV-TR) criteria for an autism spectrum disorder (ASD) on the Autism Diagnostic Interview-Revised and the Autism Diagnostic Observation Schedule, and had significant symptoms of attention-deficit/hyperactivity disorder (ADHD). This sample consisted of elementary school-age, community-based children (mean chronological age = 8.8 years, SD = 1.7; mean intelligence quotient = 85; SD = 16.8). Effects of placebo and three dose levels of ER-MPH (containing 0.21, 0.35, and 0.48 mg/kg equivalent of IR-MPH) on cognitive task performance were compared using a within-subject, crossover, placebo-controlled design. Each of the four MPH dosing regimens (placebo, low-dose MPH, medium-dose MPH, and high-dose MPH) was administered for 1 week; the dosing order was counterbalanced across children. Results: MPH treatment was associated with significant performance gains on cognitive tasks tapping sustained attention, selective attention, and impulsivity/inhibition. Dose/response was generally linear in the dose range studied, with no evidence of deterioration in performance at higher MPH doses. Conclusion: The results of this study suggest that MPH formulations are associated with significant improvements in cognitive task performance in children with ASD and ADHD.


Subject(s)
Attention Deficit Disorder with Hyperactivity/drug therapy , Autism Spectrum Disorder/drug therapy , Central Nervous System Stimulants/therapeutic use , Cognition/drug effects , Delayed-Action Preparations/therapeutic use , Methylphenidate/therapeutic use , Child , Cross-Over Studies , Dose-Response Relationship, Drug , Double-Blind Method , Drug Administration Schedule , Female , Humans , Male , Neuropsychological Tests , Treatment Outcome
3.
Ear Hear ; 41(3): 508-520, 2020.
Article in English | MEDLINE | ID: mdl-31592903

ABSTRACT

OBJECTIVES: Efficient multisensory speech detection is critical for children who must quickly detect/encode a rapid stream of speech to participate in conversations and have access to the audiovisual cues that underpin speech and language development, yet multisensory speech detection remains understudied in children with hearing loss (CHL). This research assessed detection, along with vigilant/goal-directed attention, for multisensory versus unisensory speech in CHL versus children with normal hearing (CNH). DESIGN: Participants were 60 CHL who used hearing aids and communicated successfully aurally/orally and 60 age-matched CNH. Simple response times determined how quickly children could detect a preidentified easy-to-hear stimulus (70 dB SPL, utterance "buh" presented in auditory only [A], visual only [V], or audiovisual [AV] modes). The V mode formed two facial conditions: static versus dynamic face. Faster detection for multisensory (AV) than unisensory (A or V) input indicates multisensory facilitation. We assessed mean responses and faster versus slower responses (defined by first versus third quartiles of response-time distributions), which were respectively conceptualized as: faster responses (first quartile) reflect efficient detection with efficient vigilant/goal-directed attention and slower responses (third quartile) reflect less efficient detection associated with attentional lapses. Finally, we studied associations between these results and personal characteristics of CHL. RESULTS: Unisensory A versus V modes: Both groups showed better detection and attention for A than V input. The A input more readily captured children's attention and minimized attentional lapses, which supports A-bound processing even by CHL who were processing low fidelity A input. CNH and CHL did not differ in ability to detect A input at conversational speech level. Multisensory AV versus A modes: Both groups showed better detection and attention for AV than A input. 
The advantage for AV input was a facial effect (present for both static and dynamic faces), a pattern suggesting that communication is a social interaction that is more than just words. Attention did not differ between groups; detection was faster in CHL than CNH for AV input, but not for A input. Associations between personal characteristics/degree of hearing loss of CHL and results: CHL with the greatest deficits in detection of V input had the poorest word recognition skills, and CHL with the greatest reduction of attentional lapses from AV input had the poorest vocabulary skills. Both outcomes are consistent with the idea that CHL who are processing low fidelity A input depend disproportionately on V and AV input to learn to identify words and associate them with concepts. As CHL aged, attention to V input improved. Degree of HL did not influence results. CONCLUSIONS: Understanding speech, a daily challenge for CHL, is a complex task that demands efficient detection of and attention to AV speech cues. Our results support the clinical importance of multisensory approaches to understand and advance spoken communication by CHL.


Subject(s)
Deafness , Hearing Loss , Speech Perception , Aged , Child , Humans , Reaction Time , Speech , Visual Perception
4.
J Speech Lang Hear Res ; 61(12): 3095-3112, 2018 12 10.
Article in English | MEDLINE | ID: mdl-30515515

ABSTRACT

Purpose: Successful speech processing depends on our ability to detect and integrate multisensory cues, yet there is minimal research on multisensory speech detection and integration by children. To address this need, we studied the development of speech detection for auditory (A), visual (V), and audiovisual (AV) input. Method: Participants were 115 typically developing children clustered into age groups between 4 and 14 years. Speech detection (quantified by response times [RTs]) was determined for 1 stimulus, /buh/, presented in A, V, and AV modes (articulating vs. static facial conditions). Performance was analyzed not only in terms of traditional mean RTs but also in terms of the faster versus slower RTs (defined by the 1st vs. 3rd quartiles of RT distributions). These time regions were conceptualized respectively as reflecting optimal detection with efficient focused attention versus less optimal detection with inefficient focused attention due to attentional lapses. Results: Mean RTs indicated better detection (a) of multisensory AV speech than A speech only in 4- to 5-year-olds and (b) of A and AV inputs than V input in all age groups. The faster RTs revealed that AV input did not improve detection in any group. The slower RTs indicated that (a) the processing of silent V input was significantly faster for the articulating than static face and (b) AV speech or facial input significantly minimized attentional lapses in all groups except 6- to 7-year-olds (a peaked U-shaped curve). Apparently, the AV benefit observed for mean performance in 4- to 5-year-olds arose from effects of attention. Conclusions: The faster RTs indicated that AV input did not enhance detection in any group, but the slower RTs indicated that AV speech and dynamic V speech (mouthing) significantly minimized attentional lapses and thus did influence performance. 
Overall, A and AV inputs were detected consistently faster than V input; this result endorsed stimulus-bound auditory processing by these children.
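The quartile-based analysis described above (faster responses at or below the first quartile, slower responses at or above the third) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the function name `rt_bands`, the input format (a flat list of response times), and the sample values are all assumptions.

```python
from statistics import mean, quantiles

def rt_bands(rts):
    """Split a response-time distribution into its faster and slower bands.

    Faster responses (<= Q1) are taken to reflect efficient detection with
    focused attention; slower responses (>= Q3) are taken to reflect less
    efficient detection due to attentional lapses. Returns the mean RT of
    each band.
    """
    q1, _, q3 = quantiles(rts, n=4)  # first, second, and third quartiles
    faster = [t for t in rts if t <= q1]
    slower = [t for t in rts if t >= q3]
    return mean(faster), mean(slower)

# Hypothetical RTs (ms) for one child in one presentation mode
fast_mean, slow_mean = rt_bands([200, 220, 240, 260, 280, 300, 320, 340])
```

Comparing the faster-band and slower-band means across the A, V, and AV modes would then separate effects on optimal detection from effects on attentional lapses, rather than collapsing both into a single mean RT.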


Subject(s)
Attention/physiology , Child Development/physiology , Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Adolescent , Child , Child, Preschool , Cues , Female , Humans , Male , Photic Stimulation/methods , Reaction Time
5.
J Child Lang ; 45(2): 392-414, 2018 03.
Article in English | MEDLINE | ID: mdl-28724465

ABSTRACT

To communicate, children must discriminate and identify speech sounds. Because visual speech plays an important role in this process, we explored how visual speech influences phoneme discrimination and identification by children. Critical items had intact visual speech (e.g. bæz) coupled to non-intact (excised onsets) auditory speech (signified by /-b/æz). Children discriminated syllable pairs that differed in intactness (i.e. bæz:/-b/æz) and identified non-intact nonwords (/-b/æz). We predicted that visual speech would cause children to perceive the non-intact onsets as intact, resulting in more same responses for discrimination and more intact (i.e. bæz) responses for identification in the audiovisual than auditory mode. Visual speech for the easy-to-speechread /b/ but not for the difficult-to-speechread /g/ boosted discrimination and identification (about 35-45%) in children from four to fourteen years. The influence of visual speech on discrimination was uniquely associated with the influence of visual speech on identification and receptive vocabulary skills.


Subject(s)
Language Development , Lipreading , Phonetics , Speech Perception , Adolescent , Child , Child, Preschool , Female , Humans , Male , Speech , Vocabulary
6.
Int J Pediatr Otorhinolaryngol ; 94: 127-137, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28167003

ABSTRACT

OBJECTIVES: Understanding spoken language is an audiovisual event that depends critically on the ability to discriminate and identify phonemes, yet we have little evidence about the role of early auditory experience and visual speech in the development of these fundamental perceptual skills. Objectives of this research were to determine 1) how visual speech influences phoneme discrimination and identification; 2) whether visual speech influences these two processes in a like manner, such that discrimination predicts identification; and 3) how the degree of hearing loss affects this relationship. Such evidence is crucial for developing effective intervention strategies to mitigate the effects of hearing loss on language development. METHODS: Participants were 58 children with early-onset sensorineural hearing loss (CHL, 53% girls, M = 9;4 yrs) and 58 children with normal hearing (CNH, 53% girls, M = 9;4 yrs). Test items were consonant-vowel (CV) syllables and nonwords with intact visual speech coupled to non-intact auditory speech (excised onsets), for example, an intact consonant/rhyme in the visual track (Baa or Baz) coupled to a non-intact onset/rhyme in the auditory track (/-B/aa or /-B/az). The items started with an easy-to-speechread /B/ or difficult-to-speechread /G/ onset and were presented in the auditory (static face) vs. audiovisual (dynamic face) modes. We assessed discrimination for intact vs. non-intact different pairs (e.g., Baa:/-B/aa). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more same (as opposed to different) responses in the audiovisual than auditory mode. We assessed identification by repetition of nonwords with non-intact onsets (e.g., /-B/az). We predicted that visual speech would cause the non-intact onset to be perceived as intact and would therefore generate more Baz (as opposed to az) responses in the audiovisual than auditory mode.
RESULTS: Performance in the audiovisual mode showed more same responses for the intact vs. non-intact different pairs (e.g., Baa:/-B/aa) and more intact onset responses for nonword repetition (Baz for /-B/az). Thus visual speech altered both discrimination and identification in the CHL, to a large extent for the /B/ onsets but only minimally for the /G/ onsets. The CHL identified the stimuli similarly to the CNH but did not discriminate the stimuli similarly. A bias-free measure of the children's discrimination skills (i.e., d' analysis) revealed that the CHL had greater difficulty discriminating intact from non-intact speech in both modes. As the degree of HL worsened, the ability to discriminate the intact vs. non-intact onsets in the auditory mode worsened. Discrimination ability in CHL significantly predicted their identification of the onsets, even after variation due to the other variables was controlled. CONCLUSIONS: These results clearly established that visual speech can fill in non-intact auditory speech, and this effect, in turn, made the non-intact onsets more difficult to discriminate from intact speech and more likely to be perceived as intact. Such results 1) demonstrate the value of visual speech at multiple levels of linguistic processing and 2) support intervention programs that view visual speech as a powerful asset for developing spoken language in CHL.
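The d' analysis mentioned above is the standard bias-free sensitivity index from signal detection theory: d' = z(hit rate) - z(false-alarm rate), where z is the inverse of the standard normal CDF. The sketch below is illustrative rather than the authors' code; the function name, the raw-count inputs, and the log-linear correction for extreme proportions are assumptions.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' computed from raw trial counts.

    Applies a log-linear correction (add 0.5 to each count) so that
    perfect hit rates or zero false-alarm rates do not produce
    infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts: 18 hits / 2 misses, 2 false alarms / 18 correct rejections
sensitivity = d_prime(18, 2, 2, 18)
```

A d' near 0 indicates chance-level discrimination; larger values indicate better separation of intact from non-intact onsets independent of any response bias toward answering "same" or "different".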


Subject(s)
Hearing Loss, Sensorineural/physiopathology , Language Development , Lipreading , Speech Perception , Visual Perception , Adolescent , Case-Control Studies , Child , Child, Preschool , Female , Humans , Male , Phonetics , Speech
7.
J Child Lang ; 44(1): 185-215, 2017 Jan.
Article in English | MEDLINE | ID: mdl-26752548

ABSTRACT

Adults use vision to perceive low-fidelity speech; yet how children acquire this ability is not well understood. The literature indicates that children show reduced sensitivity to visual speech from kindergarten to adolescence. We hypothesized that this pattern reflects the effects of complex tasks and a growth period with harder-to-utilize cognitive resources, not a lack of sensitivity. We investigated sensitivity to visual speech in children via the phonological priming produced by low-fidelity (non-intact onset) auditory speech presented audiovisually (see a dynamic face articulate the consonant/rhyme b/ag; hear the non-intact onset/rhyme -b/ag) vs. auditorily (see a still face; hear exactly the same auditory input). Audiovisual speech produced greater priming from four to fourteen years, indicating that visual speech filled in the non-intact auditory onsets. The influence of visual speech depended uniquely on phonology and speechreading. Children, like adults, perceive speech onsets multimodally. Findings are critical for incorporating visual speech into developmental theories of speech perception.


Subject(s)
Lipreading , Speech Perception/physiology , Visual Perception/physiology , Adolescent , Auditory Perception/physiology , Child , Child, Preschool , Female , Humans , Male , Speech
8.
Ear Hear ; 37(6): 623-633, 2016.
Article in English | MEDLINE | ID: mdl-27438867

ABSTRACT

OBJECTIVES: This research determined (1) how phonological priming of picture naming was affected by the mode (auditory-visual [AV] versus auditory), fidelity (intact versus nonintact auditory onsets), and lexical status (words versus nonwords) of speech stimuli in children with prelingual sensorineural hearing impairment (CHI) versus children with normal hearing (CNH) and (2) how the degree of HI, auditory word recognition, and age influenced results in CHI. Note that the AV stimuli were not the traditional bimodal input; instead they consisted of an intact consonant/rhyme in the visual track coupled to a nonintact onset/rhyme in the auditory track. Example stimuli for the word bag are (1) AV: intact visual (b/ag) coupled to nonintact auditory (-b/ag) and (2) auditory: static face coupled to the same nonintact auditory (-b/ag). The question was whether the intact visual speech would "restore or fill in" the nonintact auditory speech, in which case performance for the same auditory stimulus would differ depending on the presence/absence of visual speech. DESIGN: Participants were 62 CHI and 62 CNH whose ages had a group mean and group distribution akin to those in the CHI group. Ages ranged from 4 to 14 years. All participants met the following criteria: (1) spoke English as a native language, (2) communicated successfully aurally/orally, and (3) had no diagnosed or suspected disabilities other than HI and its accompanying verbal problems. The phonological priming of picture naming was assessed with the multimodal picture word task. RESULTS: Both CHI and CNH showed greater phonological priming from high- than low-fidelity stimuli and from AV than auditory speech. These overall fidelity and mode effects did not differ in the CHI versus CNH; thus these CHI appeared to have sufficiently well-specified phonological onset representations to support priming, and visual speech did not appear to be a disproportionately important source of the CHI's phonological knowledge.
Two exceptions occurred, however. First, with regard to lexical status, both the CHI and CNH showed significantly greater phonological priming from the nonwords than words, a pattern consistent with the prediction that children are more aware of the phonetics-phonology content of nonwords. This overall pattern of similarity between the groups was qualified by the finding that CHI showed more nearly equal priming by the high- versus low-fidelity nonwords than the CNH; in other words, the CHI were less affected by the fidelity of the auditory input for nonwords. Second, auditory word recognition, but not degree of HI or age, uniquely influenced phonological priming by the AV nonwords. CONCLUSIONS: With minor exceptions, phonological priming in CHI and CNH showed more similarities than differences. Importantly, this research documented that the addition of visual speech significantly increased phonological priming in both groups. Clinically, these data support intervention programs that view visual speech as a powerful asset for developing spoken language in CHI.


Subject(s)
Acoustic Stimulation , Hearing Loss, Sensorineural/physiopathology , Photic Stimulation , Repetition Priming , Vocabulary , Adolescent , Case-Control Studies , Child , Child, Preschool , Female , Humans , Male , Phonetics
9.
J Exp Child Psychol ; 126: 295-312, 2014 Oct.
Article in English | MEDLINE | ID: mdl-24974346

ABSTRACT

We investigated whether visual speech fills in non-intact auditory speech (excised consonant onsets) in typically developing children from 4 to 14 years of age. Stimuli with the excised auditory onsets were presented in the audiovisual (AV) and auditory-only (AO) modes. A visual speech fill-in effect occurs when listeners experience hearing the same non-intact auditory stimulus (e.g., /-b/ag) as different depending on the presence/absence of visual speech, such as hearing /bag/ in the AV mode but hearing /ag/ in the AO mode. We quantified the visual speech fill-in effect by the difference in the number of correct consonant onset responses between the modes. We found that easy visual speech cues /b/ provided greater filling in than difficult cues /g/. Only older children benefited from difficult visual speech cues, whereas all children benefited from easy visual speech cues, although 4- and 5-year-olds did not benefit as much as older children. To explore task demands, we compared results on our new task with those on the McGurk task. The influence of visual speech was uniquely associated with age and vocabulary abilities for the visual speech fill-in effect but was uniquely associated with speechreading skills for the McGurk effect. This dissociation implies that visual speech, as processed by children, is a complicated and multifaceted phenomenon underpinned by heterogeneous abilities. These results emphasize that children perceive a speaker's utterance rather than the auditory stimulus per se. In children, as in adults, there is more to speech perception than meets the ear.


Subject(s)
Lipreading , Speech Perception , Speech , Acoustic Stimulation , Adolescent , Age Factors , Auditory Perception , Child , Child, Preschool , Cues , Female , Humans , Male , Phonetics , Visual Perception
10.
J Child Adolesc Psychopharmacol ; 23(5): 337-51, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23782128

ABSTRACT

OBJECTIVE: The purpose of this study was to examine the behavioral effects of four doses of psychostimulant medication, combining extended-release methylphenidate (MPH) in the morning with immediate-release MPH in the afternoon. METHOD: The sample comprised 24 children (19 boys; 5 girls) who met American Psychiatric Association, Diagnostic and Statistical Manual of Mental Disorders, 4th ed. (DSM-IV-TR) criteria for an autism spectrum disorder (ASD) on the Autism Diagnostic Interview-Revised (ADI-R) and the Autism Diagnostic Observation Schedule (ADOS), and had significant symptoms of attention-deficit/hyperactivity disorder (ADHD). This sample consisted of elementary school-age, community-based children (mean chronological age=8.8 years, SD=1.7; mean intelligence quotient [IQ]=85; SD=16.8). Effects of four dose levels of MPH on parent and teacher behavioral ratings were investigated using a within-subject, crossover, placebo-controlled design. RESULTS: MPH treatment was associated with significant declines in hyperactive and impulsive behavior at both home and school. Parents noted significant declines in inattentive and oppositional behavior, and improvements in social skills. No exacerbation of stereotypies was noted, and side effects were similar to those seen in typically developing children with ADHD. Dose response was primarily linear in the dose range studied. CONCLUSIONS: The results of this study suggest that MPH formulations are efficacious and well-tolerated for children with ASD and significant ADHD symptoms.


Subject(s)
Attention Deficit Disorder with Hyperactivity/drug therapy , Central Nervous System Stimulants/therapeutic use , Child Development Disorders, Pervasive/drug therapy , Methylphenidate/therapeutic use , Attention Deficit Disorder with Hyperactivity/physiopathology , Central Nervous System Stimulants/administration & dosage , Central Nervous System Stimulants/adverse effects , Child , Child Development Disorders, Pervasive/physiopathology , Cross-Over Studies , Delayed-Action Preparations , Dose-Response Relationship, Drug , Female , Humans , Male , Methylphenidate/administration & dosage , Methylphenidate/adverse effects , Single-Blind Method , Stereotyped Behavior/drug effects , Treatment Outcome
11.
Ear Hear ; 34(6): 753-62, 2013.
Article in English | MEDLINE | ID: mdl-23782714

ABSTRACT

OBJECTIVES: This research studied whether the mode of input (auditory versus audiovisual) influenced semantic access by speech in children with sensorineural hearing impairment (HI). DESIGN: Participants, 31 children with HI and 62 children with normal hearing (NH), were tested with the authors' new multimodal picture word task. Children were instructed to name pictures displayed on a monitor and ignore auditory or audiovisual speech distractors. The semantic content of the distractors was varied to be related versus unrelated to the pictures (e.g., picture distractor of dog-bear versus dog-cheese, respectively). In children with NH, picture-naming times were slower in the presence of semantically related distractors. This slowing, called semantic interference, is attributed to the meaning-related picture-distractor entries competing for selection and control of the response (the lexical selection by competition hypothesis). Recently, a modification of the lexical selection by competition hypothesis, called the competition threshold (CT) hypothesis, proposed that (1) the competition between the picture-distractor entries is determined by a threshold, and (2) distractors with experimentally reduced fidelity cannot reach the CT. Thus, semantically related distractors with reduced fidelity do not produce the normal interference effect, but instead no effect or semantic facilitation (faster picture naming times for semantically related versus unrelated distractors). Facilitation occurs because the activation level of the semantically related distractor with reduced fidelity (1) is not sufficient to exceed the CT and produce interference but (2) is sufficient to activate its concept, which then strengthens the activation of the picture and facilitates naming. This research investigated whether the proposals of the CT hypothesis generalize to the auditory domain, to the natural degradation of speech due to HI, and to participants who are children. 
Our multimodal picture word task allowed us to (1) quantify picture naming results in the presence of auditory speech distractors and (2) probe whether the addition of visual speech enriched the fidelity of the auditory input sufficiently to influence results. RESULTS: In the HI group, the auditory distractors produced no effect or a facilitative effect, in agreement with proposals of the CT hypothesis. In contrast, the audiovisual distractors produced the normal semantic interference effect. Results in the HI versus NH groups differed significantly for the auditory mode, but not for the audiovisual mode. CONCLUSIONS: This research indicates that the lower fidelity auditory speech associated with HI affects the normalcy of semantic access by children. Further, adding visual speech enriches the lower fidelity auditory input sufficiently to produce the semantic interference effect typical of children with NH.


Subject(s)
Attention/physiology , Hearing Loss, Sensorineural/physiopathology , Learning/physiology , Semantics , Speech/physiology , Analysis of Variance , Case-Control Studies , Child , Child, Preschool , Female , Hearing Loss, Sensorineural/psychology , Humans , Language Tests , Male
12.
J Speech Lang Hear Res ; 56(2): 388-403, 2013 Apr.
Article in English | MEDLINE | ID: mdl-22896045

ABSTRACT

PURPOSE: To examine whether semantic access by speech requires attention in children. METHOD: Children (N = 200) named pictures and ignored distractors on a cross-modal (distractors: auditory-no face) or multimodal (distractors: auditory-static face and audiovisual-dynamic face) picture word task. The cross-modal task had a low load, and the multimodal task had a high load (i.e., respectively naming pictures displayed on a blank screen vs. below the talker's face on his T-shirt). Semantic content of distractors was manipulated to be related vs. unrelated to the picture (e.g., picture "dog" with distractors "bear" vs. "cheese"). If irrelevant semantic content manipulation influences naming times on both tasks despite variations in loads, Lavie's (2005) perceptual load model proposes that semantic access is independent of capacity-limited attentional resources; if, however, irrelevant content influences naming only on the cross-modal task (low load), the perceptual load model proposes that semantic access is dependent on attentional resources exhausted by the higher load task. RESULTS: Irrelevant semantic content affected performance for both tasks in 6- to 9-year-olds but only on the cross-modal task in 4- to 5-year-olds. The addition of visual speech did not influence results on the multimodal task. CONCLUSION: Younger and older children differ in dependence on attentional resources for semantic access by speech.


Subject(s)
Attention/physiology , Perceptual Masking/physiology , Semantics , Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Adolescent , Age Factors , Child , Child Language , Child, Preschool , Cognition/physiology , Female , Humans , Male , Phonetics , Photic Stimulation/methods
13.
J Child Adolesc Psychopharmacol ; 22(4): 284-91, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22849541

ABSTRACT

OBJECTIVE: Parent and teacher ratings of core attention-deficit/hyperactivity disorder (ADHD) symptoms, as well as behavioral and emotional problems commonly comorbid with ADHD, were compared in children with autism spectrum disorders (ASD). METHOD: Participants were 86 children (66 boys; mean age=9.3 years, intelligence quotient [IQ]=84) who met American Psychiatric Association Diagnostic and Statistical Manual of Mental Disorders, 4th ed. (DSM-IV) criteria for an ASD on the Autism Diagnostic Interview-Revised (ADI-R) and the Autism Diagnostic Observation Schedule (ADOS). Parent and teacher behavioral ratings were compared on the Conners' Parent and Teacher Rating Scales (CPRS-R; CTRS-R). The degree to which age, ASD subtype, severity of autistic symptomatology, and medication status mediated this relationship was also examined. RESULTS: Significant positive correlations between parent and teacher ratings suggest that a child's core ADHD symptoms, as well as closely related externalizing symptoms, are perceived similarly by parents and teachers. With the exception of oppositional behavior, there was no significant effect of age, gender, ASD subtype, or autism severity on the relationship between parent and teacher ratings. In general, parents rated children as having more severe symptomatology than did teachers. Patterns of parent and teacher ratings were highly correlated, both for children who were receiving medication and for children who were not. CONCLUSIONS: Parents and teachers perceived core symptoms of ADHD and closely related externalizing problems in a similar manner, but there is less agreement on ratings of internalizing problems (e.g., anxiety). The clinical implication of these findings is that both parents and teachers provide important behavioral information about children with ASD.
However, when a clinician is unable to access teacher ratings (e.g., during school vacations), parent ratings can provide a reasonable estimate of the child's functioning in these domains in school. As such, parent ratings can be reliably used to make initial diagnostic and treatment decisions (e.g., medication treatment) regarding ADHD symptoms in children with ASDs.


Subject(s)
Attention Deficit Disorder with Hyperactivity/physiopathology , Child Development Disorders, Pervasive/physiopathology , Faculty , Parents , Adolescent , Adolescent Behavior , Attention Deficit Disorder with Hyperactivity/drug therapy , Child , Child Behavior , Child Development Disorders, Pervasive/complications , Female , Humans , Male , Severity of Illness Index
14.
J Speech Lang Hear Res ; 52(2): 412-34, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19339701

ABSTRACT

PURPOSE: This research assessed the influence of visual speech on phonological processing by children with hearing loss (HL). METHOD: Children with HL and children with normal hearing (NH) named pictures while attempting to ignore auditory or audiovisual speech distractors whose onsets relative to the pictures were either congruent, conflicting in place of articulation, or conflicting in voicing; for example, the picture "pizza" coupled with the distractors "peach," "teacher," or "beast," respectively. Speed of picture naming was measured. RESULTS: The conflicting conditions slowed naming, and phonological processing by children with HL displayed the age-related shift in sensitivity to visual speech seen in children with NH, although with developmental delay. Younger children with HL exhibited a disproportionately large influence of visual speech and a negligible influence of auditory speech, whereas older children with HL showed a robust influence of auditory speech with no benefit to performance from adding visual speech. The congruent conditions did not speed naming in children with HL, nor did the addition of visual speech influence performance. Unexpectedly, the /ʌ/-vowel congruent distractors slowed naming in children with HL and decreased articulatory proficiency. CONCLUSIONS: Results for the conflicting conditions are consistent with the hypothesis that speech representations in children with HL (a) are initially disproportionately structured in terms of visual speech and (b) become better specified with age in terms of auditorily encoded information.


Subject(s)
Hearing Loss/psychology , Psycholinguistics , Speech Perception , Acoustic Stimulation , Aging , Analysis of Variance , Child , Child, Preschool , Female , Humans , Language Tests , Lipreading , Male , Perceptual Masking , Photic Stimulation , Regression Analysis
15.
Int J Audiol ; 48(1): 1-11, 2009 Jan.
Article in English | MEDLINE | ID: mdl-19173108

ABSTRACT

In this study we asked to what extent auditory evoked potentials can help us to understand the complex processes underlying word comprehension. Monosyllabic and bisyllabic words were presented to 34 young adults in the context of a semantic category judgment. The basic paradigm assessed the typicality effect, the tendency for classification of members of a category to be made more accurately and more rapidly for strong exemplars than for weak exemplars. Event-related potentials (ERPs) were recorded from 30 active scalp electrodes. The ERP waveform in response to the semantic categorization of a word was characterized by unique activity in four temporal intervals: (1) a negative peak at a latency of about 100 ms, (2) a positive peak at a latency of about 200 ms, (3) a broad negativity extending over the latency range from 200 to 600 ms, and (4) a broad positivity extending from 600 to 1400 ms. Independent component analysis (ICA) of the individual EEG epochs yielded four maximally independent components, interpreted as (1) exogenous detection of a change in the acoustic environment, followed by allocation of cognitive resources, especially sustained attention, to the analysis of subsequent acoustic events, (2) phonological processing, (3) semantic processing, and (4) decision processing. The morphologies of the four ICA waveforms were consistent with a parallel-processing, interactive model of word recognition and subsequent semantic categorization.


Subject(s)
Comprehension , Electroencephalography , Evoked Potentials, Auditory , Phonetics , Semantics , Signal Processing, Computer-Assisted , Speech Perception , Adolescent , Adult , Attention , Decision Making , Female , Humans , Male , Pattern Recognition, Physiological , Psycholinguistics , Reaction Time , Signal Detection, Psychological , Time Factors , Vocabulary , Young Adult
16.
J Exp Child Psychol ; 102(1): 40-59, 2009 Jan.
Article in English | MEDLINE | ID: mdl-18829049

ABSTRACT

This research developed a multimodal picture-word task for assessing the influence of visual speech on phonological processing by 100 children between 4 and 14 years of age. We assessed how manipulation of seemingly to-be-ignored auditory (A) and audiovisual (AV) phonological distractors affected picture naming without participants consciously trying to respond to the manipulation. Results varied in complex ways as a function of age and type and modality of distractors. Results for congruent AV distractors yielded an inverted U-shaped function with a significant influence of visual speech in 4-year-olds and 10- to 14-year-olds but not in 5- to 9-year-olds. In concert with dynamic systems theory, we proposed that the temporary loss of sensitivity to visual speech was reflecting reorganization of relevant knowledge and processing subsystems, particularly phonology. We speculated that reorganization may be associated with (a) formal literacy instruction and (b) developmental changes in multimodal processing and auditory perceptual, linguistic, and cognitive skills.


Subject(s)
Lipreading , Pattern Recognition, Visual , Phonetics , Semantics , Speech Perception , Verbal Behavior , Adolescent , Attention , Child , Child, Preschool , Female , Humans , Male , Reaction Time
17.
Ear Hear ; 28(6): 740-53, 2007 Dec.
Article in English | MEDLINE | ID: mdl-17982362

ABSTRACT

The purpose of this paper is to provide a review of past and current research regarding language and literacy development in children with mild to severe hearing impairment. A related goal is to identify gaps in the empirical literature and suggest future research directions. Included in the language development review are studies of semantics (vocabulary, novel word learning, and conceptual categories), morphology, and syntax. The literacy section begins by considering dimensions of literacy and the ways in which hearing impairment may influence them. It is followed by a discussion of existing evidence on reading and writing, and highlights key constructs that need to be addressed for a comprehensive understanding of literacy in these children.


Subject(s)
Educational Status , Hearing Loss/diagnosis , Hearing Loss/therapy , Language , Adolescent , Child , Child, Preschool , Humans , Infant , Language Development , Reading , Research Design , Speech , Verbal Learning , Vocabulary
18.
Ear Hear ; 28(6): 754-65, 2007 Dec.
Article in English | MEDLINE | ID: mdl-17982363

ABSTRACT

Perception concerns the identification and interpretation of sensory stimuli in our external environment. The purpose of this review is to survey contemporary views about effects of mild to severe sensorineural hearing impairment (HI) in children on perceptual processing. The review is one of a series of papers resulting from a workshop on Outcomes Research in Children with Hearing Loss sponsored by The National Institute on Deafness and Other Communication Disorders/National Institutes of Health. Children with HI exhibit heterogeneous patterns of results. In general, however, perceptual processing of the (a) auditory properties of nonspeech reveals some problems in processing spectral information, but not temporal information; (b) auditory properties of speech reveals some problems in processing temporal sequences, variation in spatial location, and voice onset times, but not in processing talker-gender, weighting acoustic cues, or covertly orienting to the spatial location of sound; (c) linguistic properties of speech reveals some problems in processing general linguistic content, semantic content, and phonological content. The normalcy/abnormalcy of results varies as a function of degree of loss and task demands. As a general rule, children with severe HI have more abnormalities than children with mild to moderate HI. Auditory linguistic properties are also generally processed more abnormally than auditory nonverbal properties. This outcome implies that childhood HI has less effect on more physical, developmentally earlier properties that are characterized by less contingent processing. Some perceptual properties that are processed in a more automatic manner by normal listeners are processed in a more controlled manner by children with HI. This outcome implies that deliberate perceptual processing in the presence of childhood HI requires extra effort and more mental resources, thus limiting the availability of processing resources for other tasks.


Subject(s)
Auditory Perception , Hearing Loss/diagnosis , Adolescent , Auditory Threshold , Child , Child, Preschool , Communication , Female , Hearing Loss/physiopathology , Humans , Linguistics , Male , Models, Biological , Speech , Speech Perception
19.
Ear Hear ; 27(6): 686-702, 2006 Dec.
Article in English | MEDLINE | ID: mdl-17086079

ABSTRACT

OBJECTIVE: The purpose of this research was to study how early childhood hearing loss affects development of concepts and categories, aspects of semantic knowledge that allow us to group and make inferences about objects with common properties, such as dogs versus cats. We assessed category typicality and out-of-category relatedness effects. The typicality effect refers to a performance advantage (faster reaction times, fewer errors) for objects with a higher number of a category's characteristic properties; the out-of-category relatedness effect refers to a performance disadvantage (slower reaction times, more errors) for out-of-category objects that share some properties with category members. DESIGN: We applied a new children's speeded category-verification task (vote "yes" if the pictured object is clothing). Stimuli were pictures of typical and atypical category objects (e.g., pants, glove) and related and unrelated out-of-category objects (e.g., necklace, soup). Participants were 30 children with hearing impairment (HI) who were considered successful hearing aid users and who attended regular classes (mainstreamed) with some support services. Ages ranged from 5 to 15 yr (mean = 10 yr 8 mo). Results were related to previously published normative data. RESULTS: Typical objects consistently showed preferential processing (faster reaction times, fewer errors), and related out-of-category objects consistently showed the converse. Overall, results between HI and normative groups exhibited striking similarity. Variation in speed of classification was influenced primarily by age and age-related competencies, such as vocabulary skill. Audiological status, however, independently influenced performance to a lesser extent, with positive responses becoming faster as degree of hearing loss decreased and negative responses becoming faster as age of identification/amplification/education decreased. There were few errors overall. 
CONCLUSIONS: The presence of a typicality effect indicates that 1) the structure of conceptual representations for at least one category in the HI group was based on characteristic properties with an uneven distribution among members, and 2) typical objects with a higher number of characteristic properties were more easily accessed and/or retrieved. The presence of a relatedness effect indicates that the structure of representational knowledge in the HI group allowed them to appreciate semantic properties and understand that properties may be shared between categories. Speculations linked the association 1) between positive responses and degree of hearing loss to an increase in the quality, accessibility, and retrievability of conceptual representations with better hearing; and 2) between negative responses and age of identification/amplification/education to an improvement in effortful, postretrieval decision-making proficiencies with more schooling and amplified auditory experience. This research establishes the value of our new approach to assessing the organization of semantic memory in children with HI.


Subject(s)
Concept Formation/physiology , Hearing Loss/physiopathology , Semantics , Adolescent , Age Factors , Child , Child, Preschool , Female , Humans , Male , Memory/physiology , Reaction Time , Regression Analysis , Verbal Behavior
20.
J Exp Child Psychol ; 92(1): 46-75, 2005 Sep.
Article in English | MEDLINE | ID: mdl-15904928

ABSTRACT

We studied how category typicality and out-of-category relatedness affect speeded category verification (vote "yes" if pictured object is clothing) in typically developing 4- to 14-year-olds and adults. Stimuli were typical and atypical category objects (e.g., pants, glove) and related and unrelated out-of-category objects (e.g., necklace, soup). Typical and unrelated out-of-category objects exhibited preferential processing (faster reaction times and fewer errors). Variations in typicality and relatedness disproportionately influenced children's performance, with developmental improvement associated with both verbal and nonverbal factors. Underextension versus overextension errors seemed to be associated with independent factors, namely multifaceted maturational factors versus receptive vocabulary skill, respectively. Errors were infrequent, suggesting spontaneous taxonomic classification by all participants. An experiment with printed words in adults replicated results, indicating that typicality and relatedness effects reflected organizational principles of the semantic system, not picture-related processes. This research establishes the viability of an online approach to assessing automatic components of semantic organization in children.


Subject(s)
Verbal Behavior , Adolescent , Adult , Age Factors , Auditory Threshold/physiology , Child , Child, Preschool , Cognition , Female , Humans , Male , Memory , Reaction Time , Semantics