Results 1 - 20 of 51
2.
J Neurophysiol ; 130(2): 291-302, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37377190

ABSTRACT

Traditionally, pitch variation in a sound stream has been integral to music identity. We attempt to expand music's definition by demonstrating that the neural code for musicality is independent of pitch encoding. That is, pitchless sound streams can still induce music-like perception and a neurophysiological hierarchy similar to pitched melodies. Previous work reported that neural processing of sounds with no-pitch, fixed-pitch, and irregular-pitch (melodic) patterns exhibits a right-lateralized hierarchical shift, with pitchless sounds favorably processed in Heschl's gyrus (HG), ascending laterally to nonprimary auditory areas for fixed-pitch and even more laterally for melodic patterns. The objective of this EEG study was to assess whether sound encoding maintains a similar hierarchical profile when musical perception is driven by timbre irregularities in the absence of pitch changes. Individuals listened to repetitions of three musical and three nonmusical sound streams. The nonmusical streams consisted of seven 200-ms segments of white, pink, or brown noise, separated by silent gaps. Musical streams were created similarly, but with all three noise types combined in a unique order within each stream to induce timbre variations and music-like perception. Subjects classified the sound streams as musical or nonmusical. Musical processing exhibited right-dominant α power enhancement, followed by a lateralized increase in θ phase-locking and spectral power. The θ phase-locking was stronger in musicians than in nonmusicians. The lateralization of activity suggests higher-level auditory processing.
Our findings validate the existence of a hierarchical shift, traditionally observed with pitched-melodic perception, underscoring that musicality can be achieved with timbre irregularities alone. NEW & NOTEWORTHY Streams of pitchless noise segments varying in timbre were classified as music-like, and the EEG they induced exhibited a right-lateralized processing hierarchy similar to that of pitched melodies. This study provides evidence that the neural code of musicality is independent of pitch encoding. The results have implications for understanding music processing in individuals with degraded pitch perception, such as cochlear-implant listeners, as well as the role of nonpitched sounds in the induction of music-like perceptual states.
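The θ phase-locking reported above is typically quantified as inter-trial phase coherence (ITPC): the length of the resultant vector of per-trial phases at a target frequency. The abstract does not give the estimator, so this is a minimal sketch assuming a single-frequency, FFT-based phase estimate (all signal parameters are illustrative):

```python
import numpy as np

def itpc_at_freq(trials, fs, freq):
    """Inter-trial phase coherence at one frequency.

    trials : (n_trials, n_samples) array of EEG epochs.
    Returns a value in [0, 1]; 1 means identical phase on every trial.
    """
    n_samples = trials.shape[1]
    spectrum = np.fft.rfft(trials, axis=1)        # per-trial spectra
    bin_idx = int(round(freq * n_samples / fs))   # FFT bin nearest to freq
    phases = np.angle(spectrum[:, bin_idx])       # phase of each trial
    return np.abs(np.mean(np.exp(1j * phases)))   # resultant vector length

# Phase-locked 6-Hz (theta) trials give ITPC near 1; random phases push it
# toward 1/sqrt(n_trials).
fs, n = 250.0, 500
t = np.arange(n) / fs
rng = np.random.default_rng(0)
locked = np.array([np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(n)
                   for _ in range(40)])
print(itpc_at_freq(locked, fs, 6.0))
```

In practice, time-resolved ITPC from wavelet or Hilbert phases is more common, and musician-vs-nonmusician contrasts would compare ITPC values across groups with an appropriate statistic.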


Subjects
Cochlear Implants, Music, Humans, Pitch Perception/physiology, Auditory Perception/physiology, Sound, Acoustic Stimulation
3.
Sci Rep ; 13(1): 7154, 2023 05 02.
Article in English | MEDLINE | ID: mdl-37130838

ABSTRACT

Procedures used to elicit both behavioral and neurophysiological data to address a particular cognitive question can impact the nature of the data collected. We used functional near-infrared spectroscopy (fNIRS) to assess performance of a modified finger tapping task in which participants performed synchronized or syncopated tapping relative to a metronomic tone. Both versions of the tapping task included a pacing phase (tapping with the tone) followed by a continuation phase (tapping without the tone). Both behavioral and brain-based findings revealed two distinct timing mechanisms underlying the two forms of tapping. Here we investigate the impact of an additional-and extremely subtle-manipulation of the study's experimental design. We measured responses in 23 healthy adults as they performed the two versions of the finger-tapping tasks either blocked by tapping type or alternating from one to the other type during the course of the experiment. As in our previous study, behavioral tapping indices and cortical hemodynamics were monitored, allowing us to compare results across the two study designs. Consistent with previous findings, results reflected distinct, context-dependent parameters of the tapping. Moreover, our results demonstrated a significant impact of study design on rhythmic entrainment in the presence/absence of auditory stimuli. Tapping accuracy and hemodynamic responsivity collectively indicate that the block design context is preferable for studying action-based timing behavior.


Subjects
Fingers, Hemodynamics, Adult, Humans, Fingers/physiology, Psychomotor Performance/physiology
4.
Otolaryngol Head Neck Surg ; 169(5): 1290-1298, 2023 11.
Article in English | MEDLINE | ID: mdl-37078337

ABSTRACT

OBJECTIVE: Untreated sleep-disordered breathing (SDB) is associated with problem behaviors in children. The neurological basis for this relationship is unknown. We used functional near-infrared spectroscopy (fNIRS) to assess the relationship between cerebral hemodynamics of the frontal lobe of the brain and problem behaviors in children with SDB. STUDY DESIGN: Cross-sectional. SETTING: Urban tertiary care academic children's hospital and affiliated sleep center. METHODS: We enrolled children with SDB aged 5 to 16 years referred for polysomnography. We measured fNIRS-derived cerebral hemodynamics within the frontal lobe during polysomnography. We assessed parent-reported problem behaviors using the Behavior Rating Inventory of Executive Function, Second Edition (BRIEF-2). We compared the relationships between (i) the instability in cerebral perfusion in the frontal lobe measured with fNIRS, (ii) SDB severity using the apnea-hypopnea index (AHI), and (iii) BRIEF-2 clinical scales using Pearson correlation (r). A p < .05 was considered significant. RESULTS: A total of 54 children were included. The average age was 7.8 (95% confidence interval, 7.0-8.7) years; 26 (48%) were boys and 25 (46%) were Black. The mean AHI was 9.9 (5.7-14.1). There was a statistically significant inverse relationship between the coefficient of variation of perfusion in the frontal lobe and BRIEF-2 clinical scales (range of r = 0.24-0.49, range of p = .076 to <.001). The correlations between AHI and BRIEF-2 scales were not statistically significant. CONCLUSION: These results provide preliminary evidence for fNIRS as a child-friendly biomarker for the assessment of adverse outcomes of SDB.
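The perfusion-instability index used above is a coefficient of variation (CV = standard deviation / mean), correlated against behavioral scale scores with Pearson's r. A minimal sketch of that computation (the cohort data here are synthetic, not the study's):

```python
import numpy as np

def perfusion_cv(signal):
    """Coefficient of variation (std / mean): a unitless instability index."""
    signal = np.asarray(signal, dtype=float)
    return np.std(signal) / np.mean(signal)

# Synthetic cohort: 54 perfusion traces with varying instability, plus a
# fabricated behavior score that decreases as instability rises.
rng = np.random.default_rng(4)
cv = np.array([perfusion_cv(100 + s * rng.standard_normal(500))
               for s in rng.uniform(1, 10, 54)])
score = 60 - 40 * cv + 0.5 * rng.standard_normal(54)
r = np.corrcoef(cv, score)[0, 1]   # Pearson r; clearly negative here
print(r)
```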


Subjects
Problem Behavior, Sleep Apnea Syndromes, Male, Humans, Child, Child Preschool, Adolescent, Female, Cross-Sectional Studies, Sleep Apnea Syndromes/complications, Hemodynamics
5.
Brain Sci ; 13(3)2023 Mar 19.
Article in English | MEDLINE | ID: mdl-36979322

ABSTRACT

Recent studies have questioned past conclusions regarding the mechanisms of the McGurk illusion, especially how McGurk susceptibility might inform our understanding of audiovisual (AV) integration. We previously proposed that the McGurk illusion is likely attributable to a default mechanism, whereby either the visual system, auditory system, or both default to specific phonemes-those implicated in the McGurk illusion. We hypothesized that the default mechanism occurs because visual stimuli with an indiscernible place of articulation (like those traditionally used in the McGurk illusion) lead to an ambiguous perceptual environment and thus a failure in AV integration. In the current study, we tested the default hypothesis as it pertains to the auditory system. Participants performed two tasks. One task was a typical McGurk illusion task, in which individuals listened to auditory-/ba/ paired with visual-/ga/ and judged what they heard. The second task was an auditory-only task, in which individuals transcribed trisyllabic words with a phoneme replaced by silence. We found that individuals' transcription of missing phonemes often defaulted to '/d/t/th/', the same phonemes often experienced during the McGurk illusion. Importantly, individuals' default rate was positively correlated with their McGurk rate. We conclude that the McGurk illusion arises when people fail to integrate visual percepts with auditory percepts, due to visual ambiguity, thus leading the auditory system to default to phonemes often implicated in the McGurk illusion.

6.
Neurophotonics ; 9(Suppl 2): S24001, 2022 Aug.
Article in English | MEDLINE | ID: mdl-36052058

ABSTRACT

This report is the second part of a comprehensive two-part series aimed at reviewing an extensive and diverse toolkit of novel methods to explore brain health and function. While the first report focused on neurophotonic tools mostly applicable to animal studies, here, we highlight optical spectroscopy and imaging methods relevant to noninvasive human brain studies. We outline current state-of-the-art technologies and software advances, explore the most recent impact of these technologies on neuroscience and clinical applications, identify the areas where innovation is needed, and provide an outlook for the future directions.

7.
Neurophotonics ; 9(3): 035003, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35990173

ABSTRACT

Significance: Resting-state functional connectivity (RSFC) analyses of functional near-infrared spectroscopy (fNIRS) data reveal cortical connections and networks across the brain. Motion artifacts and systemic physiology in evoked fNIRS signals present unique analytical challenges, and methods that control for systemic physiological noise have been explored. Whether these same methods require modification when applied to resting-state fNIRS (RS-fNIRS) data remains unclear. Aim: We systematically examined the sensitivity and specificity of several RSFC analysis pipelines to identify the best methods for correcting global systemic physiological signals in RS-fNIRS data. Approach: Using numerically simulated RS-fNIRS data, we compared the rates of true and false positives for several connectivity analysis pipelines. Their performance was scored using receiver operating characteristic analysis. Pipelines included partial correlation and multivariate Granger causality, with and without short-separation measurements, and a modified multivariate causality model that included a non-traditional zeroth-lag cross term. We also examined the effects of pre-whitening and robust statistical estimators on performance. Results: Consistent with previous work on bivariate correlation models, our results demonstrate that robust statistics and pre-whitening are effective methods to correct for motion artifacts and autocorrelation in the fNIRS time series. Moreover, we found that pre-filtering using principal components extracted from short-separation fNIRS channels as part of a partial correlation model was most effective in reducing spurious correlations due to shared systemic physiology when the two signals of interest fluctuated synchronously. However, when there was a temporal lag between the signals, a multivariate Granger causality test incorporating the short-separation channels was better. 
Since it is unknown whether such a lag exists in experimental data, we propose a modified version of Granger causality that includes the non-traditional zeroth-lag term as a compromise. Conclusions: A combination of pre-whitening, robust statistical methods, and partial correlation in the processing pipeline to reduce autocorrelation, motion artifacts, and global physiology is suggested for obtaining statistically valid connectivity metrics with RS-fNIRS. Further studies should validate the effectiveness of these methods using human data.
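The best-performing pipeline described above, partial correlation that pre-filters using components of short-separation channels, can be sketched as follows. The simulation is a toy stand-in for the paper's numerical data: two long channels share a "systemic" signal that a short-separation channel records almost directly.

```python
import numpy as np

def partial_corr_with_nuisance(x, y, nuisance):
    """Correlate x and y after regressing nuisance series out of both.

    nuisance : (n_samples, k) matrix, e.g. principal components of
    short-separation fNIRS channels capturing systemic physiology.
    """
    Q, _ = np.linalg.qr(np.column_stack([np.ones(len(x)), nuisance]))
    x_res = x - Q @ (Q.T @ x)   # residual after projecting out nuisance + mean
    y_res = y - Q @ (Q.T @ y)
    return np.corrcoef(x_res, y_res)[0, 1]

# Two long channels correlated only through shared systemic physiology
rng = np.random.default_rng(1)
n = 1000
systemic = rng.standard_normal(n)
ch1 = systemic + 0.3 * rng.standard_normal(n)
ch2 = systemic + 0.3 * rng.standard_normal(n)
short_sep = systemic[:, None] + 0.05 * rng.standard_normal((n, 1))
print(np.corrcoef(ch1, ch2)[0, 1])                      # high, spurious
print(partial_corr_with_nuisance(ch1, ch2, short_sep))  # greatly reduced
```

With several short-separation channels, the leading principal components of those channels would form the nuisance matrix; pre-whitening and robust estimators would be applied before this step.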

8.
iScience ; 25(7): 104671, 2022 Jul 15.
Article in English | MEDLINE | ID: mdl-35845168

ABSTRACT

Previous work addressing the influence of audition on visual perception has mainly been assessed using non-speech stimuli. Herein, we introduce the Audiovisual Time-Flow Illusion in spoken language, underscoring the role of audition in multisensory processing. When brief pauses were inserted into or brief portions were removed from an acoustic speech stream, individuals perceived the corresponding visual speech as "pausing" or "skipping", respectively-even though the visual stimulus was intact. When the stimulus manipulation was reversed-brief pauses were inserted into, or brief portions were removed from the visual speech stream-individuals failed to perceive the illusion in the corresponding intact auditory stream. Our findings demonstrate that in the context of spoken language, people continually realign the pace of their visual perception based on that of the auditory input. In short, the auditory modality sets the pace of the visual modality during audiovisual speech processing.

9.
Pediatrics ; 149(6)2022 06 01.
Article in English | MEDLINE | ID: mdl-35607935

ABSTRACT

BACKGROUND AND OBJECTIVES: Infants with profound hearing loss are typically considered for cochlear implantation. Many insurance providers deny implantation to children with developmental impairments because they have limited potential to acquire verbal communication. We took advantage of differing insurance coverage restrictions to compare outcomes after cochlear implantation or continued hearing aid use. METHODS: Young children with deafness were identified prospectively from 2 different states, Texas and California, and followed longitudinally for an average of 2 years. Children in cohort 1 (n = 138) had normal cognition and adaptive behavior and underwent cochlear implantation. Children in cohorts 2 (n = 37) and 3 (n = 29) had low cognition and low adaptive behavior. Those in cohort 2 underwent cochlear implantation, whereas those in cohort 3 were treated with hearing aids. RESULTS: Cohorts did not substantially differ in demographic characteristics. Using cohort 2 as the reference, children in cohort 1 showed more rapid gains in cognitive, adaptive function, language, and auditory skills (estimated coefficients, 0.166 to 0.403; P ≤ .001), whereas children in cohort 3 showed slower gains (-0.119 to -0.243; P ≤ .04). Children in cohort 3 also had greater increases in stress within the parent-child system (1.328; P = .02), whereas cohorts 1 and 2 were not different. CONCLUSIONS: Cochlear implantation benefits children with deafness and developmental delays. This finding has health policy implications not only for private insurers but also for large, statewide, publicly administered programs. Cognitive and adaptive skills should not be used as a "litmus test" for pediatric cochlear implantation.


Subjects
Cochlear Implantation, Cochlear Implants, Deafness, Hearing Aids, Speech Perception, Child, Child Preschool, Deafness/psychology, Developmental Disabilities/surgery, Humans, Infant, Language Development
10.
Infant Behav Dev ; 63: 101566, 2021 05.
Article in English | MEDLINE | ID: mdl-33894632

ABSTRACT

Parent-child interactions support the development of a wide range of socio-cognitive abilities in young children. As infants become increasingly mobile, the nature of these interactions changes from person-oriented to object-oriented, with the latter relying on children's emerging ability to engage in joint attention. Joint attention is acknowledged to be a foundational ability in early child development, broadly speaking, yet its operationalization has varied substantially over the course of several decades of developmental research devoted to its characterization. Here, we outline two broad research perspectives-social and associative accounts-on what constitutes joint attention. Differences center on the criteria for what qualifies as joint attention and on the hypothesized developmental mechanisms that underlie the ability. After providing a theoretical overview, we introduce a joint attention coding scheme that we have developed iteratively based on careful reading of the literature and our own data-coding experiences. This coding scheme provides objective guidelines for characterizing multimodal parent-child interactions. The need for such guidelines is acute given the widespread use of this and other developmental measures to assess atypically developing populations. We conclude with a call for open discussion about the need for researchers to include a clear description of what qualifies as joint attention in publications pertaining to joint attention, as well as details about their coding. We provide instructions for using our coding scheme in the service of starting such a discussion.


Subjects
Attention, Child Development, Child Preschool, Humans, Infant, Parent-Child Relations
11.
Brain Sci ; 11(1)2021 Jan 09.
Article in English | MEDLINE | ID: mdl-33435472

ABSTRACT

A debate over the past decade has focused on the so-called bilingual advantage-the idea that bilingual and multilingual individuals have enhanced domain-general executive functions, relative to monolinguals, due to competition-induced monitoring of both processing and representation from the task-irrelevant language(s). In this commentary, we consider a recent study by Pot, Keijzer, and de Bot (2018), which focused on the relationship between individual differences in language usage and performance on an executive function task among multilingual older adults. We discuss their approach and findings in light of a more general movement towards embracing complexity in this domain of research, including individuals' sociocultural context and position in the lifespan. The field increasingly considers interactions between bilingualism/multilingualism and cognition, employing measures of language use well beyond the early dichotomous perspectives on language background. Moreover, new measures of bilingualism and analytical approaches are helping researchers interrogate the complexities of specific processing issues. Indeed, our review of the bilingualism/multilingualism literature confirms the increased appreciation researchers have for the range of factors-beyond whether someone speaks one, two, or more languages-that impact specific cognitive processes. Here, we highlight some of the most salient of these, and incorporate suggestions for a way forward that likewise encompasses neural perspectives on the topic.

12.
Brain Sci ; 10(11)2020 Nov 02.
Article in English | MEDLINE | ID: mdl-33147691

ABSTRACT

The efficacy of audiovisual (AV) integration is reflected in the degree of cross-modal suppression of the auditory event-related potentials (ERPs, P1-N1-P2), while stronger semantic encoding is reflected in enhanced late ERP negativities (e.g., N450). We hypothesized that increasing visual stimulus reliability should lead to more robust AV integration and enhanced semantic prediction, reflected in suppression of auditory ERPs and an enhanced N450, respectively. EEG was acquired while individuals watched and listened to clear and blurred videos of a speaker uttering intact or highly intelligible degraded (vocoded) words and made binary judgments about word meaning (animate or inanimate). We found that intact speech evoked larger negativity between 280-527 ms than vocoded speech, suggestive of more robust semantic prediction for the intact signal. For visual reliability, we found that greater cross-modal ERP suppression occurred for clear than blurred videos prior to sound onset and for the P2 ERP. Additionally, the later semantic-related negativity tended to be larger for clear than blurred videos. These results suggest that the cross-modal effect is largely confined to suppression of early auditory networks, with a weak effect on networks associated with semantic prediction. However, the semantic-related visual effect on the late negativity may have been tempered by the vocoded signal's high reliability.

13.
Discourse Process ; 57(5-6): 491-506, 2020.
Article in English | MEDLINE | ID: mdl-32669749

ABSTRACT

In the current study, we examine how hearing parents use multimodal cuing to establish joint attention with their hearing (N=9) or deaf (N=9) children during a free-play session. The deaf children were all candidates for cochlear implantation who had not yet been implanted, and each hearing child was age-matched to a deaf child. We coded parents' use of auditory, visual, and tactile cues, alone and in different combinations, during both successful and failed bids for children's attention. Although our findings revealed no clear quantitative differences in parents' use of multimodal cues as a function of child hearing status, secondary analyses revealed that hearing parents of deaf children used shorter utterances while initiating joint attention than did hearing parents of hearing children. Hearing parents of deaf children also touched their children twice as often throughout the play session as did hearing parents of hearing children. These findings demonstrate that parents differentially accommodate the specific needs of their hearing and deaf children in subtle ways to establish communicative intent.

14.
J Neurosci Methods ; 341: 108790, 2020 07 15.
Article in English | MEDLINE | ID: mdl-32442439

ABSTRACT

Functional near-infrared spectroscopy (fNIRS) provides an alternative to functional magnetic resonance imaging (fMRI) for assessing changes in cortical hemodynamics. To establish the utility of fNIRS for measuring differential recruitment of the motor network during the production of timing-based actions, we measured cortical hemodynamic responses in 10 healthy adults while they performed two versions of a finger-tapping task. The task, used in an earlier fMRI study (Jantzen et al., 2004), was designed to track the neural basis of different timing behaviors. Participants paced their tapping to a metronomic tone, then continued tapping at the established pace without the tone. Initial tapping was either synchronous or syncopated relative to the tone. This produced a 2 × 2 design: synchronous or syncopated tapping and pacing the tapping with or continuing without a tone. Accuracy of the timing of tapping was tracked while cortical hemodynamics were monitored using fNIRS. Hemodynamic responses were computed by canonical statistical analysis across trials in each of the four conditions. Task-induced brain activation resulted in significant increases in oxygenated hemoglobin concentration (oxy-Hb) in a broad region in and around the motor cortex. Overall, syncopated tapping was harder behaviorally and produced more cortical activation than synchronous tapping. Thus, we observed significant changes in oxy-Hb in direct relation to the complexity of the task.
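The "canonical statistical analysis" above is, as in fMRI, typically a general linear model: a task boxcar convolved with a canonical hemodynamic response function (HRF) is fit to the measured signal, and the regression weight indexes activation. A minimal sketch follows; the sampling rate, trial timing, and double-gamma HRF parameters are generic assumptions, not the study's values:

```python
import numpy as np
from math import gamma

def canonical_hrf(t, p1=6.0, p2=16.0, ratio=1 / 6.0):
    """Double-gamma canonical HRF (SPM-like shape; parameters assumed)."""
    h = (t ** (p1 - 1) * np.exp(-t) / gamma(p1)
         - ratio * t ** (p2 - 1) * np.exp(-t) / gamma(p2))
    return h / h.max()

fs = 5.0                                   # assumed fNIRS sampling rate (Hz)
hrf = canonical_hrf(np.arange(0, 30, 1 / fs))

# 60-s run with one 15-s tapping block starting at t = 5 s
n = int(60 * fs)
box = np.zeros(n)
box[int(5 * fs):int(20 * fs)] = 1.0
design = np.convolve(box, hrf)[:n]         # predicted oxy-Hb time course

# Fit measured signal to [design, intercept] by least squares
rng = np.random.default_rng(3)
measured = 2.0 * design + 0.5 + 0.1 * rng.standard_normal(n)
X = np.column_stack([design, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, measured, rcond=None)
print(beta[0])   # recovered activation amplitude, close to 2.0
```

A full analysis would add one regressor per condition and contrast the resulting betas across the four tapping conditions.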


Subjects
Motor Cortex, Adult, Brain Mapping, Fingers, Humans, Magnetic Resonance Imaging, Near-Infrared Spectroscopy
15.
IEEE Trans Cogn Dev Syst ; 12(2): 243-249, 2020 Jun.
Article in English | MEDLINE | ID: mdl-33748419

ABSTRACT

Here we characterize establishment of joint attention in hearing parent-deaf child dyads and hearing parent-hearing child dyads. Deaf children were candidates for cochlear implantation who had not yet been implanted and who had no exposure to formal manual communication (e.g., American Sign Language). Because many parents whose deaf children go through early cochlear implant surgery do not themselves know a visual language, these dyads do not share a formal communication system based in a common sensory modality prior to the child's implantation. Joint attention episodes were identified during free play between hearing parents and their hearing children (N = 4) and hearing parents and their deaf children (N = 4). Attentional episode types included successful parent-initiated joint attention, unsuccessful parent-initiated joint attention, passive attention, successful child-initiated joint attention, and unsuccessful child-initiated joint attention. Group differences emerged in both successful and unsuccessful parent-initiated attempts at joint attention, parent passive attention, and successful child-initiated attempts at joint attention based on proportion of time spent in each. These findings highlight joint attention as an indicator of early communicative efficacy in parent-child interaction for different child populations. We discuss the active role parents and children play in communication, regardless of their hearing status.

16.
Dev Psychobiol ; 61(3): 430-443, 2019 04.
Article in English | MEDLINE | ID: mdl-30588618

ABSTRACT

Much of what is known about the course of auditory learning following cochlear implantation is based on behavioral indicators that users are able to perceive sound. Both prelingually deafened children and postlingually deafened adults who receive cochlear implants display highly variable speech and language processing outcomes, although the basis for this is poorly understood. To date, measuring neural activity within the auditory cortex of implant recipients of all ages has been challenging, primarily because the use of traditional neuroimaging techniques is limited by the implant itself. Functional near-infrared spectroscopy (fNIRS) is an imaging technology that works with implant users of all ages because it is non-invasive, compatible with implant devices, and not subject to electrical artifacts. Thus, fNIRS can provide insight into processing factors that contribute to variations in spoken language outcomes in implant users, both children and adults. There are important considerations to be made when using fNIRS, particularly with children, to maximize the signal-to-noise ratio and to best identify and interpret cortical responses. This review considers these issues, recent data, and future directions for using fNIRS as a tool to understand spoken language processing in children and adults who hear through a cochlear implant.


Subjects
Auditory Cortex/physiopathology, Cochlear Implants, Deafness/physiopathology, Near-Infrared Spectroscopy/methods, Speech Perception/physiology, Adult, Auditory Cortex/diagnostic imaging, Child, Deafness/diagnostic imaging, Humans, Near-Infrared Spectroscopy/standards
17.
J Speech Lang Hear Res ; 61(8): 1970-1988, 2018 08 08.
Article in English | MEDLINE | ID: mdl-30073268

ABSTRACT

Purpose: Deaf children are frequently reported to be at risk for difficulties in executive function (EF); however, the literature is divided over whether these difficulties are the result of deafness itself or of delays/deficits in language that often co-occur with deafness. The purpose of this study is to discriminate these hypotheses by assessing EF in populations where the 2 accounts make contrasting predictions. Method: We use a between-groups design involving 116 children, ages 5-12 years, across 3 groups: (a) participants with normal hearing (n = 45), (b) deaf native signers who had access to American Sign Language from birth (n = 45), and (c) oral cochlear implant users who did not have full access to language prior to cochlear implantation (n = 26). Measures include both parent report and performance-based assessments of EF. Results: Parent report results suggest that early access to language has a stronger impact on EF than early access to sound. Performance-based results trended in a similar direction, but no between-group differences were significant. Conclusions: These results indicate that healthy EF skills do not require audition and therefore that difficulties in this domain do not result primarily from a lack of auditory experience. Instead, results are consistent with the hypothesis that language proficiency, whether in sign or speech, is crucial for the development of healthy EF. Further research is needed to test whether sign language proficiency also confers benefits to deaf children from hearing families.


Subjects
Auditory Perception/physiology, Child Language, Deafness/psychology, Executive Function/physiology, Persons With Hearing Impairments/psychology, Child, Child Preschool, Cochlear Implantation, Cochlear Implants, Deafness/surgery, Female, Humans, Male, Sign Language
18.
Dev Sci ; 21(3): e12575, 2018 May.
Article in English | MEDLINE | ID: mdl-28557278

ABSTRACT

Developmental psychology plays a central role in shaping evidence-based best practices for prelingually deaf children. The Auditory Scaffolding Hypothesis (Conway et al., 2009) asserts that a lack of auditory stimulation in deaf children leads to impoverished implicit sequence learning abilities, measured via an artificial grammar learning (AGL) task. However, prior research is confounded by a lack of both auditory and language input. The current study examines implicit learning in deaf children who were (Deaf native signers) or were not (oral cochlear implant users) exposed to language from birth, and in hearing children, using both AGL and Serial Reaction Time (SRT) tasks. Neither deaf nor hearing children across the three groups show evidence of implicit learning on the AGL task, but all three groups show robust implicit learning on the SRT task. These findings argue against the Auditory Scaffolding Hypothesis, and suggest that implicit sequence learning may be resilient to both auditory and language deprivation, within the tested limits. A video abstract of this article can be viewed at: https://youtu.be/EeqfQqlVHLI [Correction added on 07 August 2017, after first online publication: The video abstract link was added.].


Subjects
Deafness/physiopathology, Language Development, Learning/physiology, Child, Cochlear Implants, Female, Humans, Language, Language Tests, Linguistics, Male
19.
J Deaf Stud Deaf Educ ; 22(1): 9-21, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27624307

ABSTRACT

Deaf children are often described as having difficulty with executive function (EF), often manifesting in behavioral problems. Some researchers view these problems as a consequence of auditory deprivation; however, the behavioral problems observed in previous studies may not be due to deafness but to some other factor, such as lack of early language exposure. Here, we distinguish these accounts by using the BRIEF EF parent report questionnaire to test for behavioral problems in a group of Deaf children from Deaf families, who have a history of auditory but not language deprivation. For these children, the auditory deprivation hypothesis predicts behavioral impairments; the language deprivation hypothesis predicts no group differences in behavioral control. Results indicated that scores among the Deaf native signers (n = 42) were age-appropriate and similar to scores among the typically developing hearing sample (n = 45). These findings are most consistent with the language deprivation hypothesis, and provide a foundation for continued research on outcomes of children with early exposure to sign language.


Subjects
Deafness/physiopathology, Executive Function/physiology, Sensory Deprivation/physiology, Sign Language, Adolescent, Child, Child Behavior Disorders/etiology, Child Preschool, Female, Hearing/physiology, Humans, Male, Risk Factors
20.
Biomed Opt Express ; 7(12): 5104-5119, 2016 Dec 01.
Article in English | MEDLINE | ID: mdl-28018728

ABSTRACT

Recent functional near-infrared spectroscopy (fNIRS) instrumentation encompasses several dozen optodes, enabling reconstruction of a hemodynamic image of the entire cerebral cortex. Despite its potential clinical applicability, widespread use of fNIRS with human subjects is currently limited by unresolved issues, namely collecting signals with a signal-to-noise ratio (SNR) sufficient for reliable estimation of cortical hemodynamics from every optical channel, and the considerable amount of time that placing numerous optodes takes with individuals for whom achieving good optical coupling to the scalp is difficult due to thick or dark hair. To address these issues, we developed a numerical method that: 1) at the channel level, computes an objective measure of the SNR related to its optical coupling to the scalp, akin to electrode conductivity used in electroencephalography (EEG), and 2) at the optode level, determines and displays the coupling status of all individual optodes in real time on a model of a human head. This approach aims to shorten the pre-acquisition preparation time by visually displaying which optodes require further adjustment for optimum scalp coupling, and to maximize the SNR of all optical channels contributing to the functional hemodynamic mapping. The methodology described in this paper has been implemented in a software tool named PHOEBE (placing headgear optodes efficiently before experimentation) that is freely available for use by the fNIRS community.
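PHOEBE's published coupling metric is not spelled out in this abstract; a common proxy for scalp-coupling quality is the fraction of signal power in the cardiac band, since a well-coupled fNIRS channel shows a clear heartbeat while a decoupled one is dominated by broadband noise. A sketch under that assumption (band limits and test signals are illustrative):

```python
import numpy as np

def coupling_quality(signal, fs, band=(0.8, 2.5)):
    """Fraction of (non-DC) spectral power inside the cardiac band.

    An illustrative coupling proxy, not PHOEBE's published algorithm.
    Returns a score in [0, 1]; higher suggests better scalp coupling.
    """
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()
    psd = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return psd[in_band].sum() / psd[1:].sum()   # exclude the DC bin

fs, n = 25.0, 1000
t = np.arange(n) / fs
rng = np.random.default_rng(2)
good = np.sin(2 * np.pi * 1.2 * t) + 0.2 * rng.standard_normal(n)  # pulse visible
bad = rng.standard_normal(n)                                       # coupling lost
print(coupling_quality(good, fs), coupling_quality(bad, fs))
```

A real-time tool would compute such a score per channel on a sliding window and aggregate channel scores per optode to flag which optodes need adjustment.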
