Results 1-20 of 2,131
1.
Psychon Bull Rev ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38955989

ABSTRACT

This study tested the hypothesis that speaking with other voices can influence sensorimotor predictions of one's own voice. Real-time manipulations of auditory feedback were used to drive sensorimotor adaptation in speech, while participants spoke sentences in synchrony with another voice, a task known to induce implicit imitation (phonetic convergence). The acoustic-phonetic properties of the other voice were manipulated between groups, such that convergence with it would either oppose (incongruent group, n = 15) or align with (congruent group, n = 16) speech motor adaptation. As predicted, significantly greater adaptation was seen in the congruent compared to the incongruent group. This suggests the use of shared sensory targets in speech for predicting the sensory outcomes of both the actions of others (speech perception) and the actions of the self (speech production). This finding has important implications for wider theories of shared predictive mechanisms across perception and action, such as active inference.

2.
Clin Linguist Phon ; : 1-19, 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38965823

ABSTRACT

This study explores the influence of lexicality on gradient judgments of Swedish sibilant fricatives by contrasting ratings of initial fricatives in words and word fragments (initial CV syllables). Visual Analogue Scale (VAS) judgments were elicited from experienced listeners (speech-language pathologists; SLPs) and inexperienced listeners and compared with respect to the effects of lexicality using Bayesian mixed-effects beta regression. Overall, SLPs had higher intra- and interrater reliability than inexperienced listeners. SLPs as a group also rated fricatives as more target-like, with higher precision, than did inexperienced listeners. An effect of lexicality was observed for every individual listener, though its magnitude varied. Although SLPs' ratings of Swedish children's initial voiceless fricatives were less influenced by lexicality, our results indicate that previous findings concerning VAS ratings of non-lexical CV syllables cannot be directly transferred to the clinical context without consideration of possible lexical bias.
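The analysis method named here (Bayesian mixed-effects beta regression on VAS ratings) can be made concrete with a minimal sketch. The paper's software and model specification are not given in the abstract, so everything below (PyMC, the predictor names, priors, and simulated data) is an illustrative assumption, not the study's actual model:

```python
# Minimal sketch of a Bayesian mixed-effects beta regression for VAS ratings
# in (0, 1), with a fixed effect of lexicality and per-listener random
# intercepts. All names, priors, and data are illustrative assumptions.
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n_listeners, n_trials = 20, 40
listener = np.repeat(np.arange(n_listeners), n_trials)
lexical = rng.integers(0, 2, listener.size)            # 0 = CV fragment, 1 = word
# Simulated ratings, squeezed into the open interval (0, 1) for the Beta likelihood
y = np.clip(rng.beta(4, 2, listener.size) + 0.05 * lexical, 1e-3, 1 - 1e-3)

with pm.Model() as model:
    b0 = pm.Normal("intercept", 0, 1.5)
    b_lex = pm.Normal("lexicality", 0, 1)              # fixed effect: word vs. fragment
    sd_l = pm.HalfNormal("sd_listener", 1)
    z = pm.Normal("z", 0, 1, shape=n_listeners)        # non-centered random intercepts
    mu = pm.math.invlogit(b0 + b_lex * lexical + sd_l * z[listener])
    phi = pm.HalfNormal("precision", 10)               # Beta precision parameter
    pm.Beta("vas", alpha=mu * phi, beta=(1 - mu) * phi, observed=y)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```

The mean-precision parameterization (alpha = mu*phi, beta = (1-mu)*phi) is the standard way to express a beta regression; the precision term is what lets the model capture the "higher precision" difference between listener groups reported above.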

3.
Disabil Rehabil Assist Technol ; : 1-7, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38976231

ABSTRACT

Purpose: The study examined the benefits of transparent versus non-transparent surgical masks for the speech intelligibility in quiet of adult cochlear implant (CI) users, in conjunction with patient preferences and the acoustic effects of the different masks on the speech signal. Methods: Speech tracking test (STT) scores and acoustic characteristics were measured in quiet for live speech in three conditions: without a mask, with a non-transparent surgical mask, and with a transparent surgical mask. Patients were asked about their experience with the face masks. The study sample consisted of 30 patients using a cochlear implant. Results: We found a significant difference in speech perception among all conditions, with the speech tracking scores revealing a significant advantage when switching from the non-transparent surgical mask to the transparent one. Although the transparent surgical mask does not transmit high frequencies effectively, it appears to have minimal effect on speech comprehension in practice when lip movements are visible. This benefit is further emphasized in the questionnaire, where 82% of the patients expressed a preference for the transparent surgical mask. Conclusion: The study highlights significant benefits for patients in speech intelligibility in quiet with the use of medically safe transparent face masks. Transitioning from standard surgical masks to transparent masks demonstrated high effectiveness and patient satisfaction for patients with hearing loss. This research strongly advocates for the implementation of transparent masks in broader hospital and perioperative settings.


In scenarios mandating mask usage, it is advisable for caregivers to opt for transparent surgical masks. Specifically within perioperative settings, where patients might not be able to utilise their hearing aids or cochlear implants, it becomes imperative for all caregivers to consistently wear transparent surgical masks to prevent communication impediments. When utilising a transparent surgical mask, caregivers must recognise that sound may be altered and that maintaining a clear view of the face and lips is crucial for effective communication.

4.
Int Arch Otorhinolaryngol ; 28(3): e492-e501, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38974629

ABSTRACT

Introduction: Limited access to temporal fine structure (TFS) cues is one reason for reduced speech-in-noise recognition in cochlear implant (CI) users. CI signal processing schemes such as electroacoustic stimulation (EAS) and fine structure processing (FSP) encode TFS only in the low frequencies, whereas theoretical strategies such as the frequency amplitude modulation encoder (FAME) encode TFS in all bands. Objective: The present study compared the effect of simulated CI signal processing schemes that encode no TFS, TFS in all bands, or TFS only in low-frequency bands on concurrent vowel identification (CVI) and Zebra speech perception (ZSP). Methods: TFS information was systematically manipulated using a 30-band sine-wave vocoder (SV). The TFS was either absent (SV), present in all bands as frequency modulations simulating the FAME algorithm, or present only in bands below 525 Hz to simulate EAS. Concurrent vowel identification and ZSP were measured under each condition in 15 adults with normal hearing. Results: The CVI scores did not differ between the three schemes (F(2, 28) = 0.62, p = 0.55, η²p = 0.04). An effect of encoding TFS was observed for ZSP (F(2, 28) = 5.73, p = 0.008, η²p = 0.29). Perception of Zebra speech was significantly better with EAS and FAME than with SV, and there was no significant difference between the ZSP scores obtained with EAS and FAME (p = 1.00). Conclusion: For ZSP, the TFS cues from FAME and EAS yielded equivalent improvements in performance compared with the SV scheme. The presence or absence of TFS did not affect CVI scores.
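As a rough illustration of the sine-wave vocoding described in the Methods, here is a minimal one-function sketch: band-pass the signal, keep only each band's Hilbert envelope, and re-synthesize on fixed sine carriers, which discards TFS exactly as in the SV condition. The band edges, filter order, and frequency range below are assumptions, not the study's parameters, and the FAME/EAS conditions would additionally re-insert frequency modulation (not shown):

```python
# Sketch of a sine-wave (tone) vocoder: per-band envelope extraction and
# re-synthesis on fixed sine carriers, which discards temporal fine structure.
# Assumes a sampling rate high enough for the chosen band range (e.g., 16 kHz).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def sine_vocode(x, fs, n_bands=30, lo=80.0, hi=7000.0):
    edges = np.geomspace(lo, hi, n_bands + 1)    # log-spaced band edges (assumption)
    t = np.arange(len(x)) / fs
    out = np.zeros(len(x))
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))              # slow amplitude envelope of the band
        fc = np.sqrt(f1 * f2)                    # carrier at the band's geometric center
        out += env * np.sin(2 * np.pi * fc * t)  # pure-tone carrier: TFS is discarded
    return out
```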

5.
Neuroimage ; 297: 120696, 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38909761

ABSTRACT

How is information processed in the cerebral cortex? In most cases, recorded brain activity is averaged over many (stimulus) repetitions, which erases the fine structure of the neural signal. The brain, however, is obviously a single-trial processor. We therefore demonstrate that an unsupervised machine learning approach can extract meaningful information from electrophysiological recordings on a single-trial basis. We use an autoencoder network to reduce the dimensions of single local field potential (LFP) events and create interpretable clusters of different neural activity patterns. Strikingly, certain LFP shapes correspond to latency differences between recording channels; LFP shapes can therefore be used to determine the direction of information flow in the cerebral cortex. Furthermore, after clustering, we decoded the cluster centroids to reverse-engineer the underlying prototypical LFP event shapes. To evaluate our approach, we applied it both to extracellular neural recordings in rodents and to intracranial EEG recordings in humans. Finally, we find that single-channel LFP event shapes during spontaneous activity sample from the realm of possible stimulus-evoked event shapes, a finding that had previously been demonstrated only for multi-channel population coding.
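A minimal sketch of the pipeline this abstract describes, assuming a plain fully connected autoencoder and k-means on the latent codes (the paper's actual architecture, event length, and cluster count are not stated in the abstract):

```python
# Sketch: autoencoder -> latent clustering -> decode centroids to recover
# prototypical LFP event shapes. Sizes and data are illustrative assumptions.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

T, LATENT = 256, 8                       # samples per LFP event, latent dim (assumed)
enc = nn.Sequential(nn.Linear(T, 64), nn.ReLU(), nn.Linear(64, LATENT))
dec = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, T))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

events = torch.randn(5000, T)            # stand-in for detected single-trial LFP events
for _ in range(200):                     # full-batch reconstruction training
    opt.zero_grad()
    loss = nn.functional.mse_loss(dec(enc(events)), events)
    loss.backward()
    opt.step()

with torch.no_grad():
    z = enc(events).numpy()              # low-dimensional codes of single events
km = KMeans(n_clusters=10, n_init=10).fit(z)
centroids = torch.tensor(km.cluster_centers_, dtype=torch.float32)
with torch.no_grad():
    prototypes = dec(centroids)          # decoded centroids = prototypical LFP shapes
```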

6.
Physiol Behav ; 283: 114615, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38880296

ABSTRACT

This study set out to investigate the potential effect of males' testosterone level on speech production and speech perception. Regarding speech production, we investigated intra- and inter-individual variation in mean fundamental frequency (fo) and formant frequencies and highlight the potential interacting effect of another hormone, cortisol. In addition, we investigated the influence of different speech materials on the relationship between testosterone and speech production. Regarding speech perception, we investigated the potential effect of individual differences in males' testosterone level on ratings of the attractiveness of female voices. In the production study, data were gathered from 30 healthy adult males aged 19 to 27 years (mean age: 22.4, SD: 2.2) who recorded their voices and provided saliva samples at 9 am, 12 noon, and 3 pm on a single day. The speech material consisted of sustained vowels, counting, read speech, and a free description of pictures. Biological measures comprised speakers' height, grip strength, and hormone levels (testosterone and cortisol). In the perception study, participants were asked to rate the attractiveness of female voice stimuli (sentence stimulus, same-speaker pairs) that were manipulated in three steps regarding mean fo and formant frequencies. Regarding speech production, our results show that testosterone affected mean fo (but not formants) both within and between speakers. This relationship was weakened in speakers with high cortisol levels and depended on the speech material. Regarding speech perception, we found female stimuli with higher mean fo and formants to be rated as more attractive than stimuli with lower mean fo and formants. Moreover, listeners with low testosterone showed increased sensitivity to vocal cues of female attractiveness. While the results of the production study support earlier findings of a relationship between testosterone and mean fo in males (mediated by cortisol), they also highlight the relevance of the speech material: the effect of testosterone was strongest in sustained vowels, potentially because hormones exert a stronger effect on physiologically constrained tasks such as sustained vowels than on freer tasks such as picture description. The perception study is the first to show an effect of males' testosterone level on female attractiveness ratings using voice stimuli.
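For readers unfamiliar with how a mean fo value is obtained from recordings, the following is a minimal frame-wise autocorrelation pitch tracker. The study's actual analysis software is not stated in the abstract; the search range, frame size, and voicing threshold below are illustrative assumptions for a male speech range:

```python
# Sketch: frame-wise autocorrelation fo estimation, averaged over voiced frames.
# Search range, frame/hop sizes, and the voicing threshold are assumptions.
import numpy as np

def mean_f0(x, fs, fmin=60.0, fmax=300.0, frame=0.04, hop=0.01):
    n, h = int(frame * fs), int(hop * fs)
    lo, hi = int(fs / fmax), int(fs / fmin)     # candidate pitch-period lags
    f0s = []
    for start in range(0, len(x) - n, h):
        w = x[start:start + n] * np.hanning(n)
        ac = np.correlate(w, w, mode="full")[n - 1:]   # one-sided autocorrelation
        if ac[0] <= 0:
            continue                            # silent frame
        lag = lo + int(np.argmax(ac[lo:hi]))
        if ac[lag] / ac[0] > 0.3:               # crude voicing check (assumed threshold)
            f0s.append(fs / lag)
    return float(np.mean(f0s)) if f0s else float("nan")
```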

7.
Hear Res ; 450: 109050, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38852534

ABSTRACT

Since the presence of tinnitus is not always associated with audiometric hearing loss, it has been hypothesized that hidden hearing loss may act as a potential trigger for increased central gain along the neural pathway, leading to tinnitus perception. In recent years, the study of hidden hearing loss has advanced with the discovery of cochlear synaptopathy and several objective diagnostic markers. This study investigated three potential markers of peripheral hidden hearing loss in subjects with tinnitus: extended high-frequency audiometric thresholds, the auditory brainstem response, and the envelope following response. In addition, speech intelligibility was measured as a functional outcome of hidden hearing loss. To account for age-related hidden hearing loss, participants were grouped according to age, presence of tinnitus, and audiometric thresholds. Group comparisons were conducted to differentiate between age- and tinnitus-related effects of hidden hearing loss. All three markers revealed age-related differences, whereas no differences were observed between the tinnitus and non-tinnitus groups. However, the older tinnitus group performed better on low-pass-filtered speech-in-noise tests than the older non-tinnitus group. These low-pass speech-in-noise scores were significantly correlated with tinnitus distress, as indexed by questionnaires, and could be related to the presence of hyperacusis. Based on our observations, cochlear synaptopathy does not appear to be the underlying cause of tinnitus. The improvement in low-pass speech-in-noise scores could be explained by enhanced temporal fine structure encoding or by hyperacusis. We therefore recommend that future tinnitus research take age-related factors into account, explore low-frequency encoding, and thoroughly assess hyperacusis.

8.
Front Psychol ; 15: 1383904, 2024.
Article in English | MEDLINE | ID: mdl-38873525

ABSTRACT

Perceptual difficulty with an unfamiliar accent can dissipate within short time scales (e.g., within minutes), reflecting rapid adaptation effects. At the same time, long-term familiarity with an accent is known to yield stable perceptual benefits. However, whether the long-term effects reflect sustained, cumulative progression from shorter-term adaptation remains unknown. To fill this gap, we developed a web-based, repeated exposure-test paradigm in which short test blocks alternate with exposure blocks and this exposure-test sequence is repeated multiple times. This design allows adaptive speech perception to be tested both (a) within the first moments of encountering an unfamiliar accent and (b) over longer time scales such as days and weeks. In addition, we used a Bayesian ideal observer approach to select natural speech stimuli that increase the statistical power to detect adaptation. The current report presents results from a first application of this paradigm, investigating changes in the recognition accuracy of Mandarin-accented speech by native English listeners over five sessions spanning three weeks. We found that the recognition of an accent feature (a syllable-final /d/, as in feed, sounding /t/-like) improved steadily over the three-week period. Unexpectedly, however, the improvement was seen with or without exposure to the accent. We discuss possible reasons for this result and implications for conducting future longitudinal studies with repeated exposure and testing.

9.
Lang Speech ; : 238309241258162, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38877720

ABSTRACT

Human communication is inherently multimodal: not only auditory speech but also visual cues can be used to understand another talker. Most studies of audiovisual speech perception have focused on the perception of speech segments (i.e., speech sounds), and less is known about the influence of visual information on the perception of suprasegmental aspects of speech such as lexical stress. In two experiments, we investigated the influence of different visual cues (facial articulatory cues and beat gestures) on the audiovisual perception of lexical stress. We presented auditory lexical stress continua of disyllabic Dutch stress pairs together with videos of a speaker producing stress on the first or second syllable (e.g., articulating VOORnaam or voorNAAM). Moreover, we combined and fully crossed the face of the speaker producing lexical stress on either syllable with a gesturing body producing a beat gesture on either the first or the second syllable. Results showed that people successfully used visual articulatory cues to stress in muted videos; in audiovisual conditions, however, we found no effect of visual articulatory cues. In contrast, the temporal alignment of beat gestures with speech robustly influenced participants' perception of lexical stress. These results highlight the importance of considering suprasegmental aspects of language in multimodal contexts.

10.
Article in English | MEDLINE | ID: mdl-38886302

ABSTRACT

The purpose of the present study was to examine the influence of visual cues in audiovisual perception of interrupted speech by nonnative English listeners and to identify the roles of working memory, long-term memory retrieval, and vocabulary knowledge in audiovisual perception by nonnative listeners. The participants were 31 Mandarin-speaking English learners between 19 and 41 years of age. The perceptual stimuli were noise-filled, periodically interrupted AzBio and QuickSIN sentences, presented with or without visual cues showing a male speaker uttering the sentences. In addition to sentence recognition, the listeners completed a semantic fluency task, verbal (operation span) and visuospatial (symmetry span) working memory tasks, and two vocabulary knowledge tests (the Vocabulary Level Test and the Lexical Test for Advanced Learners of English). The results revealed significantly better speech recognition in the audiovisual condition than in the audio-only condition, but the magnitude of the visual benefit was substantially attenuated for sentences with limited semantic context. The listeners' English vocabulary size played a key role in the restoration of missing speech information and in audiovisual integration in the perception of interrupted speech. Meanwhile, the listeners' verbal working memory capacity played an important role in audiovisual integration, especially for the difficult stimuli with limited semantic context.

11.
Phonetica ; 2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38869142

ABSTRACT

Connected speech processes (CSPs) occur spontaneously in the everyday conversations of native speakers; however, such phonological variations can pose challenges for non-native listeners. In the CSP literature, very few studies have involved young foreign language learners. The present study therefore explored the development of connected speech perception skills in 201 Chinese EFL children aged 9 to 12 years, incorporating systematic error analysis to probe their specific perceptual difficulties. The results indicate that: (1) despite a significantly ascending trend in the overall growth of perception skills, no significant differences were found between 11- and 12-year-olds in elision and contraction, suggesting that the developmental trend varied across CSP types; (2) although random errors decreased with age, the number of lexical and syntactic errors gradually increased, and the distribution of perceptual errors shifted from the level of words and syllables to that of phonemes; (3) the primary error types underlying the perceptual difficulties for elision and contraction were consonant errors, grammatical errors, and morphological errors. This study thus enhances the understanding of connected speech perception among EFL children and offers implications for EFL/ESL listening instruction.

12.
Neurosci Bull ; 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38839688

ABSTRACT

Musical training can counteract age-related decline in speech perception in noisy environments. However, it remains unclear whether older non-musicians and musicians rely on functional compensation or functional preservation to counteract the adverse effects of aging. This study used resting-state functional connectivity (FC) to investigate functional lateralization, a fundamental organizational feature of the brain, in older musicians (OM), older non-musicians (ONM), and young non-musicians (YNM). Results showed that OM outperformed ONM and achieved performance comparable to YNM in speech-in-noise and speech-in-speech tasks. ONM exhibited reduced lateralization compared with YNM in the lateralization index (LI) of intrahemispheric FC (LI_intra) in the cingulo-opercular network (CON) and the LI of interhemispheric heterotopic FC (LI_he) in the language network (LAN). Conversely, OM showed higher neural alignment to YNM (i.e., a more similar lateralization pattern) than ONM in the CON, LAN, frontoparietal network (FPN), dorsal attention network (DAN), and default mode network (DMN), indicating preservation of youth-like lateralization patterns due to musical experience. Furthermore, in ONM, stronger left-lateralized and less youth-aligned LI_intra in the somatomotor network (SMN) and DAN, and LI_he in the DMN, correlated with better speech performance, indicating a functional compensation mechanism. In contrast, stronger right-lateralized LI_intra in the FPN and DAN and more youth-aligned LI_he in the LAN correlated with better performance in OM, suggesting a functional preservation mechanism. These findings highlight the differential roles of functional preservation and compensation of lateralization in speech-in-noise perception among elderly individuals with and without musical expertise, offering insights into theories of successful aging through the lens of functional lateralization and speech perception.
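The abstract does not spell out how its lateralization indices are computed; assuming the conventional normalized-difference form LI = (L - R) / (L + R), a small sketch:

```python
# Sketch: lateralization index as the normalized left-right difference,
# LI = (L - R) / (L + R); positive values indicate left-lateralization.
# (Assumed convention; the paper's exact formula is not quoted here.)
import numpy as np

def lateralization_index(left_fc, right_fc):
    L, R = float(np.mean(left_fc)), float(np.mean(right_fc))
    return (L - R) / (L + R)

# e.g., mean FC strengths among a network's left- vs. right-hemisphere nodes
li_intra = lateralization_index([0.42, 0.38, 0.45], [0.30, 0.33, 0.29])
print(li_intra)  # > 0, i.e., left-lateralized in this toy example
```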

13.
Article in English | MEDLINE | ID: mdl-38842041

ABSTRACT

OBJECTIVE: To compare speech recognition and quality-of-life outcomes between bilateral sequentially and simultaneously implanted adult cochlear implant (CI) recipients who initially qualified for a CI in both ears. STUDY DESIGN: Retrospective chart review. SETTING: Tertiary referral center. METHODS: Retrospective chart review identified adults who underwent bilateral CI, either simultaneously or sequentially, at a high-volume center between 2012 and 2022. Sequentially implanted patients were included only if the second ear qualified for CI in quiet (defined as best-aided AzBio quiet testing <60%) at the time of the initial CI evaluation. RESULTS: Of 112 bilateral CI patients who qualified in both ears at the initial evaluation, 95 underwent sequential implantation and 17 simultaneous implantation. Age, duration and etiology of hearing loss, and CI usage were similar between groups. Preoperatively, the sequential group had a lower pure-tone average (PTA) in the first ear than the simultaneously implanted group (P < .001), but there was no difference in second-ear PTA (P = .657). Preoperative speech recognition scores were significantly higher for the sequential group; however, this was not true for postoperative scores. There was no difference between the groups in the proportion of patients showing significant CI-only or bilateral performance improvement. Both groups demonstrated similar benefit on quality-of-life measures. CONCLUSION: Our findings indicate that both simultaneous and sequential cochlear implantation are effective in improving hearing performance and quality of life. Thus, sequential versus simultaneous implantation should be discussed and tailored for each individual patient.

14.
Sci Rep ; 14(1): 13089, 2024 06 07.
Article in English | MEDLINE | ID: mdl-38849415

ABSTRACT

Speech-in-noise (SIN) perception is a primary complaint of individuals with audiometric hearing loss, yet SIN performance varies drastically even among individuals with normal hearing. The present genome-wide association study (GWAS) investigated the genetic basis of SIN deficits in individuals with self-reported normal hearing in quiet. GWAS was performed on 279,911 individuals from the UK Biobank cohort, 58,847 of whom reported SIN deficits despite reporting normal hearing in quiet. The GWAS identified 996 single-nucleotide polymorphisms (SNPs) reaching genome-wide significance (p < 5 × 10⁻⁸) across four genomic loci, and 720 SNPs across 21 loci achieved suggestive significance (p < 10⁻⁶). GWAS signals were enriched in brain tissues, including the anterior cingulate cortex, dorsolateral prefrontal cortex, entorhinal cortex, frontal cortex, hippocampus, and inferior temporal cortex. Cochlear cell types revealed no significant association with SIN deficits. SIN deficits were associated with various health traits, including neuropsychiatric, sensory, cognitive, metabolic, cardiovascular, and inflammatory conditions. A replication analysis was conducted on 242 healthy young adults using self-reported speech perception, hearing thresholds (0.25-16 kHz), and distortion-product otoacoustic emissions (1-16 kHz). Seventy-three SNPs were replicated with the self-reported speech perception measure; 211 SNPs were replicated with at least one audiological measure and 66 with at least two. Twelve SNPs near or within MAPT, GRM3, and HLA-DQA1 were replicated for all audiological measures. The present study highlights a polygenic architecture underlying SIN deficits in individuals with self-reported normal hearing.


Subjects
Genome-Wide Association Study, Multifactorial Inheritance, Noise, Single Nucleotide Polymorphism, Speech Perception, Humans, Male, Female, Speech Perception/genetics, Adult, Middle Aged, Self Report, Aged, Hearing/genetics, Young Adult
15.
Lang Speech ; : 238309241254350, 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38853599

ABSTRACT

Previous research has shown that English speakers find it difficult to distinguish the front rounded vowels /y/ and /ø/ from the back rounded vowels /u/ and /o/. In this study, we examine the effect of noise on this perceptual difficulty. In an oddity discrimination task, English speakers without any knowledge of German were asked to discriminate between German-sounding pseudowords varying in the vowel, both in quiet and in white noise at two signal-to-noise ratios (8 and 0 dB). In test trials, vowels of the same height were contrasted with each other, whereas a contrast with /a/ served as a control trial. Results revealed that the contrast with /a/ remained stable in every listening condition for both high and mid vowels. When vowels of the same height were contrasted, however, there was a perceptual shift along the F2 dimension as the noise level increased: although the /ø/-/o/ and particularly the /y/-/u/ contrasts were the most difficult in quiet, accuracy on /i/-/y/ and /e/-/ø/ trials decreased markedly when the speech signal was masked. A German control group showed the same pattern, albeit less severely than the non-native group, suggesting that even in low-level tasks with pseudowords there is a native advantage in speech perception in noise.
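As a minimal sketch of how the masked conditions can be constructed, the following scales white noise against the speech signal's RMS level to hit a target SNR in dB (the study's exact calibration procedure is not given in the abstract; this is the textbook approach):

```python
# Sketch: mix speech with white noise at a prescribed SNR (dB).
# The noise gain is derived from the RMS levels of the two signals.
import numpy as np

def mix_at_snr(speech, snr_db, seed=0):
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(len(speech))
    rms = lambda s: np.sqrt(np.mean(s ** 2))
    gain = rms(speech) / (rms(noise) * 10 ** (snr_db / 20.0))
    return speech + gain * noise

# e.g., the two masked conditions used in the study:
# mixed_8db = mix_at_snr(signal, 8.0)
# mixed_0db = mix_at_snr(signal, 0.0)
```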

16.
Q J Exp Psychol (Hove) ; : 17470218241264566, 2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38872247

ABSTRACT

This study aims to examine the perception of English vowels by Greek monolingual and bidialectal speakers of English as a second language (L2) and to assess the predictions of the Universal Perceptual Model (UPM). Adult Cypriot Greek (CG) bidialectal speakers and Standard Modern Greek (SMG) monolingual speakers participated in classification and discrimination tests; the two groups were matched for various linguistic, sociolinguistic, and cognitive factors, and another group of adult English speakers served as controls. Data were analyzed using Bayesian regression models. The results of the discrimination test were predicted by acoustic similarity only to some extent, while perceptual similarity predicted most contrasts, confirming the hypotheses of the UPM. A crucial finding was that bidialectals outperformed monolinguals in the discrimination of L2 contrasts. This advantage could be attributed to the greater flexibility of bidialectals' speech categories, stemming from exposure to more diverse linguistic input.

17.
Behav Res Methods ; 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38829553

ABSTRACT

This tutorial is designed for speech scientists familiar with the R programming language who wish to construct experiment interfaces in R. We begin by discussing some of the benefits of building experiment interfaces in R, including R's existing tools for speech data analysis, its platform independence, its suitability for web-based testing, and the fact that R is open source. We explain basic concepts of reactive programming in R and apply these principles by detailing the development of two sample experiments. The first comprises a speech production task in which participants are asked to read words with different emotions. The second involves a speech perception task in which participants listen to recorded speech and identify the emotion the talker expressed, using forced-choice questions and confidence ratings. Throughout the tutorial, we introduce the new R package speechcollectr, which provides functions uniquely suited to web-based speech data collection. The package streamlines the code required for speech experiments by providing functions for common tasks such as documenting participant consent, collecting participant demographic information, recording audio, checking the adequacy of a participant's microphone or headphones, and presenting audio stimuli. Finally, we describe some of the difficulties of remote speech data collection, along with the solutions we have incorporated into speechcollectr to meet these challenges.

18.
Ann N Y Acad Sci ; 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38924165

ABSTRACT

Considerable debate exists about the interplay between the auditory and motor speech systems: some argue for common neural mechanisms, whereas others assert that there are few shared resources. In four experiments, we tested the hypothesis that priming the speech motor system by repeating syllable pairs aloud improves subsequent syllable discrimination in noise compared with a priming discrimination task involving same-different judgments via button presses. Our results consistently showed that participants who engaged in syllable repetition performed better at syllable discrimination in noise than those who engaged in the priming discrimination task. This gain in accuracy was observed for both primed and new syllable pairs, highlighting increased sensitivity to phonological detail. The benefits were comparable whether the priming tasks involved auditory or visual presentation. When a 1-h delay was inserted between the priming tasks and the syllable-in-noise task, the benefits persisted but were confined to primed syllable pairs. Finally, we demonstrated the effectiveness of this approach in older adults. Our findings substantiate the existence of a speech production-perception relationship. They also have clinical relevance, as they raise the possibility of production-based interventions to improve speech perception ability, which would be particularly relevant for older adults, who often encounter difficulties in perceiving speech in noise.

19.
Infant Behav Dev ; 76: 101973, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38941721

ABSTRACT

Autism Spectrum Disorder is a highly heritable condition characterized by sociocommunicative difficulties, frequently entailing language atypicalities that extend to infants with a familial history of autism. The developmental mechanisms underlying these difficulties remain unknown. Detecting temporal synchrony between the lip movements and auditory speech of a talking face, and selectively attending to the mouth, support typical early language acquisition. This preliminary eye-tracking study investigated whether these two fundamental mechanisms function atypically in infant siblings. We longitudinally tracked the trajectories of infants at elevated and low likelihood for autism on these two abilities at 4, 8, and 12 months (n = 29), presenting two talking faces (synchronous and asynchronous) while recording infants' gaze to the talker's eyes and mouth. We found that infants detected temporal asynchronies in talking faces at 12 months regardless of group. However, compared with their typically developing peers, infants at elevated likelihood of autism showed reduced attention to the mouth at the end of the first year and no change in their interest in this area over time. Our findings provide preliminary evidence for a potentially atypical trajectory of reduced mouth-looking during audiovisual speech in the first year in infant siblings, with potential cascading consequences for language development, thus contributing to domain-general accounts of emerging autism.

20.
Adv Exp Med Biol ; 1455: 257-274, 2024.
Article in English | MEDLINE | ID: mdl-38918356

ABSTRACT

Speech can be defined as the human ability to communicate through a sequence of vocal sounds. Consequently, speech requires an emitter (the speaker) capable of generating the acoustic signal and a receiver (the listener) able to successfully decode the sounds produced by the emitter (i.e., the acoustic signal). Time plays a central role at both ends of this interaction. On the one hand, speech production requires precise and rapid coordination, typically within the order of milliseconds, of the upper vocal tract articulators (i.e., tongue, jaw, lips, and velum), their composite movements, and the activation of the vocal folds. On the other hand, the generated acoustic signal unfolds in time, carrying information at different timescales. This information must be parsed and integrated by the receiver for the correct transmission of meaning. This chapter describes the temporal patterns that characterize the speech signal and reviews research that explores the neural mechanisms underlying the generation of these patterns and the role they play in speech comprehension.


Subjects
Speech, Humans, Speech/physiology, Speech Perception/physiology, Speech Acoustics, Periodicity