1.
Clin Linguist Phon ; : 1-17, 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38965836

ABSTRACT

A small body of research, together with reports from educational and clinical practice, suggests that teaching literacy skills may facilitate the development of speech sound production in students with intellectual disabilities (ID). However, intervention research is needed to test this potential connection. This study investigated whether twelve weeks of systematic, digital literacy intervention enhanced speech sound production in students with ID and communication difficulties. A sample of 121 students with ID was assigned to four groups: phonics-based intervention, comprehension-based intervention, a combination of phonics- and comprehension-based intervention, and a comparison group with teaching as usual. Speech sound production was assessed before and after the intervention. The results on the data without the imputed variable suggested a significant positive effect of systematic, digital literacy intervention on speech sound production. However, results from sensitivity analyses with imputed missing data were more ambiguous, with the effect only approaching significance (ps = .05-.07) for one of the interventions. Nonetheless, we tentatively suggest that systematic, digital literacy intervention could support speech development in students with ID and communication difficulties. Future research should confirm this link and further elucidate its functional mechanisms, so that instruction, and ultimately the pivotal abilities of speech and reading, can be improved.

2.
PLoS One ; 19(6): e0306113, 2024.
Article in English | MEDLINE | ID: mdl-38924006

ABSTRACT

Facial mimicry, the tendency to imitate facial expressions of other individuals, has been shown to play a critical role in the processing of emotion expressions. At the same time, there is evidence suggesting that its role might change when the cognitive demands of the situation increase. In such situations, understanding another person is dependent on working memory. However, whether facial mimicry influences working memory representations for facial emotion expressions is not fully understood. In the present study, we experimentally interfered with facial mimicry by using established behavioral procedures, and investigated how this interference influenced working memory recall for facial emotion expressions. Healthy, young adults (N = 36) performed an emotion expression n-back paradigm with two levels of working memory load, low (1-back) and high (2-back), and three levels of mimicry interference: high, low, and no interference. Results showed that, after controlling for block order and individual differences in the perceived valence and arousal of the stimuli, the high level of mimicry interference impaired accuracy when working memory load was low (1-back) but, unexpectedly, not when load was high (2-back). Working memory load had a detrimental effect on performance in all three mimicry conditions. We conclude that facial mimicry might support working memory for emotion expressions when task load is low, but that the supporting effect possibly is reduced when the task becomes more cognitively challenging.


Subject(s)
Emotions , Facial Expression , Memory, Short-Term , Humans , Memory, Short-Term/physiology , Male , Female , Emotions/physiology , Young Adult , Adult
3.
Disabil Rehabil Assist Technol ; : 1-11, 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38646848

ABSTRACT

PURPOSE: Students with intellectual disabilities (ID) typically have difficulties with literacy learning, often not acquiring basic literacy skills. Research and practical experience indicate that literacy may develop when these students are provided with evidence-based instruction that includes both comprehension and phonemic strategies. METHODS: In this study, four pairs of teachers were interviewed about their perceptions of a 12-week digital literacy intervention that focused on both phonics and comprehension strategies. The intervention aimed to enhance literacy and communication development in students aged 7-21 with mild to severe ID. RESULTS AND CONCLUSION: Four themes were identified in the analysis. The teachers found it valuable to have access to two apps that facilitated the use of different literacy strategies to meet the needs of individual students. The digital format was also perceived positively, contributing to a supportive and systematic learning environment that enhanced literacy learning. The teachers also returned repeatedly to the positive influence of participating in research, highlighting the strong focus and positive attention it brought as very important for both teachers and students.

4.
BMJ Open ; 13(11): e071225, 2023 11 08.
Article in English | MEDLINE | ID: mdl-37940150

ABSTRACT

INTRODUCTION: Listening and communication difficulties can limit people's participation in activity and adversely affect their quality of life. Hearing, as well as listening and communication difficulties, can be measured either by using behavioural tests or self-report measures, and the outcomes are not always closely linked. The association between behaviourally measured and self-reported hearing is strong, whereas the association between behavioural and self-reported measures of listening and communication difficulties is much weaker, suggesting they assess different aspects of listening. While behavioural measures of listening and communication difficulties have been associated with poorer cognitive performance including executive functions, the same association has not always been shown for self-report measures. The objective of this systematic review and meta-analysis is to understand the relationship between executive function and self-reported listening and communication difficulties in adults with hearing loss, and where possible, potential covariates of age and pure-tone audiometric thresholds. METHODS AND ANALYSIS: Studies will be eligible for inclusion if they report data from both a self-report measure of listening difficulties and a behavioural measure of executive function. Eight databases are to be searched: MEDLINE (via Ovid SP), EMBASE (via Ovid SP), PsycINFO (via Ovid SP), ASSIA (via ProQuest), Cumulative Index to Nursing and Allied Health Literature or CINAHL (via EBSCO Host), Scopus, PubMed and Web of Science (Science and Social Science Citation Index). The JBI critical appraisal tool will be used to assess risk of bias for included studies. Results will be synthesised primarily using a meta-analysis, and where sufficient quantitative data are not available, a narrative synthesis will be carried out to describe key results. ETHICS AND DISSEMINATION: No ethical issues are foreseen. 
Data will be disseminated via academic publication and conference presentations. Findings may also be published in scientific newsletters and magazines. PROSPERO REGISTRATION NUMBER: CRD42022293546.


Subject(s)
Executive Function , Quality of Life , Adult , Humans , Self Report , Systematic Reviews as Topic , Meta-Analysis as Topic , Communication
5.
Front Psychol ; 13: 967260, 2022.
Article in English | MEDLINE | ID: mdl-36118435

ABSTRACT

The review gives an introductory description of the successive development of data patterns based on comparisons between hearing-impaired and normal hearing participants' speech understanding skills, which later prompted the formulation of the Ease of Language Understanding (ELU) model. The model builds on the interaction between an input buffer (RAMBPHO, Rapid Automatic Multimodal Binding of PHOnology) and three memory systems: working memory (WM), semantic long-term memory (SLTM), and episodic long-term memory (ELTM). RAMBPHO input may either match or mismatch multimodal SLTM representations. Given a match, lexical access is accomplished rapidly and implicitly within approximately 100-400 ms. Given a mismatch, the prediction is that WM is engaged explicitly to repair the meaning of the input - in interaction with SLTM and ELTM - taking seconds rather than milliseconds. The multimodal and multilevel nature of representations held in WM and LTM is at the center of the review, these being integral parts of the prediction and postdiction components of language understanding. Finally, some hypotheses based on a selective use-disuse of memory systems mechanism are described in relation to mild cognitive impairment and dementia. Alternative speech perception and WM models are evaluated, and recent developments and generalisations, ELU model tests, and boundaries are discussed.

6.
Front Psychol ; 13: 738866, 2022.
Article in English | MEDLINE | ID: mdl-35369269

ABSTRACT

The processing of a language involves a neural language network including temporal, parietal, and frontal cortical regions. This applies to spoken as well as signed languages. Previous research suggests that spoken language proficiency is associated with resting-state functional connectivity (rsFC) between language regions and other regions of the brain. Given the similarities in neural activation for spoken and signed languages, rsFC-behavior associations should also exist for sign language tasks. In this study, we explored the associations between rsFC and two types of linguistic skills in sign language: phonological processing skill and accuracy in elicited sentence production. Fifteen adult, deaf early signers were enrolled in a resting-state functional magnetic resonance imaging (fMRI) study. In addition to fMRI data, behavioral tests of sign language phonological processing and sentence reproduction were administered. Using seed-to-voxel connectivity analysis, we investigated associations between behavioral proficiency and rsFC from language-relevant nodes: bilateral inferior frontal gyrus (IFG) and posterior superior temporal gyrus (STG). Results showed that worse sentence processing skill was associated with stronger positive rsFC between the left IFG and left sensorimotor regions. Further, sign language phonological processing skill was associated with positive rsFC from right IFG to middle frontal gyrus/frontal pole although this association could possibly be explained by domain-general cognitive functions. Our findings suggest a possible connection between rsFC and developmental language outcomes in deaf individuals.

7.
Neuropsychologia ; 166: 108139, 2022 02 10.
Article in English | MEDLINE | ID: mdl-34990695

ABSTRACT

If the brain is deprived of input from one or more senses during development, functional and structural reorganization of the deprived regions takes place. However, little is known about how sensory deprivation affects large-scale brain networks. In the present study, we use data-driven independent component analysis (ICA) to characterize large-scale brain networks in 15 deaf early signers and 24 hearing non-signers based on resting-state functional MRI data. We found differences between the groups in independent components representing the left lateralized control network, the default network, the ventral somatomotor network, and the attention network. In addition, we showed stronger functional connectivity for deaf compared to hearing individuals from the middle and superior temporal cortices to the cingulate cortex, insular cortex, cuneus and precuneus, supramarginal gyrus, supplementary motor area, and cerebellum crus 1, and stronger connectivity for hearing non-signers to hippocampus, middle and superior frontal gyri, pre- and postcentral gyri, and cerebellum crus 8. These results show that deafness induces large-scale network reorganization, with the middle/superior temporal cortex as a central node of plasticity. Cross-modal reorganization may be associated with behavioral adaptations to the environment, including superior ability in some visual functions such as visual working memory and visual attention, in deaf signers.


Subject(s)
Auditory Cortex , Deafness , Adult , Brain/diagnostic imaging , Brain Mapping , Deafness/diagnostic imaging , Humans , Magnetic Resonance Imaging
8.
Front Psychol ; 12: 701795, 2021.
Article in English | MEDLINE | ID: mdl-34512459

ABSTRACT

Almost all studies on neonatal imitation to date seem to have focused on typically developing children, and we thus lack information on the early imitative abilities of children who follow atypical developmental trajectories. From both practical and theoretical perspectives, these abilities might be relevant to study in children who later develop a neuropsychiatric diagnosis or in infants who later show an impaired ability to imitate. Theoretically, such studies would provide insight into the earliest signs of intersubjectivity - i.e., primary intersubjectivity - and how this knowledge might influence our understanding of children following atypical trajectories of development. Practically, they might lead to earlier detection of certain disabilities. In the present work, we screen the literature for empirical studies on neonatal imitation in children with an Autism spectrum disorder (ASD) or Down syndrome (DS), present an observation of neonatal imitation in an infant who was later diagnosed with autism, and re-interpret previously published data on the phenomenon in a small group of infants with DS. Our findings suggest that the empirical observations to date are too few to permit definite conclusions, but that neonatal imitation can be observed both in children with ASD and in children with DS. Thus, neonatal imitation might not represent a useful predictor of a developmental deficit. Based on current theoretical perspectives advocating that neonatal imitation is a marker of primary intersubjectivity, we tentatively propose that an ability to engage in purposeful exchanges with another human being exists in these populations from birth.

9.
Cereb Cortex ; 31(7): 3165-3176, 2021 06 10.
Article in English | MEDLINE | ID: mdl-33625498

ABSTRACT

Stimulus degradation adds to working memory load during speech processing. We investigated whether this applies to sign processing and, if so, whether the mechanism implicates secondary auditory cortex. We conducted an fMRI experiment where 16 deaf early signers (DES) and 22 hearing non-signers performed a sign-based n-back task with three load levels and stimuli presented at high and low resolution. We found decreased behavioral performance with increasing load and decreasing visual resolution, but the neurobiological mechanisms involved differed between the two manipulations and did so for both groups. Importantly, while the load manipulation was, as predicted, accompanied by activation in the frontoparietal working memory network, the resolution manipulation resulted in temporal and occipital activation. Furthermore, we found evidence of cross-modal reorganization in the secondary auditory cortex: DES had stronger activation and stronger connectivity between this and several other regions. We conclude that load and stimulus resolution have different neural underpinnings in the visual-verbal domain, which has consequences for current working memory models, and that for DES the secondary auditory cortex is involved in the binding of representations when task demands are low.


Subject(s)
Auditory Cortex/diagnostic imaging , Deafness/diagnostic imaging , Magnetic Resonance Imaging/methods , Memory, Short-Term/physiology , Sign Language , Visual Perception , Adult , Auditory Cortex/physiology , Deafness/physiopathology , Female , Humans , Male , Neuronal Plasticity/physiology , Photic Stimulation/methods , Reaction Time/physiology , Visual Perception/physiology , Young Adult
10.
J Speech Lang Hear Res ; 64(2): 359-370, 2021 02 17.
Article in English | MEDLINE | ID: mdl-33439747

ABSTRACT

Purpose The purpose of this study was to conceptualize the subtle balancing act between language input and prediction (cognitive priming of future input) to achieve understanding of communicated content. When understanding fails, reconstructive postdiction is initiated. Three memory systems play important roles: working memory (WM), episodic long-term memory (ELTM), and semantic long-term memory (SLTM). The axiom of the Ease of Language Understanding (ELU) model is that explicit WM resources are invoked by a mismatch between language input - in the form of rapid automatic multimodal binding of phonology - and multimodal phonological and lexical representations in SLTM. However, if there is a match between rapid automatic multimodal binding of phonology output and SLTM/ELTM representations, language processing continues rapidly and implicitly. Method and Results In our first ELU approach, we focused on experimental manipulations of signal processing in hearing aids and background noise to cause a mismatch with LTM representations; both resulted in increased dependence on WM. Our second - and main - approach relevant for this review article focuses on the relative effects of age-related hearing loss on the three memory systems. According to the ELU, WM is predicted to be frequently occupied with reconstruction of what was actually heard, resulting in a relative disuse of phonological/lexical representations in the ELTM and SLTM systems. The prediction and results do not depend on test modality per se but rather on the particular memory system. This will be further discussed. Conclusions In light of the literature on ELTM decline as a precursor of dementia, and the fact that hearing loss substantially increases the risk for Alzheimer's disease over time, lowered ELTM due to hearing loss and disuse may be part of the causal chain linking hearing loss and dementia. Future ELU research will focus on this possibility.


Subject(s)
Hearing Aids , Speech Perception , Cognition , Hearing , Humans , Language , Memory, Short-Term
11.
Front Psychol ; 11: 534741, 2020.
Article in English | MEDLINE | ID: mdl-33192776

ABSTRACT

Auditory cortex in congenitally deaf early sign language users reorganizes to support cognitive processing in the visual domain. However, evidence suggests that the potential benefits of this reorganization are largely unrealized. At the same time, there is growing evidence that experience of playing computer and console games improves visual cognition, in particular visuospatial attentional processes. In the present study, we investigated in a group of deaf early signers whether those who reported recently playing computer or console games (deaf gamers) had better visuospatial attentional control than those who reported not playing such games (deaf non-gamers), and whether any such effect was related to cognitive processing in the visual domain. Using a classic test of attentional control, the Eriksen Flanker task, we found that deaf gamers performed on a par with hearing controls, while the performance of deaf non-gamers was poorer. Among hearing controls there was no effect of gaming. This suggests that deaf gamers may have better visuospatial attentional control than deaf non-gamers, probably because they are less susceptible to parafoveal distractions. Future work should examine the robustness of this potential gaming benefit and whether it is associated with neural plasticity in early deaf signers, as well as whether gaming intervention can improve visuospatial cognition in deaf people.

12.
Int J Audiol ; 58(5): 247-261, 2019 05.
Article in English | MEDLINE | ID: mdl-30714435

ABSTRACT

OBJECTIVE: The current update of the Ease of Language Understanding (ELU) model evaluates the predictive and postdictive aspects of speech understanding and communication. DESIGN: The aspects scrutinised concern: (1) Signal distortion and working memory capacity (WMC), (2) WMC and early attention mechanisms, (3) WMC and use of phonological and semantic information, (4) hearing loss, WMC and long-term memory (LTM), (5) WMC and effort, and (6) the ELU model and sign language. STUDY SAMPLES: Relevant literature based on our own or others' data was used. RESULTS: Expectations 1-4 are supported whereas 5-6 are constrained by conceptual issues and empirical data. Further strands of research were addressed, focussing on WMC and contextual use, and on WMC deployment in relation to hearing status. A wider discussion of task demands, concerning, for example, inference-making and priming, is also introduced and related to the overarching ELU functions of prediction and postdiction. Finally, some new concepts and models that have been inspired by the ELU framework are presented and discussed. CONCLUSIONS: The ELU model has been productive in generating empirical predictions/expectations, the majority of which have been confirmed. Nevertheless, new insights and boundary conditions need to be experimentally tested to further shape the model.


Subject(s)
Cognition , Hearing Loss/psychology , Memory, Short-Term , Speech Perception , Attention , Humans , Memory, Long-Term
13.
J Deaf Stud Deaf Educ ; 22(4): 404-421, 2017 Oct 01.
Article in English | MEDLINE | ID: mdl-28961874

ABSTRACT

Strengthening the connections between sign language and written language may improve reading skills in deaf and hard-of-hearing (DHH) signing children. The main aim of the present study was to investigate whether computerized sign language-based literacy training improves reading skills in DHH signing children who are learning to read. Further, longitudinal associations between sign language skills and developing reading skills were investigated. Participants were recruited from Swedish state special schools for DHH children, where pupils are taught in both sign language and spoken language. Reading skills were assessed at five occasions and the intervention was implemented in a cross-over design. Results indicated that reading skills improved over time and that development of word reading was predicted by the ability to imitate unfamiliar lexical signs, but there was only weak evidence that it was supported by the intervention. These results demonstrate for the first time a longitudinal link between sign-based abilities and word reading in DHH signing children who are learning to read. We suggest that the active construction of novel lexical forms may be a supramodal mechanism underlying word reading development.


Subject(s)
Computer-Assisted Instruction/methods , Education of Hearing Disabled/methods , Literacy , Sign Language , Child , Female , Humans , Male , Reading
15.
Front Psychol ; 7: 854, 2016.
Article in English | MEDLINE | ID: mdl-27375532

ABSTRACT

Theory of Mind (ToM) is related to reading comprehension in hearing children. In the present study, we investigated progression in ToM in Swedish deaf and hard-of-hearing (DHH) signing children who were learning to read, as well as the association of ToM with reading comprehension. Thirteen children at Swedish state primary schools for DHH children performed a Swedish Sign Language (SSL) version of the Wellman and Liu (2004) ToM scale, along with tests of reading comprehension, SSL comprehension, and working memory. Results indicated that ToM progression did not differ from that reported in previous studies, although ToM development was delayed despite age-appropriate sign language skills. Correlation analysis revealed that ToM was associated with reading comprehension and working memory, but not sign language comprehension. We propose that some factor not investigated in the present study, possibly represented by inference making constrained by working memory capacity, supports both ToM and reading comprehension and may thus explain the results observed in the present study.

16.
Front Psychol ; 7: 107, 2016.
Article in English | MEDLINE | ID: mdl-26909050

ABSTRACT

Imitation and language processing are closely connected. According to the Ease of Language Understanding (ELU) model (Rönnberg et al., 2013) pre-existing mental representation of lexical items facilitates language understanding. Thus, imitation of manual gestures is likely to be enhanced by experience of sign language. We tested this by eliciting imitation of manual gestures from deaf and hard-of-hearing (DHH) signing and hearing non-signing children at a similar level of language and cognitive development. We predicted that the DHH signing children would be better at imitating gestures lexicalized in their own sign language (Swedish Sign Language, SSL) than unfamiliar British Sign Language (BSL) signs, and that both groups would be better at imitating lexical signs (SSL and BSL) than non-signs. We also predicted that the hearing non-signing children would perform worse than DHH signing children with all types of gestures the first time (T1) we elicited imitation, but that the performance gap between groups would be reduced when imitation was elicited a second time (T2). Finally, we predicted that imitation performance on both occasions would be associated with linguistic skills, especially in the manual modality. A split-plot repeated measures ANOVA demonstrated that DHH signers imitated manual gestures with greater precision than non-signing children when imitation was elicited the second but not the first time. Manual gestures were easier to imitate for both groups when they were lexicalized than when they were not; but there was no difference in performance between familiar and unfamiliar gestures. For both groups, language skills at T1 predicted imitation at T2. Specifically, for DHH children, word reading skills, comprehension and phonological awareness of sign language predicted imitation at T2. 
For the hearing participants, language comprehension predicted imitation at T2, even after the effects of working memory capacity and motor skills were taken into account. These results demonstrate that experience of sign language enhances the ability to imitate manual gestures once representations have been established, and suggest that the inherent motor patterns of lexical manual gestures are better suited for representation than those of non-signs. This set of findings prompts a developmental version of the ELU model, D-ELU.

17.
Res Dev Disabil ; 48: 145-59, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26561215

ABSTRACT

BACKGROUND AND AIMS: Children with good phonological awareness (PA) are often good word readers. Here, we asked whether Swedish deaf and hard-of-hearing (DHH) children who are more aware of the phonology of Swedish Sign Language, a language with no orthography, are better at reading words in Swedish. METHODS AND PROCEDURES: We developed the Cross-modal Phonological Awareness Test (C-PhAT) that can be used to assess PA in both Swedish Sign Language (C-PhAT-SSL) and Swedish (C-PhAT-Swed), and investigated how C-PhAT performance was related to word reading as well as linguistic and cognitive skills. We validated C-PhAT-Swed and administered C-PhAT-Swed and C-PhAT-SSL to DHH children who attended Swedish deaf schools with a bilingual curriculum and were at an early stage of reading. OUTCOMES AND RESULTS: C-PhAT-SSL correlated significantly with word reading for DHH children. They performed poorly on C-PhAT-Swed and their scores did not correlate significantly either with C-PhAT-SSL or word reading, although they did correlate significantly with cognitive measures. CONCLUSIONS AND IMPLICATIONS: These results provide preliminary evidence that DHH children with good sign language PA are better at reading words and show that measures of spoken language PA in DHH children may be confounded by individual differences in cognitive skills.


Subject(s)
Deafness/psychology , Persons With Hearing Impairments , Phonetics , Reading , Sign Language , Child , Female , Hearing Aids , Humans , Language Development , Language Tests , Male , Persons With Hearing Impairments/psychology , Persons With Hearing Impairments/rehabilitation , Sweden
18.
Front Psychol ; 6: 1147, 2015.
Article in English | MEDLINE | ID: mdl-26321979

ABSTRACT

The Ease of Language Understanding model (Rönnberg et al., 2013) predicts that decreasing the distinctness of language stimuli increases working memory load; in the speech domain this notion is supported by empirical evidence. Our aim was to determine whether such an over-additive interaction can be generalized to sign processing in sign-naïve individuals and whether it is modulated by experience of computer gaming. Twenty young adults with no knowledge of sign language performed an n-back working memory task based on manual gestures lexicalized in sign language; the visual resolution of the signs and working memory load were manipulated. Performance was poorer when load was high and resolution was low. These two effects interacted over-additively, demonstrating that reducing the resolution of signed stimuli increases working memory load when there is no pre-existing semantic representation. This suggests that load and distinctness are handled by a shared amodal mechanism which can be revealed empirically when stimuli are degraded and load is high, even without pre-existing semantic representation. There was some evidence that the mechanism is influenced by computer gaming experience. Future work should explore how the shared mechanism is influenced by pre-existing semantic representation and sensory factors together with computer gaming experience.

19.
Front Psychol ; 4: 942, 2013.
Article in English | MEDLINE | ID: mdl-24379797

ABSTRACT

Similar working memory (WM) for lexical items has been demonstrated for signers and non-signers while short-term memory (STM) is regularly poorer in deaf than hearing individuals. In the present study, we investigated digit-based WM and STM in Swedish and British deaf signers and hearing non-signers. To maintain good experimental control we used printed stimuli throughout and held response mode constant across groups. We showed that deaf signers have similar digit-based WM performance, despite shorter digit spans, compared to well-matched hearing non-signers. We found no difference between signers and non-signers on STM span for letters chosen to minimize phonological similarity or in the effects of recall direction. This set of findings indicates that similar WM for signers and non-signers can be generalized from lexical items to digits and suggests that poorer STM in deaf signers compared to hearing non-signers may be due to differences in phonological similarity across the language modalities of sign and speech.
