1.
Ear Hear ; 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39004788

ABSTRACT

OBJECTIVES: Cochlear implants (CI) are remarkably effective, but have limitations regarding the transformation of the spectro-temporal fine structures of speech. This may impair processing of spoken emotions, which involves the identification and integration of semantic and prosodic cues. Our previous study found spoken-emotions-processing differences between CI users with postlingual deafness (postlingual CI) and normal hearing (NH) matched controls (age range, 19 to 65 years). Postlingual CI users over-relied on semantic information in incongruent trials (prosody and semantics present different emotions), but rated congruent trials (same emotion) similarly to controls. Postlingual CI's intact early auditory experience may explain this pattern of results. The present study examined whether CI users without intact early auditory experience (prelingual CI) would generally perform worse on spoken emotion processing than NH and postlingual CI users, and whether CI use would affect prosodic processing in both CI groups. First, we compared prelingual CI users with their NH controls. Second, we compared the results of the present study to our previous study (Taitlebaum-Swead et al. 2022; postlingual CI). DESIGN: Fifteen prelingual CI users and 15 NH controls (age range, 18 to 31 years) listened to spoken sentences composed of different combinations (congruent and incongruent) of three discrete emotions (anger, happiness, sadness) and neutrality (performance baseline), presented in prosodic and semantic channels (Test for Rating of Emotions in Speech paradigm). Listeners were asked to rate (six-point scale) the extent to which each of the predefined emotions was conveyed by the sentence as a whole (integration of prosody and semantics), or to focus only on one channel (rating the target emotion [RTE]) and ignore the other (selective attention). In addition, all participants performed standard tests of speech perception. 
Performance on the Test for Rating of Emotions in Speech was compared with the previous study (postlingual CI). RESULTS: When asked to focus on one channel, semantics or prosody, both CI groups showed a decrease in prosodic RTE (compared with controls), but only the prelingual CI group showed a decrease in semantic RTE. When the task called for channel integration, both groups of CI users used semantic emotional information to a greater extent than their NH controls. Both groups of CI users rated sentences that did not present the target emotion higher than their NH controls, indicating some degree of confusion. However, only the prelingual CI group rated congruent sentences lower than their NH controls, suggesting reduced accumulation of information across channels. For prelingual CI users, individual differences in identification of monosyllabic words were significantly related to semantic identification and semantic-prosodic integration. CONCLUSIONS: Taken together with our previous study, we found that the degradation of acoustic information by the CI impairs the processing of prosodic emotions, in both CI user groups. This distortion appears to lead CI users to over-rely on the semantic information when asked to integrate across channels. Early intact auditory exposure among CI users was found to be necessary for the effective identification of semantic emotions, as well as the accumulation of emotional information across the two channels. Results suggest that interventions for spoken-emotion processing should not ignore the onset of hearing loss.

2.
Laryngoscope ; 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38837365

ABSTRACT

OBJECTIVE: The aim of the study is to compare the short-term effect of 7 versus 3 days of voice rest (VR) on objective vocal (acoustic) parameters following phonosurgery. METHODS: A prospective randomized study conducted at a tertiary referral medical center. Patients with vocal fold nodules, polyps, or cysts who were scheduled for phonosurgery were recruited from the Voice Clinic. They were randomized into groups of 7- or 3-day postoperative VR periods, and their voices were recorded preoperatively and at 4 weeks postoperatively. A mixed linear model statistical analysis (MLMSA) was used to compare pre- and postoperative jitter, shimmer, harmonic-to-noise ratio, and maximum phonation time between the two groups. RESULTS: Sixty-five patients were recruited, but only 34 fully complied with the study protocol, and their data were included in the final analysis (19 males, 20 females; mean age: 40.6 years; 17 patients in the 7-day VR group and 16 in the 3-day VR group). The groups were comparable in age, sex, and type of vocal lesion distribution. The preoperative MLMSA showed no significant group differences in the tested vocal parameters. Both groups exhibited significant (p < 0.05) and comparable improvement in all vocal parameters at postoperative week 4. CONCLUSIONS: A VR duration of 7 days showed no greater benefit on the examined vocal parameters than the 3-day protocol at 4 weeks postoperatively. Our results suggest that a 3-day VR regimen can be followed by patients who undergo phonosurgery without compromising the vocal results. Larger-scale and longer-duration studies are needed to confirm our findings. LEVEL OF EVIDENCE: 2 Laryngoscope, 2024.
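The pre/post comparison with a mixed linear model can be sketched as follows. This is a hedged illustration only: the data are simulated, the jitter values and effect sizes are invented, and statsmodels' MixedLM merely stands in for whatever MLMSA software the authors actually used.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated long-format data: 34 patients, jitter measured pre- and post-op.
rng = np.random.default_rng(0)
rows = []
for subj in range(34):
    group = "7day" if subj < 17 else "3day"   # 7-day vs. 3-day voice rest
    pre = rng.normal(1.8, 0.4)                # elevated jitter before surgery
    post = pre - rng.normal(0.7, 0.2)         # comparable improvement in both groups
    rows.append({"subject": subj, "group": group, "time": "pre", "jitter": pre})
    rows.append({"subject": subj, "group": group, "time": "post", "jitter": post})
df = pd.DataFrame(rows)

# Mixed linear model: fixed effects for group, time, and their interaction;
# a random intercept per subject accounts for the repeated measures.
result = smf.mixedlm("jitter ~ group * time", df, groups=df["subject"]).fit()
print(result.summary())
```

A near-zero group-by-time interaction in such a model is what would correspond to the paper's finding of comparable pre-to-post improvement in the 7-day and 3-day groups.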

3.
Age Ageing ; 53(5)2024 05 01.
Article in English | MEDLINE | ID: mdl-38706392

ABSTRACT

Cognitive decline, mental health and mindset factors can all affect the autonomy and well-being of older adults. As the number of older adults across the globe increases, interventions to improve well-being are urgently needed. Improvisational theatre (improv) and improv-based interventions are well-suited to address this need. Studies have shown that participation in improv-based interventions has a positive impact on mental health indicators, including depressive symptoms, well-being and social connectedness, as well as cognitive skills such as attention and memory. In addition, improv-based interventions have been beneficial for people with dementia, improving positive affect, self-esteem and communication. In this article, we describe improvisational theatre, or improv, and the reasons it has evolved from a form of spontaneous theatre that involves playfulness and creativity into an important tool for effecting behavioural change in individuals and groups. We then review the literature on the effects of improv in ageing populations, with a focus on social, emotional and cognitive functioning. Finally, we make recommendations on designing improv-based interventions so that future research, using rigorous quantitative methods, larger sample sizes and randomised controlled trials, can expand the use of improv in addressing important factors related to autonomy and well-being in older adults.


Subject(s)
Aging , Mental Health , Humans , Aging/psychology , Aged , Cognition , Creativity , Age Factors , Personal Autonomy , Emotions , Healthy Aging/psychology
4.
Cogn Emot ; : 1-14, 2024 May 24.
Article in English | MEDLINE | ID: mdl-38785380

ABSTRACT

Processing of emotional speech in the absence of visual information relies on two auditory channels: semantics and prosody. No study to date has investigated how blindness impacts this process. Two theories, Perceptual Deficit and Sensory Compensation, yield different expectations about the role of visual experience (or the lack thereof) in processing emotional speech. To test the effect of vision and early visual experience on the processing of emotional speech, we compared individuals with congenital blindness (CB, n = 17), individuals with late blindness (LB, n = 15), and sighted controls (SC, n = 21) on identification and selective attention of semantic and prosodic spoken-emotions. Results showed that individuals with blindness performed at least as well as SC, supporting Sensory Compensation and the role of cortical reorganisation. Individuals with LB outperformed individuals with CB, in accordance with Perceptual Deficit, supporting the role of early visual experience. The LB advantage was moderated by executive functions (working memory). Namely, the advantage was erased for individuals with CB who showed higher levels of executive functions. Results suggest that vision is not necessary for the processing of emotional speech, but early visual experience could improve it. The findings support a combination of the two aforementioned theories and reject a dichotomous view of the deficiencies/enhancements of blindness.

5.
Cogn Emot ; : 1-10, 2024 May 19.
Article in English | MEDLINE | ID: mdl-38764186

ABSTRACT

Older adults process emotional speech differently than young adults, relying less on prosody (tone) relative to semantics (words). This study aimed to elucidate the mechanisms underlying these age-related differences via an emotional speech-in-noise test. A sample of 51 young and 47 older adults rated spoken sentences with emotional content on both prosody and semantics, presented against a background of wideband speech-spectrum noise (sensory interference) or of multi-talker babble (sensory/cognitive interference). The presence of wideband noise eliminated age-related differences in semantics but not in prosody when processing emotional speech. Conversely, the presence of babble resulted in the elimination of age-related differences across all measures. The results suggest that both sensory and cognitive-linguistic factors contribute to age-related changes in emotional speech processing. Because real-world conditions typically involve noisy backgrounds, our results highlight the importance of testing under such conditions.

6.
BJPsych Open ; 10(2): e54, 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38404027

ABSTRACT

BACKGROUND: A rise in loneliness among older adults since the COVID-19 outbreak, even after vaccination, has been highlighted. Loneliness has deleterious consequences, with specific effects on perceptions of the ageing process during the COVID-19 pandemic. Coping with stressful life events and the challenges of ageing may result in a perception that this process is accelerating. AIM: Studies have shown a buffering effect of an internal locus of control on the relationship between COVID-19 stress and mental distress. The current study examined whether loneliness predicts subjective accelerated ageing and whether internal locus of control moderates this relationship. METHOD: Community-dwelling older adults (M = 70.44, s.d. = 5.95; age range 61-88 years), vaccinated three times, were sampled in two waves by a web-survey company. Participants completed the questionnaire after the beginning of the third vaccination campaign and reported again 4 months later (second wave) on loneliness, internal locus of control and subjective accelerated ageing. RESULTS: Participants with higher levels of loneliness presented 4 months later with higher subjective accelerated ageing. Participants with a low level of internal locus of control presented 4 months later with high subjective accelerated ageing, regardless of their loneliness level. Participants with a high level of internal locus of control and a low level of loneliness presented with the lowest subjective accelerated ageing 4 months later. CONCLUSIONS: The findings emphasise the deleterious effects of loneliness and low internal locus of control on older adults' perception of their ageing process. Practitioners should focus their interventions not only on loneliness but also on improving the sense of internal locus of control in order to improve subjective accelerated ageing.

7.
Psychol Aging ; 38(6): 534-547, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37227847

ABSTRACT

Older adults have been found to use context to facilitate word recognition at least as efficiently as young adults. This may pose a conundrum, as context use is based on cognitive resources that are considered to decrease with aging. The goal of this study was to shed light on this question by testing age-related differences in context use and the cognitive demands associated with it. The eye movements of 30 young (21-27 years old) and 30 older adults (61-79 years old) were examined as they listened to spoken instructions to touch an image on a monitor. The predictability of the target word was manipulated between trials: nonpredictive (baseline), predictive (context), or predictive of two images (competition). In tandem, listeners were asked to retain one or four spoken digits (low or high cognitive load) for later recall. Separate analyses were conducted for the preceding sentence and the (final) target word. Sentence processing: Older adults were slower than young adults to accumulate evidence for target-word prediction (context condition), and they were more negatively affected by the increase in cognitive load (context and competition). Target-word recognition: No age-related differences were found in word recognition rate or the effect of cognitive load following predictive context (context and competition). Although older adults have greater difficulty processing context, they can use context to facilitate word recognition as efficiently as young adults. These results provide a better understanding of how cognitive processing changes with aging. They may help develop interventions aimed at improving communication in older adults. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subject(s)
Aging , Speech Perception , Humans , Eye-Tracking Technology , Language , Eye Movements , Cognition
8.
J Appl Gerontol ; 42(5): 1113-1117, 2023 05.
Article in English | MEDLINE | ID: mdl-36794638

ABSTRACT

Israel became the first country to offer the second COVID-19 booster vaccination. The study tested, for the first time, the predictive role of booster-related sense of control (SOC_B), trust and vaccination hesitancy (VH) on adoption of the second booster among older adults 7 months later. Four hundred Israelis (≥60 years old), eligible for the first booster, responded online two weeks into the first booster campaign. They completed demographics, self-reports, and first booster vaccination status (early adopters or not). Second booster vaccination status was collected for 280 eligible responders: early and late adopters, vaccinated 4 and 75 days into the second booster campaign, respectively, versus non-adopters. Multinomial logistic regression was conducted, with pseudo R2 = .385. Higher SOC_B and first booster early adoption were predictive of second booster early-vs.-non-adoption (ORs [95% CI]: 1.934 [1.148-3.257] and 4.861 [1.847-12.791], respectively) and of late-vs.-non-adoption (2.031 [1.294-3.188] and 2.092 [0.979-4.472]). Higher trust was only predictive of late-vs.-non-adoption (1.981 [1.03-3.81]), whereas VH was non-predictive. We suggest that older-adult bellwethers, i.e., second booster early adopters, could be predicted by higher SOC_B and first booster early adoption 7 months earlier.


Subject(s)
COVID-19 Vaccines , COVID-19 , Humans , Aged , Israel , Longitudinal Studies , COVID-19/epidemiology , COVID-19/prevention & control , Vaccination
9.
J Autism Dev Disord ; 53(3): 1269-1272, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35507295

ABSTRACT

We recently read the interesting and informative paper entitled "Empathic accuracy and cognitive and affective empathy in young adults with and without autism spectrum disorder" (McKenzie et al. in Journal of Autism and Developmental Disorders 52: 1-15, 2021). This paper expands recent findings from our lab (Ben-David in Journal of Autism and Developmental Disorders 50: 741-756, 2020a; International Journal of Audiology 60: 319-321, 2020b) and a recent theoretical framework (Icht et al. in Autism Research 14: 1948-1964, 2021) that may suggest a new purview for McKenzie et al.'s results. Namely, these papers suggest that young adults with autism spectrum disorder without intellectual disability can successfully recruit their cognitive abilities to distinguish between different simple spoken emotions, but may still face difficulties processing complex, subtle emotions. McKenzie et al. (Journal of Autism and Developmental Disorders 52: 1-15, 2021) extended these findings to the processing of emotions in video clips, with both visual and auditory information.


Subject(s)
Autism Spectrum Disorder , Autistic Disorder , Intellectual Disability , Humans , Young Adult , Autism Spectrum Disorder/psychology , Emotions/physiology , Empathy
10.
Res Dev Disabil ; 133: 104401, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36577332

ABSTRACT

BACKGROUND: Cognitive skills such as sustained attention, inhibition and working memory are essential for speech processing, yet are often impaired in people with ADHD. Offline measures have indicated difficulties in speech recognition against a multi-talker babble (MTB) background for young adults with ADHD (yaADHD). However, to date no study has directly tested online speech processing in adverse conditions for yaADHD. AIMS: To gauge the effects of ADHD on segregating the spoken target word from its sound-sharing competitor under MTB and working-memory (WM) load. METHODS AND PROCEDURES: Twenty-four yaADHD and 22 matched controls, who differed in sustained attention (SA) but not in WM, were asked to follow spoken instructions presented in MTB to touch a named object, while retaining one (low load) or four (high load) digits for later recall. Their eye fixations were tracked. OUTCOMES AND RESULTS: In the high-load condition, speech processing was less accurate and slowed by 140 ms for yaADHD. In the low-load condition, the processing advantage shifted from early perceptual to later cognitive stages. Fixation transitions (hesitations) were inflated for yaADHD. CONCLUSIONS AND IMPLICATIONS: ADHD slows speech processing in adverse listening conditions and increases hesitation as speech unfolds in time. These effects, detected only by online eyetracking, relate to attentional difficulties. We suggest online speech processing as a novel purview on ADHD. WHAT THIS PAPER ADDS: We suggest speech processing in adverse listening conditions as a novel vantage point on ADHD. Successful speech recognition in noise is essential for performance across daily settings: academic, employment and social interactions. It involves several executive functions, such as inhibition and sustained attention. Impaired performance in these functions is characteristic of ADHD. However, to date there is only scant research on speech processing in ADHD.
The current study is the first to investigate online speech processing, as the word unfolds in time, using eyetracking for young adults with ADHD (yaADHD). This method uncovered slower speech processing in multi-talker babble noise for yaADHD compared with matched controls. The performance of yaADHD indicated increased hesitation between the spoken word and sound-sharing alternatives (e.g., CANdle-CANdy). These delays and hesitations, at the single-word level, could accumulate in continuous speech to significantly impair communication in ADHD, with severe implications for quality of life and academic success. Interestingly, whereas yaADHD and controls were matched on WM standardized tests, WM load appears to affect speech processing for yaADHD more than for controls. This suggests that ADHD may lead to inefficient deployment of WM resources that may not be detected when WM is tested alone. Note that these intricate differences could not be detected using traditional offline accuracy measures, further supporting the use of eyetracking in speech tasks. Finally, communication is vital for active living and wellbeing. We suggest paying attention to speech processing in ADHD in treatment and when considering accessibility and inclusion.


Subject(s)
Attention Deficit Disorder with Hyperactivity , Speech Perception , Young Adult , Humans , Speech Perception/physiology , Eye Movements , Quality of Life , Word Processing , Speech Disorders
11.
JMIR Serious Games ; 10(3): e32297, 2022 Jul 28.
Article in English | MEDLINE | ID: mdl-35900825

ABSTRACT

BACKGROUND: The number of serious games for cognitive training in aging (SGCTAs) is proliferating in the market, attempting to combat one of the most feared aspects of aging: cognitive decline. However, the efficacy of many SGCTAs is still questionable. Even the measures used to validate SGCTAs are up for debate, with most studies using cognitive measures that gauge improvement in trained tasks, also known as near transfer. This study takes a different approach, testing the efficacy of an SGCTA, Effectivate, in generating tangible far-transfer improvements in a nontrained task, the Eye tracking of Word Identification in Noise Under Memory Increased Load (E-WINDMIL), which tests speech processing in adverse conditions. OBJECTIVE: This study aimed to validate the use of a real-time measure of speech processing as a gauge of the far-transfer efficacy of an SGCTA designed to train executive functions. METHODS: In a randomized controlled trial that included 40 participants, we tested 20 (50%) older adults before and after self-administering the SGCTA Effectivate training and compared their performance with that of the control group of 20 (50%) older adults. The E-WINDMIL eye-tracking task was administered to all participants by blinded experimenters in 2 sessions separated by 2 to 8 weeks. RESULTS: Specifically, we tested the change between sessions in the efficiency of segregating the spoken target word from its sound-sharing alternative as the word unfolds in time. We found that training with the SGCTA Effectivate improved both early and late speech processing in adverse conditions, with higher discrimination scores in the training group than in the control group (early processing: F1,38=7.371; P=.01; ηp2=0.162 and late processing: F1,38=9.003; P=.005; ηp2=0.192). CONCLUSIONS: This study found the E-WINDMIL measure of speech processing to be a valid gauge for the far-transfer effects of executive function training.
As the SGCTA Effectivate does not train any auditory task or language processing, our results provide preliminary support for the ability of Effectivate to create a generalized cognitive improvement. Given the crucial role of speech processing in healthy and successful aging, we encourage researchers and developers to use speech processing measures, the E-WINDMIL in particular, to gauge the efficacy of SGCTAs. We advocate for increased industry-wide adoption of far-transfer metrics to gauge SGCTAs.
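The ηp² values reported in this abstract follow directly from the F statistics and their degrees of freedom, via the standard identity ηp² = (F·df1) / (F·df1 + df2). A quick check reproduces both reported effect sizes:

```python
# Recovering the reported effect sizes from the F statistics:
# partial eta squared = (F * df1) / (F * df1 + df2).
def partial_eta_squared(f, df1, df2):
    return (f * df1) / (f * df1 + df2)

early = partial_eta_squared(7.371, 1, 38)   # F(1,38) for early processing
late = partial_eta_squared(9.003, 1, 38)    # F(1,38) for late processing
print(round(early, 3), round(late, 3))      # → 0.162 0.192
```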

12.
Int J Lang Commun Disord ; 57(5): 1023-1049, 2022 09.
Article in English | MEDLINE | ID: mdl-35714104

ABSTRACT

'Dysarthria' is a group of motor speech disorders resulting from a disturbance in neuromuscular control. Most individuals with dysarthria cope with communicative restrictions due to speech impairments and reduced intelligibility. Thus, language-sensitive measurements of intelligibility are important in dysarthria neurological assessment. The Frenchay Dysarthria Assessment, 2nd edition (FDA-2), is a validated tool for the identification of the nature and patterns of oro-motor movements associated with different types of dysarthria. The current study conducted a careful culture- and linguistic-sensitive adaptation of the two intelligibility subtests of the FDA-2 (words and sentences) to Hebrew and performed a preliminary validation with relevant clinical populations. First, sets of Hebrew words and sentences were constructed, based on the criteria defined in the FDA-2, as well as on several other factors that may affect performance: emotional valence, arousal and familiarity. Second, the new subtests were validated in healthy older adults (n = 20) and in two clinical groups (acquired dysarthria, n = 15; and developmental dysarthria, n = 19). Analysis indicated that the new subtests were specific and sensitive, valid and reliable: scores significantly differed between healthy older adults and adults with dysarthria, correlated with other subjective measures of intelligibility, and showed high test-retest reliability. The words and sentences intelligibility subtests can be used to evaluate speech disorders in various populations of Hebrew speakers, and thus may be an important addition to the speech-language pathologist's toolbox, for clinical work as well as for research purposes. WHAT THIS PAPER ADDS: What is already known on the subject 'Dysarthria' is a group of disorders reflecting impairments in the strength, speed and precision of movements required for adequate control of the various speech subsystems.
Reduced speech intelligibility is one of the main consequences of all dysarthria subtypes, irrespective of their underlying cause. Indeed, most individuals with dysarthria cope with communicative restrictions due to speech impairments. Thus, language-sensitive measurements of intelligibility are important in dysarthria assessment. The FDA-2's words and sentences subtests present standardized and validated tools for the identification of the nature and patterns of oro-motor movements associated with different types of dysarthria. What this paper adds to existing knowledge The lack of assessment tools in Hebrew poses challenges to clinical evaluation as well as research purposes. The current study conducted a careful culture- and linguistic-sensitive adaptation of the FDA-2 intelligibility subtests to Hebrew and performed a preliminary validation with relevant clinical populations. First, sets of Hebrew words and sentences were constructed, based on the criteria defined in the FDA-2, as well as on several other factors that may affect performance: emotional valence, arousal and familiarity. Second, the new subtests were validated in healthy older adults (n = 20), and in two clinical groups (adults with acquired dysarthria, n = 15; and young adults with developmental dysarthria, n = 19). What are the potential or actual clinical implications of this work? Analyses indicated that the new word and sentence subtests are specific, sensitive, valid and reliable. Namely, (1) they successfully differentiate between healthy individuals and individuals with dysarthria; (2) they correlate with other subjective measures of intelligibility; and (3) they show high test-retest reliability. The words and sentences intelligibility subtests can be used to evaluate speech disorders in various populations of Hebrew speakers. Thus, they may be an important addition to the speech-language pathologist's toolbox, for clinical and research purposes.
The methods described here can be emulated for the adaptation of speech assessment tools to other languages.


Subject(s)
Dysarthria , Speech Intelligibility , Aged , Dysarthria/psychology , Humans , Linguistics , Reproducibility of Results , Speech Disorders/complications , Speech Production Measurement/methods , Young Adult
13.
Front Neurosci ; 16: 846117, 2022.
Article in English | MEDLINE | ID: mdl-35546888

ABSTRACT

Older adults process emotions in speech differently than do young adults. However, it is unclear whether these age-related changes impact all speech channels to the same extent, and whether they originate from a sensory or a cognitive source. The current study adopted a psychophysical approach to directly compare young and older adults' sensory thresholds for emotion recognition in two channels of spoken-emotions: prosody (tone) and semantics (words). A total of 29 young adults and 26 older adults listened to 50 spoken sentences presenting different combinations of emotions across prosody and semantics. They were asked to recognize the prosodic or semantic emotion, in separate tasks. Sentences were presented on a background of speech-spectrum noise ranging from an SNR of -15 dB (difficult) to +5 dB (easy). Individual recognition thresholds were calculated (by fitting psychometric functions) separately for prosodic and semantic recognition. Results indicated that: (1) recognition thresholds were better for young than for older adults, suggesting an age-related general decrease across channels; (2) recognition thresholds were better for prosody than for semantics, suggesting a prosodic advantage; (3) importantly, the prosodic advantage in thresholds did not differ between age groups (thus a sensory source for age-related differences in spoken-emotions processing was not supported); and (4) larger failures of selective attention were found for older adults than for young adults, indicating that older adults experienced larger difficulties in inhibiting irrelevant information. Taken together, the results do not support a sole sensory source, but rather an interplay of cognitive and sensory sources for age-related differences in spoken-emotions processing.
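The threshold estimation described above, fitting a psychometric function to recognition scores across SNRs, can be sketched as follows. The data points and the logistic form are illustrative assumptions, and scipy stands in for whatever fitting software the authors used.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative data: proportion of correct emotion recognition at each SNR (dB).
snr = np.array([-15.0, -10.0, -5.0, 0.0, 5.0])
p_correct = np.array([0.10, 0.30, 0.70, 0.90, 0.97])

def psychometric(x, threshold, slope):
    # Logistic psychometric function; performance is 0.5 at x == threshold.
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

(threshold, slope), _ = curve_fit(psychometric, snr, p_correct, p0=[-5.0, 0.5])
print(f"recognition threshold: {threshold:.1f} dB SNR (slope {slope:.2f})")
```

A lower (more negative) threshold indicates that emotions are recognized under harder noise, which is the sense in which the abstract says thresholds were "better" for young adults and for prosody.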

14.
Front Psychol ; 13: 841466, 2022.
Article in English | MEDLINE | ID: mdl-35478743

ABSTRACT

Difficulties understanding speech form one of the most prevalent complaints among older adults. Successful speech perception depends on top-down linguistic and cognitive processes that interact with the bottom-up sensory processing of the incoming acoustic information. The relative roles of these processes in age-related difficulties in speech perception, especially when listening conditions are not ideal, are still unclear. In the current study, we asked whether older adults with a larger working memory capacity process speech more efficiently than peers with lower capacity when speech is presented in noise, with another task performed in tandem. Using the Eye-tracking of Word Identification in Noise Under Memory Increased Load (E-WINDMIL), an adapted version of the "visual world" paradigm, 36 older listeners were asked to follow spoken instructions presented in background noise, while retaining digits for later recall under low (single-digit) or high (four-digit) memory load. In critical trials, instructions (e.g., "point at the candle") directed listeners' gaze to pictures of objects whose names shared onset or offset sounds with the name of a competitor that was displayed on the screen at the same time (e.g., candy or sandal). We compared listeners with different memory capacities on the time course of spoken word recognition under the two memory loads by testing eye fixations on a named object, relative to fixations on an object whose name shared phonology with the named object. Results indicated two trends. (1) For older adults with lower working memory capacity, increased memory load did not affect online speech processing; however, it impaired offline word recognition accuracy. (2) The reverse pattern was observed for older adults with higher working memory capacity: increased task difficulty significantly decreased online speech processing efficiency but had no effect on offline word recognition accuracy.
Results suggest that in older adults, adaptation to adverse listening conditions is at least partially supported by cognitive reserve. Therefore, additional cognitive capacity may lead to greater resilience of older listeners to adverse listening conditions. The differential effects documented by eye movements and accuracy highlight the importance of using both online and offline measures of speech processing to explore age-related changes in speech perception.

15.
J Appl Gerontol ; 41(7): 1636-1640, 2022 07.
Article in English | MEDLINE | ID: mdl-35379029

ABSTRACT

Israel became the first country to offer the booster COVID-19 vaccination. The study tested, for the first time, the role of sense of control (SOC) due to vaccinations, trust and vaccination hesitancy (VH), and their association with compliance with the booster COVID-19 vaccine among older adults during the first 2 weeks of the campaign. 400 Israeli citizens (≥60 years old), eligible for the booster vaccine, responded online. They completed demographics, self-reports, and booster vaccination status (already vaccinated, booked-a-slot, vaccination intent, and vaccination opposers). Multinomial logistic regression was conducted, with pseudo R2 = .498. Higher SOC and lower VH were related to the difference between early and delayed vaccination (booked-a-slot, OR = 0.7 [0.49-0.99]; 2.2 [1.32-3.62], intent OR = 0.6 [0.42-0.98]; 2.7 [1.52-4.86]), as well as to rejection (OR = 0.3 [0.11-0.89]; 8.5 [3.39-21.16]). Increased trust was only related to the difference between early vaccinations and vaccine rejection (OR = 0.3 [0.11-0.89]). We suggest that SOC, as well as low VH, can be used as positive motivators, encouraging earlier vaccinations in older age.


Subject(s)
COVID-19 , Vaccines , Aged , COVID-19/epidemiology , COVID-19/prevention & control , COVID-19 Vaccines/therapeutic use , Humans , Israel , Vaccination/psychology
16.
Front Psychiatry ; 13: 838903, 2022.
Article in English | MEDLINE | ID: mdl-35360132

ABSTRACT

Objectives: The aim of the current study was to identify difficulties in adapting to normal life once the COVID-19 lockdown had been lifted. Israel was used as a case study, as COVID-19 social restrictions, including a nationwide lockdown, were lifted almost completely by mid-April 2021, following a large-scale vaccination operation. Methods: A sample of 293 middle-aged and older Israeli adults (M age = 61.6 ± 12.8, range 40-85 years) reported on return-to-routine adaptation difficulties (on a novel index), depression, positive solitude, and several demographic factors. Results: Of the participants, 40.4% met the criteria for (at least) mild depressive symptoms. Higher levels of adaptation difficulties were related to higher rates of clinical depressive symptoms. This link was moderated by positive solitude. Namely, the association between return-to-routine adaptation difficulties and depression was indicated mainly for individuals with low positive solitude. Conclusions: The current findings are of special interest for public welfare, as adaptation difficulties were associated with a higher likelihood of clinical depressive symptoms, while positive solitude was found to be an efficient moderator during this period. The large proportion of depressive symptoms that persisted despite the lifting of social restrictions should be taken into consideration by policy makers when designing return-to-routine plans.

17.
Front Psychiatry ; 13: 847455, 2022.
Article in English | MEDLINE | ID: mdl-35386523

ABSTRACT

Patients with schizophrenia (PwS) typically demonstrate deficits in the visual processing of emotions. Less is known about the auditory processing of spoken emotions, as conveyed by the prosodic (tone) and semantic (words) channels. In a previous study, forensic PwS (who had committed violent offenses) identified spoken emotions and integrated the emotional information from both channels similarly to controls. However, their performance indicated larger failures of selective attention, and lower discrimination between spoken emotions, than controls. Given that forensic schizophrenia represents a special subgroup, the current study compared forensic and non-forensic PwS. Forty-five non-forensic PwS listened to sentences conveying four basic emotions presented in the semantic or prosodic channels, in different combinations. They were asked to rate how much they agreed that the sentences conveyed a predefined emotion, focusing on one channel or on the sentence as a whole. Their performance was compared to that of 21 forensic PwS (previous study). The two groups did not differ in selective attention. However, better emotional identification and discrimination, as well as better channel integration, were found for the forensic PwS. Results have several clinical implications: difficulties in spoken-emotion processing might not necessarily relate to schizophrenia; attentional deficits might not be a risk factor for aggression in schizophrenia; and forensic schizophrenia might have unique characteristics related to spoken-emotion processing (motivation, stimulation).

18.
Health Informatics J ; 28(1): 14604582221083483, 2022.
Article in English | MEDLINE | ID: mdl-35349777

ABSTRACT

BACKGROUND: Tinnitus may be a disabling, distressing disorder in which patients report sounds in the absence of an external stimulus. Recent evidence supports the effectiveness of psychological interventions, particularly cognitive behavioral therapy (CBT)-based interventions, for reducing tinnitus-related distress and disability. This study assessed the effectiveness of mobile-delivered cognitive training exercises in reducing tinnitus-related distress. MATERIALS AND METHODS: Out of 26 patients diagnosed with tinnitus, 14 participants completed all 48 levels of the app. Pre-post intervention tinnitus intrusiveness and handicap were evaluated using the short Hebrew version of the Tinnitus Handicap Inventory (H-THI). Mood was assessed using a Visual Analogue Scale (VAS). Participants were instructed to complete 3-4 min of daily training for 14 days. RESULTS: Repeated-measures ANOVA of completers showed a significant, large-effect-size reduction in H-THI scores. Fifty percent of completers showed reliable change (indicated by their Reliable Change Index [RCI] scores). No significant change was found in mood. DISCUSSION: Several minutes a day of training using a CBT-based app targeting maladaptive beliefs may decrease patients' tinnitus intrusiveness and handicap. CONCLUSIONS: Mobile apps can provide access to CBT-based interventions, using an efficient, inviting, and simple platform, addressing the ramifications of tinnitus symptoms.
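The Reliable Change Index mentioned above is conventionally the Jacobson-Truax statistic: the pre-post change score divided by the standard error of the difference, with |RCI| > 1.96 indicating change beyond measurement error. A minimal sketch, with invented THI-style scores and invented SD/reliability values (the abstract does not report the norms used):

```python
import math

def reliable_change_index(pre, post, sd_pre, reliability):
    """Jacobson-Truax RCI: (post - pre) / S_diff, where
    S_diff = sqrt(2 * SEM^2) and SEM = SD * sqrt(1 - r).
    |RCI| > 1.96 indicates reliable change (p < .05)."""
    sem = sd_pre * math.sqrt(1 - reliability)
    s_diff = math.sqrt(2 * sem ** 2)
    return (post - pre) / s_diff

# Hypothetical pre/post scores; SD and test-retest reliability are assumptions.
rci = reliable_change_index(pre=60, post=38, sd_pre=20, reliability=0.9)
# rci is negative and exceeds 1.96 in magnitude: a reliable improvement.
```

Counting the completers whose RCI exceeds the 1.96 cutoff is what yields a figure like the 50% reported here.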


Subject(s)
Cognitive Behavioral Therapy , Mobile Applications , Tinnitus , Cognition , Exercise , Humans , Tinnitus/psychology , Tinnitus/therapy
19.
Psychol Sci ; 33(3): 424-432, 2022 03.
Article in English | MEDLINE | ID: mdl-35175871

ABSTRACT

Attachment security has consistently been found to correlate with relaxed exploration, openness, and mindful attention to incoming information. The present studies explored whether contextually infusing a sense of attachment security (security priming) can improve hearing in young and older adults. In Study 1, participants (29 young, 30 older) performed a standardized pure-tone audiometric-thresholds test twice. In the security-priming condition, a picture of a participant's security-enhancing figure was presented throughout the task. In the control condition, a picture of an unknown person (matched in sex, age, and facial expression) was used as a neutral prime. Study 2 (14 young, 14 older) was almost identical, except that it was preregistered and the neutral prime was a circle. In both studies, participants performed better (had lower hearing thresholds) in the security-priming condition. The current study is the first to show that attachment security improves sensory perception, and these results have meaningful implications for theory and clinical hearing tests.


Subject(s)
Hearing , Noise , Aged , Audiometry, Pure-Tone/methods , Humans , Sound
20.
J Speech Lang Hear Res ; 65(3): 991-1000, 2022 03 08.
Article in English | MEDLINE | ID: mdl-35171689

ABSTRACT

PURPOSE: The Test for Rating Emotions in Speech (T-RES) was developed to assess the processing of emotions in spoken language. In this tool, listeners rate spoken sentences that carry emotional content (anger, happiness, sadness, and neutral) in both semantics and prosody, in different combinations. To date, English, German, and Hebrew versions have been developed, as well as online versions (iT-RES) created to adapt to COVID-19 social restrictions. Since the perception of spoken emotions may be affected by linguistic (and cultural) variables, it is important to compare the acoustic characteristics of the stimuli within and between languages. The goal of the current report was to provide cross-linguistic acoustic validation of the T-RES. METHOD: T-RES sentences in the aforementioned languages were acoustically analyzed in terms of mean F0, F0 range, and speech rate to obtain profiles of acoustic parameters for different emotions. RESULTS: Significant within-language discriminability of prosodic emotions was found for both mean F0 and speech rate. Similarly, these measures were associated with comparable patterns of prosodic emotions for each of the tested languages and with the emotional ratings. CONCLUSIONS: The results demonstrate the independence of prosody and semantics within the T-RES stimuli. These findings illustrate listeners' ability to clearly distinguish between the different prosodic emotions in each language, providing a cross-linguistic validation of the T-RES and iT-RES.
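The acoustic profile described in the method (mean F0, F0 range, speech rate) reduces to simple summary statistics once an F0 contour and a syllable count are available. The sketch below shows only that summarization step, on an invented contour; it is not the T-RES analysis pipeline, and real work would first extract F0 with a pitch tracker.

```python
# Generic sketch: summarize a per-frame F0 contour (Hz) and a syllable count
# into the three measures named in the abstract. Values below are invented.

def acoustic_profile(f0_hz, n_syllables, duration_s):
    """Return (mean F0, F0 range, speech rate in syllables/second).
    Frames with F0 == 0 are treated as unvoiced and excluded."""
    voiced = [f for f in f0_hz if f > 0]
    mean_f0 = sum(voiced) / len(voiced)
    f0_range = max(voiced) - min(voiced)
    rate = n_syllables / duration_s
    return mean_f0, f0_range, rate

# Hypothetical contour for a single token; 0.0 marks unvoiced frames.
mean_f0, f0_range, rate = acoustic_profile(
    [210.0, 0.0, 250.0, 240.0, 0.0, 300.0, 220.0],
    n_syllables=9, duration_s=1.8)
```

Comparing such profiles across emotion categories (e.g., higher mean F0 and faster rate for happiness than sadness is a common pattern) is what the within-language discriminability analysis assesses.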


Subject(s)
COVID-19 , Speech Perception , Acoustics , Emotions , Humans , Language , Linguistics , SARS-CoV-2 , Speech