Results 1 - 20 of 71
1.
J Phys Ther Sci ; 36(6): 330-336, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38832217

ABSTRACT

[Purpose] Falls can significantly affect elderly individuals. However, most current methods used to detect and analyze high-risk conditions rely on simulated falling movements for data collection, which may not accurately represent actual falls. The present study aimed to induce natural falls using visual and auditory stimuli to create unstable walking conditions. [Participants and Methods] Two experiments were performed. The first experiment focused on inducing unstable walking using visual stimuli, whereas the second combined visual and auditory stimuli. To investigate the effects of the stimuli on the induction of unstable walking, our results were compared with those of normal walking conditions. In addition, the two experimental conditions were compared to identify the most effective stimuli. [Results] Both experiments revealed a decrease in step length, increases in step time and step width, and an increase in the coefficient of variation of these measurements, indicating an induced walking pattern with a higher risk of falls. Furthermore, combining visual and auditory stimuli caused a deterioration of inter-limb coordination, as observed through an increased phase coordination index, resulting in further instability during walking. [Conclusion] Visual and auditory stimuli induced unstable walking. In particular, the combination of visual and auditory stimuli with a 0.8-s rhythm increased instability.
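The gait-variability measure reported above can be made concrete. As a rough sketch (the step times below are hypothetical, not the study's data), the coefficient of variation (CV) of a step parameter is simply the standard deviation expressed as a percentage of the mean:

```python
import statistics

def coefficient_of_variation(values):
    """CV (%) = sample standard deviation / mean * 100.
    A higher CV indicates a more variable, less stable gait."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0

# Hypothetical step times (s): steady walking vs. walking
# perturbed by visual/auditory stimuli
normal = [0.52, 0.53, 0.51, 0.52, 0.53]
perturbed = [0.58, 0.49, 0.63, 0.47, 0.60]
print(coefficient_of_variation(normal))     # small CV
print(coefficient_of_variation(perturbed))  # much larger CV
```

An increased CV of step time or step length, as in the perturbed series above, is the kind of change the study interprets as a higher-risk walking pattern.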

2.
J Clin Med ; 13(6)2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38541931

ABSTRACT

Background: In temporal lobe epilepsy (TLE), estimating the potential risk of language dysfunction before surgery is a necessary procedure. Functional MRI (fMRI) is considered the most useful noninvasive method for determining language lateralization. However, there are no standardized language fMRI protocols, and several issues remain unresolved. In particular, the language tasks normally used are predominantly active paradigms that require the overt participation of patients, making assessment difficult for pediatric patients or patients with intellectual disabilities. In this study, task-based fMRI with passive narrative listening was applied to evaluate speech comprehension and thereby estimate language function in Japanese-speaking patients with drug-resistant TLE. Methods: Twenty-one patients (six with intellectual disabilities) participated. Patients listened to passive auditory stimuli with combinations of forward and silent playback, and forward and backward playback. Activation results were extracted using a block design, and lateralization indices were calculated. The fMRI results were compared with the results of the Wada test. Results: The concordance rate between fMRI and the Wada test was 95.2%. Meaningful responses were successfully obtained even from participants with intellectual disabilities. Conclusions: This passive fMRI paradigm can provide safe and easy presurgical language evaluation, particularly for individuals who may not readily engage in active paradigms.
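A lateralization index (LI) of the kind mentioned above is commonly computed from counts of activated voxels in homologous left- and right-hemisphere language regions. The following is a generic sketch with made-up voxel counts, not the authors' exact pipeline:

```python
def laterality_index(left_voxels, right_voxels):
    """LI = (L - R) / (L + R), ranging from -1 (fully
    right-lateralized) to +1 (fully left-lateralized)."""
    return (left_voxels - right_voxels) / (left_voxels + right_voxels)

def classify(li, threshold=0.2):
    # |LI| <= 0.2 is a common (but arbitrary) convention
    # for calling language representation bilateral.
    if li > threshold:
        return "left"
    if li < -threshold:
        return "right"
    return "bilateral"

# Hypothetical voxel counts from a listening-task contrast
li = laterality_index(1450, 380)
print(round(li, 2), classify(li))
```

In presurgical practice, an fMRI-derived LI like this is then compared against the Wada test result, as the study does when reporting its 95.2% concordance.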

3.
Sensors (Basel) ; 24(5)2024 Feb 22.
Article in English | MEDLINE | ID: mdl-38474961

ABSTRACT

This study investigated the impact of auditory stimuli on muscular activation patterns using wearable surface electromyography (EMG) sensors. Recording from four key muscles (the sternocleidomastoid muscle (SCM), cervical erector muscle (CEM), quadriceps muscles (QMs), and tibialis muscle (TM)) and using time-domain features, we differentiated the effects of four interventions: silence, music, positive reinforcement, and negative reinforcement. The results demonstrated distinct muscle responses to the interventions, with the SCM and CEM being the most sensitive to changes and the TM being the most active and stimulus-dependent. Post hoc analyses revealed significant intervention-specific activations in the CEM and TM for specific time points and intervention pairs, suggesting dynamic modulation and time-dependent integration. Multi-feature analysis identified both statistical and Hjorth features as potent discriminators, reflecting diverse adaptations in muscle recruitment, activation intensity, control, and signal dynamics. These features hold promise as potential biomarkers for monitoring muscle function in various clinical and research applications. Finally, muscle-specific Random Forest classification achieved the highest accuracy and area under the ROC curve for the TM, indicating its potential for differentiating interventions with high precision. This study paves the way for personalized neuroadaptive interventions in rehabilitation, sports science, ergonomics, and healthcare by exploiting the diverse and dynamic landscape of muscle responses to auditory stimuli.
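The Hjorth features mentioned above are standard time-domain descriptors (activity, mobility, complexity) computed from a signal epoch and its successive differences. A minimal sketch on a synthetic signal (not the study's data or code):

```python
import math

def hjorth_parameters(signal):
    """Return Hjorth activity (variance), mobility, and
    complexity, computed from first and second differences."""
    n = len(signal)
    mean = sum(signal) / n
    var0 = sum((x - mean) ** 2 for x in signal) / n          # activity
    d1 = [signal[i + 1] - signal[i] for i in range(n - 1)]   # 1st difference
    var1 = sum(x * x for x in d1) / len(d1)
    d2 = [d1[i + 1] - d1[i] for i in range(len(d1) - 1)]     # 2nd difference
    var2 = sum(x * x for x in d2) / len(d2)
    mobility = math.sqrt(var1 / var0)
    complexity = math.sqrt(var2 / var1) / mobility
    return var0, mobility, complexity

# Synthetic two-component oscillation standing in for an EMG epoch
sig = [math.sin(0.3 * i) + 0.1 * math.sin(2.5 * i) for i in range(1000)]
activity, mobility, complexity = hjorth_parameters(sig)
print(activity, mobility, complexity)
```

For a pure sinusoid, complexity is close to 1; mixing in a second frequency component, as above, raises it, which is why complexity can discriminate between differently structured muscle activations.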


Subject(s)
Muscle Contraction , Wearable Electronic Devices , Muscle Contraction/physiology , Psychosocial Intervention , Electromyography , Neck Muscles/physiology
4.
Odontology ; 2024 Feb 03.
Article in English | MEDLINE | ID: mdl-38308677

ABSTRACT

Dental drilling sounds can induce anxiety in some patients. This study aimed to use functional magnetic resonance imaging (fMRI) to assess the relationship between dental fear and auditory stimuli. Thirty-four right-handed individuals (21 women and 13 men; average age, 31.2 years) were selected. The level of dental fear was assessed using the dental fear survey (DFS). Based on a threshold DFS score > 52, participants were categorized into two groups: dental fear (DF) group (n = 12) and control group (n = 22). Two types of stimuli were presented in a single session: dental and neutral sounds. Cerebral activation during the presentation of these sounds was evaluated using contrast-enhanced blood oxygenation level-dependent fMRI. In the DF group, dental sounds induced significantly stronger activation in the left inferior frontal gyrus and left caudate nucleus (one-sample t test, P < 0.001). In contrast, in the control group, significantly stronger activation was observed in the bilateral Heschl's gyri and left middle frontal gyrus (one-sample t test, P < 0.001). Additionally, a two-sample t test revealed that dental sounds induced a significantly stronger activation in the left caudate nucleus in the DF group than in the control group (P < 0.005). These findings suggest that the cerebral activation pattern in individuals with DF differs from that in controls. Increased activation of subcortical regions may be associated with sound memory during dental treatment.

5.
Brain Sci ; 14(2)2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38391706

ABSTRACT

Explored through EEG/MEG, auditory stimuli function as a suitable research probe to reveal various neural activities, including event-related potentials, brain oscillations, and functional connectivity. Accumulating evidence in this field stems from studies investigating neuroplasticity induced by long-term auditory training, specifically cross-sectional studies comparing musicians and non-musicians as well as longitudinal studies with musicians. In contrast, studies that address the neural effects of short-term interventions, whose duration lasts from minutes to hours, are only beginning to be featured. Over the past decade, an increasing body of evidence has shown that short-term auditory interventions evoke rapid changes in neural activities, and oscillatory fluctuations can be observed even in the pre-stimulus period. In this scoping review, we divided the extracted neurophysiological studies into three groups to discuss neural activities with short-term auditory interventions: the pre-stimulus period, during stimulation, and a comparison of before and after stimulation. We show that oscillatory activities vary depending on the context of the stimuli and are greatly affected by the interplay of bottom-up and top-down modulational mechanisms, including attention. We conclude that the observed rapid changes in neural activities in the auditory cortex and the higher-order cognitive parts of the brain are causally attributable to short-term auditory interventions.

6.
Exp Brain Res ; 242(4): 937-947, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38334793

ABSTRACT

Humans are quite accurate and precise in interception performance. So far, it is unclear what role auditory information plays in spatiotemporal accuracy and consistency during interception. In the current study, interception performance was measured as the spatiotemporal accuracy and consistency of when and where a virtual ball was intercepted on a visible line displayed on a screen, based on auditory information alone. We predicted that participants would more accurately indicate when the ball would cross the target line than where it would cross, because human hearing is particularly sensitive to temporal parameters. In a within-subject design, we manipulated auditory intensity (52, 61, 70, 79, 88 dB) using a sound stimulus programmed to be perceived as moving over the screen in an inverted C-shaped trajectory. Results showed that the louder the sound, the better the temporal accuracy, but the worse the spatial accuracy. We argue that louder sounds increased attention toward auditory information when performing interception judgments. How balls are intercepted, and how sound intensity may practically contribute to temporal accuracy and consistency, are discussed from a theoretical perspective of modality-specific interception behavior.


Subject(s)
Hearing , Sound , Humans , Acoustic Stimulation , Attention , Hand
7.
J Fish Biol ; 104(5): 1579-1586, 2024 May.
Article in English | MEDLINE | ID: mdl-38417911

ABSTRACT

The ability to detect and respond to the presence of predation risk is under intense selection, especially for small-bodied fishes. Damselfishes (Pomacentridae) use auditory vocalizations during inter- and intrasexual interactions, but it is not known if they can use vocalizations in the context of predator-prey interactions. Here, we test if yellowtail damselfish, Chrysiptera parasema, can learn to associate the territorial vocalization of heterospecific humbug damselfish Dascyllus aruanus with predation risk. In conditioning trials yellowtail damselfish were presented with the territorial call of humbug damselfish while either blank water (control treatment) or chemical alarm cue derived from damaged skin of conspecific yellowtail damselfish was introduced. In conditioning trials, fish exposed to alarm cue exhibited increased activity and spent more time in the water column relative to fish that received the control treatment. After a single conditioning trial, conditioned fish were exposed again to the territorial call of humbug damselfish. Fish conditioned with the call + alarm cue showed increased activity and spent more time in the water column relative to fish that had been conditioned with the control treatment. These data indicate associative learning of an auditory stimulus with predation risk in a species that regularly uses auditory signalling in other contexts. Recordings of conditioning and test trials failed to detect any acoustic calls produced by test fish in response to the perception of predation risk. Thus, although yellowtail damselfish can associate risk with auditory stimuli, we found no evidence that they produce an alarm call.


Subject(s)
Cues , Perciformes , Predatory Behavior , Vocalization, Animal , Animals , Perciformes/physiology , Territoriality
8.
J Appl Anim Welf Sci ; : 1-10, 2023 Sep 19.
Article in English | MEDLINE | ID: mdl-37726876

ABSTRACT

Environmental changes such as a veterinary visit can cause stress in cats. Studies have attempted to develop stress-management strategies targeting the sensory systems. Although species-appropriate music that includes cat-affiliative sounds (e.g., purring and suckling sounds) has been shown to relieve stress in cats, little is known about whether cat sounds alone are effective for stress management. This study was conducted to investigate the effects of species-relevant auditory stimuli on stress in cats exposed to a novel environment. During the 28-day experimental period, 20 cats received four types of sound treatment: silence (T1), cat purring (T2), cat eating sounds (T3), and a mix of T2 and T3 (T4), presented in a novel environment in random order with 1-week intervals between treatments. Cats' behaviors were recorded during each 10-min test. Results showed that T4 reduced visual scanning (P = 0.017) without significantly affecting other behaviors, compared with the other treatments. Together, the two types of cat-specific sounds did not exert pronounced stress-relieving effects on cats exposed to a novel environment.

9.
Eur J Ageing ; 20(1): 29, 2023 Jun 30.
Article in English | MEDLINE | ID: mdl-37389678

ABSTRACT

BACKGROUND: Detecting impaired naming capacity contributes to the detection of mild (MildND) and major (MajorND) neurocognitive disorder due to Alzheimer's disease (AD). The Test for Finding Word retrieval deficits (WoFi) is a new, 50-item, auditory stimulus-based instrument. OBJECTIVE: The study aimed to adapt WoFi to the Greek language, to develop a short version of WoFi (WoFi-brief), and to compare the item frequency and utility of both instruments with the naming subtest of the widely used Addenbrooke's Cognitive Examination III (ACEIIINaming) in detecting MildND and MajorND due to AD. METHODS: This cross-sectional validation study included 99 individuals without neurocognitive disorder, as well as 114 and 49 patients with MildND and MajorND due to AD, respectively. The analyses included categorical principal components analysis using Cramer's V, assessment of the frequency of test items based on corpora of television subtitles, comparison analyses, kernel Fisher discriminant analysis models, proportional odds logistic regression (POLR) models, and stratified repeated random subsampling with recursive partitioning into training and validation sets (70/30 ratio). RESULTS: WoFi and WoFi-brief, which consists of 16 items, have comparable item frequency and utility and outperform ACEIIINaming. According to the results of the discriminant analysis, the misclassification error was 30.9%, 33.6% and 42.4% for WoFi, WoFi-brief and ACEIIINaming, respectively. In the validation regression model including WoFi, the mean misclassification error was 33%; in the models including WoFi-brief and ACEIIINaming, it was 31% and 34%, respectively. CONCLUSIONS: WoFi and WoFi-brief are more effective than ACEIIINaming in detecting MildND and MajorND due to AD.

10.
Brain Res ; 1807: 148309, 2023 05 15.
Article in English | MEDLINE | ID: mdl-36870465

ABSTRACT

OBJECTIVES: Recent evidence indicates that the hippocampus is important for conditioned fear memory (CFM). However, few studies have considered the contributions of different cell types to this process, or the transcriptome changes that accompany it. The purpose of this study was to explore the transcriptional regulatory genes and the targeted cells that are altered by CFM reconsolidation. METHODS: A fear conditioning experiment was established in adult male C57 mice; after a tone-cued CFM reconsolidation test on day 3, hippocampal cells were dissociated. Using single-cell RNA sequencing (scRNA-seq), alterations in transcriptional gene expression were detected, and cell-cluster analyses were performed and compared with the sham group. RESULTS: Seven non-neuronal and eight neuronal cell clusters (including four known neuron types and four newly identified neuronal subtypes) were identified. Among them, CA subtype 1 carries the characteristic gene markers Ttr and Ptgds and is speculated to arise from acute stress and to promote the production of CFM. KEGG pathway enrichment revealed differences in the expression of certain molecular protein functional subunits of the long-term potentiation (LTP) pathway between two types of neurons (DG and CA1) and astrocytes, providing a new transcriptional perspective on the role of the hippocampus in CFM reconsolidation. More importantly, a correlation between CFM reconsolidation and genes linked to neurodegenerative diseases is substantiated by the results of cell-cell interaction and KEGG pathway enrichment analyses. Further analysis shows that CFM reconsolidation inhibits the Alzheimer's disease (AD) risk-factor genes App and ApoE and activates the protective gene Lrp1.
CONCLUSIONS: This study reports the transcriptional changes in hippocampal cells driven by CFM, which confirm the involvement of the LTP pathway and suggest the possibility of CFM-like behavior in preventing AD. However, the current research is limited to normal C57 mice, and further studies in AD model mice are needed to confirm this preliminary conclusion.


Subject(s)
Hippocampus , Phobic Disorders , Mice , Male , Animals , Hippocampus/metabolism , Neurons/physiology , Cues , Fear/physiology
11.
Article in English | MEDLINE | ID: mdl-36901204

ABSTRACT

Challenging behavior (CB) comprises a group of behaviors, reactions, and symptoms due to dementia that can be challenging for caregivers. This study aims to investigate the influence of acoustics on CB in people with dementia (PwD). An ethnographic method was used to study the daily life of PwD in their nursing homes, with a specific focus on how people react to everyday environmental sounds. Thirty-five residents were included in the sample based on purposeful, homogeneous group characteristics and sampling. Empirical data were collected using 24/7 participatory observations. The collected data were analyzed using a phenomenological-hermeneutical method: a naïve understanding, a structural analysis, and a comprehensive understanding. The results show that the onset of CB depends on whether the resident feels safe and is triggered by an excess or lack of stimuli. Whether and when an excess or shortage of stimuli affects a person is individual, depending on various factors, including the person's state and the time of day; the nature of the stimuli and their familiarity or strangeness are also determining factors for the onset and progression of CB. The results can form an essential basis for developing soundscapes that make PwD feel safe and reduce CB.


Subject(s)
Dementia , Humans , Nursing Homes , Psychotherapy , Emotions , Acoustics
12.
J Psycholinguist Res ; 52(1): 153-177, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35028824

ABSTRACT

Pupil dilation response has been shown to reflect different levels of sentence processing during prosodic and syntactic processing in language comprehension. Our pupillometry experiment investigated whether pupil diameter is sensitive to the auditory sentence processing involved in comprehending congruent and incongruent statements. Twenty-one participants were presented with 300 auditory stimuli consisting of syntactically and/or prosodically congruent and incongruent sentences in Turkish. The pupillary responses were significant only for syntactically incongruent sentences and for sentences that were both syntactically and prosodically incongruent, indicating that prosody had no significant effect on its own. Based on the hypothesis that both prosodic and syntactic processing demand cognitive resources during auditory sentence comprehension, we expected an increase in pupil diameter for both processes. Our findings are consistent with previous reports that pupil size increases under syntactic manipulation but, contrary to previous studies, show that prosodic processing alone does not increase pupil size.


Subject(s)
Pupil , Speech Perception , Humans , Pupil/physiology , Speech Perception/physiology , Language , Auditory Perception , Comprehension/physiology
13.
Behav Res Methods ; 55(3): 1121-1140, 2023 04.
Article in English | MEDLINE | ID: mdl-35581438

ABSTRACT

Music is a ubiquitous stimulus known to influence human affect, cognition, and behavior. In the context of eating behavior, music has been associated with food choice, intake and, more recently, taste perception. In the latter case, the literature has reported consistent patterns of association between auditory and gustatory attributes, suggesting that individuals reliably recognize taste attributes in musical stimuli. This study presents subjective norms for a new set of 100 instrumental music stimuli, including basic taste correspondences (sweetness, bitterness, saltiness, sourness), emotions (joy, anger, sadness, fear, surprise), familiarity, valence, and arousal. This stimulus set was evaluated by 329 individuals (83.3% women; Mage = 28.12, SD = 12.14), online (n = 246) and in the lab (n = 83). Each participant evaluated a random subsample of 25 soundtracks and responded to self-report measures of mood and taste preferences, as well as the Goldsmiths Musical Sophistication Index (Gold-MSI). Each soundtrack was evaluated by 68 to 97 participants (Mdn = 83), and descriptive results (means, standard deviations, and confidence intervals) are available as supplemental material at osf.io/2cqa5 . Significant correlations between taste correspondences and emotional/affective dimensions were observed (e.g., between sweetness ratings and pleasant emotions). Sex, age, musical sophistication, and basic taste preferences presented few, small to medium associations with the evaluations of the stimuli. Overall, these results suggest that the new Taste & Affect Music Database is a relevant resource for research and intervention with musical stimuli in the context of crossmodal taste perception and other affective, cognitive, and behavioral domains.


Subject(s)
Music , Taste Perception , Humans , Female , Adult , Male , Taste , Music/psychology , Emotions , Affect
14.
Clin EEG Neurosci ; 54(2): 160-163, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36198020

ABSTRACT

Triggering or modulation of seizures and rhythmic EEG patterns by external stimuli is well known, with the most common clinical appearance being stimulus-induced periodic discharges (SI-PDs) elicited by physical or auditory stimulation. In contrast, stimulus-terminated periodic discharges (ST-PDs), that is, periodic discharges stopped by external stimuli, are an extremely rare electroencephalographic (EEG) finding. We report a 20-year-old woman with marked psychomotor developmental delay of unknown cause, with frequent EEG patterns of long-lasting (10-60 s) bilateral paroxysmal high-voltage slow waves with occasional spikes, misdiagnosed as non-convulsive status epilepticus. However, no apparent clinical change was noted by the technician, the physician, or her mother during these subclinical ictal EEG recordings. Interestingly, these epileptic discharges were repeatedly and abruptly interrupted by sudden verbal stimuli during the EEG. Whole exome sequencing and genotyping were performed to investigate a possible genetic etiology, revealing two sequence variants, a frameshift variant of CACNA1H NM_021098.3:c.1701del;p.Asp568ThrfsTer15 and a missense variant of GRIN2D NM_000836.4:c.1783A>T;p.Thr595Ser, as well as a copy number variant, a partial deletion of the ATP6V1A gene arr [hg19]3q13.31(113,499,698_113,543,081)x1, as possible pathogenic candidates. Subclinical periodic discharges terminated by verbal stimuli are a very rare manifestation and warrant particular attention. External modulation of ictal-appearing EEG patterns is important for identifying stimulus-terminated EEG patterns.


Subject(s)
Epilepsy , Status Epilepticus , Female , Humans , Young Adult , Adult , Electroencephalography/adverse effects , Status Epilepticus/diagnosis , Seizures/complications , Epilepsy/diagnosis , Acoustic Stimulation
15.
Front Neurosci ; 16: 995438, 2022.
Article in English | MEDLINE | ID: mdl-36340785

ABSTRACT

Cognitive deficits are common in Parkinson's disease (PD) and range from mild cognitive impairment to dementia, often dramatically reducing quality of life. Physiological models have shown that attention and memory are predicated on the brain's ability to process time. Perception has been shown to be increased or decreased by activation or deactivation of dopaminergic neurons, respectively. Here we investigate differences in time perception between patients with PD and healthy controls, measuring differences across sub-second and second time intervals. Sensitivity and error in perception, as well as response times, were calculated. Additionally, we investigated intra-individual response variability and the effect of participant devices on both reaction time and sensitivity. Patients with PD showed impaired sensitivity in discriminating between durations of both visual and auditory stimuli compared to healthy controls. Although initially designed as an in-person study, the experiment was adapted into an online study because of the pandemic. This adaptation provided a unique opportunity to enroll a larger number of international participants and to use this study to evaluate the feasibility of future virtual studies focused on cognitive impairment. To our knowledge, this is the only time perception study focusing on PD that measures differences in perception using both auditory and visual stimuli. The cohort involved is the largest to date, comprising over 800 participants.

16.
J Neural Eng ; 19(6)2022 11 11.
Article in English | MEDLINE | ID: mdl-36317357

ABSTRACT

Objective. Auditory brain-computer interfaces (BCIs) enable users to select commands based on the brain activity elicited by auditory stimuli. However, existing auditory BCI paradigms cannot increase the number of available commands without decreasing the selection speed, because each stimulus must be presented independently and sequentially under the standard oddball paradigm. To solve this problem, we propose a double-stimulus paradigm that presents multiple auditory stimuli simultaneously. Approach. To augment an existing auditory BCI paradigm, the most discriminable sound was chosen following a subjective assessment. The new sound was located on the right-hand side and presented simultaneously with an existing sound from the left-hand side. A total of six sounds were used to implement the auditory BCI with a 6 × 6 letter matrix. We employed semi-supervised learning (SSL) and prior probability distribution tuning to improve the accuracy of the paradigm. The SSL method involved updating the classifier weights, and the prior probability distributions were adjusted using three types of distributions: uniform, empirical, and extended empirical (e-empirical). Performance was evaluated based on BCI accuracy and information transfer rate (ITR). Main results. The double-stimulus paradigm yielded a BCI accuracy of 67.89 ± 11.46% and an ITR of 2.67 ± 1.09 bits min-1 in the absence of SSL and with the uniform distribution. The proposed combination of SSL with the e-empirical distribution improved the BCI accuracy and ITR to 74.59 ± 12.12% and 3.37 ± 1.27 bits min-1, respectively. Event-related potential analysis revealed that contralateral and right-hemispheric dominances contributed to the improvement in BCI performance. Significance. Our study demonstrated that a BCI based on multiple simultaneous auditory stimuli, incorporating SSL and the e-empirical prior distribution, can increase the number of commands without sacrificing typing speed beyond an acceptable level of accuracy.
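ITR figures like those quoted above are conventionally computed with the standard Wolpaw formula, which combines the number of selectable commands N, the selection accuracy P, and the time per selection. A sketch follows; the 60-second trial duration is an assumed value, since the abstract does not state the paper's timing:

```python
import math

def itr_bits_per_min(n_classes, accuracy, trial_seconds):
    """Wolpaw ITR: bits per selection, scaled to bits per minute."""
    p = accuracy
    if p <= 0.0:
        raise ValueError("accuracy must be positive")
    bits = math.log2(n_classes)
    if p < 1.0:
        # Entropy penalty for errors, spread over the other classes
        bits += p * math.log2(p) \
              + (1.0 - p) * math.log2((1.0 - p) / (n_classes - 1))
    return bits * 60.0 / trial_seconds

# 36 commands (6 x 6 letter matrix) at 74.59% accuracy;
# 60 s per selection is a hypothetical timing, not the paper's.
print(round(itr_bits_per_min(36, 0.7459, 60.0), 2))
```

The formula makes the trade-off in the abstract explicit: adding commands raises log2(N) but, if accuracy drops, the error-entropy terms can erase the gain.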


Subject(s)
Brain-Computer Interfaces , Acoustic Stimulation/methods , Evoked Potentials , Supervised Machine Learning , Probability , Electroencephalography/methods , Event-Related Potentials, P300
17.
J Neurosci Methods ; 379: 109661, 2022 09 01.
Article in English | MEDLINE | ID: mdl-35817307

ABSTRACT

BACKGROUND: Brain-computer interfaces (BCIs) are a promising tool for communication with completely locked-in state (CLIS) patients. Despite the great efforts already made by the BCI research community, the cases of success are still very few, very exploratory, limited in time, and based on simple 'yes/no' paradigms. NEW METHOD: A P300-based BCI is proposed comparing two conditions, one corresponding to purely spatial auditory stimuli (AU-S) and the other corresponding to hybrid visual and spatial auditory stimuli (HVA-S). In the HVA-S condition, there is a semantic, temporal, and spatial congruence between visual and auditory stimuli. The stimuli comprise a lexicon of 7 written and spoken words. Spatial sounds are generated through the head-related transfer function. Given the good results obtained with 10 able-bodied participants, we investigated whether a patient entering CLIS could use the proposed BCI. RESULTS: The able-bodied group achieved 71.3 % and 90.5 % online classification accuracy for the auditory and hybrid BCIs respectively, while the patient achieved 30 % and chance level accuracies, for the same conditions. Notwithstanding, the patient's event-related potentials (ERPs) showed statistical discrimination between target and non-target events in different time windows. COMPARISON WITH EXISTING METHODS: The results of the control group compare favorably with the state-of-the-art, considering a 7-class BCI controlled visual-covertly and with auditory stimuli. The integration of visual and auditory stimuli has not been tested before with CLIS patients. CONCLUSIONS: The semantic, temporal, and spatial congruence of the stimuli increased the performance of the control group, but not of the CLIS patient, which can be due to impaired attention and cognitive function. The patient's unique ERP patterns make interpretation difficult, requiring further tests/paradigms to decouple patients' responses at different levels (reflexive, perceptual, cognitive). 
The ERP discrimination observed indicates that a simplification of the proposed approaches may be feasible.


Subject(s)
Amyotrophic Lateral Sclerosis , Brain-Computer Interfaces , Electroencephalography/methods , Evoked Potentials/physiology , Humans , Semantics
18.
J Commun Disord ; 99: 106241, 2022.
Article in English | MEDLINE | ID: mdl-35728450

ABSTRACT

OBJECTIVE: People with dysphonia are judged more negatively than peers with normal vocal quality. This preliminary study aims to (1) investigate correlations between both auditory-perceptual and objective measures of the vocal quality of dysphonic and non-dysphonic speakers and the attitudes of listeners, and (2) discover whether these attitudes towards people with dysphonia vary for different types of stimuli: auditory (A) stimuli and combined auditory-visual (AV) stimuli. Visual (V) stimuli were included as a control condition. METHOD: Ten judges with no experience in the evaluation of dysphonia were asked to rate A, AV, and V stimuli of 14 different speakers (10 dysphonic and 4 non-dysphonic). Cognitive attitudes, evaluations of voice characteristics, and behavioral attitudes were examined. Pearson and Spearman correlation coefficients were calculated to examine correlations between attitude scores and both Dysphonia Severity Index (DSI) values and perceptual vocal quality, as assessed by a speech-language pathologist (PVQSLP) or by the judges (PVQjudge). Linear mixed model (LMM) analyses were conducted to investigate differences between speakers and stimulus conditions. RESULTS: Statistically significant correlations were found between both perceptual and objective measures of vocal quality and mean attitude scores for A and AV stimuli, indicating increasingly negative attitudes with increasing dysphonia severity. Fewer statistically significant correlations were found for the combined AV stimuli than for the A stimuli, and no significant correlations were found for V stimuli. LMM analyses revealed significant group effects for several cognitive attitudes. CONCLUSION: Generally, people with dysphonia are judged more negatively by listeners than peers without dysphonia. However, the findings of this study suggest a positive influence of visual cues on the judges' cognitive and behavioral attitudes towards dysphonic speakers. Further research is needed to investigate the significance of this influence.


Subject(s)
Dysphonia , Speech Perception , Humans , Severity of Illness Index , Speech Acoustics , Voice Quality
19.
Behav Res Methods ; 54(1): 378-392, 2022 02.
Article in English | MEDLINE | ID: mdl-34240338

ABSTRACT

Web-based experimental testing has seen exponential growth in psychology and cognitive neuroscience. However, paradigms involving affective auditory stimuli have yet to adapt to the online approach due to concerns about the lack of experimental control and other technical challenges. In this study, we assessed whether sounds commonly used to evoke affective responses in the lab can be used online. Using recent developments that increase sound presentation quality, we selected 15 commonly used sound stimuli and assessed their impact on valence and arousal states in a web-based experiment. Our results reveal good inter-rater and test-retest reliabilities, with results comparable to in-lab studies. Additionally, we compared a variety of previously used unpleasant stimuli, allowing us to identify the most aversive among these sounds. Our findings demonstrate that affective sounds can be reliably delivered through web-based platforms, helping to facilitate the development of new auditory paradigms for affective online experiments.


Subject(s)
Arousal , Sound , Acoustic Stimulation/methods , Arousal/physiology , Auditory Perception , Humans , Internet , Reproducibility of Results
20.
J Mot Behav ; 54(1): 67-79, 2022.
Article in English | MEDLINE | ID: mdl-33715604

ABSTRACT

Music and metronomes differentially impact movement performance. The current experiment presented metronome and drum beats in simple and complex rhythms before goal-directed reaching movements, while also quantifying enjoyment. Auditory conditions were completed with and without visual feedback and were blocked and counterbalanced. There were no differences between simple and complex rhythms, indicating that rhythmic information alone is sufficient to benefit performance. The drum elicited shorter movement times and higher peak velocities, without an increase in spatial variability. Reaction times were moderately correlated with ratings of enjoyment. These data provide evidence that the source of an auditory stimulus impacts movement performance of a goal-directed reaching task. Results are contextualized within models of goal-directed reaching to elucidate mechanisms contributing to performance improvements.


Subject(s)
Goals , Psychomotor Performance , Acoustic Stimulation , Auditory Perception , Humans , Motivation , Movement , Reaction Time