Results 1 - 20 of 27
1.
HardwareX ; 18: e00529, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38690151

ABSTRACT

Understanding the somatosensory system and its abnormalities requires devices that can accurately stimulate the human skin. New methods for assessing the somatosensory system can enhance the diagnosis, treatment, and prognosis of individuals with somatosensory impairments. This work therefore describes the design of NeuroSense, a tactile stimulator that evokes three types of everyday sensations (touch, air, and vibration). The prototype is intended to enable quantitative assessments of the functionality of the somatosensory system and of the abnormal conditions that affect quality of life. In addition, the device has been shown to deliver varying intensities and onset latencies that produce somatosensory evoked potentials and energy desynchronization over the somatosensory cortex.

2.
Front Hum Neurosci ; 18: 1287544, 2024.
Article in English | MEDLINE | ID: mdl-38638806

ABSTRACT

Introduction: Assistive technologies for learning aim to promote academic skills such as reading and mathematics. These technologies mainly comprise mobile and web apps addressed to children with learning difficulties. Nevertheless, most applications lack a pedagogical foundation, and selecting suitable technology for educational purposes is challenging. Hence, this protocol proposes the psychophysiological assessment of an online method for learning (OML) named Smartick. This platform comprises reading and math activities for learning training. In this protocol, individual monitoring of each child is proposed to determine the learning progress attributable to Smartick. Methods and analysis: One hundred and twelve children aged between 8 and 12 who present reading or math difficulties, as determined by a rigorous psychometric evaluation, will be recruited. The study comprises four sessions. In sessions 1 and 2, collective and individual psychometric evaluations will be performed, respectively. Reading and mathematical proficiency will be assessed, as well as attentional levels and intellectual quotient. Subsequently, each child will be semi-randomly assigned to either an experimental or a control group. Afterward, a first EEG will be collected for all children in session 3. Then, the experimental groups will use Smartick for 3 months in addition to their traditional learning method, whereas the control groups will only continue with their traditional learning method. Finally, session 4 will consist of a second psychometric evaluation and another EEG, so that psychophysiological parameters indicating learning improvements due to the OML, regardless of the traditional learning method at hand, can be identified. Discussion: To date, few studies have validated learning improvement due to assistive technologies for learning. This proposal therefore presents a psychophysiological evaluation addressed to children with reading or math difficulties who will be trained with an OML.

3.
BMC Med ; 22(1): 121, 2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38486293

ABSTRACT

BACKGROUND: Socio-emotional impairments are among the diagnostic criteria for autism spectrum disorder (ASD), but current evidence has substantiated both altered and intact recognition of emotional prosody. Here, a Bayesian framework of perception is considered, suggesting that the oversampling of sensory evidence would impair perception within highly variable environments, whereas reliable hierarchical structures for spectral and temporal cues would foster emotion discrimination by autistics. METHODS: Event-related spectral perturbations (ERSP) extracted from electroencephalographic (EEG) data indexed the perception of anger, disgust, fear, happiness, neutral, and sadness prosodies while listening to speech uttered by (a) human or (b) synthesized voices characterized by reduced volatility and variability of acoustic environments. The assessment of perceptual mechanisms was extended to the visual domain by analyzing behavioral accuracy in a non-social task in which the dynamics of precision weighting between bottom-up evidence and top-down inferences were emphasized. Eighty children (mean age 9.7 years; standard deviation 1.8) volunteered, including 40 autistics. Symptomatology was assessed at the time of the study via the Autism Diagnostic Observation Schedule, Second Edition, and parents' responses on the Autism Spectrum Rating Scales. A mixed within-between analysis of variance was conducted to assess the effects of group (autism versus typical development), voice, emotion, and the interactions between factors. A Bayesian analysis was implemented to quantify the evidence in favor of the null hypothesis in case of non-significance. Post hoc comparisons were corrected for multiple testing. RESULTS: Autistic children presented impaired emotion differentiation while listening to speech uttered by human voices, which improved when the acoustic volatility and variability of the voices were reduced. Divergent neural patterns were observed between neurotypicals and autistics, emphasizing different mechanisms of perception. Accordingly, behavioral measurements on the visual task were consistent with the over-precision ascribed to environmental variability (sensory processing), which weakened performance. Unlike autistic children, neurotypicals could differentiate the emotions induced by all voices. CONCLUSIONS: This study outlines behavioral and neurophysiological mechanisms that underpin responses to sensory variability. Neurobiological insights into the processing of emotional prosodies emphasized the potential of acoustically modified emotional prosodies to improve emotion differentiation by autistics. TRIAL REGISTRATION: BioMed Central ISRCTN Registry, ISRCTN18117434. Registered on September 20, 2020.
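As an annotation to the statistical approach mentioned in this abstract (a mixed within-between ANOVA followed by Bayesian quantification of null results), the sketch below shows how such an analysis can be outlined with pingouin. The data frame layout, factor levels, and column names are illustrative assumptions on synthetic data, not the authors' actual pipeline.

```python
# Sketch: mixed within-between ANOVA with a Bayes-factor follow-up on a
# synthetic long-format table standing in for per-child ERSP values.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
rows = []
for i in range(40):
    group = "autism" if i < 20 else "typical"
    for voice in ("human", "synthesized"):
        rows.append({"subject": f"s{i:02d}", "group": group,
                     "voice": voice, "ersp": rng.normal(0, 1)})
df = pd.DataFrame(rows)

# Group is the between factor, voice the within factor.
aov = pg.mixed_anova(data=df, dv="ersp", within="voice",
                     between="group", subject="subject")
print(aov)

# For a non-significant between-group contrast, pingouin's t-test also
# reports a Bayes factor (BF10); small BF10 values support the null.
asd = df[df.group == "autism"].groupby("subject").ersp.mean()
td = df[df.group == "typical"].groupby("subject").ersp.mean()
print(pg.ttest(asd, td)[["T", "p-val", "BF10"]])
```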


Subject(s)
Autism Spectrum Disorder , Autistic Disorder , Child , Humans , Autistic Disorder/diagnosis , Speech , Autism Spectrum Disorder/diagnosis , Bayes Theorem , Emotions/physiology , Acoustics
4.
Data Brief ; 53: 110142, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38357451

ABSTRACT

The present database contains the brain activity of subjective tinnitus sufferers while they attempt to identify their tinnitus sound. The main objective of this database is to provide spontaneous electroencephalographic (EEG) activity at rest, and evoked EEG activity recorded while tinnitus sufferers attempt to identify their tinnitus sound among 54 tinnitus sound examples. For the database, 37 volunteers were recruited: 15 without tinnitus (Control Group - CG) and 22 with tinnitus (Tinnitus Group - TG). For EEG recording, 30 channels were used to record two conditions: 1) a basal condition, in which the volunteer remained at rest with eyes open for two minutes; and 2) an active condition, in which the volunteer had to identify his/her sound stimulus by pressing a key. For the active condition, a tinnitus-sound library was generated in accordance with the most typical acoustic properties of tinnitus. The library consisted of ten pure tones (250 Hz, 500 Hz, 1 kHz, 2 kHz, 3 kHz, 3.5 kHz, 4 kHz, 6 kHz, 8 kHz, 10 kHz), a White Noise (WN), a Narrow-Band noise of High frequencies (NBH, 4 kHz-10 kHz), a Narrow-Band noise of Medium frequencies (NBM, 1 kHz-4 kHz), a Narrow-Band noise of Low frequencies (NBL, 250 Hz-1 kHz), the ten pure tones combined with WN, the ten pure tones superimposed with NBH, the ten tones with NBM, and the ten pure tones combined with NBL. In total, 54 tinnitus sounds were presented to both groups. In the case of the CG, volunteers had to identify a 3.5 kHz sound. In addition to the EEG information, a csv file with audiometric and psychoacoustic information on the volunteers is provided. For the TG, this information refers to: 1) hearing level, 2) type of tinnitus, 3) tinnitus frequency, 4) tinnitus perception, 5) the Hospital Anxiety and Depression Scale (HADS), and 6) the Tinnitus Functional Index (TFI). For the CG, the information refers to: 1) hearing level and 2) the HADS.
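As an illustration of the kind of stimuli in this library, the sketch below generates a pure tone and a band-limited noise with NumPy and SciPy. The abstract does not specify durations, levels, or filter orders, so those values are assumptions.

```python
# Sketch: a 4 kHz pure tone and an NBH-like noise (4-10 kHz), plus their
# superposition. Sampling rate, duration, level, and filter order assumed.
import numpy as np
from scipy.signal import butter, sosfilt
from scipy.io import wavfile

fs = 44100          # assumed sampling rate for stimulus generation
dur = 2.0           # assumed duration in seconds
t = np.arange(int(fs * dur)) / fs

tone_4k = 0.5 * np.sin(2 * np.pi * 4000 * t)            # pure tone at 4 kHz

white = np.random.default_rng(0).standard_normal(t.size)
sos = butter(6, [4000, 10000], btype="bandpass", fs=fs, output="sos")
nbh = sosfilt(sos, white)                                # NBH: 4-10 kHz noise
nbh = 0.5 * nbh / np.max(np.abs(nbh))                    # normalize level

combined = tone_4k + nbh                                 # tone superimposed on NBH
combined = (combined / np.max(np.abs(combined))).astype(np.float32)
wavfile.write("tone4k_plus_nbh.wav", fs, combined)
```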

6.
Sci Data ; 10(1): 659, 2023 09 28.
Article in English | MEDLINE | ID: mdl-37770457

ABSTRACT

Acoustic characterizations of different locations are necessary to obtain relevant information on their behavior, particularly for places that are not fully understood or whose purpose is still unknown because they belong to cultures that no longer exist. Acoustic measurements were conducted in the archaeological zone of Edzna to obtain useful information for better understanding the customs and practices of its past inhabitants. The information obtained from these acoustic measurements is presented in a dataset, which includes measurements taken at 32 points around the entire archaeological zone, with special attention given to the Main Plaza, the Great Acropolis, and the Little Acropolis. Two recording systems were used for this purpose: a microphone and a binaural head. As a result, a measurement database with the following characteristics was obtained: it comprises a total of 32 measurement points with 4 different sound source positions. In total, there are 297 files divided into separate folders. The sampling frequency used was 96 kHz, and the files are in mat format.
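One common way to characterize such measurement points acoustically is to estimate a reverberation time from an impulse response. The sketch below shows the Schroeder backward-integration approach on a synthetic decaying signal; whether the .mat files contain impulse responses, and under which variable names, is not stated in the abstract, so the placeholder signal and the loading step mentioned in the comment are assumptions.

```python
# Sketch: broadband RT60 estimate (from a T20 fit) via Schroeder backward
# integration. Real data would be loaded from the dataset's .mat files
# (e.g., scipy.io.loadmat("point_01.mat")); variable names are unknown here,
# so an exponentially decaying noise stands in for a measured impulse response.
import numpy as np

fs = 96000                                              # sampling rate stated above
rng = np.random.default_rng(0)
t = np.arange(int(1.5 * fs)) / fs
ir = rng.standard_normal(t.size) * np.exp(-t / 0.25)    # placeholder impulse response

energy = ir ** 2
edc = np.cumsum(energy[::-1])[::-1]                     # Schroeder energy decay curve
edc_db = 10 * np.log10(edc / edc[0])

# Fit the -5 dB to -25 dB range and extrapolate to -60 dB (T20-based estimate).
idx = np.where((edc_db <= -5) & (edc_db >= -25))[0]
slope, _ = np.polyfit(idx / fs, edc_db[idx], 1)
print(f"Estimated RT60: {-60.0 / slope:.2f} s")
```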

7.
Brain Topogr ; 36(5): 671-685, 2023 09.
Article in English | MEDLINE | ID: mdl-37490130

ABSTRACT

The impact of binaural beats (BBs) on human cognition and behavior remains unclear, and various methods have been used to measure their effect, including neurophysiological, psychometric, and human performance evaluations. The few approaches in which the level of neural synchronicity and connectivity was measured by neuroimaging techniques have only been undertaken in spontaneous (resting) mode. The present research proposes an approach based on the oddball paradigm to study the BB effect by estimating the level of attention induced by BBs. Evoked activity of 25 young adults between 19 and 24 years old with no hearing impairments or clinical neurological history was analyzed. The experiment was conducted in two different sessions of 24.5 min. The first part consisted of 20 min of BB stimulation in either theta (BBθ) or beta (BBβ). After the BB stimulation, an oddball paradigm was applied in each BB condition to assess the attentional effect induced by BBs. Attention enhancement was expected for BBβ with respect to BBθ. Target event-related potentials (ERPs) were mainly analyzed in the time and time-frequency domains. The frequency analysis was based on the continuous wavelet transform (CWT), event-related spectral perturbation (ERSP), and inter-trial phase coherence (ITPC). The study revealed that the P300 component was not significantly different between conditions (BBθ vs. BBβ). However, the target grand-average ERP in the BBθ condition was mainly composed of 8 Hz frequency components, appearing before 400 ms post-stimulus, mainly over the centro-parietal regions. In contrast, the target grand-average ERP in the BBβ condition was mainly composed of frequency components below 6 Hz, appearing mainly at 400 ms post-stimulus over the parieto-occipital regions. Furthermore, ERPs in the BBθ condition were more phase-locked than in the BBβ condition.
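The ERSP and ITPC measures named in this abstract can be computed from Morlet wavelet coefficients of epoched EEG. The sketch below uses MNE-Python on placeholder data; the epoch array, frequency grid, and baseline window are illustrative assumptions rather than the study's parameters.

```python
# Sketch: ERSP and inter-trial phase coherence (ITPC) from complex Morlet
# wavelet coefficients. The epochs array is a random placeholder of shape
# (n_trials, n_channels, n_times); frequencies and baseline are examples.
import numpy as np
from mne.time_frequency import tfr_array_morlet

sfreq = 250.0
rng = np.random.default_rng(0)
epochs_data = rng.standard_normal((40, 32, 500))   # 40 trials, 32 channels, 2 s

freqs = np.arange(2, 31, 1.0)                      # 2-30 Hz
n_cycles = freqs / 2.0

# Complex coefficients: (n_trials, n_channels, n_freqs, n_times)
tfr = tfr_array_morlet(epochs_data, sfreq=sfreq, freqs=freqs,
                       n_cycles=n_cycles, output="complex")

power = np.abs(tfr) ** 2
baseline = power[..., :int(0.2 * sfreq)].mean(axis=-1, keepdims=True)
ersp_db = 10 * np.log10(power / baseline).mean(axis=0)   # dB change, trial-averaged

itpc = np.abs((tfr / np.abs(tfr)).mean(axis=0))          # phase consistency in [0, 1]
```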


Subject(s)
Electroencephalography , Evoked Potentials, Auditory , Young Adult , Humans , Adult , Evoked Potentials, Auditory/physiology , Acoustic Stimulation/methods , Electroencephalography/methods , Evoked Potentials/physiology , Attention
8.
Sci Rep ; 13(1): 8178, 2023 05 20.
Article in English | MEDLINE | ID: mdl-37210415

ABSTRACT

Emotional content is particularly salient, but situational factors such as cognitive load may disturb the attentional prioritization of affective stimuli and interfere with their processing. In this study, 31 autistic and 31 typically developed children volunteered for an assessment of their perception of affective prosodies via event-related spectral perturbations of neuronal oscillations recorded by electroencephalography, under attentional load modulations induced by Multiple Object Tracking or neutral images. Although intermediate load optimized emotion processing in typically developed children, load and emotion did not interact in children with autism. Results also outlined impaired emotional integration, emphasized in theta, alpha, and beta oscillations at early and late stages, and lower attentional ability indexed by the tracking capacity. Furthermore, both tracking capacity and neuronal patterns of emotion perception during the task were predicted by daily-life autistic behaviors. These findings highlight that intermediate load may encourage emotion processing in typically developed children, whereas autism is associated with impaired affective processing and selective attention, both insensitive to load modulations. The results are discussed within a Bayesian perspective that suggests atypical updating of precision between sensations and hidden states, leading to poor contextual evaluations. For the first time, implicit emotion perception assessed by neuronal markers was integrated with environmental demands to characterize autism.


Subject(s)
Autistic Disorder , Child , Humans , Autistic Disorder/psychology , Bayes Theorem , Emotions/physiology , Electroencephalography , Attention/physiology , Perception
9.
Data Brief ; 48: 109057, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37006385

ABSTRACT

The relevance of affective information triggers cognitive prioritisation, dictated by both the attentional load of the relevant task and socio-emotional abilities. This dataset provides electroencephalographic (EEG) signals related to implicit emotional speech perception under low, intermediate, and high attentional demands. Demographic and behavioural data are also provided. Atypical social-emotional reciprocity and verbal communication characterise Autism Spectrum Disorder (ASD) and may influence the processing of affective prosodies. Therefore, 62 children and their parents or legal guardians participated in data collection, including 31 children with high autistic traits (x̄age = 9.6 years, σage = 1.5) who had previously received a diagnosis of ASD from a medical specialist, and 31 typically developed children (x̄age = 10.2 years, σage = 1.2). Assessments of the scope of autistic behaviours using the Autism Spectrum Rating Scales (ASRS, parent report) are provided for every child. During the experiment, children listened to task-irrelevant affective prosodies (anger, disgust, fear, happiness, neutral, and sadness) while performing three visual tasks: neutral image viewing (low attentional load), one-target 4-disc Multiple Object Tracking (MOT; intermediate), and one-target 8-disc MOT (high). The EEG data recorded during all three tasks and the tracking capacity (behavioural data) from the MOT conditions are included in the dataset. In particular, the tracking capacity was computed as a standardised index of attentional abilities during MOT, corrected for guessing. Beforehand, children completed the Edinburgh Handedness Inventory, and their resting-state EEG activity was recorded for 2 minutes with eyes open. Those data are also provided. The present dataset can be used to investigate the electrophysiological correlates of implicit emotion and speech perception and their interaction with attentional load and autistic traits. Besides, the resting-state EEG data may be used to characterise inter-individual heterogeneity at rest and, in turn, associate it with attentional capacities during MOT and with autistic behavioural patterns. Finally, the tracking capacity may be useful for exploring dynamic and selective attentional mechanisms under emotional constraints.

10.
Data Brief ; 48: 109060, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37006396

ABSTRACT

Thirty-six chronic neuropathic pain patients (8 men and 28 women) of Mexican nationality, with a mean age of 44 ± 13.98 years, were recruited for EEG signal recording in eyes-open and eyes-closed resting-state conditions. Each condition was recorded for 5 min, for a total recording session of 10 min. Each patient was given an ID number after signing up for the study, under which they answered the painDETECT questionnaire as a screening instrument for neuropathic pain, together with their clinical history. On the day of the recording, the patients answered the Brief Pain Inventory, a questionnaire that evaluates the interference of pain with their daily life. Twenty-two EEG channels, positioned in accordance with the 10/20 international system, were recorded with the Smarting mBrain device. EEG signals were sampled at 250 Hz with a bandwidth between 0.1 and 100 Hz. The article provides two types of data: (1) raw resting-state EEG data and (2) the patients' reports on two validated pain questionnaires. The data described in this article can be used for classifier algorithms aimed at stratifying chronic neuropathic pain patients using EEG data alongside their pain scores. In sum, these data are highly relevant for the pain field, where researchers have been seeking to integrate the pain experience with objective physiological data, such as the EEG.

11.
Comput Methods Programs Biomed ; 230: 107349, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36689806

ABSTRACT

BACKGROUND AND OBJECTIVE: Chronic neuropathic pain (NP) is a chronic pain condition that severely impacts patients' lives. Pain management has proved inefficient due to the lack of a simple clinical tool that can identify and monitor NP. A low-cost, noninvasive tool that provides relevant information on NP is the electroencephalogram (EEG). However, the commonly used linear EEG features have proved limited in characterizing NP pathophysiology. This study sought to determine whether nonlinear EEG features such as approximate entropy (ApEn) differentiate pain severity better than absolute band power. METHODS: A non-parametric statistical approach based on the Brief Pain Inventory (BPI), along with linear and nonlinear EEG features, is proposed in this study. For this purpose, thirty-six chronic NP patients were recruited and 22 channels were recorded. Additionally, a control database of 13 participants with no NP, recorded with 19 channels, was used as a reference. For both groups, EEG was recorded for 10 min in a resting state: 5 min with eyes open (EO) and 5 min with eyes closed (EC). Absolute band power and ApEn EEG features in the five clinical frequency bands (delta, theta, alpha, beta, and gamma) were estimated for all channels in both groups. As a result, 220-dimensional and 190-dimensional feature vectors were obtained for the experimental and control classes, respectively. For the experimental class, NP patients were grouped according to their BPI evaluation into three groups: low, moderate, and high pain. Finally, feature vectors were compared between groups using Kruskal-Wallis and post hoc Dunn's tests. RESULTS: ApEn revealed statistically significant differences (p ≤ 0.0001) in most frequency bands and conditions among the groups. In contrast, power showed fewer significant differences between groups, particularly with EO. Furthermore, the NP groups were notably clustered using only ApEn in the theta, alpha, and beta bands. CONCLUSIONS: The results indicate that ApEn characterizes the different severities of chronic NP more effectively than the commonly used linear features. ApEn and other nonlinear techniques (e.g., spectral entropy, Shannon entropy) might be a more suitable methodology for monitoring the chronic NP experience.
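The approximate-entropy feature and the non-parametric group comparison described here can be sketched as follows. The embedding dimension m = 2, tolerance r = 0.2 × SD, band choice, and placeholder signals are conventional illustrative choices, not necessarily the study's values.

```python
# Sketch: ApEn of band-limited EEG segments and a Kruskal-Wallis comparison
# across pain-severity groups. All signals below are placeholder noise.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.stats import kruskal

def approximate_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    n = x.size
    r = r_factor * x.std()

    def phi(mm):
        # Embed the signal into overlapping vectors of length mm.
        emb = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        # Chebyshev distance between all pairs of embedded vectors.
        dist = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
        c = (dist <= r).mean(axis=1)          # fraction of similar vectors
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# Example: ApEn in the alpha band (8-12 Hz) on short segments per "patient".
fs = 250.0
sos = butter(4, [8, 12], btype="bandpass", fs=fs, output="sos")
rng = np.random.default_rng(1)
groups = {g: [approximate_entropy(sosfiltfilt(sos, rng.standard_normal(int(4 * fs))))
              for _ in range(12)]
          for g in ("low", "moderate", "high")}

h, p = kruskal(groups["low"], groups["moderate"], groups["high"])
print(f"Kruskal-Wallis H = {h:.2f}, p = {p:.4f}")
```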


Subject(s)
Electroencephalography , Neuralgia , Humans , Pain Measurement , Electroencephalography/methods , Eye , Chronic Disease , Neuralgia/diagnosis
12.
Front Comput Neurosci ; 16: 1022787, 2022.
Article in English | MEDLINE | ID: mdl-36465969

ABSTRACT

Artificial voices are nowadays embedded in our daily lives, with the latest neural voices approaching the consistency (naturalness) of the human voice. Nevertheless, the behavioral and neuronal correlates of the perception of less naturalistic emotional prosodies remain poorly understood. In this study, we explored the acoustic tendencies that define naturalness from human to synthesized voices. Then, we created naturalness-reduced emotional utterances by acoustic editing of human voices. Finally, we used Event-Related Potentials (ERP) to assess the time dynamics of emotional integration when listening to both human and synthesized voices in a healthy adult sample. Additionally, listeners rated their perceptions of valence, arousal, discrete emotions, naturalness, and intelligibility. Synthesized voices were characterized by less lexical stress (i.e., a reduced difference between stressed and unstressed syllables within words) with regard to duration and median pitch modulations. Besides, the spectral content was attenuated toward lower F2 and F3 frequencies and lower intensities for harmonics 1 and 4. Both psychometric and neuronal correlates were sensitive to the naturalness reduction. (1) Naturalness and intelligibility ratings dropped with the synthetization of emotional utterances, (2) discrete emotion recognition was impaired as naturalness declined, consistent with P200 and Late Positive Potentials (LPP) being less sensitive to emotional differentiation at lower naturalness, and (3) relative P200 and LPP amplitudes between prosodies were modulated by synthetization. Nevertheless, (4) valence and arousal perceptions were preserved at lower naturalness, (5) valence (arousal) ratings correlated negatively (positively) with Higuchi's fractal dimension extracted from the neuronal data under all naturalness perturbations, and (6) Inter-Trial Phase Coherence (ITPC) and standard deviation measurements revealed a high inter-individual heterogeneity of emotion perception that was still preserved as naturalness decreased. Notably, partial between-participant synchrony (low ITPC), along with high amplitude dispersion of ERPs at both early and late stages, emphasized heterogeneous emotional responses among subjects. In this study, we highlighted for the first time both the behavioral and the neuronal bases of emotional perception under acoustic naturalness alterations. Partial dependencies between ecological relevance and emotion understanding outlined the modulation, but not the annihilation, of emotional integration by synthetization.
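Higuchi's fractal dimension, used in this abstract as a neuronal complexity measure, can be computed directly from a single-channel segment. In the sketch below, kmax = 10 is a common heuristic and the input is placeholder data, not the study's ERPs.

```python
# Sketch: Higuchi's fractal dimension (HFD) of a single-channel EEG/ERP
# segment. kmax = 10 is a conventional choice, not necessarily the study's.
import numpy as np

def higuchi_fd(x, kmax=10):
    x = np.asarray(x, dtype=float)
    n = x.size
    lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, n, k)                 # sub-series x[m], x[m+k], ...
            if idx.size < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((idx.size - 1) * k)    # Higuchi normalization factor
            lengths.append(dist * norm / k)
        lk.append(np.mean(lengths))
    # HFD is the slope of log(L(k)) against log(1/k).
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(lk), 1)
    return slope

rng = np.random.default_rng(0)
segment = rng.standard_normal(500)                   # placeholder 2 s at 250 Hz
print(f"Higuchi FD: {higuchi_fd(segment):.3f}")      # white noise gives values near 2
```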

13.
Sci Data ; 9(1): 500, 2022 08 17.
Article in English | MEDLINE | ID: mdl-35977951

ABSTRACT

The present database provides demographic (age and sex), clinical (hearing loss and acoustic properties of tinnitus), psychometric (based on the Tinnitus Handicap Inventory and the Hospital Anxiety and Depression Scale), and electroencephalographic information of 89 tinnitus sufferers who were semi-randomly treated for eight weeks with one of five acoustic therapies. These were (1) placebo (relaxing music), (2) tinnitus retraining therapy, (3) auditory discrimination therapy, (4) enriched acoustic environment, and (5) binaural beats therapy. Fourteen healthy volunteers who were exposed to relaxing music and followed the same experimental procedure as the tinnitus sufferers were additionally included in the study (control group). The database is available at https://doi.org/10.17632/kj443jc4yc.1. The acoustic therapies were monitored one week, three weeks, five weeks, and eight weeks after the start of the acoustic therapy. This study was previously approved by the local Ethical Committee (CONBIOETICA19CEI00820130520), it was registered as a clinical trial (ISRCTN14553550) in BioMed Central (Springer Nature), the protocol was published in 2016, it was sponsored by the L'Oréal-UNESCO organization, and six journal publications have resulted from the analysis of this database.


Subject(s)
Acoustic Stimulation , Databases, Factual , Tinnitus , Acoustic Stimulation/methods , Acoustics , Auditory Perception , Electroencephalography , Humans , Tinnitus/therapy
14.
Neurosci Biobehav Rev ; 136: 104599, 2022 05.
Article in English | MEDLINE | ID: mdl-35271915

ABSTRACT

The management of chronic neuropathic pain remains a challenge because pain is subjective, and measuring it objectively is usually out of the question. However, neuropathic pain is also a signal provided by maladaptive neuronal activity. Thus, the integral management of chronic neuropathic pain should not rely solely on the subjective perception of the patient, but also on objective data that measure the evolution of neuronal activity. We discuss different objective and subjective methods for the characterization of neuropathic pain, as well as the gaps in, and proposals for, an integral management of chronic neuropathic pain. Current management, which relies mostly on subjective measures, has not been sufficient, and this has hindered advances in pain management and clinical trials. If an integral characterization is achieved, clinical management and stratification for clinical trials could be based on both questionnaires and neuronal activity. Appropriate characterization may lead to increased effectiveness of new therapies and a better quality of life for neuropathic pain sufferers.


Subject(s)
Chronic Pain , Neuralgia , Chronic Pain/therapy , Humans , Neuralgia/therapy , Neurons , Pain Management , Perception , Quality of Life
15.
Sensors (Basel) ; 22(3)2022 Jan 26.
Article in English | MEDLINE | ID: mdl-35161683

ABSTRACT

Tinnitus is an auditory condition that causes sufferers to hear a sound anytime, anywhere. Chronic and refractory tinnitus is caused by an over-synchronization of neurons. Sound has been applied as an alternative treatment to resynchronize neuronal activity. To date, various acoustic therapies have been proposed to treat tinnitus; however, their effect is not yet well understood. Therefore, the objective of this study is to establish an objective methodology using electroencephalography (EEG) signals to measure changes in attentional processes in patients with tinnitus treated with auditory discrimination therapy (ADT). To this aim, first, event-related (de-)synchronization (ERD/ERS) responses were mapped to extract the levels of synchronization related to the auditory recognition event. Second, deep representations of the scalograms were extracted using a previously trained Convolutional Neural Network (CNN) architecture (MobileNet v2). Third, the deep spectrum features corresponding to the study datasets were analyzed to investigate performance in terms of attention and memory changes. The results provided strong evidence of the feasibility of ADT to treat tinnitus, possibly due to attentional redirection.
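The deep-spectrum step mentioned in this abstract (feeding scalograms to a pretrained MobileNet v2) can be outlined with torchvision. The sketch below uses a random placeholder image in place of a real scalogram, and the global-average-pooling choice is an assumption, not the authors' exact pipeline.

```python
# Sketch: extracting a deep-feature embedding from a scalogram image with a
# pretrained MobileNetV2. The scalogram below is a random placeholder; real
# inputs would be time-frequency maps of the EEG rendered as RGB images.
import torch
from torchvision.models import mobilenet_v2, MobileNet_V2_Weights
from torchvision.transforms import functional as TF

weights = MobileNet_V2_Weights.DEFAULT
backbone = mobilenet_v2(weights=weights).features.eval()

scalogram = torch.rand(3, 224, 224)                  # placeholder RGB scalogram
x = TF.normalize(scalogram, mean=[0.485, 0.456, 0.406],
                 std=[0.229, 0.224, 0.225]).unsqueeze(0)

with torch.no_grad():
    fmap = backbone(x)                               # (1, 1280, 7, 7)
    embedding = fmap.mean(dim=(2, 3)).squeeze(0)     # global average pool -> 1280-D

print(embedding.shape)                               # torch.Size([1280])
```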


Subject(s)
Tinnitus , Acoustic Stimulation , Attention , Auditory Perception , Electroencephalography , Humans , Tinnitus/therapy
16.
Am J Otolaryngol ; 43(1): 103248, 2022.
Article in English | MEDLINE | ID: mdl-34563804

ABSTRACT

INTRODUCTION: Tinnitus is an annoying buzz that manifests itself in many ways. In addition, it can provoke anxiety, stress, depression, and fatigue. Acoustic therapies have become the most commonly applied treatment for tinnitus, either self-administered or clinically prescribed. Binaural Sound Therapy (BST) and Music Therapy (MT) aim to reverse the neuroplasticity phenomenon related to tinnitus by adequately stimulating the auditory pathway. The goal of this research is to evaluate the feasibility of applying BST for tinnitus treatment by comparing its effect with that of MT. MATERIALS AND METHODS: 34 patients with tinnitus, aged 29 to 60 years, were informed about the experimental procedure and consented to participate. Patients were divided into two groups: 1) MT and 2) BST. They applied their sound-based treatment for one hour every day for eight weeks. Each treatment was adjusted to the Hearing Loss (HL) and tinnitus characteristics of each participant. To record EEG data, a bio-signal amplifier with sixteen EEG channels was used. The system recorded data at a sampling frequency of 256 Hz within a bandwidth between 0.1 and 100 Hz. RESULTS: The questionnaire monitoring reported that MT increased tinnitus perception in 30% of the patients, and increased anxiety and stress in 8% of them. Regarding EEG monitoring, greater neural synchronicity over the frontal lobe was found after the treatment. In the case of BST, stress was reduced in 23% of patients. Additionally, BST reduced tinnitus perception similarly to MT (in 15% of patients). With respect to EEG monitoring, slightly greater neural synchronicity over the right frontal lobe was found after the treatment. CONCLUSIONS: MT should be applied with caution, since it could worsen the tinnitus sufferer's condition. On the other hand, BST is recommended for tinnitus sufferers who have side effects related to stress but not anxiety.


Subject(s)
Acoustic Stimulation/psychology , Hearing Loss/therapy , Music Therapy/methods , Neurological Rehabilitation/psychology , Tinnitus/therapy , Acoustic Stimulation/methods , Adult , Auditory Perception , Feasibility Studies , Female , Hearing Loss/etiology , Hearing Loss/psychology , Humans , Male , Middle Aged , Neurological Rehabilitation/methods , Psychometrics , Tinnitus/complications , Tinnitus/psychology , Treatment Outcome
17.
Front Psychol ; 12: 764068, 2021.
Article in English | MEDLINE | ID: mdl-34867666

ABSTRACT

Binaural beats (BB) consist of two slightly different auditory frequencies (one in each ear) whose frequency difference falls within one of the clinical electroencephalographic (EEG) bandwidths, namely delta, theta, alpha, beta, or gamma. This auditory stimulation has been widely used to modulate brain rhythms and thus induce the mental condition associated with the EEG bandwidth in use. The aim of this research was to investigate whether personalized BB (specifically those within the theta and beta EEG bands) improve brain entrainment. Personalized BB consisted of pure tones with a carrier tone of 500 Hz in the left ear together with an adjustable frequency in the right ear, defined for theta BB (fc for the theta EEG band: 4.60 Hz ± 0.70 SD) and beta BB (fc for the beta EEG band: 18.42 Hz ± 2.82 SD). The adjustable frequencies were estimated for each participant in accordance with their heart rate by applying the Brain-Body Coupling Theorem postulated by Klimesch. To achieve this aim, 20 healthy volunteers were stimulated with their personalized theta and beta BB for 20 min and their EEG signals were collected with 22 channels. The EEG analysis was based on the comparison of power spectral density among three mental conditions: (1) theta BB stimulation, (2) beta BB stimulation, and (3) resting state. Results showed larger absolute power differences for both BB stimulation sessions than for the resting state over bilateral temporal and parietal regions. This power change seems to be related to auditory perception and sound localization. However, no significant differences were found between the theta and beta BB sessions, although different brain entrainments were expected, since theta and beta BB are held to induce relaxation and readiness, respectively. In addition, relative power analysis (theta BB/resting state) revealed alpha-band desynchronization in the parieto-occipital region when volunteers listened to theta BB, suggesting that participants felt uncomfortable. In conclusion, neural resynchronization was achieved with both personalized theta and beta BB, but no distinct mental conditions seemed to be induced.
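A minimal sketch of how a binaural-beat stimulus of the kind described here could be generated: a 500 Hz carrier in the left ear and a carrier offset by the personalized beat frequency in the right ear. The heart-rate-based derivation of the beat frequency via Klimesch's theorem is not reproduced; the beat frequency is passed in as a parameter, and duration and level are assumed.

```python
# Sketch: stereo binaural-beat stimulus with a 500 Hz left-ear carrier and a
# right-ear tone offset by the beat frequency (e.g., ~4.6 Hz for theta,
# ~18.4 Hz for beta, as reported above). Short duration for illustration;
# the study used 20-min stimulation sessions.
import numpy as np
from scipy.io import wavfile

def binaural_beat(beat_hz, carrier_hz=500.0, fs=44100, dur_s=10.0, amp=0.3):
    t = np.arange(int(fs * dur_s)) / fs
    left = amp * np.sin(2 * np.pi * carrier_hz * t)
    right = amp * np.sin(2 * np.pi * (carrier_hz + beat_hz) * t)
    return np.stack([left, right], axis=1).astype(np.float32)   # (samples, 2)

fs = 44100
wavfile.write("theta_bb.wav", fs, binaural_beat(4.60, fs=fs))    # example theta beat
wavfile.write("beta_bb.wav", fs, binaural_beat(18.42, fs=fs))    # example beta beat
```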

18.
Physiol Behav ; 241: 113563, 2021 11 01.
Article in English | MEDLINE | ID: mdl-34464647

ABSTRACT

Environmental noise (EN) refers to unpleasant, harmful sounds that deteriorate living conditions. This study therefore aims to investigate how EN affects students at learning commons, where EN rises to between 70 and 90 dBA, and which levels are psycho-physiologically disturbing. For this purpose, 16 students of Tecnologico de Monterrey were recruited: nine men and seven women. They were divided into four groups and carried out two activities: solving a 300-piece puzzle without EN and with EN at 75 dBA. In both activities, a summative evaluation based on the level of puzzle completion was conducted, along with electrophysiological monitoring of heart rate, blink rate, and neural electrical activity. Results showed that student performance was 4% higher in a quiet room than in the learning commons. EN increased heart rate by 3.48% and blink rate by 22.91%, and neural electrical activity was reduced by at least 3%, regardless of task demands. The findings of the present study suggest that academic work is difficult to undertake in learning commons when EN is above the permissible limit, which diminishes the performance of students and alters their electrophysiological functioning.


Subject(s)
Learning , Students , Blinking , Female , Humans , Male , Noise
19.
Am J Otolaryngol ; 42(6): 103109, 2021.
Article in English | MEDLINE | ID: mdl-34175772

ABSTRACT

At present, the majority of the top tinnitus treatments are based on sound. Sound-based therapies may become highly effective when the right patient is selected at the correct time and in the appropriate context. The investigation presented here compares sound therapies based on music, retraining, neuromodulation, and binaural sounds in line with (1) neuro-audiology assessments and (2) psychological evaluations. Sound-based therapies were applied in 76 volunteers with tinnitus for 60 days. The neuro-audiology assessment was based on the estimation of the approximate entropy of the electrical neural activity. This assessment revealed that the whole frequency structure of the neural networks showed a higher level of activity in tinnitus sufferers than in control individuals. The psychological evaluation then showed that the retraining treatment tended to be the most effective sound-based therapy at reducing tinnitus perception, but it may not be recommended for individuals with anxiety. Binaural sounds and neuromodulation produced very similar effects in reducing tinnitus perception, stress, and anxiety. Music treatments should be applied with caution, since they may worsen the condition because of their frequency content.


Subject(s)
Music Therapy/methods , Sound , Tinnitus/therapy , Adult , Aged , Aged, 80 and over , Audiometry , Chronic Disease , Entropy , Female , Humans , Male , Middle Aged , Neuropsychological Tests , Tinnitus/diagnosis
20.
Front Hum Neurosci ; 15: 626146, 2021.
Article in English | MEDLINE | ID: mdl-33716696

ABSTRACT

Socio-emotional impairments are key symptoms of Autism Spectrum Disorders. This work proposes to analyze the neuronal activity related to the discrimination of emotional prosodies in autistic children (aged 9 to 11 years) as follows. Firstly, a database of single words uttered in Mexican Spanish by males, females, and children will be created. Then, optimal acoustic features for emotion characterization will be extracted, followed by a cubic-kernel Support Vector Machine (SVM) to validate the speech corpus. As a result, human-specific acoustic properties of emotional voice signals will be identified. Secondly, those identified acoustic properties will be modified to synthesize the recorded human emotional voices. Thirdly, both human and synthesized utterances will be used to study the electroencephalographic correlates of affective prosody processing in typically developed and autistic children. Finally, on the basis of the outcomes, synthesized voice-enhanced environments will be created to develop an intervention based on a social robot and Social Story™ for autistic children to improve the discrimination of affective prosodies. This protocol has been registered at BioMed Central under the following number: ISRCTN18117434.
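The corpus-validation step named in this protocol (a cubic-kernel SVM over acoustic features) corresponds in scikit-learn to an SVC with a degree-3 polynomial kernel. The feature matrix and labels below are random placeholders, not the planned corpus.

```python
# Sketch: validating an emotional-speech corpus with a cubic-kernel SVM.
# X would hold acoustic features (e.g., pitch, duration, spectral measures)
# per utterance and y the emotion labels; random placeholders are used here.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 40))                  # 300 utterances x 40 features
y = rng.integers(0, 6, size=300)                    # 6 emotion classes (placeholder)

clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3, C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```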
