Results 1 - 8 of 8
1.
IEEE J Transl Eng Health Med; 12: 171-181, 2024.
Article in English | MEDLINE | ID: mdl-38088996

ABSTRACT

The study of emotions through the analysis of induced physiological responses has gained increasing interest in recent decades. Emotion-related studies usually employ films or video clips, but these stimuli do not allow the emotional content conveyed by sight and hearing to be properly separated and assessed in terms of physiological responses. In this study we devised an experimental protocol to elicit emotions by using, separately and jointly, pictures and sounds from the widely used International Affective Picture System and International Affective Digitized Sounds databases. We processed the galvanic skin response, electrocardiogram, blood volume pulse, pupillary signal, and electroencephalogram from 21 subjects to extract both autonomic and central nervous system indices and assess physiological responses to three types of stimulation: auditory, visual, and auditory/visual. Results show a higher galvanic skin response to sounds than to images. The electrocardiogram and blood volume pulse show different trends between auditory and visual stimuli. The electroencephalographic signal reveals greater attention paid by the subjects when listening to sounds than when watching images. In conclusion, these results suggest that emotional responses increase during auditory stimulation at both the central and peripheral levels, demonstrating the importance of sounds for emotion recognition experiments and opening the possibility of extending auditory stimuli to other fields of psychophysiology. Clinical and Translational Impact Statement: These findings corroborate the importance of auditory stimuli in eliciting emotions, supporting their use in studying affective responses, e.g., mood disorder diagnosis, human-machine interaction, and emotional perception in pathology.


Subject(s)
Emotions; Sound; Humans; Emotions/physiology; Acoustic Stimulation/methods; Hearing; Mood Disorders
2.
Sensors (Basel); 23(22), 2023 Nov 08.
Article in English | MEDLINE | ID: mdl-38005437

ABSTRACT

We present a novel architecture designed to enhance the detection of Error Potential (ErrP) signals during ErrP stimulation tasks. For predicting ErrP presence, conventional Convolutional Neural Networks (CNNs) typically accept the raw EEG signal as input, which encompasses both the information associated with the evoked potential and the background activity, potentially diminishing predictive accuracy. Our approach applies advanced Single-Trial (ST) ErrP enhancement techniques to the raw EEG signals in a first stage, followed by CNNs for discerning between ErrP and NonErrP segments in a second stage. We tested different combinations of enhancement methods and CNNs. For ST ErrP estimation, we examined subspace regularization techniques, the Continuous Wavelet Transform, and ARX models. For the classification stage, we evaluated the performance of EEGNet, a CNN, and a Siamese Neural Network. A comparative analysis against directly applying CNNs to raw EEG signals revealed the advantages of our architecture. Leveraging subspace regularization yielded the best improvement in classification metrics, up to 14% in balanced accuracy and 13.4% in F1-score.


Subject(s)
Brain-Computer Interfaces; Electroencephalography; Electroencephalography/methods; Evoked Potentials; Neural Networks, Computer; Wavelet Analysis; Algorithms
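
To make the two-stage idea above concrete, here is a minimal sketch in which a PCA-based low-rank projection stands in for the paper's single-trial enhancement stage and a logistic regression stands in for the CNN classifier. The array shapes, the number of retained components, and the placeholder data are all illustrative assumptions, not the paper's exact pipeline.

```python
# Stage 1: single-trial enhancement via a low-rank subspace projection
# (a simple stand-in for the paper's subspace regularization).
# Stage 2: classification (a linear model replaces the CNN, for brevity).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 128                      # hypothetical 1-channel epochs
X_raw = rng.standard_normal((n_trials, n_samples))  # placeholder for real EEG epochs
y = rng.integers(0, 2, n_trials)                    # 1 = ErrP, 0 = NonErrP (placeholder)

# Stage 1: keep the leading temporal components, discarding the background
# activity spread across the remaining dimensions.
pca = PCA(n_components=10)
X_enh = pca.inverse_transform(pca.fit_transform(X_raw))

# Stage 2: classify the enhanced epochs.
clf = LogisticRegression(max_iter=1000)
print("5-fold CV accuracy:", cross_val_score(clf, X_enh, y, cv=5).mean())
```
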
3.
Front Neurogenom; 4: 1080794, 2023.
Article in English | MEDLINE | ID: mdl-38234500

ABSTRACT

Introduction: Motor Imagery (MI)-based Brain-Computer Interfaces (BCIs) have gained attention for their use in rehabilitation therapies, since they allow an external device to be controlled using brain activity, thereby promoting the brain plasticity mechanisms that could lead to motor recovery. Specifically, rehabilitation robotics can provide precision and consistency for movement exercises, while embodied robotics can provide sensory feedback that helps patients improve their motor skills and coordination. However, it is still not clear whether different types of visual feedback affect the elicited brain response and hence the effectiveness of MI-BCI for rehabilitation. Methods: In this paper, we compare two visual feedback strategies for controlling the movement of robotic arms through an MI-BCI system: (1) a first-person perspective, in which the user views the robot arms as if from their own viewpoint; and (2) a third-person perspective, in which the subject observes the robot from an external viewpoint. We studied 10 healthy subjects over three consecutive sessions. The electroencephalographic (EEG) signals were recorded and evaluated in terms of the power, lateralization, and spatial distribution of the sensorimotor rhythms. Results: Our results show that both feedback perspectives can elicit motor-related brain responses, but without any significant differences between them. Moreover, the evoked responses remained consistent across all sessions, showing no significant differences between the first and the last session. Discussion: Overall, these results suggest that the type of perspective may not influence brain responses during an MI-BCI task based on robotic feedback, although, given the limited sample size, more evidence is required. Finally, this study produced 180 labeled MI EEG datasets, publicly available for research purposes.
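
As a companion to the analysis described above, the following is a minimal sketch of how sensorimotor-rhythm power and its lateralization might be computed. The channel pair (C3/C4), sampling rate, mu-band edges, and random placeholder signals are assumptions, not the study's exact pipeline.

```python
# Mu-band (8-12 Hz) power at C3/C4 and a simple lateralization index.
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo=8.0, hi=12.0):
    """Approximate band power: Welch PSD summed over [lo, hi] Hz."""
    f, pxx = welch(x, fs=fs, nperseg=2 * fs)
    mask = (f >= lo) & (f <= hi)
    return pxx[mask].sum() * (f[1] - f[0])

fs = 250                                  # assumed sampling rate (Hz)
rng = np.random.default_rng(1)
c3 = rng.standard_normal(10 * fs)         # placeholder C3 signal
c4 = rng.standard_normal(10 * fs)         # placeholder C4 signal

p_c3, p_c4 = band_power(c3, fs), band_power(c4, fs)
li = (p_c3 - p_c4) / (p_c3 + p_c4)        # lateralization index in [-1, 1]
print(f"mu power C3={p_c3:.3f}, C4={p_c4:.3f}, LI={li:+.3f}")
```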

4.
Front Hum Neurosci; 17: 1286621, 2023.
Article in English | MEDLINE | ID: mdl-38259333

ABSTRACT

Emotions significantly shape decision-making, and targeted emotional elicitation is an important factor in neuromarketing, where it affects advertising effectiveness by capturing potential customers' attention through emotional triggers. Analyzing biometric parameters after stimulus exposure may help in understanding emotional states. This study investigates autonomic and central nervous system responses to emotional stimuli, including images, auditory cues, and their combination, while recording physiological signals, namely the electrocardiogram, blood volume pulse, galvanic skin response, pupillometry, respiration, and the electroencephalogram. The primary goal of the analysis is to compare emotional stimulation methods and to identify the approach that produces the most distinct physiological patterns. A novel feature selection technique is applied to further optimize the separation of four emotional states. Basic machine learning approaches are used to discern the emotions elicited by the different kinds of stimulation. Features derived from the electroencephalographic signals, the galvanic skin response, and cardio-respiratory coupling were the most significant in distinguishing the four emotional states. Further findings highlight the crucial role auditory stimuli play in creating the distinct physiological patterns that enhance classification in this four-class problem. When combining all three types of stimulation, a validation accuracy of 49% was achieved; the sound-only and image-only phases resulted in 52% and 44% accuracy respectively, whereas the combined stimulation of images and sounds led to 51% accuracy. Isolated visual stimuli thus yielded less distinct patterns, requiring more signals yet achieving inferior performance compared with the other types of stimuli. This significance of audition is surprising given its limited exploration in the emotion recognition literature, particularly in contrast with the plethora of studies using visual stimulation. In marketing, auditory components may therefore hold significant potential to influence consumer choices.
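
The following is a minimal sketch of the kind of feature-selection-plus-classifier setup the abstract describes, with an ANOVA-based selector and an SVM standing in for the paper's novel selection technique and its classifiers. The feature matrix and all hyperparameters are illustrative assumptions.

```python
# Four-class emotion classification: scale features, select the most
# informative ones, then fit a basic classifier.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.standard_normal((120, 40))   # placeholder: trials x physiological features
y = rng.integers(0, 4, 120)          # four emotional states (placeholder labels)

pipe = make_pipeline(StandardScaler(),
                     SelectKBest(f_classif, k=10),  # keep 10 strongest features
                     SVC(kernel="rbf"))
print("4-class CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```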

5.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 1968-1971, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36086244

ABSTRACT

Many studies in the literature attempt to recognize emotions through the use of videos or images, but very few have explored the role that sounds play in evoking emotions. In this study we devised an experimental protocol to elicit emotions by using, separately and jointly, images and sounds from the widely used International Affective Picture System and International Affective Digitized Sounds databases. During the experiments we recorded the skin conductance and pupillary signals and processed them with the goal of extracting indices linked to the autonomic nervous system, revealing specific patterns of behavior depending on the stimulation modality. Our results show that skin conductance helps discriminate emotions along the arousal dimension, whereas features derived from the pupillary signal are able to discriminate states along both the valence and arousal dimensions. In particular, the pupillary diameter was significantly greater at increasing arousal and during elicitation of negative emotions in the image-only and image-with-sound phases. In the sound-only phase, on the other hand, the power calculated in the high and very high frequency bands of the pupillary diameter was significantly greater at higher valence (valence ratings > 5). Clinical relevance: This study demonstrates the ability of physiological signals to assess specific emotional states by providing different activation patterns depending on whether stimulation occurs through images, sounds, or images with sounds. The approach has high clinical relevance, as it could be extended to evaluating mood disorders (e.g., depression, bipolar disorder, or stress), or the physiological patterns found for sounds could be used to study whether hearing aids lead to increased emotional perception.


Subject(s)
Emotions; Pupil; Arousal/physiology; Autonomic Nervous System/physiology; Emotions/physiology; Galvanic Skin Response; Pupil/physiology
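
A minimal sketch of the type of group comparison reported above: testing whether pupillary diameter differs between high- and low-arousal trials. The placeholder data and the choice of an unpaired t-test are assumptions; the study's exact statistical procedure is not specified here.

```python
# Compare mean pupil diameter between high- and low-arousal trials.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
pupil_high = 4.0 + 0.3 * rng.standard_normal(30)  # mm, high-arousal (placeholder)
pupil_low = 3.7 + 0.3 * rng.standard_normal(30)   # mm, low-arousal (placeholder)

t, p = ttest_ind(pupil_high, pupil_low)
print(f"t={t:.2f}, p={p:.4f} -> larger diameter at higher arousal if p < 0.05")
```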
6.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 3710-3713, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36086568

ABSTRACT

Emotion processing is a complex mechanism that involves different physiological systems. The Central Nervous System (CNS) is considered to play a key role in this mechanism, and one of the main modalities for studying CNS activity is the electroencephalographic (EEG) signal. To elicit emotions, different kinds of stimuli can be used, e.g., audio, visual, or a combination of the two. Studies in the literature focus mainly on the correct classification of the different types of emotions, or on which kind of stimulation gives the best performance in terms of classification accuracy. However, it is still unclear how the different stimuli elicit emotions and what the resulting brain activity is. In this paper, we analysed and compared EEG signals obtained by eliciting emotions using audio stimuli, visual stimuli, or a combination of the two. Data were collected during experiments conducted in our laboratories using the IAPS and IADS datasets. Our study confirmed physiological findings on emotions from the literature, highlighting higher brain activity in the frontal and central regions and in the δ and θ bands for each kind of stimulus. However, audio stimulation produced stronger responses than the other two stimulation modalities in almost all the comparisons performed. Higher values of the δ/β ratio, an index related to negative emotions, were achieved when using only sounds as stimuli. Moreover, the same type of stimuli resulted in higher δ-β coupling, suggesting better attention control. We conclude that stimulating subjects without letting them see what is actually happening may give a higher perception of emotions, even if this mechanism remains highly subjective. Clinical Relevance: This paper suggests that audio stimuli may give a higher perception of the elicited emotion, resulting in higher brain activity in the physiologically relevant areas and more focused subjects. Thus, using only audio in emotion-related studies may give more reliable and consistent results.


Subject(s)
Electroencephalography; Emotions; Central Nervous System; Electroencephalography/methods; Emotions/physiology; Humans
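
A minimal sketch of the two EEG indices mentioned above, the δ/β power ratio and δ-β coupling, with coupling computed here as the correlation between band amplitude envelopes. The band edges, filter order, and this specific envelope-correlation definition of coupling are illustrative assumptions.

```python
# Delta/beta power ratio and delta-beta coupling (envelope correlation).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, welch

fs = 250                                  # assumed sampling rate (Hz)
rng = np.random.default_rng(4)
eeg = rng.standard_normal(20 * fs)        # placeholder single-channel EEG

def band_filter(x, lo, hi):
    """Zero-phase Butterworth band-pass between lo and hi Hz."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def band_power(x, lo, hi):
    """Approximate band power: Welch PSD summed over [lo, hi] Hz."""
    f, pxx = welch(x, fs=fs, nperseg=2 * fs)
    m = (f >= lo) & (f <= hi)
    return pxx[m].sum() * (f[1] - f[0])

delta = band_filter(eeg, 1.0, 4.0)
beta = band_filter(eeg, 13.0, 30.0)
ratio = band_power(eeg, 1.0, 4.0) / band_power(eeg, 13.0, 30.0)
env_d, env_b = np.abs(hilbert(delta)), np.abs(hilbert(beta))
coupling = np.corrcoef(env_d, env_b)[0, 1]
print(f"delta/beta ratio = {ratio:.2f}, envelope coupling r = {coupling:+.2f}")
```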
7.
J Neural Eng; 19(3), 2022 May 31.
Article in English | MEDLINE | ID: mdl-35523120

ABSTRACT

Objective. Deep learning algorithms employed in brain-computer interfaces (BCIs) need large electroencephalographic (EEG) datasets for training. These datasets are usually unbalanced, particularly when error potential (ErrP) experiments are considered, since ErrP epochs are much rarer than non-ErrP ones. To address this imbalance of rare epochs, this paper presents a novel data balancing method based on ARX modelling. Approach. AutoRegressive with eXogenous input (ARX) models are identified on the EEG data of the 'Monitoring error-related potentials' dataset of BNCI Horizon 2020 and then employed to generate new synthetic data for the minority class of ErrP epochs. The balanced dataset is used to train a classifier of non-ErrP vs. ErrP epochs based on EEGNet. Main results. Compared to classical data balancing techniques (e.g., class weights, CW), the new method outperforms the others in terms of accuracy (ARX 91.5% vs. CW 88.3%), F1-score (ARX 78.3% vs. CW 73.7%), and balanced accuracy (ARX 87.0% vs. CW 81.1%), and it also reduces the number of false positive detections (ARX 51 vs. CW 104). Moreover, the ARX-based method shows better generalization of the whole model in classifying and predicting new data. Significance. The results suggest that the proposed method can be used in BCI applications to tackle data imbalance and obtain more reliable and robust performance.


Subject(s)
Brain-Computer Interfaces; Algorithms; Electroencephalography/methods
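
A minimal sketch of ARX-style synthetic-epoch generation in the spirit of the method above: identify ARX coefficients on a real minority-class epoch, with an ErrP template as the exogenous input, then re-simulate the model with fresh innovation noise to produce a synthetic epoch. The model orders, noise level, and placeholder data are assumptions.

```python
# Fit x[t] = sum_i a_i*x[t-i] + sum_j b_j*u[t-j] + e[t] by least squares,
# then simulate the identified model with new noise to create synthetic data.
import numpy as np

rng = np.random.default_rng(5)
p, q, n = 4, 4, 256                        # AR order, exogenous order, epoch length
u = np.sin(2 * np.pi * np.arange(n) / 64)  # placeholder average-ErrP template
x = u + 0.5 * rng.standard_normal(n)       # placeholder single real ErrP epoch

# Build the lagged regressor matrix and solve for [a_1..a_p, b_1..b_q].
m = max(p, q)
rows = [np.concatenate((x[t - p:t][::-1], u[t - q:t][::-1])) for t in range(m, n)]
theta, *_ = np.linalg.lstsq(np.asarray(rows), x[m:], rcond=None)
a, b = theta[:p], theta[p:]

# Re-simulate with fresh innovation noise -> one new synthetic minority epoch.
x_syn = np.copy(x[:m])                     # seed with real initial samples
for t in range(m, n):
    pred = a @ x_syn[t - p:t][::-1] + b @ u[t - q:t][::-1]
    x_syn = np.append(x_syn, pred + 0.1 * rng.standard_normal())
print("synthetic epoch length:", x_syn.shape[0])
```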
8.
Annu Int Conf IEEE Eng Med Biol Soc; 2019: 1641-1644, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946211

ABSTRACT

Socially assistive robots have shown potential benefits in therapy for children and elderly patients with social and cognitive deficits. In particular, for autistic children, humanoid robots could enhance engagement and attention thanks to their simplified, toy-like appearance and their reduced set of possible movements and expressions. The recent focus on autism-related motor impairments has increased interest in developing new robotic tools aimed at improving not only the social capabilities but also the motor skills of autistic children. To this end, we designed two embodied mirroring setups using the NAO humanoid robot. Two different tracking systems were used and compared: Inertial Measurement Units and the Microsoft Kinect, a marker-less vision-based system. Both platforms were able to mirror basic upper-limb movements of two healthy subjects, an adult and a child. However, despite its lower accuracy, the Kinect-based setup was chosen as the best candidate for embodied mirroring in autism treatment, thanks to its lower intrusiveness and reduced setup time. A prototype of an interactive mirroring game was developed and successfully tested on the Kinect-based platform, paving the way for the development of a versatile and powerful tool for clinical use with autistic children.


Subject(s)
Autistic Disorder; Robotics; Aminoacridines; Child; Humans; Movement; Upper Extremity
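
A minimal sketch of one building block such a mirroring pipeline needs: deriving an elbow flexion angle from three tracked 3D joint positions (e.g., from a Kinect skeleton), which could then be mapped to a robot joint command. The joint positions and the mapping itself are hypothetical.

```python
# Elbow angle from tracked shoulder/elbow/wrist positions.
import numpy as np

def elbow_angle(shoulder, elbow, wrist):
    """Angle (radians) at the elbow between upper-arm and forearm vectors."""
    v1 = np.asarray(shoulder) - np.asarray(elbow)
    v2 = np.asarray(wrist) - np.asarray(elbow)
    cos_a = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))

# Placeholder 3D positions in metres (hypothetical skeleton frame).
angle = elbow_angle([0.0, 0.4, 0.0], [0.0, 0.1, 0.05], [0.2, 0.0, 0.1])
print(f"elbow flexion: {np.degrees(angle):.1f} deg")
```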