Results 1 - 6 of 6
1.
Article in English | MEDLINE | ID: mdl-38083601

ABSTRACT

The rise in population and aging has led to a significant increase in the number of individuals affected by common causes of vision loss. Early diagnosis and treatment are crucial to avoid the consequences of visual impairment. However, many visual problems are difficult to detect in their early stages. Visual adaptation can compensate for several visual deficits through adaptive eye movements, and these adaptive eye movements may therefore serve as indicators of vision loss. In this work, we investigate the association between eye movements and blurred vision. Using electrooculography (EOG) to record eye movements, we propose a new tracking model to identify deterioration of refractive power. We verify the technical feasibility of this method with a blurred-vision simulation experiment: six sets of prescription lenses and a pair of flat lenses were used to create different levels of blur. We analyzed binocular movements from the EOG signals and performed a seven-class classification using the ResNet18 architecture. The results show an average classification accuracy of 94.7% for the subject-dependent model, whereas the subject-independent model performed poorly, with a highest accuracy of only 34.5%. These results demonstrate the potential of an EOG-based visual quality monitoring system, and our experimental design provides a novel approach to assessing blurred vision.


Subject(s)
Eye Movements; Vision, Low; Humans; Electrooculography/methods; Vision Disorders
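
As an illustration of the classification step described in this abstract, the following is a minimal sketch of a ResNet-style 1-D network for sorting EOG windows into the seven lens conditions. The channel count, window length, and layer sizes are assumptions; the abstract states only that the ResNet18 architecture was used.

```python
# Hypothetical sketch: a small 1-D residual CNN for classifying EOG windows
# into seven blur levels (six prescription lenses + one flat lens).
# Input shape (2 EOG channels, 1000 samples) is an assumption.
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=7, padding=3)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=7, padding=3)
        self.bn2 = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # residual connection

class EOGBlurNet(nn.Module):
    def __init__(self, in_channels=2, n_classes=7):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=15, stride=2, padding=7),
            nn.BatchNorm1d(32), nn.ReLU(), nn.MaxPool1d(2))
        self.blocks = nn.Sequential(ResBlock1d(32), ResBlock1d(32))
        self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                                  nn.Linear(32, n_classes))

    def forward(self, x):  # x: (batch, channels, samples)
        return self.head(self.blocks(self.stem(x)))

# Example: a batch of 4 two-channel EOG windows, 1000 samples each.
logits = EOGBlurNet()(torch.randn(4, 2, 1000))
print(logits.shape)  # torch.Size([4, 7])
```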
2.
Iperception ; 14(6): 20416695231216370, 2023.
Article in English | MEDLINE | ID: mdl-38025964

ABSTRACT

Prior research indicates that emotional states can alter taste perception, but the underlying mechanisms remain unclear. This study explores whether taste perception changes through the mere evocation of emotions or through cognitive awareness of those emotions. The first experiment investigated how anxiety affects taste perception when individuals are aware of their anxiety. Participants watched videos inducing relaxation or anxiety and were then divided into a group that focused on their emotions and one that did not, after which taste perception was measured. The second experiment investigated the influence of awareness directed toward emotions on taste evaluation without manipulating emotional states, focusing on cognitive processing of taste through evaluations of visual stimuli. Results showed that sweetness perception is suppressed by the mere evocation of anxiety, whereas bitterness perception is enhanced only by anxiety accompanied by awareness. These findings indicate that the mechanisms by which emotional states affect taste perception may differ depending on taste quality.

3.
Sensors (Basel) ; 23(15)2023 Jul 25.
Article in English | MEDLINE | ID: mdl-37571449

ABSTRACT

Experiences of virtual reality (VR) can easily break if the method of evaluating subjective user states is intrusive. Behavioral measures are increasingly used to avoid this problem. One such measure is eye tracking, which recently became more standard in VR and is often used for content-dependent analyses. This research is an endeavor to utilize content-independent eye metrics, such as pupil size and blinks, for identifying mental load in VR users. We generated mental load independently from visuals through auditory stimuli. We also defined and measured a new eye metric, focus offset, which seeks to measure the phenomenon of "staring into the distance" without focusing on a specific surface. In the experiment, VR-experienced participants listened to two native and two foreign language stimuli inside a virtual phone booth. The results show that with increasing mental load, relative pupil size on average increased 0.512 SDs (0.118 mm), with 57% reduced variance. To a lesser extent, mental load led to fewer fixations, less voluntary gazing at distracting content, and a larger focus offset as if looking through surfaces (about 0.343 SDs, 5.10 cm). These results are in agreement with previous studies. Overall, we encourage further research on content-independent eye metrics, and we hope that hardware and algorithms will be developed in the future to further increase tracking stability.


Subject(s)
Virtual Reality; Humans; Auditory Perception; Language; User-Computer Interface; Surveys and Questionnaires
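
The two content-independent eye metrics named in this abstract, relative pupil size and focus offset, could be computed along the lines of the sketch below. The per-participant z-scoring baseline and the vergence-based definition of focus offset are assumptions, since the abstract does not give the exact formulas.

```python
# Hypothetical sketch of two content-independent eye metrics: relative pupil
# size (z-scored against a baseline) and a "focus offset" computed here as the
# distance between the binocular vergence point and the gaze ray's hit point
# on the nearest surface. This is one plausible reading, not the paper's code.
import numpy as np

def relative_pupil_size(pupil_mm, baseline_mm):
    """Standardize pupil diameter against a per-participant baseline recording."""
    return (pupil_mm - baseline_mm.mean()) / baseline_mm.std()

def focus_offset(left_origin, left_dir, right_origin, right_dir, surface_hit):
    """Distance (m) between the point where the two gaze rays come closest
    (a simple vergence estimate) and the hit point on the nearest surface."""
    # Closest point between two skew lines p1 + t*d1 and p2 + s*d2.
    d1 = left_dir / np.linalg.norm(left_dir)
    d2 = right_dir / np.linalg.norm(right_dir)
    w0 = left_origin - right_origin
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    t = (b * e - c * d) / denom if denom > 1e-9 else 0.0
    s = (a * e - b * d) / denom if denom > 1e-9 else 0.0
    vergence = 0.5 * ((left_origin + t * d1) + (right_origin + s * d2))
    return float(np.linalg.norm(vergence - surface_hit))

# Example with made-up numbers: eyes 6.4 cm apart, surface 1 m ahead,
# gaze converging ~1.5 m ahead (i.e., "looking through" the surface).
off = focus_offset(np.array([-0.032, 0, 0]), np.array([0.032, 0, 1.5]),
                   np.array([0.032, 0, 0]), np.array([-0.032, 0, 1.5]),
                   np.array([0.0, 0, 1.0]))
print(round(off, 3))  # ~0.5 m
```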
4.
PLoS One ; 14(12): e0226328, 2019.
Article in English | MEDLINE | ID: mdl-31830111

ABSTRACT

Facial expressions are behavioural cues that represent an affective state. Because of this, they are an unobtrusive alternative to affective self-report. The perceptual identification of facial expressions can be performed automatically with technological assistance. Once the facial expressions have been identified, the interpretation is usually left to a field expert. However, facial expressions do not always represent the felt affect; they can also be a communication tool. Therefore, facial expression measurements are prone to the same biases as self-report. Hence, the automatic measurement of human affect should also make inferences on the nature of the facial expressions instead of describing facial movements only. We present two experiments designed to assess whether such automated inferential judgment could be advantageous. In particular, we investigated the differences between posed and spontaneous smiles. The aim of the first experiment was to elicit both types of expressions. In contrast to other studies, the temporal dynamics of the elicited posed expression were not constrained by the eliciting instruction. Electromyography (EMG) was used to automatically discriminate between them. Spontaneous smiles were found to differ from posed smiles in magnitude, onset time, and onset and offset speed independently of the producer's ethnicity. Agreement between the expression type and EMG-based automatic detection reached 94% accuracy. Finally, measurements of the agreement between human video coders showed that although agreement on perceptual labels is fairly good, the agreement worsens with inferential labels. A second experiment confirmed that a layperson's accuracy as regards distinguishing posed from spontaneous smiles is poor. Therefore, the automatic identification of inferential labels would be beneficial in terms of affective assessments and further research on this topic.


Subject(s)
Emotions/physiology; Facial Expression; Judgment; Movement; Smiling/physiology; Smiling/psychology; Adult; Cues; Female; Humans; Male
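
The discriminating features named in this abstract (magnitude, onset time, and onset and offset speed) might be extracted from a rectified, smoothed EMG envelope roughly as follows. The 10%/90% crossing thresholds and the synthetic example are assumptions, not the paper's method.

```python
# Hypothetical sketch: simple temporal-dynamics features from a zygomaticus
# EMG amplitude envelope, of the kind said to separate posed from spontaneous
# smiles (magnitude, onset time, onset speed, offset speed).
import numpy as np

def smile_features(envelope, fs, onset_frac=0.1, peak_frac=0.9):
    """Return (magnitude, onset_time_s, onset_speed, offset_speed) from an
    EMG amplitude envelope sampled at fs Hz."""
    peak = envelope.max()
    peak_idx = int(envelope.argmax())
    lo, hi = onset_frac * peak, peak_frac * peak
    onset_idx = int(np.argmax(envelope > lo))                 # first 10% crossing
    rise_end = onset_idx + int(np.argmax(envelope[onset_idx:] > hi))
    after_peak = envelope[peak_idx:]
    offset_end = (peak_idx + int(np.argmax(after_peak < lo))
                  if (after_peak < lo).any() else len(envelope) - 1)
    onset_time = onset_idx / fs
    onset_speed = (hi - lo) / max((rise_end - onset_idx) / fs, 1e-6)
    offset_speed = (peak - lo) / max((offset_end - peak_idx) / fs, 1e-6)
    return peak, onset_time, onset_speed, offset_speed

# Example on a synthetic 2-second envelope with a fast rise and slower decay.
fs = 1000
t = np.arange(0, 2, 1 / fs)
env = np.exp(-((t - 0.6) / 0.3) ** 2) * (t > 0.2)
print(smile_features(env, fs))
```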
5.
Front Hum Neurosci ; 13: 293, 2019.
Article in English | MEDLINE | ID: mdl-31555112

ABSTRACT

Head direction has been shown to anticipate trajectory direction during human locomotion, and this head anticipation persists in darkness. Arguably, the purpose of this anticipatory behavior is related to motor control and trajectory planning, independently of the visual condition, which implies that anticipation remains in the absence of visual input. However, experiments so far have only explored this phenomenon with visual instructions, which intrinsically prime a visual representation to follow. The primary objective of this study is to describe head anticipation in auditorily instructed locomotion, in the presence and absence of visual input. Auditorily instructed locomotion trajectories were performed in two visual conditions: eyes open and eyes closed. First, 10 sighted participants localized static sound sources to ensure they could understand the sound cues provided. They then listened to a moving sound source while actively following it. Finally, participants were asked to reproduce the trajectory of the moving sound source without sound. Anticipatory head behavior was observed during trajectory reproduction in both the eyes-open and eyes-closed conditions. The results suggest that head anticipation is related to motor anticipation rather than to mental simulation of the trajectory.
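
Anticipatory head behavior of the kind described here is often quantified as the lag at which head yaw best correlates with the walking direction. The sketch below illustrates that idea; the cross-correlation approach, sampling rate, and sign convention are assumptions, since the abstract does not detail the analysis.

```python
# Hypothetical sketch: anticipation lag as the cross-correlation lag between
# head yaw and walking-direction angle (positive lag = head leads trajectory).
import numpy as np

def anticipation_lag(head_yaw, walk_dir, fs, max_lag_s=2.0):
    """Return the lag (s) maximizing the correlation between head yaw and
    walking direction, both given as angle time series in radians."""
    head = head_yaw - head_yaw.mean()
    walk = walk_dir - walk_dir.mean()
    max_lag = int(max_lag_s * fs)
    lags = list(range(-max_lag, max_lag + 1))
    corrs = []
    for lag in lags:
        if lag >= 0:
            a, b = head[:len(head) - lag], walk[lag:]
        else:
            a, b = head[-lag:], walk[:len(walk) + lag]
        corrs.append(np.corrcoef(a, b)[0, 1])
    return lags[int(np.nanargmax(corrs))] / fs

# Example: the head turns 0.3 s before the trajectory does.
fs = 100
t = np.arange(0, 10, 1 / fs)
walk = np.sin(0.5 * t)
head = np.sin(0.5 * (t + 0.3))
print(round(anticipation_lag(head, walk, fs), 2))  # ~0.3
```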

6.
IEEE Trans Neural Syst Rehabil Eng ; 23(5): 877-86, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26353236

ABSTRACT

Recently, brain-computer interface (BCI) research has extended to investigating possible uses in motor rehabilitation. Most of these investigations have focused on the upper body; only a few consider gait, because of the difficulty of recording EEG during gross movements. For stroke patients, however, the rehabilitation of gait is of crucial importance. This study therefore investigates whether a BCI can be based on walking-related desynchronization features, and how the complexity of the walking movements influences classification performance. Two BCI experiments were conducted in which healthy subjects performed a cued walking task, a more complex walking task (backward or adaptive walking), and imagination of the same tasks. EEG data recorded during these tasks were classified into walking and no-walking. The results from both experiments show that, despite the automaticity of walking and the recording difficulties, brain signals related to walking could be classified rapidly and reliably. Classification performance was higher for actual walking movements than for imagined walking movements, and there was no significant increase in classification performance for the backward and adaptive walking tasks compared with the cued walking task. These results are promising for developing a BCI for the rehabilitation of gait.


Subject(s)
Algorithms; Brain-Computer Interfaces; Electroencephalography/methods; Evoked Potentials, Motor/physiology; Imagination/physiology; Walking/physiology; Adult; Female; Humans; Male; Pattern Recognition, Automated/methods; Reproducibility of Results; Sensitivity and Specificity
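
A classifier of the kind described in this abstract could be built from desynchronization features such as log band power in the mu and beta bands, as in the sketch below. The specific bands, the Welch estimator, and the linear discriminant are assumptions; the paper's actual pipeline is not given in the abstract.

```python
# Hypothetical sketch: walking vs. no-walking classification from log band
# power in the mu (8-12 Hz) and beta (16-30 Hz) bands per EEG channel,
# fed to a linear discriminant. All data below are synthetic.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

BANDS = {"mu": (8, 12), "beta": (16, 30)}

def band_power_features(epochs, fs):
    """epochs: (n_trials, n_channels, n_samples) -> (n_trials, n_channels * 2)."""
    feats = []
    for trial in epochs:
        f, pxx = welch(trial, fs=fs, nperseg=min(fs, trial.shape[-1]))
        row = [np.log(pxx[:, (f >= lo) & (f <= hi)].mean(axis=1))
               for lo, hi in BANDS.values()]
        feats.append(np.concatenate(row))
    return np.array(feats)

# Example with synthetic data: 40 trials, 8 channels, 2 s at 250 Hz.
rng = np.random.default_rng(0)
fs, epochs = 250, rng.standard_normal((40, 8, 500))
labels = np.repeat([0, 1], 20)      # 0 = no-walking, 1 = walking
epochs[labels == 1] *= 0.7          # crude stand-in for desynchronization
clf = LinearDiscriminantAnalysis().fit(band_power_features(epochs, fs), labels)
print(clf.score(band_power_features(epochs, fs), labels))
```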