1.
Front Neurogenom ; 4: 1297722, 2023.
Article in English | MEDLINE | ID: mdl-38234468

ABSTRACT

Introduction: Understanding how food neophobia affects food experience may help shift toward sustainable diets. Previous research suggests that individuals with higher food neophobia are more aroused and attentive when observing food-related stimuli. The present study examined whether electrodermal activity (EDA), as an index of arousal, relates to food neophobia outside the lab when participants are exposed to a single piece of food. Methods: The EDA of 153 participants was analyzed as part of a larger experiment conducted at a festival. Participants completed the 10-item Food Neophobia Scale. Subsequently, they saw three lids covering three foods: a hotdog labeled as "meat", a hotdog labeled as "100% plant-based", and tofu labeled as "100% plant-based". Participants lifted the lids consecutively, and the area under the curve (AUC) of the skin conductance response (SCR) was captured between 20 s before and 20 s after each food reveal. Results: We found a significant positive correlation between food neophobia and the AUC of the SCR during presentation of the first and second hotdog, and a trend for tofu. These correlations remained significant even when including only the SCR data prior to the food reveal (i.e., an anticipatory response). Discussion: The association between food neophobia and EDA indicates that food-neophobic individuals are more aroused upon the presentation of food. We show for the first time that the mere anticipation of being presented with food already increases arousal in food-neophobic individuals. These findings also indicate that EDA can be meaningfully measured with wearables outside the lab, in a relatively uncontrolled setting, for single-trial analysis.
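The event-locked AUC-of-SCR measure described above can be sketched in a few lines. This is a minimal illustration, not the authors' pipeline: the sampling rate, the baseline correction against the first sample of the window, and all signal values are assumptions.

```python
import numpy as np

def scr_auc(conductance, reveal_idx, fs=4.0, window_s=20.0):
    """Area under the skin-conductance curve around an event.

    conductance : 1-D array of skin conductance samples (microsiemens)
    reveal_idx  : sample index of the food-reveal event
    fs          : sampling rate in Hz (hypothetical; wearables often sample at 4 Hz)
    window_s    : half-window in seconds before and after the event
    """
    half = int(window_s * fs)
    start = max(0, reveal_idx - half)
    stop = min(len(conductance), reveal_idx + half)
    # Baseline-correct against the first sample of the window,
    # then integrate with the trapezoidal rule.
    seg = conductance[start:stop] - conductance[start]
    dt = 1.0 / fs
    return float(np.sum((seg[1:] + seg[:-1]) / 2.0) * dt)

# Synthetic trace: flat baseline plus an exponentially decaying response
# starting at the (hypothetical) reveal at t = 30 s.
fs = 4.0
t = np.arange(0, 60, 1.0 / fs)
trace = 2.0 + 0.5 * (t > 30) * np.exp(-(t - 30) / 5)
auc = scr_auc(trace, reveal_idx=int(30 * fs), fs=fs)
```

Restricting the window to samples before `reveal_idx` would give the anticipatory variant mentioned in the abstract.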

2.
Front Psychol ; 11: 558172, 2020.
Article in English | MEDLINE | ID: mdl-33101128

ABSTRACT

Emotional state during food consumption is expected to affect food pleasantness. We hypothesize that a negative emotional state reduces food pleasantness and more so for novel foods than for familiar foods because novel foods have not yet been associated with previous emotions. Furthermore, we expect this effect to be stronger when judging the food again from memory without tasting. We induced a positive emotional state in 34 participants by telling them that they earned a monetary bonus and induced a negative emotional state in 35 other participants by subjecting them to a social stress test. After this emotion induction, both groups tasted and rated a (for them) novel soup (sumashi soup) and a familiar soup (vegetable soup). Several explicit and implicit measures of food pleasantness (rated valence, EsSense25, willingness-to-take-home and sip size) indicated that while the negative emotion group did not experience the soups as less pleasant than the positive emotion group, there was an interaction between food familiarity and emotional group. The positive emotion group experienced novel and familiar soups as equally pleasant, whereas the negative emotion group experienced the novel soup as relatively unpleasant and the familiar soup as pleasant. The latter result is consistent with a comforting effect of a familiar taste in a stressful situation. This effect remained in the ratings given 1 week later based on memory and even after retasting. Our results show that emotional state affects food pleasantness differently for novel and familiar foods and that such an effect can be robust.

3.
Sensors (Basel) ; 19(20)2019 Oct 11.
Article in English | MEDLINE | ID: mdl-31614504

ABSTRACT

Probing food experience or liking through verbal ratings has its shortcomings. We compare explicit ratings to a range of (neuro)physiological and behavioral measures with respect to their performance in distinguishing drinks associated with different emotional experiences. Seventy participants tasted and rated the valence and arousal of eight regular drinks and a "ground truth" high-arousal, low-valence vinegar solution. The discriminative power for distinguishing between the vinegar solution and the regular drinks was highest for sip size, followed by valence ratings, arousal ratings, heart rate, skin conductance level, facial expression of "disgust", pupil diameter, and electroencephalogram (EEG) frontal alpha asymmetry. Within the regular drinks, a positive correlation was found between rated arousal and heart rate, and a negative correlation between rated arousal and heart rate variability (HRV). Most physiological measures showed consistent temporal patterns following the announcement of the drink and the taking of a sip. These patterns were consistent over all nine drinks, but the peaks were substantially higher for the vinegar solution than for the regular drinks, likely reflecting the stronger emotional response. Our results indicate that implicit variables have the potential to differentiate between drinks associated with different emotional experiences. In addition, this study gives insight into the physiological temporal response patterns associated with taking a sip.
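One common way to put "discriminative power" of very different measures on a common scale is an effect size such as Cohen's d. The abstract does not state which statistic was used, so this sketch, with invented sip-size numbers, is purely illustrative.

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Effect size between two samples, using the pooled standard deviation."""
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical per-participant mean sip sizes (ml): regular drinks vs. vinegar.
rng = np.random.default_rng(0)
sip_regular = rng.normal(12.0, 2.0, size=70)
sip_vinegar = rng.normal(6.0, 2.0, size=70)
d_sip = cohens_d(sip_regular, sip_vinegar)
```

Computing the same statistic for each measure (heart rate, skin conductance level, and so on) would allow the kind of ranking reported above.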


Subject(s)
Beverages; Taste/physiology; Adult; Analysis of Variance; Arousal; Behavior; Facial Expression; Female; Heart Rate/physiology; Humans; Male; Middle Aged; Time Factors; Young Adult
4.
Front Hum Neurosci ; 11: 264, 2017.
Article in English | MEDLINE | ID: mdl-28559807

ABSTRACT

EEG and eye tracking variables are potential sources of information about the underlying processes of target detection and storage during visual search. Fixation duration, pupil size and event related potentials (ERPs) locked to the onset of fixation or saccade (saccade-related potentials, SRPs) have been reported to differ dependent on whether a target or a non-target is currently fixated. Here we focus on the question of whether these variables also differ between targets that are subsequently reported (hits) and targets that are not (misses). Observers were asked to scan 15 locations that were consecutively highlighted for 1 s in pseudo-random order. Highlighted locations displayed either a target or a non-target stimulus with two, three or four targets per trial. After scanning, participants indicated which locations had displayed a target. To induce memory encoding failures, participants concurrently performed an aurally presented math task (high load condition). In a low load condition, participants ignored the math task. As expected, more targets were missed in the high compared with the low load condition. For both conditions, eye tracking features distinguished better between hits and misses than between targets and non-targets (with larger pupil size and shorter fixations for missed compared with correctly encoded targets). In contrast, SRP features distinguished better between targets and non-targets than between hits and misses (with average SRPs showing larger P300 waveforms for targets than for non-targets). Single trial classification results were consistent with these averages. This work suggests complementary contributions of eye and EEG measures in potential applications to support search and detect tasks. SRPs may be useful to monitor what objects are relevant to an observer, and eye variables may indicate whether the observer should be reminded of them later.
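Fixation-locked averaging of the kind used to obtain SRPs can be sketched as follows. The sampling rate, epoch window and synthetic "P300-like" deflection are illustrative assumptions, not the study's actual parameters.

```python
import numpy as np

def average_srp(eeg, onsets, fs=256, tmin=-0.1, tmax=0.6):
    """Average fixation-locked epochs from a single-channel EEG trace.

    eeg    : 1-D array of samples
    onsets : sample indices of fixation onsets
    fs     : sampling rate in Hz
    """
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for onset in onsets:
        if onset - pre >= 0 and onset + post <= len(eeg):
            epoch = eeg[onset - pre:onset + post]
            # Baseline-correct using the pre-fixation interval.
            epochs.append(epoch - epoch[:pre].mean())
    return np.mean(epochs, axis=0)

# Synthetic trace: a positive deflection ~300 ms after each "target" fixation.
fs = 256
eeg = np.zeros(fs * 10)
target_onsets = [fs * 2, fs * 5, fs * 8]
for onset in target_onsets:
    idx = onset + int(0.3 * fs)
    eeg[idx:idx + int(0.1 * fs)] += 5.0  # microvolt-scale P300-like bump
srp = average_srp(eeg, target_onsets)
```

Comparing such averages between target and non-target fixations (or between hits and misses) is the comparison the abstract describes.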

5.
PLoS One ; 11(12): e0165016, 2016.
Article in English | MEDLINE | ID: mdl-28036328

ABSTRACT

The fusion and enhancement of multiband nighttime imagery for surveillance and navigation has been the subject of extensive research for over two decades. Despite the ongoing efforts in this area, only a small number of static multiband test images is available for the development and evaluation of new image fusion and enhancement methods, and dynamic multiband imagery is currently lacking. To fill this gap we present the TRICLOBS dynamic multi-band image data set, containing sixteen registered visual (0.4-0.7 µm), near-infrared (NIR, 0.7-1.0 µm) and long-wave infrared (LWIR, 8-14 µm) motion sequences. They represent different military and civilian surveillance scenarios recorded in three different scenes. Scenes include (military and civilian) people that are stationary, walking or running, or carrying various objects. Vehicles, foliage, and buildings or other man-made structures are also included. This data set is primarily intended for the development and evaluation of image fusion, enhancement and color mapping algorithms for short-range surveillance applications. The imagery was collected during several field trials with our newly developed TRICLOBS (TRI-band Color Low-light OBServation) all-day all-weather surveillance system. This system registers a scene in the visual, NIR and LWIR parts of the electromagnetic spectrum using three optically aligned sensors (two digital image intensifiers and an uncooled long-wave infrared microbolometer). The three sensor signals are mapped to three individual RGB color channels, digitized, and stored as uncompressed (false-color) RGB frames. The TRICLOBS data set enables the development and evaluation of both static and dynamic image fusion, enhancement and color mapping algorithms. To allow the development of realistic color remapping procedures, the data set also contains color photographs of each of the three scenes; the color statistics derived from these photographs can be used to define color mappings that give the multi-band imagery a realistic color appearance.
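The sensor-to-RGB mapping described for the TRICLOBS system (three aligned bands driving three color channels) can be illustrated with a minimal sketch; the per-band normalization and the channel order used here are assumptions for illustration only.

```python
import numpy as np

def bands_to_false_color(visual, nir, lwir):
    """Map three co-registered sensor bands to an 8-bit false-color RGB frame.

    Each input is a 2-D array; each band is normalized independently to [0, 1]
    and assigned to one RGB channel (the channel order is an assumption).
    """
    def normalize(band):
        band = band.astype(float)
        span = band.max() - band.min()
        return (band - band.min()) / span if span > 0 else np.zeros_like(band)

    rgb = np.stack([normalize(visual), normalize(nir), normalize(lwir)], axis=-1)
    return (rgb * 255).astype(np.uint8)

# Tiny synthetic frames stand in for the aligned sensor outputs.
rng = np.random.default_rng(4)
frame = bands_to_false_color(
    rng.random((4, 4)), rng.random((4, 4)), rng.random((4, 4)))
```

A realistic color remapping, as mentioned above, would additionally transfer color statistics from daytime photographs onto such false-color frames.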


Subject(s)
Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Algorithms; Color; Motion
6.
Front Neurosci ; 8: 322, 2014.
Article in English | MEDLINE | ID: mdl-25352774

ABSTRACT

While studies exist that compare different physiological variables with respect to their association with mental workload, it is still largely unclear which variables supply the best information about the momentary workload of an individual and what the benefit is of combining them. We investigated workload using the n-back task, controlling for body movements and visual input. We recorded EEG, skin conductance, respiration, ECG, pupil size and eye blinks of 14 subjects. Various variables were extracted from these recordings and used as features in individually tuned classification models. Online classification was simulated by using the first part of the data as a training set and the last part for testing the models. The results indicate that EEG performs best, followed by eye-related measures and peripheral physiology. Combining variables from different sensors did not significantly improve workload assessment over the best-performing sensor alone. The best classification accuracy, a little over 90%, was reached when distinguishing between high and low workload on the basis of 2-min segments of EEG and eye-related variables. A similar and not significantly different performance of 86% was reached using only EEG from the single electrode location Pz.
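The simulated online classification described above (train on the first part of the data, test on the last part) can be sketched with a simple nearest-centroid classifier on synthetic "band power" features. The study used individually tuned models, so everything below, including the feature values, is illustrative.

```python
import numpy as np

def train_test_split_time(features, labels, train_frac=0.5):
    """Chronological split: first part for training, last part for testing."""
    cut = int(len(features) * train_frac)
    return features[:cut], labels[:cut], features[cut:], labels[cut:]

def nearest_centroid_fit(X, y):
    """Store the mean feature vector (centroid) of each class."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, X):
    """Assign each sample to the class with the closest centroid."""
    classes = list(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

# Synthetic per-segment features for low (0) vs. high (1) workload.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (40, 4)), rng.normal(2, 1, (40, 4))])
y = np.repeat([0, 1], 40)
order = rng.permutation(len(y))        # interleave segments over time
X, y = X[order], y[order]
Xtr, ytr, Xte, yte = train_test_split_time(X, y)
centroids = nearest_centroid_fit(Xtr, ytr)
accuracy = float((nearest_centroid_predict(centroids, Xte) == yte).mean())
```

The chronological split matters: shuffling before splitting would leak future data into training and overstate online performance.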

7.
Front Neurosci ; 8: 224, 2014.
Article in English | MEDLINE | ID: mdl-25120425

ABSTRACT

Here we introduce a new experimental paradigm to induce mental stress in a quick and easy way, while adhering to ethical standards and controlling for potential confounds resulting from sensory input and body movements. In our Sing-a-Song Stress Test, participants are presented with neutral messages on a screen, interleaved with 1-min time intervals. The final message states that the participant should sing a song aloud after the interval has elapsed. Participants sit still during the whole procedure. We found that heart rate and skin conductance during the 1-min intervals following the sing-a-song stress message are substantially higher than during intervals following neutral messages. The magnitude of the rise is comparable to that achieved by the Trier Social Stress Test. The increase in skin conductance correlates positively with the experienced stress level as reported by participants. We also simulated stress detection in real time. When using both skin conductance and heart rate, stress was detected for 18 out of 20 participants, approximately 10 s after the onset of the sing-a-song message. In conclusion, the Sing-a-Song Stress Test provides a quick, easy, controlled and potent way to induce mental stress, and could be helpful in studies ranging from examining physiological effects of mental stress to evaluating interventions aimed at reducing stress.
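A minimal version of the real-time detection idea (flag a rise in skin conductance relative to a stress-free baseline) might look like the sketch below. The threshold rule, the baseline length and the synthetic trace are assumptions, not the authors' detector, which also used heart rate.

```python
import numpy as np

def detect_stress_onset(signal, fs=1.0, baseline_s=50, threshold_sd=4.0):
    """Return the time (s) of the first sample exceeding baseline mean + k*SD.

    The baseline window is assumed stress-free; every later sample is
    compared against it. Returns None if no sample crosses the threshold.
    """
    n0 = int(baseline_s * fs)
    mu, sd = signal[:n0].mean(), signal[:n0].std()
    for i in range(n0, len(signal)):
        if signal[i] > mu + threshold_sd * sd:
            return i / fs
    return None

# Synthetic skin-conductance trace (1 Hz): the stressor message appears at
# t = 60 s, followed by a slow conductance rise on top of baseline noise.
rng = np.random.default_rng(2)
t = np.arange(0, 120, 1.0)
trace = (5.0 + 0.05 * rng.standard_normal(len(t))
         + 2.0 * (t >= 60) * (1 - np.exp(-(t - 60) / 10)))
detect_time = detect_stress_onset(trace, fs=1.0)
```

With a clear response like this, detection follows the message onset within a few seconds, in the spirit of the ~10 s latency reported above.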

8.
Int J Psychophysiol ; 93(2): 242-52, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24841994

ABSTRACT

Learning to master a task is expected to be accompanied by a decrease in effort during task execution. We examine the possibility of monitoring learning using physiological measures that have been reported to reflect effort or workload. Thirty-five participants performed different difficulty levels of the n-back task while a range of physiological and performance measurements were recorded. In order to dissociate non-specific time-related effects from effects of learning, we used the easiest level as a baseline condition; this condition is expected to reflect only non-specific effects of time. Performance and subjective measures confirmed more learning for the difficult level than for the easy level. The difficulty levels affected the physiological variables as expected, demonstrating their sensitivity. However, while most of the physiological variables were also affected by time, time-related effects were generally the same for the easy and the difficult level. Thus, in a well-controlled experiment that enabled the dissociation of general time effects from learning, we did not find physiological variables to indicate decreasing effort associated with learning. Theoretical and practical implications are discussed.


Subject(s)
Brain Waves/physiology; Galvanic Skin Response/physiology; Heart Rate/physiology; Learning/physiology; Workload; Adult; Analysis of Variance; Electrocardiography; Electroencephalography; Female; Humans; Male; Physical Stimulation; Psychomotor Performance; Reaction Time/physiology; Young Adult
9.
J Neural Eng ; 9(4): 045008, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22832068

ABSTRACT

Previous studies indicate that both electroencephalogram (EEG) spectral power (in particular in the alpha and theta bands) and event-related potentials (ERPs) (in particular the P300) can be used as measures of mental workload or memory load. We compare their ability to estimate workload level in a well-controlled task. In addition, we combine both types of measures in a single classification model to examine whether this results in higher classification accuracy than either one alone. Participants watched a sequence of visually presented letters and indicated whether or not the current letter was the same as the one presented n instances before. Workload was varied by varying n. We developed different classification models using ERP features, frequency power features, or a combination of both (fusion). Training and testing of the models simulated an online workload estimation situation. All our ERP, power and fusion models provide classification accuracies between 80% and 90% when distinguishing between the highest and the lowest workload condition after 2 min. For 32 out of 35 participants, classification was significantly higher than chance level after 2.5 s (or one letter), as estimated by the fusion model. Differences between the models are rather small, though the fusion model performs better than the other models when only short data segments are available for estimating workload.
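At its simplest, "fusion" of ERP and spectral-power features means presenting both feature sets to a single classifier. A sketch with invented per-trial values (the specific features and their scales are assumptions):

```python
import numpy as np

# Hypothetical per-trial features: ERP amplitudes and band-power values.
erp_features = np.array([[2.1, 0.4], [1.8, 0.3]])              # e.g. P300 amplitudes
power_features = np.array([[5.0, 1.2, 0.7], [4.6, 1.0, 0.9]])  # e.g. theta/alpha power

# Fusion: concatenate the two feature sets per trial, so one classifier
# can weigh both sources of evidence.
fused = np.hstack([erp_features, power_features])
# Z-score each column so features on different scales contribute comparably.
fused = (fused - fused.mean(axis=0)) / fused.std(axis=0)
```

The fused matrix would then be fed to the same kind of classification model as the single-source feature sets.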


Subject(s)
Electroencephalography/methods; Evoked Potentials/physiology; Photic Stimulation/methods; Psychomotor Performance/physiology; Workload; Adult; Electroencephalography/psychology; Female; Humans; Male; Workload/psychology
10.
Optom Vis Sci ; 85(10): E951-62, 2008 Oct.
Article in English | MEDLINE | ID: mdl-18832970

ABSTRACT

PURPOSE: The purpose of our study was to develop a tool to visualize the limitations posed by visual impairments in detecting small and low-contrast elements in natural images. This visualization tool incorporates existing models of several aspects of visual perception, such as the band-limited contrast model of Peli (J Opt Soc Am A 1996;13:1131-8). METHODS: The models underlying the visualization tool were elaborated and tested in experiments with human subjects with various visual impairments, such as macular degeneration, diabetic retinopathy and glaucoma, and with subjects with normal vision under various degraded viewing conditions (including reduced contrast and eccentric viewing). The experiments were designed to determine, in three successive steps, the contrast sensitivity function that produces a degraded image that can just be discriminated from its original. In the first step, the just detectable blur was determined; in the next two steps, contrast threshold levels were determined for removing high and medium spatial frequencies from the image. Threshold parameters were determined for three image types (face, stairs, forest) and the relationship with acuity and contrast thresholds (of Landolt-C symbols) was examined. RESULTS: The blur threshold is inversely related to acuity, and this relationship is largely independent of the cause of the reduced acuity (visual impairment, contrast reduction or eccentric viewing). CONCLUSIONS: Based on these results, we developed a validated visualization tool that provides a reliable impression of the detectability of image features by visually impaired people.
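Degrading an image by attenuating high spatial frequencies, the basic operation behind such a visualization tool, can be sketched with an FFT low-pass filter. The Gaussian transfer function here is an illustrative stand-in for the full band-limited contrast model, not the validated tool itself.

```python
import numpy as np

def low_pass_degrade(image, cutoff=0.1):
    """Attenuate high spatial frequencies to simulate reduced acuity.

    cutoff is the Gaussian half-width in cycles/pixel (an assumption for
    illustration; a real tool would use a measured contrast sensitivity
    function instead of this simple transfer function).
    """
    fy = np.fft.fftfreq(image.shape[0])[:, None]
    fx = np.fft.fftfreq(image.shape[1])[None, :]
    radius = np.sqrt(fx ** 2 + fy ** 2)
    transfer = np.exp(-(radius / cutoff) ** 2)   # 1 at DC, falls off with frequency
    return np.real(np.fft.ifft2(np.fft.fft2(image) * transfer))

# Apply to a synthetic noise image; fine detail is removed, mean is preserved.
rng = np.random.default_rng(3)
image = rng.random((32, 32))
blurred = low_pass_degrade(image, cutoff=0.05)
```

Sweeping the cutoff until the degraded image can just be discriminated from the original mirrors the thresholding procedure described in the METHODS.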


Subject(s)
Optometry/methods; Vision, Low/diagnosis; Vision, Low/physiopathology; Vision, Ocular; Aged; Aged, 80 and over; Contrast Sensitivity; Diabetic Retinopathy/complications; Glaucoma/complications; Humans; Macular Degeneration/complications; Middle Aged; Models, Biological; Photic Stimulation/methods; Reproducibility of Results; Sensory Thresholds; Vision, Low/etiology; Visual Acuity
11.
Perception ; 33(10): 1155-72, 2004.
Article in English | MEDLINE | ID: mdl-15693662

ABSTRACT

A common assumption in cue combination models is that small discrepancies between cues are due to the limited resolution of the individual cues. Whenever this assumption holds, information from the separate cues can best be combined to give a single, more accurate estimate of the property of interest. We examined whether information about the discrepancy itself is lost when this is done. In our experiments, subjects were required to combine cues to match certain properties while avoiding perceptual conflicts. In part 1, they combined expansion and change in disparity to estimate motion in depth; and in part 2, they combined perspective and binocular disparities to estimate slant. We compared the pattern in the way that subjects set the two cues with the patterns predicted by models of cue combination with and without a loss of information about the discrepancy. From this comparison we conclude that little information about the discrepancies between cues is lost when the cues are combined.
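The standard model behind such cue-combination studies weights each cue by its reliability (inverse variance), which yields a combined estimate with lower variance than either cue alone. The slant values below are invented for illustration.

```python
import numpy as np

def combine_cues(estimates, variances):
    """Maximum-likelihood cue combination for independent Gaussian cues.

    Each cue's estimate is weighted by its reliability (inverse variance);
    the combined variance is the reciprocal of the summed reliabilities.
    """
    estimates = np.asarray(estimates, float)
    reliabilities = 1.0 / np.asarray(variances, float)
    weights = reliabilities / reliabilities.sum()
    combined = float(np.dot(weights, estimates))
    combined_var = float(1.0 / reliabilities.sum())
    return combined, combined_var

# Hypothetical slant estimates (deg): perspective (less reliable) and
# binocular disparity (more reliable).
slant, var = combine_cues(estimates=[30.0, 34.0], variances=[4.0, 1.0])
```

Under this model the combined percept sits closer to the more reliable cue, and, as the experiments above probe, any discrepancy between the cues need not be discarded in the process.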


Subject(s)
Cues; Visual Perception/physiology; Depth Perception/physiology; Humans; Motion Perception/physiology; Perceptual Distortion; Psychophysics; Vision Disparity
12.
J Vis ; 3(7): 464-85, 2003.
Article in English | MEDLINE | ID: mdl-14507253

ABSTRACT

To gain insight into how speeds are combined in structure-from-motion, we compared performance for estimating the mean speed with performance for detecting deviations from planarity. The stimuli showed a center dot surrounded by an annulus of dots. In one (plane) condition, the stimuli simulated a rotating plane. In a two-alternative forced-choice (2AFC) task, the subject had to choose in which of two stimuli the center dot moved in the plane. In another (cloud) condition, the same dot locations and speeds were used but now assigned to different dots; such a stimulus resembles a translating and rotating cloud of dots. In this case, the subject had to choose the stimulus in which the center dot moved with the mean speed of the surrounding dots. Performance was measured as a function of deformation/slant. Although locations and speeds were the same in both conditions, performance was much poorer in the cloud condition. Subsequent experiments and an ideal observer model point to a plausible explanation: in detecting deviations from planarity, the visual system can focus on the most reliable pieces of information (the slower dots, closest to the test dot). Although performance could benefit from taking more dots into account, it barely improved with an increase in the number of dots. This may reflect a limited processing capacity of the visual system.


Subject(s)
Motion Perception/physiology; Pattern Recognition, Visual/physiology; Humans; Psychophysics; Sensory Thresholds/physiology