Results 1 - 17 of 17
1.
Psych J ; 12(1): 34-43, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36129003

ABSTRACT

Interpersonal distance plays an important role in human social interaction. With the increasing use of virtual reality for social interaction, the interpersonal distance people maintain in virtual space has attracted growing attention. It remains unclear whether, and to what extent, the interpersonal distance people require is altered by crowded virtual scenes. In this study, we manipulated crowd density in virtual environments and used the classical stop-distance paradigm to measure required interpersonal distances at different crowd densities. We found that people's required interpersonal distance decreased with increased social crowdedness but not with physical crowdedness. Moreover, the decrease in both types of interpersonal distance was associated with the globally averaged crowd density rather than the local crowd density, and the reduction was not due to imitation of the other virtual humans in the crowd. Finally, we developed a model describing the quantitative relationship between the crowdedness of the environment and the required interpersonal distance. Our findings provide insights into designing user-friendly virtual humans in metaverse virtual worlds.


Subject(s)
Interpersonal Relations , Virtual Reality , Humans , Crowding , Attention
2.
Appl Ergon ; 103: 103785, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35490546

ABSTRACT

Eye-gaze and head-gaze are two hands-free interaction modes in virtual reality, each with different strengths. Selecting a suitable interaction mode for each scenario is important for efficient interaction in virtual scenes. This study compared movement times in an object-positioning task using eye-gaze and head-gaze interaction under various conditions, and thereby identified the zones in which each mode is superior. Based on this information, we designed a combination mode that uses eye-gaze interaction in the acceleration and deceleration phases and head-gaze interaction in the correction phase; this combined mode yielded higher efficiency and subjective satisfaction. This study provides a comprehensive analysis of the characteristics of the eye-gaze and head-gaze interaction modes and offers valuable insights into selecting appropriate interaction modes for virtual reality applications.
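The combination scheme described above amounts to a phase-to-modality lookup; a minimal sketch, where the phase labels and function name are illustrative rather than an API from the study:

```python
def gaze_mode(phase):
    """Pick the interaction modality for each movement phase, per the
    combination scheme described above: eye-gaze for the fast
    acceleration and deceleration phases, head-gaze for the fine
    correction phase. Phase names are illustrative labels.
    """
    if phase in ("acceleration", "deceleration"):
        return "eye-gaze"
    if phase == "correction":
        return "head-gaze"
    raise ValueError(f"unknown phase: {phase!r}")
```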


Subject(s)
Fixation, Ocular , Virtual Reality , Acceleration , Humans , Movement
3.
J Med Internet Res ; 23(8): e29150, 2021 08 12.
Article in English | MEDLINE | ID: mdl-34280118

ABSTRACT

BACKGROUND: The COVID-19 outbreak has induced negative emotions among people. These emotions are expressed by the public on social media and are rapidly spread across the internet, which could cause high levels of panic among the public. Understanding the changes in public sentiment on social media during the pandemic can provide valuable information for developing appropriate policies to reduce the negative impact of the pandemic on the public. Previous studies have consistently shown that the COVID-19 outbreak has had a devastating negative impact on public sentiment. However, it remains unclear whether there has been a variation in the public sentiment during the recovery phase of the pandemic. OBJECTIVE: In this study, we aim to determine the impact of the COVID-19 pandemic in mainland China by continuously tracking public sentiment on social media throughout 2020. METHODS: We collected 64,723,242 posts from Sina Weibo, China's largest social media platform, and conducted a sentiment analysis based on natural language processing to analyze the emotions reflected in these posts. RESULTS: We found that the COVID-19 pandemic not only affected public sentiment on social media during the initial outbreak but also induced long-term negative effects even in the recovery period. These long-term negative effects were no longer correlated with the number of new confirmed COVID-19 cases both locally and nationwide during the recovery period, and they were not attributed to the postpandemic economic recession. CONCLUSIONS: The COVID-19 pandemic induced long-term negative effects on public sentiment in mainland China even as the country recovered from the pandemic. Our study findings remind public health and government administrators of the need to pay attention to public mental health even once the pandemic has concluded.
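The study's actual NLP model for Chinese Weibo text is not reproduced here; a toy lexicon-based scorer illustrates the general idea of reducing each post to a single sentiment value that can then be tracked over time. The lexicon entries below are invented examples:

```python
def sentiment_score(tokens, lexicon):
    """Toy lexicon-based sentiment score: the mean valence of the
    tokens found in the lexicon, in [-1, 1]; 0.0 if no token matches.
    A sketch of the general idea only, not the study's NLP pipeline.
    """
    scores = [lexicon[t] for t in tokens if t in lexicon]
    return sum(scores) / len(scores) if scores else 0.0

# Illustrative valence lexicon (invented entries, not study data).
LEXICON = {"hope": 0.8, "recovery": 0.6, "fear": -0.9, "panic": -1.0}
```

Averaging such per-post scores per day yields the kind of sentiment time series the study correlates with case counts.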


Subject(s)
COVID-19/epidemiology , Emotions , Mental Health/statistics & numerical data , Pandemics , Public Opinion , Social Media/statistics & numerical data , China/epidemiology , Humans , SARS-CoV-2 , Time Factors
4.
Brain Imaging Behav ; 15(4): 1934-1943, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33034845

ABSTRACT

Public speaking anxiety refers to feelings of nervousness when anticipating or delivering a speech. However, the relationship between anxiety in the anticipation phase and in the speech delivery phase is unclear. In this study, we used functional near-infrared spectroscopy to record participants' brain activity while they anticipated or performed public speaking tasks in an immersive virtual reality environment. Neuroimaging results showed that participants' subjective ratings of public speaking anxiety in the anticipation phase, but not the delivery phase, were correlated with activity in the dorsolateral prefrontal cortex, the inferior frontal gyrus, and the precentral and postcentral gyri. In contrast, speaking performance could be predicted by activity in the temporal gyrus and the right postcentral gyrus during the delivery phase. This suggests a dissociation in the neural mechanisms underlying anxiety during the preparation and the execution of a speech. The conventional anxiety questionnaire is a good predictor of anticipatory anxiety but cannot predict speaking performance. Using virtual reality to establish a situational test could be a better approach to assessing in vivo public speaking performance.


Subject(s)
Phobic Disorders , Speech , Anxiety , Humans , Magnetic Resonance Imaging , Prefrontal Cortex/diagnostic imaging
5.
J Neurosci ; 40(5): 1120-1132, 2020 01 29.
Article in English | MEDLINE | ID: mdl-31826945

ABSTRACT

When moving around in the world, the human visual system uses both motion and form information to estimate the direction of self-motion (i.e., heading). However, little is known about the cortical areas in charge of this task. This brain-imaging study addressed the question by using visual stimuli consisting of randomly distributed dot pairs oriented toward one locus on a screen (the form-defined focus of expansion [FoE]) but moving away from a different locus (the motion-defined FoE) to simulate observer translation. We first fixed the motion-defined FoE location and shifted the form-defined FoE location. We then made the locations of the motion- and the form-defined FoEs either congruent (at the same location in the display) or incongruent (on opposite sides of the display). The motion- or form-defined FoE shift was the same in the two types of stimuli, but the perceived heading direction shifted for the congruent, but not for the incongruent, stimuli. Participants (both sexes) made a task-irrelevant (contrast discrimination) judgment during scanning. Searchlight and ROI-based multivoxel pattern analysis revealed that early visual areas V1, V2, and V3 responded to either the motion- or the form-defined FoE shift. After V3, only the dorsal areas V3A and V3B/KO responded to such shifts. Furthermore, area V3B/KO showed significantly higher decoding accuracy for the congruent than for the incongruent stimuli. Our results provide direct evidence that area V3B/KO does not simply respond to motion and form cues but integrates these two cues for the perception of heading.

SIGNIFICANCE STATEMENT: Human survival relies on the accurate perception of self-motion. The visual system uses both motion (optic flow) and form cues to perceive the direction of self-motion (heading). Although the human brain areas that process optic flow and form structure are well identified, the areas responsible for integrating these two cues for the perception of self-motion remain unknown. We conducted fMRI experiments and used a multivoxel pattern analysis technique to find human brain areas that can decode the shift in heading specified by each cue alone and by the two cues combined. We found that motion and form cues are first processed in the early visual areas and then likely integrated in the higher dorsal area V3B/KO for the final estimate of heading.


Subject(s)
Form Perception/physiology , Motion Perception/physiology , Visual Cortex/physiology , Brain/physiology , Cues , Female , Humans , Male , Optic Flow/physiology , Photic Stimulation
6.
Nat Hum Behav ; 3(8): 847-855, 2019 08.
Article in English | MEDLINE | ID: mdl-31182793

ABSTRACT

Identifying whether people are part of a group is essential for humans to understand social interactions in social activities. Previous studies have focused mainly on the perceptual grouping of low-level visual features. However, very little attention has been paid to grouping in social scenes. Here we implemented virtual reality technology to manipulate characteristics of avatars in virtual scenes. We found that closer interpersonal distances, more direct interpersonal angles and more open avatar postures led to a higher probability of a group being judged as interactive. We developed a social interaction field model that describes a front-back asymmetric social interaction field. This model accurately predicts participants' perceptual judgements of social grouping in real static and dynamic social scenes. Our findings indicate that the social interaction field model is an efficient computational framework for analysing social interactions and provides insight into how human observers perceive the interactions of others, enabling the identification of social groups.
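A front-back asymmetric interaction field of the kind described above can be sketched as a distance falloff whose scale depends on the interpersonal angle. The Gaussian form and the sigma values below are illustrative assumptions, not the paper's fitted model:

```python
import math

def interaction_field(distance, angle, sigma_front=1.5, sigma_back=0.75):
    """Sketch of a front-back asymmetric social interaction field.

    `angle` is the direction of the other person relative to the
    avatar's facing direction (0 = straight ahead, pi = directly
    behind). The field falls off with distance faster behind than in
    front, so the same separation yields a stronger grouping signal
    for a face-to-face pair. Form and parameters are illustrative.
    """
    # Interpolate the falloff scale: 1 in front, 0 behind.
    w = (1 + math.cos(angle)) / 2
    sigma = sigma_back + (sigma_front - sigma_back) * w
    return math.exp(-distance ** 2 / (2 * sigma ** 2))
```

Thresholding such a pairwise field value would give a grouping judgment: closer distances and more direct angles yield higher field values, matching the qualitative findings above.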


Subject(s)
Interpersonal Relations , Social Identification , Female , Humans , Judgment , Male , Models, Theoretical , Posture/physiology , Psychological Distance , Social Perception , Young Adult
7.
Hum Factors ; 61(6): 879-894, 2019 09.
Article in English | MEDLINE | ID: mdl-30912987

ABSTRACT

OBJECTIVE: The study examines the factors determining the movement time (MT) of positioning an object in an immersive 3D virtual environment. BACKGROUND: Positioning an object into a prescribed area is a fundamental operation in a 3D space. Although Fitts's law models the pointing task very well, it does not apply to a positioning task in an immersive 3D virtual environment since it does not consider the effect of object size in the positioning task. METHOD: Participants were asked to position a ball-shaped object into a spherical area in a virtual space using a handheld or head-tracking controller in the ray-casting technique. We varied object size (OS), movement amplitude (A), and target tolerance (TT). MT was recorded and analyzed in three phases: acceleration, deceleration, and correction. RESULTS: In the acceleration phase, MT was inversely related to object size and positively proportional to movement amplitude. In the deceleration phase, MT was primarily determined by movement amplitude. In the correction phase, MT was affected by all three factors. We observed similar results whether participants used a handheld controller or head-tracking controller. We thus propose a three-phase model with different formulae at each phase. This model fit participants' performance very well. CONCLUSION: A three-phase model can successfully predict MT in the positioning task in an immersive 3D virtual environment in the acceleration, deceleration, and correction phases, separately. APPLICATION: Our model provides a quantitative framework for researchers and designers to design and evaluate 3D interfaces for the positioning task in a virtual space.
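The three-phase model can be sketched as a sum of phase-specific terms reflecting the dependencies reported above; the functional forms and coefficients below are schematic placeholders, not the paper's fitted formulae:

```python
import math

def movement_time(OS, A, TT, c=(0.1, 0.2, 0.05, 0.15, 0.1, 0.2)):
    """Schematic three-phase movement-time model for 3D positioning.

    OS = object size, A = movement amplitude, TT = target tolerance.
    Each phase mirrors the dependencies reported in the abstract:
      - acceleration: grows with A, shrinks with OS
      - deceleration: driven mainly by A
      - correction:   depends on all three factors (Fitts-like term)
    The forms and coefficients `c` are illustrative assumptions.
    """
    a0, a1, b0, b1, d0, d1 = c
    t_acc = a0 + a1 * A / OS
    t_dec = b0 + b1 * A
    t_cor = d0 + d1 * math.log2(1 + (A + OS) / TT)
    return t_acc + t_dec + t_cor
```

Fitting a separate formula per phase, rather than a single Fitts's-law term, is what lets the model capture the object-size effect that the pointing model misses.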


Subject(s)
Acceleration , Deceleration , Motor Activity/physiology , Virtual Reality , Adult , Female , Humans , Male , Movement , Time Factors , Young Adult
8.
J Vis ; 18(13): 10, 2018 12 03.
Article in English | MEDLINE | ID: mdl-30550619

ABSTRACT

Angle perception is an important middle-level visual process, combining line features to generate an integrated shape percept. Previous studies have proposed two theories of angle perception: a combination of lines, and a holistic feature following Weber's law. However, both theories fail to explain the dual-peak fluctuations of the just-noticeable difference (JND) across angle sizes. In this study, we found that the human visual system processes the angle feature in two stages: first, by encoding the orientation of the bounding lines and combining them into an angle feature; and second, by estimating the angle in an orthogonal internal reference frame (IRF). The IRF model fits the dual-peak fluctuations of the JND that neither the line-combination theory nor Weber's law can explain. A statistical analysis of natural images revealed that the IRF aligns with the distribution of angle features in the natural environment, suggesting that the IRF reflects human prior knowledge of angles in the real world. This study provides a new computational framework for angle discrimination, thereby resolving a long-standing debate on angle perception.
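The core of the IRF idea can be caricatured in a few lines: if angles are judged against an orthogonal reference frame, discriminability is best near a reference axis and worst midway between axes, producing two JND peaks (near 45 and 135 degrees) that a flat Weber fraction cannot produce. The linear distance-to-reference form and constants below are illustrative assumptions, not the paper's fitted model:

```python
def irf_jnd(angle_deg, base=1.0, gain=3.0):
    """Toy internal-reference-frame (IRF) prediction of the angle JND.

    Angles are referred to an orthogonal frame (0, 90, 180 degrees);
    the JND grows with distance to the nearest reference axis, so it
    peaks midway between axes (45 and 135 degrees). Illustrative only.
    """
    references = (0.0, 90.0, 180.0)
    dist = min(abs(angle_deg - r) for r in references)  # 0..45 degrees
    return base + gain * dist / 45.0
```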


Subject(s)
Form Perception/physiology , Reference Standards , Computer Simulation , Female , Humans , Male , Orientation , Psychophysics , Sensory Thresholds/physiology , Young Adult
9.
Cereb Cortex ; 27(5): 3042-3051, 2017 05 01.
Article in English | MEDLINE | ID: mdl-27242029

ABSTRACT

The brain integrates discrete but collinear stimuli to perceive global contours. Previous contour integration (CI) studies mainly focus on integration over space, and CI is attributed to either V1 long-range connections or contour processing in high-visual areas that top-down modulate V1 responses. Here, we show that CI also occurs over time in a design that minimizes the roles of V1 long-range interactions. We use tilted contours embedded in random orientation noise and moving horizontally behind a fixed vertical slit. Individual contour elements traveling up/down within the slit would be encoded over time by parallel, rather than aligned, V1 neurons. However, we find robust contour detection even when the slit permits only one viewable contour element. Similar to CI over space, CI over time also obeys the rule of collinearity. fMRI evidence shows that while CI over space engages visual areas as early as V1, CI over time mainly engages higher dorsal and ventral visual areas involved in shape processing, as well as posterior parietal regions involved in visual memory that can represent the orientation of temporally integrated contours. These results suggest at least partially dissociable mechanisms for implementing the Gestalt rule of continuity in CI over space and time.


Subject(s)
Contrast Sensitivity/physiology , Magnetic Resonance Imaging , Psychophysics , Visual Cortex/diagnostic imaging , Visual Cortex/physiology , Adult , Brain Mapping , Female , Humans , Image Processing, Computer-Assisted , Judgment/drug effects , Judgment/physiology , Male , Oxygen/blood , Photic Stimulation , Time Factors , Young Adult
10.
Front Aging Neurosci ; 7: 105, 2015.
Article in English | MEDLINE | ID: mdl-26113820

ABSTRACT

It is common wisdom that practice makes perfect; but why do some adults learn better than others? Here, we investigate individuals' cognitive and social profiles to test which variables account for variability in learning ability across the lifespan. In particular, we focused on visual learning using tasks that test the ability to inhibit distractors and select task-relevant features. We tested the ability of young and older adults to improve through training in the discrimination of visual global forms embedded in a cluttered background. Further, we used a battery of cognitive tasks and psycho-social measures to examine which of these variables predict training-induced improvement in perceptual tasks and may account for individual variability in learning ability. Using partial least squares regression modeling, we show that visual learning is influenced by cognitive (i.e., cognitive inhibition, attention) and social (strategic and deep learning) factors rather than an individual's age alone. Further, our results show that independent of age, strong learners rely on cognitive factors such as attention, while weaker learners use more general cognitive strategies. Our findings suggest an important role for higher-cognitive circuits involving executive functions that contribute to our ability to improve in perceptual tasks after training across the lifespan.

11.
Curr Biol ; 23(18): 1799-804, 2013 Sep 23.
Article in English | MEDLINE | ID: mdl-24012311

ABSTRACT

Translating sensory information into perceptual decisions is a core challenge faced by the brain. This ability is understood to rely on weighting sensory evidence in order to form mental templates of the critical differences between objects. Learning is shown to optimize these templates for efficient task performance, but the neural mechanisms underlying this improvement remain unknown. Here, we identify the mechanisms that the brain uses to implement templates for perceptual decisions through experience. We trained observers to discriminate visual forms that were randomly perturbed by noise. To characterize the internal stimulus template that observers learn when performing this task, we adopted a classification image approach (e.g., [5-7]) for the analysis of both behavioral and fMRI data. By reverse correlating behavioral and multivoxel pattern responses with noisy stimulus trials, we identified the critical image parts that determine the observers' choice. Observers learned to integrate information across locations and weight the discriminative image parts. Training enhanced shape processing in the lateral occipital area, which was shown to reflect size-invariant representations of informative image parts. Our findings demonstrate that learning optimizes mental templates for perceptual decisions by tuning the representation of informative image parts in higher ventral cortex.
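In its simplest behavioral form, the classification-image technique cited above averages the noise fields separately by response and subtracts; a minimal sketch of that general method (not the paper's full behavioral-plus-fMRI pipeline):

```python
def classification_image(noise_fields, responses):
    """Classification image by reverse correlation: average the noise
    fields per behavioral response and take the difference. Pixels
    with large positive or negative values are the image parts that
    drove the observer's choice.

    noise_fields: list of equal-length lists (flattened noise images),
                  one per trial; responses: list of 0/1 choices.
    Assumes both responses occur at least once.
    """
    def mean(fields):
        n = len(fields)
        return [sum(col) / n for col in zip(*fields)]

    chose1 = [f for f, r in zip(noise_fields, responses) if r == 1]
    chose0 = [f for f, r in zip(noise_fields, responses) if r == 0]
    m1, m0 = mean(chose1), mean(chose0)
    return [a - b for a, b in zip(m1, m0)]
```

The same reverse-correlation logic applies to multivoxel patterns: correlate trial-by-trial noise with pattern responses instead of button presses.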


Subject(s)
Decision Making , Learning/physiology , Visual Cortex/physiology , Visual Perception/physiology , Brain Mapping , Discrimination, Psychological , Humans , Magnetic Resonance Imaging
12.
Psychol Sci ; 24(4): 412-22, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23447559

ABSTRACT

Despite the central role of learning in visual recognition, it is largely unknown whether visual form learning is maintained in older age. We examined whether training improved performance in both young and older adults at two key stages of visual recognition: integration of local elements and global form discrimination. We used a shape-discrimination task (concentric vs. radial patterns) in which young and older adults showed similar performance before training. Using a parametric stimulus space that allowed us to manipulate global features and background noise, we were able to distinguish integration and discrimination processes. We found that training improves global form discrimination in both young and older adults. However, learning to integrate local elements is impaired in older age, possibly because of reduced tolerance to external noise. These findings suggest that visual selection processes, rather than global feature representations, provide a fundamental limit for learning-dependent plasticity in the aging brain.


Subject(s)
Aging/physiology , Discrimination Learning/physiology , Form Perception/physiology , Adolescent , Adult , Aged , Aged, 80 and over , Humans , Middle Aged , Pattern Recognition, Visual/physiology , Young Adult
13.
Front Psychol ; 4: 110, 2013.
Article in English | MEDLINE | ID: mdl-23471514

ABSTRACT

Learning is known to facilitate performance in a range of perceptual tasks. Behavioral improvement after training is typically shown after practice with highly similar stimuli that are difficult to discriminate (i.e., hard training), or after exposure to dissimilar stimuli that are highly discriminable (i.e., easy training). However, little is known about the processes that mediate learning after training with difficult compared to easy stimuli. Here we investigate the time course of learning when observers were asked to discriminate similar global form patterns after hard vs. easy training. Hard training required observers to discriminate highly similar global forms, whereas easy training required them to judge clearly discriminable patterns. Our results demonstrate differences in learning and transfer performance for hard compared to easy training. Hard training resulted in stronger behavioral improvement than easy training. Further, for hard training, performance improved within single sessions, whereas for easy training, performance improved across, but not within, sessions. These findings suggest that training with difficult stimuli may result in online learning of specific stimulus features that are similar between the training and test stimuli, while training with easy stimuli involves transfer of learning from highly to less discriminable stimuli that may require longer periods of consolidation.

14.
PLoS Biol ; 6(8): e197, 2008 Aug 12.
Article in English | MEDLINE | ID: mdl-18707195

ABSTRACT

Perceptual learning of visual features occurs when multiple stimuli are presented in a fixed sequence (temporal patterning), but not when they are presented in random order (roving). This points to the need for proper stimulus coding in order for learning of multiple stimuli to occur. We examined the stimulus coding rules for learning with multiple stimuli. Our results demonstrate that: (1) stimulus rhythm is necessary for temporal patterning to take effect during practice; (2) learning consolidation is subject to disruption by roving up to 4 h after each practice session; (3) importantly, after completion of temporal-patterned learning, performance is undisrupted by extended roving training; (4) roving is ineffective if each stimulus is presented for five or more consecutive trials; and (5) roving is also ineffective if each stimulus has a distinct identity. We propose that for multi-stimulus learning to occur, the brain needs to conceptually "tag" each stimulus, in order to switch attention to the appropriate perceptual template. Stimulus temporal patterning assists in tagging stimuli and switching attention through its rhythmic stimulus sequence.


Subject(s)
Conditioning, Operant , Learning/physiology , Visual Perception/physiology , Adult , Humans
15.
Vision Res ; 47(4): 512-24, 2007 Feb.
Article in English | MEDLINE | ID: mdl-17223155

ABSTRACT

The purpose of the experiments described here was to investigate global image processing using methods that require global processing while eliminating or compensating for low level abnormalities: visibility, shape perception and positional uncertainty. In order to accomplish this we used a closed figure made up of Gabor patches either in noise or on a blank field. The stimuli were circular or elliptical contours, formed by N equally spaced Gabor patches. We performed two separate experiments: In one experiment we fixed N and varied the aspect ratio using a staircase to determine the threshold aspect ratio; in the second experiment we held the aspect ratio constant (at twice the threshold aspect ratio) and varied N in order to measure the threshold number of elements required to judge the shape. Our results confirm and extend previous studies showing that humans with naturally occurring amblyopia show deficits in contour processing. Our results show that the deficits depend strongly on spatial scale (target size and spatial frequency). The deficit in global contour processing is substantially greater in noise (where contour-linking is required) than on a blank field. The magnitude of the deficits is modest when low-level deficits (reduced visibility, increased positional uncertainty, and abnormal shape perception) are minimized, and does not seem to depend much on acuity, crowding or stereoacuity. The residual deficits reported here cannot be simply ascribed to reduced visibility or increased positional uncertainty, and we therefore conclude that these are genuine deficits in global contour segregation and integration.
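The staircase used to find the threshold aspect ratio is a standard adaptive procedure; a minimal 2-down-1-up sketch (which converges near the 70.7%-correct level) is shown below. Parameter values and the `respond` interface are illustrative, not those used in the study:

```python
def staircase_threshold(respond, start=1.5, step=0.05, floor=1.0,
                        n_reversals=8):
    """2-down-1-up staircase for a threshold aspect ratio: two
    consecutive correct responses make the ellipse rounder (harder,
    ratio closer to 1); one error makes it more elongated (easier).
    The threshold estimate is the mean ratio at the reversal points.
    `respond(level)` runs one trial and returns True if correct.
    """
    level, run, last_dir = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            run += 1
            if run == 2:                    # 2-down: make harder
                run = 0
                if last_dir == +1:          # direction change: reversal
                    reversals.append(level)
                level = max(floor, level - step)
                last_dir = -1
        else:                               # 1-up: make easier
            run = 0
            if last_dir == -1:
                reversals.append(level)
            level += step
            last_dir = +1
    return sum(reversals) / len(reversals)
```

With a simulated deterministic observer whose true threshold lies between two step levels, the reversals bracket that threshold and their mean recovers it to within one step.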


Subject(s)
Amblyopia/psychology , Form Perception , Adult , Amblyopia/physiopathology , Contrast Sensitivity , Discrimination, Psychological , Female , Humans , Male , Middle Aged , Photic Stimulation/methods , Psychophysics , Sensory Thresholds , Visual Acuity
16.
J Vis ; 6(12): 1412-20, 2006 Dec 15.
Article in English | MEDLINE | ID: mdl-17209744

ABSTRACT

The visual system integrates discrete but aligned local stimuli to form a percept of global contours. Previous experiments using "snake" contours showed that contour integration was mainly present in foveal vision but absent or greatly weakened in peripheral vision. In this study, we demonstrated that, for contour stimuli such as circles and ellipses, which bear good Gestalt properties, contour integration for shape detection and discrimination was nearly constant from the fovea out to 35 degrees in the visual periphery. Contour integration was impaired by local orientation and position jitters of contour elements, indicating that the same local contour-linking mechanisms revealed with snake contour stimuli also play critical roles in the integration of our good Gestalt stimuli. Contour integration was also unaffected by global position jittering of up to 20% of the contour size and by dramatic shape jittering, which excluded non-contour-integration processes, such as detection of various local cues and template matching, as alternative mechanisms for uncompromised peripheral perception of good Gestalt stimuli. Peripheral contour integration also presented an interesting upper-lower visual field symmetry after asymmetries of contrast sensitivity and shape discrimination were discounted. The constant peripheral performance might benefit from easy detection of good Gestalt stimuli, which pop out from background noise, from a boost of local contour linking by top-down influences, and/or from multielement contour linking by long-range interactions.


Subject(s)
Form Perception/physiology , Perceptual Closure/physiology , Visual Fields/physiology , Contrast Sensitivity/physiology , Cues , Discrimination, Psychological , Humans , Orientation , Photic Stimulation/methods
17.
Nat Neurosci ; 8(11): 1497-9, 2005 Nov.
Article in English | MEDLINE | ID: mdl-16222233

ABSTRACT

Little is known about how temporal stimulus factors influence perceptual learning. Here we demonstrate an essential role of stimulus temporal patterning in enabling perceptual learning by showing that 'unlearnable' contrast and motion-direction discrimination (resulting from random interleaving of stimuli) can be readily learned when stimuli are practiced in a fixed temporal pattern. This temporal patterning does not facilitate learning by reducing stimulus uncertainty; further, learning enabled by temporal patterning can later generalize to randomly presented stimuli.


Subject(s)
Discrimination Learning/physiology , Motion Perception/physiology , Analysis of Variance , Generalization, Stimulus , Humans , Photic Stimulation/methods , Time Factors