Results 1 - 5 of 5
1.
Article in English | MEDLINE | ID: mdl-38236686

ABSTRACT

We introduce a novel co-design method for autonomous moving agents' shape attributes and locomotion by combining deep reinforcement learning and evolution with user control. Our main inspiration comes from evolution, which has produced wide variability and adaptation in nature and has significantly improved design and behavior simultaneously. Our method takes an input agent with optional user-defined constraints, such as leg parts that should not evolve or may change only within allowed ranges. It uses physics-based simulation to determine the agent's locomotion and finds a behavior policy for the input design, which serves as a baseline for comparison. The agent is then randomly modified within the allowed ranges, creating a new generation of several hundred agents. Each generation is trained by transferring the previous policy, which significantly speeds up the training. The best-performing agents are selected, and a new generation is formed from their crossover and mutations. Successive generations are then trained until satisfactory results are reached. We show a wide variety of evolved agents, and our results show that even with only 10% of allowed changes, the overall performance of the evolved agents improves by 50%; with more significant changes to the input structures, performance in our experiments improves even further, up to 150%. The method does not require considerable computation resources, as it works on a single GPU and provides results by training thousands of agents within 30 minutes.
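
As a rough illustration of the evolve-train-select loop described above, the sketch below reduces an agent to a vector of shape parameters with user-allowed ranges. All names (mutate, crossover, toy_fitness, evolve) and the toy fitness function are illustrative stand-ins, not the authors' API; in the actual method, fitness would come from physics-based RL training warm-started from the previous generation's policy.

```python
# Minimal sketch of the co-design loop, under the assumptions stated above.
import random

def mutate(shape, ranges, scale=0.1):
    # Perturb each parameter, clamped to its user-allowed [lo, hi] range.
    return [min(hi, max(lo, s + random.gauss(0, scale * (hi - lo))))
            for s, (lo, hi) in zip(shape, ranges)]

def crossover(a, b):
    # Uniform crossover between two parent shapes.
    return [random.choice(pair) for pair in zip(a, b)]

def toy_fitness(shape, policy):
    # Placeholder for the simulated locomotion reward; here: closeness to 0.5.
    return -sum((s - 0.5) ** 2 for s in shape)

def evolve(base_shape, ranges, generations=5, pop_size=100, elite=10):
    population = [mutate(base_shape, ranges) for _ in range(pop_size)]
    policy = None  # stands in for the baseline behavior policy that is transferred
    for _ in range(generations):
        # In the real method, each agent's training warm-starts from `policy`.
        scored = sorted(population, key=lambda s: toy_fitness(s, policy),
                        reverse=True)
        elites = scored[:elite]
        population = [mutate(crossover(*random.sample(elites, 2)), ranges)
                      for _ in range(pop_size)]
    return elites[0]

best = evolve([0.2, 0.8, 0.5], [(0.0, 1.0)] * 3)
```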

2.
Article in English | MEDLINE | ID: mdl-37293199

ABSTRACT

The use of virtual reality (VR) in laboratory skill training is rapidly increasing. In such applications, users often need to explore a large virtual environment within a limited physical space while completing a series of hand-based tasks (e.g., object manipulation). However, the most widely used controller-based teleport methods can conflict with the users' hand operations and result in a higher cognitive load, negatively affecting their training experience. To alleviate these limitations, we designed and implemented a locomotion method called ManiLoco that enables hands-free interaction and thus avoids conflicts with and interruptions of other tasks. Users teleport to a remote object's position by taking a step toward the object while looking at it. We evaluated ManiLoco against the state-of-the-art Point & Teleport technique in a within-subject experiment with 16 participants. The results confirmed the viability of our foot- and head-based approach and its better support of concurrent object manipulation in VR training tasks. Furthermore, our locomotion method does not require any additional hardware: it relies solely on the VR head-mounted display (HMD) and our detection of the user's stepping activity, and it can easily be applied to any VR application as a plugin.
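
A hypothetical sketch of the trigger logic this abstract describes: teleport to the object under the user's gaze when a step toward it is detected from HMD motion alone. The gaze-cone angle, the 15 cm advance threshold, and all function names are assumptions for illustration, not the paper's actual detector.

```python
# Illustrative ManiLoco-style trigger, assuming a y-up coordinate system.
from dataclasses import dataclass
import math

@dataclass
class Pose:
    position: tuple  # (x, y, z) head position from the HMD
    forward: tuple   # unit gaze direction

def gazed_target(pose, objects, max_angle_deg=10.0):
    """Return the object position closest to the gaze ray, within a small cone."""
    best, best_angle = None, max_angle_deg
    for obj in objects:
        to_obj = [o - p for o, p in zip(obj, pose.position)]
        norm = math.sqrt(sum(c * c for c in to_obj))
        if norm < 1e-6:
            continue
        cos = sum(f * c for f, c in zip(pose.forward, to_obj)) / norm
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos))))
        if angle < best_angle:
            best, best_angle = obj, angle
    return best

def step_toward(prev, curr, target, min_advance=0.15):
    """Heuristic: head advanced horizontally toward the target by >= 15 cm."""
    move = (curr.position[0] - prev.position[0], curr.position[2] - prev.position[2])
    to_target = (target[0] - prev.position[0], target[2] - prev.position[2])
    dist = max(math.hypot(*to_target), 1e-6)
    advance = (move[0] * to_target[0] + move[1] * to_target[1]) / dist
    return advance >= min_advance

def maniloco_update(prev, curr, objects, teleport):
    # Per-frame update: teleport when gaze and a step line up on one object.
    target = gazed_target(curr, objects)
    if target and step_toward(prev, curr, target):
        teleport(target)
```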

3.
Front Hum Neurosci ; 16: 883467, 2022.
Article in English | MEDLINE | ID: mdl-36034123

ABSTRACT

Although interest in brain-computer interfaces (BCIs) from researchers and consumers continues to increase, many BCIs lack the complexity and imaginative properties thought to guide users toward successful brain activity modulation. We investigate the possibility of using a complex BCI by developing an experimental story environment with which users interact through cognitive thought strategies. In our system, the user's frontal alpha asymmetry (FAA), measured with electroencephalography (EEG), is linearly mapped to the color saturation of the main character in the story. We implemented a user-friendly experimental design using a comfortable EEG device and a short neurofeedback (NF) training protocol. Seven out of 19 participants successfully increased FAA over the course of the study, for a total of ten successful blocks out of 152. We detail our results concerning the contributions of left and right prefrontal cortical activity to FAA in both successful and unsuccessful story blocks. Additionally, we examine inter-subject correlations of the EEG data and self-reported questionnaire data to understand the user experience of BCI interaction. The results suggest the potential of imaginative story BCI environments for engaging users and enabling FAA modulation, and our data point to new research directions for BCIs investigating emotion and motivation through FAA.
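
FAA is conventionally computed as the difference of log alpha power between homologous right and left frontal electrodes (e.g., F4 and F3); a minimal sketch of that computation and the linear saturation mapping follows. The electrode pair, the FAA bounds of the mapping, and the toy signals are assumptions, not the study's exact parameters.

```python
# Sketch of the FAA-to-saturation feedback mapping, under the stated assumptions.
import numpy as np

def alpha_power(signal, fs, band=(8.0, 13.0)):
    """Mean power of `signal` in the alpha band via the periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def faa(left, right, fs):
    # Conventional FAA: ln(right alpha power) - ln(left alpha power).
    return np.log(alpha_power(right, fs)) - np.log(alpha_power(left, fs))

def to_saturation(faa_value, lo=-1.0, hi=1.0):
    """Linear map of FAA into [0, 1] for the character's color saturation."""
    return float(np.clip((faa_value - lo) / (hi - lo), 0.0, 1.0))

fs = 256
t = np.arange(fs) / fs
f3 = 0.5 * np.sin(2 * np.pi * 10 * t)  # toy left-frontal alpha
f4 = 0.8 * np.sin(2 * np.pi * 10 * t)  # toy right-frontal alpha
print(to_saturation(faa(f3, f4, fs)))  # higher right alpha -> higher saturation
```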

4.
Behav Sci (Basel) ; 10(9)2020 Aug 27.
Article in English | MEDLINE | ID: mdl-32867234

ABSTRACT

This paper describes our investigation of how participants coordinate their movement behavior with a virtual crowd that surrounds them while immersed in a virtual environment. The participants were immersed in a virtual metropolitan city and instructed to cross the road and reach the opposite sidewalk, performing the task ten times. The virtual crowd surrounding them was scripted to move in the same direction. During the experiment, several measurements were obtained to evaluate human movement coordination, and the time and direction in which the participants started moving toward the opposite sidewalk were also captured. These data were later used to initialize the parameters of simulated characters scripted to become part of the virtual crowd. Measurements extracted from the simulated characters served as a baseline for evaluating the participants' movement coordination. The analysis revealed significant differences between the movement behaviors of the participants and the simulated characters. However, simple linear regression analyses indicated that the participants' movement behavior was moderately associated with the simulated characters' movements when performing a locomotive task within a virtual crowd. This study can serve as a baseline for further research evaluating participants' movement coordination during human-virtual-crowd interactions using measurements obtained from simulated characters.
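
For concreteness, a minimal sketch of the regression step follows: an ordinary least squares fit of a participant measurement against the matched simulated-character baseline. The per-trial speed values and variable names are invented for illustration; the study used several such measurements.

```python
# Simple linear regression of participant vs. simulated-character measurements.
import numpy as np

def simple_linear_regression(x, y):
    """Ordinary least squares fit y = a + b*x; returns (a, b, r)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    b = np.cov(x, y, bias=True)[0, 1] / np.var(x)  # slope
    a = y.mean() - b * x.mean()                    # intercept
    r = np.corrcoef(x, y)[0, 1]                    # correlation coefficient
    return a, b, r

# Toy per-trial crossing speeds: simulated characters (baseline) vs. participants.
sim_speed = [1.10, 1.25, 1.05, 1.30, 1.18]
par_speed = [1.02, 1.20, 1.00, 1.22, 1.12]
a, b, r = simple_linear_regression(sim_speed, par_speed)
print(f"intercept={a:.3f}, slope={b:.3f}, r={r:.3f}")  # r indicates association strength
```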

5.
Sensors (Basel) ; 17(11)2017 Nov 10.
Article in English | MEDLINE | ID: mdl-29125534

ABSTRACT

This paper presents a method for reconstructing full-body locomotion sequences for virtual characters in real time, using data from a single inertial measurement unit (IMU). The task is difficult because a high number of degrees of freedom (DOFs) must be reconstructed from a very low number of input DOFs. To solve this complex problem, the presented method is divided into several steps. First, the user's full-body locomotion and the IMU data are recorded simultaneously, and the data are preprocessed so that they can be handled more efficiently. The system then learns the structure of the motion sequences through a hierarchical multivariate hidden Markov model with reactive interpolation functionality: the phases of the locomotion sequence are assigned at the higher hierarchical level, and the frame structure of the motion sequences is assigned at the lower hierarchical level. At runtime, the forward algorithm is used to reconstruct the full-body motion of a virtual character. First, the method predicts the phase to which the input motion belongs (higher hierarchical level); second, it predicts the closest trajectories and their progression and interpolates the most probable of them to reconstruct the virtual character's full-body motion (lower hierarchical level). Evaluation shows that the proposed method runs at reasonable frame rates and minimizes reconstruction errors compared with previous approaches.
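
A minimal sketch of the runtime forward recursion over locomotion phases (the higher hierarchical level) is given below, assuming a toy three-phase gait model; the transition and emission values are illustrative, not the paper's learned hierarchical multivariate HMM.

```python
# Forward-algorithm phase filtering, under the toy assumptions stated above.
import numpy as np

def forward_step(alpha_prev, A, b_t):
    """One recursion of the forward algorithm, normalized for numerical stability.

    alpha_prev: previous filtered distribution over phases
    A:          phase transition matrix, A[i, j] = P(phase_t = j | phase_{t-1} = i)
    b_t:        emission likelihood of the current IMU frame for each phase
    """
    alpha = (alpha_prev @ A) * b_t
    return alpha / alpha.sum()

# Toy 3-phase gait model: stance, swing, double support.
A = np.array([[0.8, 0.2, 0.0],
              [0.0, 0.8, 0.2],
              [0.3, 0.0, 0.7]])
alpha = np.array([1.0, 0.0, 0.0])  # start in stance
for b_t in ([0.9, 0.1, 0.2], [0.2, 0.9, 0.1], [0.1, 0.3, 0.8]):
    alpha = forward_step(alpha, A, np.array(b_t))
    print(alpha.argmax(), alpha.round(3))  # most probable phase per frame
```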
