Results 1 - 4 of 4
1.
HardwareX; 12: e00372, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36393916

ABSTRACT

While for vision and audio the same mass-produced units can be embedded in many different systems, from smartphones to robots, tactile sensors have to be built in application-specific shapes and sizes. To use a commercially available tactile sensor, it can therefore be necessary to develop the entire system around an existing sensor model. We present a set of open-source solutions for designing, manufacturing, reading, and integrating custom application-specific tactile matrix sensors. Our manufacturing process requires only an off-the-shelf cutting plotter and widely available plastic and metal foils. This allows the creation of sensors of diverse sizes, shapes, and layouts, which can be adapted to various specific use cases, as demonstrated with exemplary robot integrations. For interfacing and readout, we develop an Arduino-like prototype board (Tacduino) with amplifier circuits to ensure good resolution and to suppress crosstalk. As an example, we give step-by-step instructions for building tactile fingertips for the RobotiQ 3-Finger Gripper, and we provide design files for the readout circuit board together with Arduino firmware and driver software. Both wired and wireless communication between the sensors and a host PC are supported by this system. The hardware was originally presented and investigated in [1].
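
The abstract does not reproduce the readout protocol, but the host-side driver idea can be illustrated with a short sketch. Assuming a simple frame format (a start byte followed by one byte per taxel, row-major), a minimal Python reader over pyserial might look like the following; the delimiter, matrix size, and port name are illustrative assumptions, not taken from the Tacduino firmware:

# Hypothetical host-side reader for a Tacduino-style board. The frame
# format (start byte + row-major cell values) is assumed for illustration;
# the actual protocol is defined by the project's firmware.
import serial  # pyserial

ROWS, COLS = 4, 4      # assumed matrix layout of one fingertip sensor
START_BYTE = 0xAA      # assumed frame delimiter

def read_frame(port: serial.Serial) -> list[list[int]]:
    """Block until a start byte, then read one ROWS x COLS frame."""
    while port.read(1) != bytes([START_BYTE]):
        pass                                  # resynchronize on the delimiter
    payload = port.read(ROWS * COLS)          # one byte per taxel (assumed)
    return [list(payload[r * COLS:(r + 1) * COLS]) for r in range(ROWS)]

if __name__ == "__main__":
    with serial.Serial("/dev/ttyACM0", 115200, timeout=1) as port:
        for row in read_frame(port):
            print(" ".join(f"{v:3d}" for v in row))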

2.
IEEE Trans Cybern; PP, 2022 Sep 30.
Article in English | MEDLINE | ID: mdl-36179009

ABSTRACT

Markerless vision-based teleoperation that leverages innovations in computer vision offers the advantage of allowing natural and noninvasive finger motions for multifingered robot hands. However, current pose estimation methods still face inaccuracy issues due to self-occlusion of the fingers. Herein, we develop a novel vision-based hand-arm teleoperation system that captures the human hand from the best viewpoint and at a suitable distance. This teleoperation system consists of an end-to-end hand pose regression network and a controlled active vision system. The end-to-end pose regression network (Transteleop), combined with an auxiliary reconstruction loss function, captures the human hand through a low-cost depth camera and predicts joint commands for the robot based on the image-to-image translation method. To obtain the optimal observation of the human hand, an active vision system is implemented by a robot arm at the local site, which ensures the high accuracy of the proposed neural network. Human arm motions are simultaneously mapped to the slave robot arm under relative control. Quantitative network evaluation and a variety of complex manipulation tasks, for example, tower building, pouring, and multitable cup stacking, demonstrate the practicality and stability of the proposed teleoperation system.
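
The training idea named here, a pose regression head combined with an auxiliary image reconstruction loss on a shared encoder, can be sketched briefly in PyTorch. Layer sizes, the joint count, the dummy targets, and the loss weighting below are assumptions for illustration, not the actual Transteleop architecture:

# Minimal sketch of an encoder-decoder pose regressor with an auxiliary
# reconstruction loss; all sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    def __init__(self, n_joints: int = 20):   # joint count is assumed
        super().__init__()
        self.encoder = nn.Sequential(          # depth image -> feature map
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(          # feature map -> reconstruction
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )
        self.head = nn.Sequential(             # feature map -> joint commands
            nn.Flatten(), nn.LazyLinear(256), nn.ReLU(), nn.Linear(256, n_joints)
        )

    def forward(self, depth):
        feats = self.encoder(depth)
        return self.head(feats), self.decoder(feats)

model = PoseRegressor()
depth = torch.randn(8, 1, 96, 96)              # batch of depth crops (assumed size)
joints, recon = model(depth)
loss = nn.functional.mse_loss(joints, torch.zeros_like(joints)) \
     + 0.1 * nn.functional.mse_loss(recon, depth)   # auxiliary reconstruction term
loss.backward()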

3.
Front Neurorobot; 16: 829437, 2022.
Article in English | MEDLINE | ID: mdl-35308311

ABSTRACT

We propose a vision-proprioception model for planar object pushing that efficiently integrates all necessary information from the environment. A Variational Autoencoder (VAE) is used to extract compact representations from the task-relevant part of the image. With the real-time robot state easily obtained from the hardware system, we fuse the latent representation from the VAE and the robot end-effector position into the state of a Markov Decision Process. We use Soft Actor-Critic to train the robot to push different objects from random initial poses to target positions in simulation, applying Hindsight Experience Replay during training to improve sample efficiency. Experiments demonstrate that our algorithm achieves pushing performance superior to a state-based baseline model that does not generalize to different objects, and that it outperforms state-of-the-art policies operating on raw image observations. Finally, we verify that our trained model generalizes well to unseen objects in the real world.
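
The state-fusion step described above can be sketched concisely: a VAE encoder compresses the task-relevant image crop, and the sampled latent vector is concatenated with the end-effector position to form the MDP state fed to the Soft Actor-Critic policy. All dimensions below are illustrative assumptions, not the paper's configuration:

# Sketch of vision-proprioception state fusion: VAE latent + EE position.
import torch
import torch.nn as nn

class Encoder(nn.Module):                      # VAE recognition model
    def __init__(self, latent_dim: int = 16):  # latent size is assumed
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(), nn.Flatten(),
        )
        self.mu = nn.LazyLinear(latent_dim)     # posterior mean
        self.logvar = nn.LazyLinear(latent_dim) # posterior log-variance

    def forward(self, img):
        h = self.backbone(img)
        return self.mu(h), self.logvar(h)

encoder = Encoder()
img = torch.randn(1, 3, 64, 64)                # task-relevant image crop (assumed)
ee_pos = torch.tensor([[0.42, -0.10]])         # planar end-effector position
mu, logvar = encoder(img)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization trick
state = torch.cat([z, ee_pos], dim=-1)         # fused MDP state for the policy
print(state.shape)                             # torch.Size([1, 18])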

4.
Front Robot AI; 7: 540565, 2020.
Article in English | MEDLINE | ID: mdl-33501309

ABSTRACT

The quality of crossmodal perception hinges on two factors: the accuracy of the independent unimodal perception and the ability to integrate information from different sensory systems. In humans, the ability for cognitively demanding crossmodal perception diminishes from young to old age. Here, we propose a new approach to investigate to what degree these factors contribute to crossmodal processing and its age-related decline, by replicating a medical study on visuo-tactile crossmodal pattern discrimination using state-of-the-art tactile sensing technology and artificial neural networks (ANNs). We implemented two ANN models to specifically examine the relevance of early integration of sensory information during the crossmodal processing stream, a mechanism proposed for efficient processing in the human brain. Applying an adaptive staircase procedure, we approached comparable unimodal classification performance for both modalities in the human participants as well as in the ANNs. This allowed us to compare crossmodal performance between and within the systems, independent of the underlying unimodal processes. Our data show that the unimodal classification accuracies of the tactile sensing technology are comparable to those of humans. For crossmodal discrimination by the ANNs, integrating high-level unimodal features at earlier stages of the crossmodal processing stream yields higher accuracies than the late integration of independent unimodal classifications. Compared to humans, the ANNs achieve higher accuracies than older participants in both the unimodal and the crossmodal condition, but lower accuracies than younger participants in the crossmodal task. Taken together, we show that state-of-the-art tactile sensing technology can perform a complex tactile recognition task at levels comparable to humans. For crossmodal processing, human-inspired early sensory integration seems to improve the performance of artificial neural networks. Still, younger participants seem to employ more efficient crossmodal integration mechanisms than those modeled in the proposed ANNs. Our work demonstrates how collaborative research in neuroscience and embodied artificial neurocognitive models can help derive models that inform the design of future neurocomputational architectures.
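
The early-versus-late integration contrast at the heart of this study can be made concrete with a small sketch: "early" fusion concatenates high-level unimodal features before a joint classifier, while "late" fusion averages the class probabilities of independent unimodal classifiers. The branch architectures and input sizes below are illustrative assumptions, not the models used in the study:

# Sketch contrasting early vs. late crossmodal integration in an ANN.
import torch
import torch.nn as nn

N_CLASSES = 4                                  # assumed number of patterns

class Branch(nn.Module):
    """Unimodal feature extractor plus its own classifier head."""
    def __init__(self, in_dim, feat_dim=64):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, N_CLASSES)

class EarlyFusion(nn.Module):
    """Fuse high-level unimodal features, then classify jointly."""
    def __init__(self, v, t, feat_dim=64):
        super().__init__()
        self.v, self.t = v, t
        self.head = nn.Linear(2 * feat_dim, N_CLASSES)

    def forward(self, v_in, t_in):
        fused = torch.cat([self.v.features(v_in), self.t.features(t_in)], -1)
        return self.head(fused)

def late_fusion(v, t, v_in, t_in):
    """Average independent unimodal class probabilities."""
    pv = torch.softmax(v.head(v.features(v_in)), -1)
    pt = torch.softmax(t.head(t.features(t_in)), -1)
    return (pv + pt) / 2

visual, tactile = Branch(256), Branch(128)     # assumed input dimensions
v_in, t_in = torch.randn(1, 256), torch.randn(1, 128)
print(EarlyFusion(visual, tactile)(v_in, t_in).argmax(-1),
      late_fusion(visual, tactile, v_in, t_in).argmax(-1))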
