Results 1 - 4 of 4
1.
Biomedicines ; 12(6)2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38927516

ABSTRACT

This article addresses the semantic segmentation of laparoscopic surgery images, placing special emphasis on the segmentation of structures with a smaller number of observations. As a result of this study, adjustment parameters are proposed for deep neural network architectures, enabling robust segmentation of all structures in the surgical scene. The U-Net architecture with five encoder-decoders (U-Net5ed), SegNet-VGG19, and DeepLabv3+ employing different backbones are implemented. Three main experiments are conducted, working with the Rectified Linear Unit (ReLU), Gaussian Error Linear Unit (GELU), and Swish activation functions. The loss functions applied include Cross Entropy (CE), Focal Loss (FL), Tversky Loss (TL), Dice Loss (DiL), Cross Entropy Dice Loss (CEDL), and Cross Entropy Tversky Loss (CETL). The performance of the Stochastic Gradient Descent with momentum (SGDM) and Adaptive Moment Estimation (Adam) optimizers is compared. It is confirmed both qualitatively and quantitatively that the DeepLabv3+ and U-Net5ed architectures yield the best results. The DeepLabv3+ architecture with the ResNet-50 backbone, Swish activation function, and CETL loss function reports a Mean Accuracy (MAcc) of 0.976 and a Mean Intersection over Union (MIoU) of 0.977. The semantic segmentation of structures with a smaller number of observations, such as the hepatic vein, cystic duct, liver ligament, and blood, shows that the results obtained are highly competitive and promising compared with the literature consulted. The selected parameters proposed here were also validated on the YOLOv9 architecture, which showed improved semantic segmentation compared with the results obtained with the original architecture.
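
As an illustration of the combined loss functions compared in this abstract, the sketch below shows one common way to merge cross entropy with a soft Dice term into a Cross Entropy Dice Loss (CEDL) for multi-class segmentation. It assumes PyTorch, and the equal weighting of the two terms, the function name, and the tensor shapes are assumptions for illustration, not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def cross_entropy_dice_loss(logits, targets, num_classes,
                                smooth=1e-6, ce_weight=0.5):
        """Combined Cross Entropy + Dice loss (CEDL) for multi-class segmentation.

        logits:  (N, C, H, W) raw network outputs
        targets: (N, H, W) integer class labels
        The 0.5/0.5 weighting is an assumed value, not taken from the paper.
        """
        # Standard multi-class cross entropy over all pixels
        ce = F.cross_entropy(logits, targets)

        # Soft Dice computed per class on the softmax probabilities
        probs = F.softmax(logits, dim=1)
        one_hot = F.one_hot(targets, num_classes).permute(0, 3, 1, 2).float()
        dims = (0, 2, 3)  # sum over batch and spatial dimensions
        intersection = (probs * one_hot).sum(dims)
        cardinality = probs.sum(dims) + one_hot.sum(dims)
        dice = (2.0 * intersection + smooth) / (cardinality + smooth)
        dice_loss = 1.0 - dice.mean()

        return ce_weight * ce + (1.0 - ce_weight) * dice_loss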

2.
Sensors (Basel) ; 24(2)2024 Jan 12.
Article in English | MEDLINE | ID: mdl-38257584

ABSTRACT

This paper investigates spiking neural networks (SNN) for novel robotic controllers with the aim of improving accuracy in trajectory tracking. By emulating the operation of the human brain through the incorporation of temporal coding mechanisms, SNN offer greater adaptability and efficiency in information processing, providing significant advantages in the representation of temporal information in robotic arm control compared to conventional neural networks. Exploring specific implementations of SNN in robot control, this study analyzes neuron models and learning mechanisms inherent to SNN. Based on the principles of the Neural Engineering Framework (NEF), a novel spiking PID controller is designed and simulated for a 3-DoF robotic arm using Nengo and MATLAB R2022b. The controller demonstrated good accuracy and efficiency in following designated trajectories, showing minimal deviations, overshoots, or oscillations. A thorough quantitative assessment, utilizing performance metrics like root mean square error (RMSE) and the integral of the absolute value of the time-weighted error (ITAE), provides additional validation for the efficacy of the SNN-based controller. Competitive performance was observed, surpassing a fuzzy controller by 5% in terms of the ITAE index and a conventional PID controller by 6% in the ITAE index and 30% in RMSE performance. This work highlights the utility of NEF and SNN in developing effective robotic controllers, laying the groundwork for future research focused on SNN adaptability in dynamic environments and advanced robotic applications.
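
For reference, the two tracking metrics reported above are commonly computed as in the minimal NumPy sketch below, which assumes a uniformly sampled error signal; the variable names and the example trajectory are illustrative, not the authors' evaluation code.

    import numpy as np

    def tracking_metrics(reference, actual, dt):
        """Compute RMSE and ITAE for a sampled tracking-error signal.

        reference, actual: arrays of joint positions sampled every dt seconds.
        RMSE = sqrt(mean(e^2))
        ITAE = integral of t * |e(t)| dt, approximated by a Riemann sum.
        """
        error = np.asarray(reference) - np.asarray(actual)
        t = np.arange(len(error)) * dt

        rmse = np.sqrt(np.mean(error ** 2))
        itae = np.sum(t * np.abs(error)) * dt
        return rmse, itae

    # Example: a sinusoidal reference followed with a small lag
    dt = 0.01
    t = np.arange(0.0, 5.0, dt)
    ref = np.sin(t)
    act = np.sin(t - 0.05)
    print(tracking_metrics(ref, act, dt))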

3.
Sensors (Basel) ; 23(24)2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38139492

ABSTRACT

This work addresses the design and implementation of a novel PhotoBiological Filter Classifier (PhBFC) to improve the accuracy of a static sign language translation system. The captured images are preprocessed by a contrast enhancement algorithm inspired by the capacity of mammalian retinal photoreceptor cells, which are responsible for capturing light and transforming it into electrical signals that the brain can interpret as images. This sign translation system supports effective communication not only between an agent and an operator but also between a community with hearing disabilities and other people. Additionally, this technology could be integrated into diverse devices and applications, further broadening its scope and extending its benefits to the community in general. The bioinspired photoreceptor model is evaluated under different conditions. To validate the advantages of applying photoreceptor cells, 100 tests were conducted per letter to be recognized on three different models (V1, V2, and V3), obtaining an average accuracy of 91.1% with V3, compared to 63.4% with V1, and an average of 55.5 Frames Per Second (FPS) per letter classification iteration for V1, V2, and V3, demonstrating that the use of photoreceptor cells does not affect the processing time while improving the accuracy. The great application potential of this system is underscored, as it can be employed, for example, in Deep Learning (DL) for pattern recognition or in agent decision-making trained by reinforcement learning.
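
The abstract does not detail the PhBFC preprocessing itself. As a generic illustration of retina-inspired contrast compression, the sketch below applies the Naka-Rushton saturation function, a standard model of photoreceptor light response; this is not the authors' actual filter, and the parameter choices are assumptions.

    import numpy as np

    def naka_rushton(image, n=1.0, sigma=None):
        """Photoreceptor-style intensity compression (Naka-Rushton response).

        R = I^n / (I^n + sigma^n), where sigma is the semi-saturation level.
        Generic retina-inspired transform for illustration only; the PhBFC
        preprocessing described in the paper is more elaborate.
        """
        img = image.astype(np.float64)
        img = (img - img.min()) / (np.ptp(img) + 1e-12)  # normalize to [0, 1]
        if sigma is None:
            sigma = img.mean()  # adapt the semi-saturation level to the scene
        response = img ** n / (img ** n + sigma ** n + 1e-12)
        return (255.0 * response).astype(np.uint8)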


Subjects
Gestures; Sign Language; Humans; Animals; Neural Networks, Computer; Photoreceptor Cells; Algorithms; Mammals
4.
Sensors (Basel) ; 21(2)2021 Jan 12.
Article in English | MEDLINE | ID: mdl-33445582

ABSTRACT

This paper presents the results of the design, simulation, and implementation of a virtual vehicle. The process employs the Unity video game platform and its Machine Learning Agents (ML-Agents) library. The virtual vehicle is implemented in Unity with mechanisms that accurately represent the dynamics of a real automobile, such as the motor torque curve, suspension system, differential, and anti-roll bar, among others. Intelligent agents are designed and implemented to drive the virtual automobile, and they are trained using imitation learning or reinforcement learning. In the former method, learning by imitation, a human expert interacts with an intelligent agent through a control interface that simulates a real vehicle; in this way, the human expert receives motion signals and has stereoscopic vision, among other capabilities. In learning by reinforcement, a reward function is designed that encourages the intelligent agent to exert soft control over the virtual automobile. In the training stage, the intelligent agents are introduced into a scenario that simulates a four-lane highway. In the test stage, in contrast, they are placed on unknown roads created from random spline curves. Finally, graphs of the telemetric variables obtained from the automobile dynamics are presented, with the vehicle controlled by the intelligent agents and by their human counterpart, on both the training and the test tracks.
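
As an illustration of the kind of reward shaping described for "soft" control, the sketch below outlines one plausible per-step reward in Python. Unity ML-Agents rewards are normally assigned in C#, and all signal names, weights, and thresholds here are assumed for illustration only, not the paper's actual reward function.

    def soft_driving_reward(speed, lane_offset, steering, prev_steering,
                            target_speed=20.0, w_speed=0.4, w_lane=0.4, w_smooth=0.2):
        """Illustrative per-step reward encouraging smooth ("soft") control.

        speed          current vehicle speed (m/s)
        lane_offset    lateral distance from the lane center (m)
        steering       current steering command in [-1, 1]
        prev_steering  steering command from the previous step
        All weights and the target speed are assumed values.
        """
        speed_term = 1.0 - min(abs(speed - target_speed) / target_speed, 1.0)
        lane_term = 1.0 - min(abs(lane_offset) / 2.0, 1.0)           # 2 m half-lane assumed
        smooth_term = 1.0 - min(abs(steering - prev_steering), 1.0)  # penalize abrupt steering
        return w_speed * speed_term + w_lane * lane_term + w_smooth * smooth_term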
