1.
Animal; 18(9): 101293, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39216153

ABSTRACT

Methane (CH4) emitted from ruminant production systems is a greenhouse gas that contributes to global warming. Our goal was to determine whether monoammonium glycyrrhizinate could inhibit CH4 emissions over the long term without affecting animal performance or immune indices in Karakul sheep. This study assessed the effects of medium-term (60-day) supplementation with monoammonium glycyrrhizinate on growth performance, apparent digestibility, CH4 emissions, methanogens, fibre-degrading bacteria and blood characteristics in Karakul sheep. Twelve fistulated male Karakul sheep (40.1 ± 3.59 kg) were randomly divided into two groups (n = 6): the Control group received a basal diet plus an equal volume of distilled water (30 ml), and the Treatment group received a basal diet plus 8.75 g/kg monoammonium glycyrrhizinate administered via the fistula. The adaptation period lasted 15 days and the measurement period 60 days; sampling during the measurement period was divided into stage I (days 1-30) and stage II (days 31-60). Monoammonium glycyrrhizinate significantly reduced the relative abundance of Bacteroides caccae, daily CH4 emission and the protozoa population, and significantly increased the relative abundance of Lachnospiraceae bacterium AD3010, FE2018, NK3A20, NK4A179 and V9D3004 in stage I (P < 0.05); in stage II, it significantly increased the relative abundance of Lachnospiraceae bacterium AD3010 but significantly decreased that of Lachnospiraceae bacterium NK4A179 and C6A11 (P < 0.05). Therefore, monoammonium glycyrrhizinate could be used as a CH4 inhibitor to limit rumen CH4 emissions of Karakul sheep over the short term (30 days) without affecting growth performance, fibre digestibility or blood parameters.


Subjects
Animal Feed; Glycyrrhizic Acid; Methane; Rumen; Animals; Methane/metabolism; Glycyrrhizic Acid/pharmacology; Male; Sheep; Rumen/microbiology; Rumen/metabolism; Animal Feed/analysis; Diet/veterinary; Digestion/drug effects
2.
Ann Med Surg (Lond); 86(5): 2437-2441, 2024 May.
Article in English | MEDLINE | ID: mdl-38694288

ABSTRACT

Introduction: To explore the feasibility and safety of retroperitoneal laparoscopic partial nephrectomy (RLPN) with selective artery clamping (SAC) in patients with renal cell carcinoma (RCC). Methods: The authors recruited three men and two women who underwent RLPN for T1 RCC between December 2022 and May 2023 at a tertiary hospital. The median age was 32 years (range, 25-70 years), and tumour size ranged from 3 to 4.5 cm. The R.E.N.A.L. scores were 4x, 5p, 8a, 5a, and 8ah. The median preoperative eGFR was 96.9 (range, 74.3-105.2). Renal computed tomography angiography was performed before surgery to evaluate the arterial branches. The operation time, number of clamped arteries, warm ischaemia time (WIT), intraoperative blood loss, RCC type, postoperative hospital stay, changes in renal function, and complications were evaluated. The follow-up duration was 6 months. Results: The median operation time was 120 (75-150) minutes. One artery was clamped in four patients, and three arteries were clamped in one patient. The median WIT was 22 (15-30) min, and the median blood loss was 150 (100-300) ml. No complications were recorded, and the resection margin was negative in all patients. The median decrease in eGFR was 6% (range, 4-30%). Conclusions: RLPN with SAC for T1 RCC is safe and feasible in clinical practice.

3.
Sensors (Basel); 23(11), 2023 May 27.
Article in English | MEDLINE | ID: mdl-37299848

ABSTRACT

Human activity recognition (HAR) is an important research problem in computer vision, with wide applications in human-machine interaction, monitoring, and related areas. In particular, HAR based on the human skeleton enables intuitive applications, so establishing the current state of these studies matters when selecting solutions and developing commercial products. In this paper, we present a comprehensive survey of deep learning approaches to human activity recognition that take three-dimensional (3D) human skeleton data as input. The survey is organised around four families of deep networks and the features they consume: Recurrent Neural Networks (RNNs) operating on extracted activity-sequence features; Convolutional Neural Networks (CNNs) operating on feature vectors obtained by projecting the skeleton into image space; Graph Convolutional Networks (GCNs) operating on features extracted from the skeleton graph and its spatio-temporal structure; and Hybrid Deep Neural Networks (Hybrid-DNNs) combining several other feature types. The survey covers models, datasets, metrics, and results published from 2019 to March 2023, presented in chronological order. We also carry out a comparative study of skeleton-based HAR on the KLHA3D 102 and KLYOGA3D datasets and analyse and discuss the results obtained with CNN-based, GCN-based, and Hybrid-DNN-based networks.


Subjects
Deep Learning; Humans; Neural Networks, Computer; Databases, Factual; Human Activities; Skeleton
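
To make the RNN branch of this taxonomy concrete, the sketch below shows a minimal PyTorch LSTM classifier over sequences of 3D joint coordinates; the joint count, class count, and tensor shapes are illustrative assumptions, not values from the surveyed work.

import torch
import torch.nn as nn

class SkeletonLSTM(nn.Module):
    """Toy RNN-style activity classifier over 3D skeleton sequences."""
    def __init__(self, num_joints=25, num_classes=60, hidden=128):
        super().__init__()
        # Each frame is flattened into a (num_joints * 3) feature vector.
        self.lstm = nn.LSTM(num_joints * 3, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):              # x: (batch, frames, joints, 3)
        b, t, j, c = x.shape
        out, _ = self.lstm(x.reshape(b, t, j * c))
        return self.head(out[:, -1])   # classify from the last time step

logits = SkeletonLSTM()(torch.randn(4, 30, 25, 3))  # 4 clips, 30 frames, 25 joints
print(logits.shape)                                 # torch.Size([4, 60])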
4.
Sensors (Basel); 23(6), 2023 Mar 20.
Article in English | MEDLINE | ID: mdl-36991971

ABSTRACT

Hand detection and classification is a crucial pre-processing step in building applications based on three-dimensional (3D) hand pose estimation and hand activity recognition. To automatically localise hand regions in egocentric vision (EV) datasets, and in particular to trace the development and performance of the "You Only Look Once" (YOLO) networks over the past seven years, we present a comparative study of hand detection and classification based on the YOLO family of networks. The study addresses three problems: (1) systematising the architectures, advantages, and disadvantages of the YOLO-family networks from version (v)1 to v7; (2) preparing ground-truth data for pre-trained and evaluation models of hand detection and classification on the EV datasets FPHAB, HOI4D, and RehabHand; and (3) fine-tuning and evaluating hand detection and classification models based on the YOLO-family networks on these EV datasets. YOLOv7 and its variants produced the best hand detection and classification results across all three datasets. For the YOLOv7-w6 network, the results were P = 97% on FPHAB, P = 95% on HOI4D, and above 95% on RehabHand, all at an IoU threshold of 0.5; the processing speed of YOLOv7-w6 was 60 fps at a resolution of 1280 × 1280 pixels, and that of YOLOv7 was 133 fps at 640 × 640 pixels.


Subjects
Hands; Neural Networks, Computer; Humans
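
The sketch below illustrates the fine-tune-and-evaluate workflow described in this entry, using the ultralytics package as a convenient stand-in; the study itself works with the original YOLO-family code bases, and the dataset file hands.yaml, the checkpoint hands_best.pt, and the image frame.jpg are hypothetical names.

from ultralytics import YOLO

# Fine-tune a pretrained detector on a hand dataset described by a YOLO-format YAML.
model = YOLO("yolov8n.pt")                        # stand-in checkpoint, not YOLOv7-w6
model.train(data="hands.yaml", epochs=50, imgsz=640)

# Run inference on an egocentric frame and read out boxes, confidences, and classes.
results = YOLO("hands_best.pt")("frame.jpg")
for r in results:
    print(r.boxes.xyxy, r.boxes.conf, r.boxes.cls)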
5.
Sensors (Basel); 22(14), 2022 Jul 20.
Article in English | MEDLINE | ID: mdl-35891099

ABSTRACT

Three-dimensional human pose estimation is widely applied in sports, robotics, and healthcare. Over the past five years, CNN-based studies of 3D human pose estimation have been numerous and have yielded impressive results, but they often focus only on improving estimation accuracy. In this paper, we propose a fast, unified end-to-end model for estimating 3D human pose, called YOLOv5-HR-TCM (YOLOv5-HRNet-Temporal Convolution Model). The model follows the 2D-to-3D lifting approach to 3D human pose estimation while attending to each step of the pipeline: person detection, 2D human pose estimation, and 3D human pose estimation, combining best practices at each stage. The proposed model is evaluated on the Human 3.6M dataset and compared with other methods at each step. It achieves high accuracy without sacrificing processing speed: the whole pipeline runs at 3.146 FPS on a low-end computer. In addition, we propose a sports scoring application based on the deviation angle between the estimated 3D human posture and a standard (reference) posture. The average deviation angle evaluated on the Human 3.6M dataset (Protocol #1) is 8.2 degrees.


Subjects
Posture; Robotics; Humans
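
The deviation-angle idea behind the sports scoring application can be sketched as follows: for each bone (a pair of joint indices), compare its direction in the estimated pose with the same bone in a reference pose, then average the angles. The bone list and poses below are illustrative, not the paper's definitions.

import numpy as np

def mean_deviation_angle(estimated, reference, bones):
    """estimated, reference: (J, 3) arrays of 3D joint coordinates."""
    angles = []
    for a, b in bones:
        v_est = estimated[b] - estimated[a]
        v_ref = reference[b] - reference[a]
        cos = np.dot(v_est, v_ref) / (np.linalg.norm(v_est) * np.linalg.norm(v_ref))
        angles.append(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
    return float(np.mean(angles))

bones = [(0, 1), (1, 2), (2, 3)]      # e.g. a hip -> knee -> ankle chain
estimated = np.random.rand(4, 3)      # stand-in estimated pose with 4 joints
reference = np.random.rand(4, 3)      # stand-in reference pose
print(mean_deviation_angle(estimated, reference, bones))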
6.
Sensors (Basel); 21(24), 2021 Dec 16.
Article in English | MEDLINE | ID: mdl-34960491

ABSTRACT

Human segmentation and tracking in video typically build on the output of a person detector, so their results depend heavily on the quality of person detection. With the advent of Convolutional Neural Networks (CNNs), excellent results have been achieved in this field. Segmenting and tracking people in video has significant applications in monitoring and in estimating human pose in 2D images and 3D space. In this paper, we survey studies, methods, datasets, and results for human segmentation and tracking in video, and we also cover person detection because it affects segmentation and tracking results. The survey is detailed down to source-code links. The MADS (Martial Arts, Dancing and Sports) dataset, which comprises fast and complex activities, was published for human pose estimation; however, before the pose can be determined, the person must first be detected and segmented in the video. We therefore publish a mask dataset, MASK MADS, containing 28 k mask images, for evaluating the segmentation and tracking of people in video. We also evaluate many recently published CNN methods for segmenting and tracking people on the MADS dataset.


Subjects
Image Processing, Computer-Assisted; Neural Networks, Computer; Humans
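
A typical per-frame metric for this kind of mask evaluation is intersection-over-union between a predicted binary mask and the ground truth; the sketch below uses synthetic masks as stand-ins for MASK MADS annotations.

import numpy as np

def mask_iou(pred, gt):
    """pred, gt: boolean arrays of the same shape."""
    union = np.logical_or(pred, gt).sum()
    inter = np.logical_and(pred, gt).sum()
    return float(inter / union) if union else 0.0

pred = np.zeros((100, 100), dtype=bool); pred[20:70, 20:70] = True
gt = np.zeros((100, 100), dtype=bool);   gt[30:80, 30:80] = True
print(round(mask_iou(pred, gt), 3))      # overlap of two offset squares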