Results 1 - 3 of 3
1.
Front Endocrinol (Lausanne) ; 14: 1236401, 2023.
Article in English | MEDLINE | ID: mdl-37900143

ABSTRACT

Objective: This investigation sought to elucidate the potential correlation between a recently characterized adiposity metric, the weight-adjusted waist index (WWI), and hyperuricemia.

Methods: This cross-sectional study included hyperuricemic and non-hyperuricemic subjects with complete WWI data from the National Health and Nutrition Examination Survey (NHANES), 2017 to March 2020. WWI was calculated as waist circumference (WC) divided by the square root of body weight. Appropriately weighted univariate and multivariate logistic regression models were used to determine the relationship between WWI and hyperuricemia, and the linearity of the relationship was verified with smooth curve fitting. Subgroup evaluations and interaction assessments were also conducted.

Results: The study sample comprised 7437 subjects, with a hyperuricemia prevalence of 18.22%. Stratifying WWI into tertiles, hyperuricemia prevalence rose progressively with increasing WWI (tertile 1: 11.62%; tertile 2: 17.91%; tertile 3: 25.13%). Individuals in the highest WWI tertile were significantly more prone to hyperuricemia than those in the lowest tertile (OR = 2.41, 95% CI: 1.88-3.08).

Conclusion: This study provides evidence that an elevated WWI is correlated with an increased risk of hyperuricemia in the adult population of the United States. These results suggest that WWI may serve as a viable anthropometric indicator for predicting hyperuricemia.
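The WWI formula stated in the abstract (waist circumference divided by the square root of body weight) can be sketched as a small helper; the example values below are hypothetical, not taken from the study:

```python
import math

def wwi(waist_cm: float, weight_kg: float) -> float:
    """Weight-adjusted waist index: waist circumference (cm) divided by
    the square root of body weight (kg), per the formula in the abstract."""
    return waist_cm / math.sqrt(weight_kg)

# Hypothetical subject: waist 95 cm, weight 80 kg
print(round(wwi(95.0, 80.0), 2))  # → 10.62
```

Because the denominator grows only with the square root of weight, WWI rises when waist circumference increases faster than overall body mass, which is what makes it a central-adiposity measure.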


Subjects
Hyperuricemia, Humans, Adult, United States/epidemiology, Hyperuricemia/epidemiology, Nutrition Surveys, Risk Factors, Body Mass Index, Cross-Sectional Studies
2.
Sci Rep ; 12(1): 14877, 2022 09 01.
Article in English | MEDLINE | ID: mdl-36050407

ABSTRACT

Chronic kidney disease (CKD) has become a worldwide public health problem, and accurate assessment of renal function in CKD patients is important for treatment. Although the glomerular filtration rate (GFR) accurately reflects renal function, measuring it directly is complicated, so endogenous markers are often used to estimate GFR indirectly. However, the accuracy of existing GFR estimation equations is often unsatisfactory. To estimate GFR more precisely, we constructed a classification decision tree model to select the most fitting GFR estimation equation for each CKD patient.

By searching the hospital information system (HIS) of the First Affiliated Hospital of Zhejiang Chinese Medicine University for all CKD patients who visited the hospital from December 1, 2018 to December 1, 2021 and underwent the Gates method of 99mTc-DTPA renal dynamic imaging to measure GFR, we collected 518 eligible subjects, who were randomly divided into a training set (70%, n = 362) and a test set (30%, n = 156). We used the training set to build a classification decision tree model that chooses the most accurate of four equations (BIS-2, CKD-EPI(CysC), CKD-EPI(Cr-CysC), and Ruijin), and the equation selected by the model was used to estimate GFR. We then validated the tree model on the test set and compared the GFR estimated by the model with 13 other equations. Root mean square error (RMSE), mean absolute error (MAE), and Bland-Altman plots were used to evaluate the accuracy of the different methods.

The final classification decision tree model included BSA, BMI, 24-hour urine protein quantity, diabetic nephropathy, age, and RASi. In the test set, the RMSE and MAE of GFR estimated by the model were 12.2 and 8.5, respectively, both lower than those of the other GFR estimation equations. According to the Bland-Altman plot of test-set patients, eGFR calculated with this model had the smallest degree of variation. Applying the classification decision tree model to select an appropriate GFR estimation equation for CKD patients, with the final GFR estimate based on the model's selection, provided greater accuracy in GFR estimation.
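The RMSE and MAE metrics used to compare the equations are standard; a minimal sketch of both, applied to hypothetical measured versus estimated GFR values (not data from the study), looks like this:

```python
import math

def rmse(measured, estimated):
    # Root mean square error between measured and estimated GFR values
    return math.sqrt(sum((m - e) ** 2 for m, e in zip(measured, estimated)) / len(measured))

def mae(measured, estimated):
    # Mean absolute error between measured and estimated GFR values
    return sum(abs(m - e) for m, e in zip(measured, estimated)) / len(measured)

# Hypothetical GFR values (mL/min/1.73 m^2)
measured = [90.0, 60.0, 45.0, 30.0]
estimated = [85.0, 66.0, 40.0, 33.0]
print(round(rmse(measured, estimated), 2), round(mae(measured, estimated), 2))
```

RMSE penalizes large individual errors more heavily than MAE, which is why the study reports both: a model can have a low MAE yet a high RMSE if a few patients are badly misestimated.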


Subjects
Renal Insufficiency, Chronic, Creatinine, Decision Trees, Glomerular Filtration Rate, Humans, Kidney, Kidney Function Tests/methods, Renal Insufficiency, Chronic/diagnosis
3.
Front Bioeng Biotechnol ; 10: 908056, 2022.
Article in English | MEDLINE | ID: mdl-35992348

ABSTRACT

The rapid development of mobile device applications puts tremendous pressure on edge nodes with limited computing capability, which may lead to a poor user experience. To solve this problem, collaborative cloud-edge computing has been proposed, in which an edge node with limited local resources can rent more resources from a cloud node. By the nature of the service, cloud offerings can be divided into private clouds and public clouds. In a private cloud environment, the edge node must allocate resources between the cloud node and itself. In a public cloud environment, since public cloud providers offer various pricing modes for users' different computing demands, the edge node must also select the appropriate pricing mode of cloud service, which makes this a sequential decision problem. In this study, we model the problem as a Markov decision process and a parameterized-action Markov decision process, and we propose two resource allocation algorithms, cost-efficient resource allocation with private cloud (CERAI) and cost-efficient resource allocation with public cloud (CERAU), for the collaborative cloud-edge environment, based on the deep reinforcement learning algorithms deep deterministic policy gradient (DDPG) and P-DQN. We then evaluated CERAI and CERAU against three typical resource allocation algorithms on synthetic and real data from Google datasets. The experimental results demonstrate that CERAI and CERAU can effectively reduce the long-term operating cost of collaborative cloud-edge computing in various demand settings. Our analysis can provide useful insights for enterprises designing resource allocation strategies in collaborative cloud-edge computing systems.
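The pricing-mode choice the abstract describes can be illustrated with a deliberately simplified, hypothetical cost model (the names, prices, and two modes below are illustrative assumptions, not the paper's actual MDP or its learned policy): a pay-per-unit on-demand mode versus a flat reserved fee with overflow charges.

```python
def cheaper_mode(demand_units: float,
                 on_demand_price: float,
                 reserved_fee: float,
                 reserved_capacity: float,
                 overflow_price: float) -> str:
    """Pick the cheaper public-cloud pricing mode for a single decision step.
    Hypothetical cost model: pay per unit on demand, or pay a flat reserved
    fee plus overflow charges for demand beyond the reserved capacity."""
    on_demand_cost = demand_units * on_demand_price
    overflow = max(0.0, demand_units - reserved_capacity)
    reserved_cost = reserved_fee + overflow * overflow_price
    return "on_demand" if on_demand_cost < reserved_cost else "reserved"

print(cheaper_mode(10, 1.0, 15.0, 20, 1.5))  # low demand favors on-demand
print(cheaper_mode(30, 1.0, 15.0, 20, 1.5))  # high demand favors reserved
```

A one-step greedy rule like this ignores how today's choice affects future demand and cost, which is precisely why the paper frames the problem as a (parameterized-action) Markov decision process and learns a policy with DDPG and P-DQN instead.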
