2.
Journal of Sun Yat-sen University (Medical Sciences) ; (6): 161-170, 2024.
Article in Chinese | WPRIM | ID: wpr-1007288

ABSTRACT

Objective Sleep-related painful erections (SRPE) is a rare sleep disorder characterized by repeated awakening from nighttime sleep due to painful penile erections, and its etiology is currently unclear. The purpose of this study was to explore the impact of potential risk factors on the incidence of SRPE. Methods Information was collected through questionnaires administered to patients presenting at the urology department with or without SRPE. A total of 290 participants completed the study, including 145 controls and 145 cases. Logistic regression analysis was used to assess the impact of age, occupation, sleep initiation time per night, frequency of sexual intercourse per week, psychological status, erectile dysfunction, chronic prostatitis, prostate enlargement, lumbar spine disease, central nervous system disease, hypertension, diabetes, and family history on the onset of SRPE. Results Univariate logistic regression analysis found that a history of chronic prostatitis, intellectual labor occupation, central nervous system disease, late sleep onset, frequency of sexual activity, and anxiety status might be related to the onset of SRPE. After incorporating these factors into a multivariate regression model, having sexual activity ≥2 times/week (OR = 0.326, 95% CI 0.179-0.592) and late sleep onset (after 24:00) (OR = 0.494, 95% CI 0.265-0.918) might be protective factors for SRPE, while a history of chronic prostatitis (OR = 3.779, 95% CI 2.082-6.859) might be a risk factor. The effects of central nervous system disease and occupation were not statistically significant in the multivariate analysis. Conclusion Chronic prostatitis and anxiety status may be independent risk factors for SRPE; having sexual activity ≥2 times/week and appropriately delaying sleep time may be independent protective factors.
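The univariate screening described above comes down to odds ratios with Wald confidence intervals computed from 2×2 exposure tables. A minimal Python sketch, using hypothetical counts rather than the study's data:

```python
import math

def odds_ratio_wald_ci(exposed_cases, exposed_controls,
                       unexposed_cases, unexposed_controls, z=1.96):
    """Odds ratio with a Wald 95% CI from a 2x2 case-control table."""
    a, b, c, d = exposed_cases, exposed_controls, unexposed_cases, unexposed_controls
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, (lower, upper)

# Hypothetical counts: chronic prostatitis history among 145 cases vs 145 controls
print(odds_ratio_wald_ci(60, 20, 85, 125))  # OR ≈ 4.41 (95% CI ≈ 2.48-7.85)
```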

3.
Journal of Prevention and Treatment for Stomatological Diseases ; (12): 235-240, 2024.
Article in Chinese | WPRIM | ID: wpr-1006870

ABSTRACT

Risk assessment models for periodontal disease provide dentists with a precise and consolidated evaluation of the prognosis of periodontitis, enabling the formulation of personalized treatment plans. Periodontal risk assessment systems have been widely applied in clinical practice and research, and their fields of application vary according to the clinical periodontal parameters and risk factors they incorporate. Commonly used models include the periodontal risk calculator (PRC), an individual-based periodontal risk assessment tool that collects both periodontal and systemic information for prediction; the periodontal assessment tool (PAT), which allows quantitative differentiation of the stages of periodontal disease; the periodontal risk assessment (PRA) and modified periodontal risk assessment (mPRA), which are easy to use; and classification and regression trees (CART), which assess the periodontal prognosis of a single affected tooth. Additionally, there are combined orthodontic-periodontal risk assessment systems and implant periapical risk assessment systems tailored to patients needing multidisciplinary treatment. This review focuses on the current application status of periodontal risk assessment systems.

4.
China Pharmacy ; (12): 333-338, 2024.
Article in Chinese | WPRIM | ID: wpr-1006619

ABSTRACT

OBJECTIVE To evaluate global cancer-associated thromboembolism risk assessment tools using evidence-based methods, and to provide a methodological reference and evidence-based basis for constructing a tool specific to China. METHODS A comprehensive search was conducted in 6 databases (CNKI, Wanfang Data, VIP, CBM, PubMed, and Embase) and on the websites of NCCN, ASCO, ESMO, and others, with a search cutoff date of June 30, 2022; a supplementary search was conducted in January 2023. The essential characteristics and methodological quality of the included risk assessment tools were described and analyzed qualitatively, focusing on comparing the risk stratification ability of each tool. RESULTS A total of 14 risk assessment tools were included, with sample sizes of 208-18 956 cases and mean ages of 53.1-74.0 years. The applicable populations included outpatient cancer patients, lymphoma patients, and multiple myeloma patients, among others. The most common predictive factors were body mass index, history of venous thromboembolism, and tumor site. All tools had undergone methodological validation, and 9 were presented in a weighted scoring format. Only seven tools reported specificity, sensitivity, negative predictive value (NPV), positive predictive value (PPV), and area under the curve (AUC) or C statistic simultaneously. CONCLUSIONS The risk of bias in the construction of existing tools is high, and the heterogeneity of validation results is substantial. The overall methodological quality needs improvement, and risk stratification ability requires further investigation. These tools still have certain limitations for clinical practice in China.
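Several of the validation metrics this review compares (sensitivity, specificity, NPV, PPV) follow directly from a 2×2 table of tool classification versus observed thromboembolism. A minimal sketch with hypothetical counts:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Basic validation metrics for a dichotomized risk-assessment tool."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts for a VTE risk score dichotomized at "high risk"
print(diagnostic_metrics(tp=40, fp=120, fn=15, tn=825))
```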

5.
Journal of Environmental and Occupational Medicine ; (12): 54-61, 2024.
Article in Chinese | WPRIM | ID: wpr-1006457

ABSTRACT

Background Polycyclic aromatic hydrocarbons (PAHs), one of the main components of fine particulate matter (PM2.5), affect ambient air quality, and long-term exposure to PAHs may pose potential health risks to humans. Objective To identify the distribution characteristics and sources of PAHs in atmospheric PM2.5 in a district of Taizhou City from 2019 to 2021, and to evaluate the health risks of PAHs to the local population through the inhalation pathway. Methods From 2019 to 2021, PM2.5 sampling was carried out at a state-controlled surveillance point in a district of Taizhou City for 7 consecutive days (the 10th-16th) of each month; the sampling time was 24 h·d⁻¹ and the sampling flow rate was 100 L·min⁻¹. PM2.5 mass concentration was determined by the gravimetric method, and a total of 16 PAHs were determined by ultrasonic extraction-liquid chromatography. The Kruskal-Wallis H test was used to compare the distribution characteristics of PAH concentrations by year and season, characteristic ratios and principal component analysis (PCA) were used to analyze their sources, and an incremental lifetime cancer risk (ILCR) model was used to assess the health risk of PAHs. Results From 2019 to 2021, the annual average concentrations [M (P25, P75)] of ∑PAHs in atmospheric PM2.5 in the selected district were 6.52 (2.46, 10.59), 8.52 (4.56, 12.29), and 3.72 (1.51, 7.11) ng·m⁻³, respectively, and the annual benzo[a]pyrene (BaP) exceedance rates (national limit: 1 ng·m⁻³) were 27.38% (23/84), 47.62% (40/84), and 19.04% (16/84), respectively, both following the order 2020 > 2019 > 2021 (P<0.001, P<0.05). The ∑PAHs concentrations showed seasonal variation, with the highest values in winter and the lowest in summer (P<0.05). Among the atmospheric PM2.5 samples, the proportion of 5-ring PAHs was the highest and that of 2-3-ring PAHs the lowest; the proportion of 2-4-ring PAHs showed a yearly upward trend, and the proportion of 5-6-ring PAHs a yearly downward trend (P<0.05). The characteristic ratios and PCA results suggested that the PAHs came mainly from mixed sources such as dust, fossil fuel (natural gas) and coal combustion, industrial emissions, and motor vehicle exhaust. The inhalation ILCR values for men, women, and children were 1.83×10⁻⁶, 2.35×10⁻⁶, and 2.04×10⁻⁶, respectively, and the annual average ILCR was 2.07×10⁻⁶, all greater than 1×10⁻⁶. Conclusion During the sampling period, the main sources of PAH pollution in atmospheric PM2.5 in the target district of Taizhou City were dust, fossil fuel (natural gas) combustion, coal combustion, industrial emissions, and motor vehicle emissions, and the PAHs may pose a potential carcinogenic risk to local residents.
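The ILCR model referenced above is typically built from an exposure concentration combined with a cancer potency term. The sketch below uses the generic US EPA inhalation approach (exposure concentration × inhalation unit risk) with placeholder parameter values, including the unit risk; the study's exact model and inputs may differ:

```python
def inhalation_cancer_risk(c_bap_eq_ng_m3, exposure_hours_per_day=24,
                           exposure_days_per_year=350, exposure_years=30,
                           averaging_years=70, iur_per_ug_m3=6e-4):
    """Risk = time-averaged exposure concentration (ug/m3) x inhalation unit risk.
    All default parameter values are placeholders for illustration only."""
    c_ug_m3 = c_bap_eq_ng_m3 / 1000.0                       # ng/m3 -> ug/m3
    at_hours = averaging_years * 365 * 24                    # lifetime averaging time
    ec = (c_ug_m3 * exposure_hours_per_day *
          exposure_days_per_year * exposure_years) / at_hours
    return ec * iur_per_ug_m3

# Placeholder BaP-equivalent concentration of 1 ng/m3
print(f"{inhalation_cancer_risk(1.0):.2e}")  # ≈ 2.5e-07 with these placeholder inputs
```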

6.
Journal of Public Health and Preventive Medicine ; (6): 70-73, 2024.
Article in Chinese | WPRIM | ID: wpr-1005909

ABSTRACT

Objective To evaluate the noise hazard level of a coal mining enterprise, identify high-risk job types and workers, and provide a basis for preventing and controlling the health damage caused by noise. Methods A large coal mining enterprise in Shaanxi Province was selected as the research object. Noise monitoring data collected at the mine over the years were used to construct a noise exposure matrix for each post in the enterprise, and the Classification of Occupational Hazards at Workplaces (GBZ/T 229.4-2012) was used to assess occupational health risk levels. Results Among the 22 noise-exposed positions in the enterprise, the 8-hour working day equivalent sound levels for the shearer driver, horseshoe driver, crusher driver, shuttle car driver, relaxation screen driver, and grading screen driver positions all exceeded the occupational exposure limit for noise. In 2021, the noise exposure levels of shearer drivers, crusher drivers, and coal-preparation workers were all higher than 90 dB(A), corresponding to a moderate occupational hazard level. In addition, the noise exposure levels of most other jobs also exceeded the occupational exposure limit. Conclusion Noise hazards in this coal mine are mainly concentrated in the posts of the coal mining system, the tunneling system, and the screening workshop; among these, shearer drivers, crusher drivers, and coal-preparation workers have the highest noise exposure levels. It is recommended that corresponding noise reduction measures be taken and personal protection be strengthened to reduce workers' noise exposure risk.
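The 8-hour working day equivalent sound level used above normalizes a measured A-weighted level to an 8-hour shift. A minimal sketch of that normalization (hypothetical measurement, not the enterprise's data):

```python
import math

def lex_8h(l_aeq_dba, exposure_hours):
    """Normalize a measured A-weighted equivalent level to an 8-hour working day:
    L_EX,8h = L_Aeq,T + 10 * log10(T / 8)."""
    return l_aeq_dba + 10 * math.log10(exposure_hours / 8.0)

# Hypothetical example: 92 dB(A) measured over a 6-hour shift
print(round(lex_8h(92.0, 6), 1))   # ~90.8 dB(A)
```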

7.
Shanghai Journal of Preventive Medicine ; (12): 179-185, 2024.
Article in Chinese | WPRIM | ID: wpr-1016548

ABSTRACT

Objective Three methods were applied to conduct occupational health risk assessments for working positions exposed to silica dust in a sanitary ceramic manufacturing factory, and the results were compared to explore the applicability of the different occupational health risk assessment methods. Methods One large sanitary ceramic product manufacturing enterprise in Songjiang District, Shanghai was selected for occupational health risk assessment of the positions exposed to silica dust, using the occupational hazard risk index method, the exposure ratio method, and the International Council on Mining and Metals (ICMM) quantitative occupational health risk assessment method. The consistency of the results of the three methods was tested with the weighted Kappa statistic. Results Fourteen working positions exposed to silica dust were identified, and three had dust concentrations above the limit: the composite forming position of the phase 2 workshop (0.80 mg·m⁻³), the addition forming position of the phase 2 workshop (1.00 mg·m⁻³), and the glazing position on 1F of the phase 2 workshop (1.50 mg·m⁻³), an exceedance rate of 21.42%. The occupational hazard risk index method rated 6 positions as no harm, 6 as mild harm, and 2 as moderate harm. The ICMM quantitative method rated 6 positions as potential risk, 2 as tolerable risk, and 6 as intolerable risk. The exposure ratio method rated 8 positions as medium risk, 5 as high risk, and 1 as extremely high risk. Consistency among the three methods was poor: the Kappa coefficient between the occupational hazard risk index method and the ICMM quantitative method was 0.15, between the occupational hazard risk index method and the exposure ratio method was -0.09, and between the ICMM quantitative method and the exposure ratio method was 0.04. The risk ratio (RR) values obtained by the three methods were significantly correlated: the correlation coefficients between the ICMM quantitative method and the exposure ratio method, between the occupational hazard risk index method and the ICMM quantitative method, and between the occupational hazard risk index method and the exposure ratio method were 0.915, 0.604, and 0.594, respectively. The correlation between the assessed risk level and the time-weighted average concentration (CTWA) was strong. Conclusion The occupational hazard risk index method is suitable for positions with low silica dust exposure concentrations, while the ICMM quantitative method and the exposure ratio method are suitable for positions with high exposure concentrations; all three methods have limitations, and using multiple methods together is more reasonable in actual evaluation work.
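The weighted Kappa consistency test mentioned above can be reproduced with standard tooling; the sketch below uses hypothetical ordinal risk grades for the same positions, not the study's ratings:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical risk grades (0 = low ... 3 = very high) assigned to the same
# 14 positions by two assessment methods.
method_a = [0, 1, 1, 2, 2, 3, 0, 1, 2, 1, 3, 0, 2, 1]
method_b = [0, 2, 1, 3, 2, 3, 1, 1, 3, 2, 3, 1, 2, 0]

kappa_linear = cohen_kappa_score(method_a, method_b, weights="linear")
print(round(kappa_linear, 2))
```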

8.
ABCD arq. bras. cir. dig ; 37: e1802, 2024. tab
Article in English | LILACS-Express | LILACS | ID: biblio-1556602

ABSTRACT

ABSTRACT BACKGROUND: Hepatic retransplantation is associated with higher morbidity and mortality than primary transplantation. Given the scarcity of organs and the need for efficient allocation, evaluating parameters that can predict post-retransplant survival is crucial. AIMS: This study aimed to analyze prognostic scores and outcomes of hepatic retransplantation. METHODS: Data on primary transplants and retransplants carried out in the state of Paraná in 2019 and 2020 were analyzed. The two groups were compared based on 30-day survival and the main prognostic scores of the donor and recipient, namely Model for End-Stage Liver Disease (MELD), MELD-albumin (MELD-a), Donor MELD (D-MELD), Survival Outcomes Following Liver Transplantation (SOFT), Preallocation Score to Predict Survival Outcomes Following Liver Transplantation (P-SOFT), and Balance of Risk (BAR). RESULTS: A total of 425 primary transplants and 30 retransplants were included in the study. The main etiology of liver disease in primary transplantation was alcoholism (n=140; 31.0%), and the main reasons for retransplantation were primary graft dysfunction (n=10; 33.3%) and hepatic artery thrombosis (n=8; 26.2%). The 30-day survival rate was higher in primary transplants than in retransplants (80.5% vs. 36.7%, p=0.001). Prognostic scores were higher in retransplants than in primary transplants: MELD 30.6 vs. 20.7 (p=0.001); MELD-a 31.5 vs. 23.5 (p=0.001); D-MELD 1234.4 vs. 834.0 (p=0.034); SOFT 22.3 vs. 8.2 (p=0.001); P-SOFT 22.2 vs. 7.8 (p=0.001); and BAR 15.6 vs. 8.3 (p=0.001). No difference was found in the Donor Risk Index (DRI). CONCLUSIONS: Retransplants exhibited lower 30-day survival, as predicted by the prognostic scores, but this was unrelated to the donor's condition.
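For orientation, the laboratory MELD score compared above can be computed with the classic (pre-sodium) formula; this is a general sketch, not necessarily the exact variant applied by the authors:

```python
import math

def meld_classic(bilirubin_mg_dl, inr, creatinine_mg_dl, on_dialysis=False):
    """Classic laboratory MELD (no sodium term). Values below 1.0 are floored
    at 1.0; creatinine is capped at 4.0 mg/dL (or set to 4.0 if on dialysis)."""
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    cr = 4.0 if on_dialysis else min(max(creatinine_mg_dl, 1.0), 4.0)
    score = (3.78 * math.log(bili) + 11.2 * math.log(inr)
             + 9.57 * math.log(cr) + 6.43)
    return round(score)

print(meld_classic(bilirubin_mg_dl=4.5, inr=2.1, creatinine_mg_dl=1.8))  # 26
```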



9.
Arch. endocrinol. metab. (Online) ; 68: e230245, 2024. tab, graf
Article in English | LILACS-Express | LILACS | ID: biblio-1556933

ABSTRACT

ABSTRACT Objective: Thyroid nodules are very common in clinical practice, and ultrasound has long been used as a screening tool for their evaluation. Several risk assessment systems based on ultrasonography have been developed to stratify the risk of malignancy and determine the need for fine-needle aspiration in thyroid nodules, including the American Thyroid Association (ATA) system and the American College of Radiology Thyroid Imaging Reporting and Data System (ACR TI-RADS). The aim of this study was to compare the performance of the ATA and ACR TI-RADS systems in predicting malignancy in thyroid nodules based on the nodules' final histopathology reports. Materials and methods: We performed a retrospective review of medical records to identify patients who underwent thyroid surgery at King Abdulaziz University from 2017 to 2022. The ultrasound features of the nodules with confirmed histopathology (benign versus malignant) were evaluated. Both ATA and ACR TI-RADS scores were documented. Results: The analysis included 191 patients who underwent thyroid surgery and fulfilled the inclusion criteria. Hemithyroidectomy was performed in 22.5% of the patients, and total thyroidectomy was performed in 77.0% of them. In all, 91 patients (47.6%) were found to have malignant nodules on histopathology. We then compared the histopathology reports with the preoperative ultrasonographic risk scores. The estimated sensitivity and specificity in identifying malignant nodules were, respectively, 52% and 80% with the ATA system and 51.6% and 90% with the ACR TI-RADS system. Conclusion: Both ATA and ACR TI-RADS risk stratification systems are valuable tools for assessing the malignancy risk in thyroid nodules. In our study, the ACR TI-RADS system had superior specificity compared with the ATA system in predicting malignancy among high-risk lesions.

10.
Arch. endocrinol. metab. (Online) ; 68: e220506, 2024. tab, graf
Article in English | LILACS-Express | LILACS | ID: biblio-1556937

ABSTRACT

ABSTRACT Objective: Despite a favorable prognosis, some patients with papillary thyroid carcinoma (PTC) develop recurrence. The objective of this study was to examine the impact of the combination of initial American Thyroid Association (ATA) risk stratification with serum level of postoperative stimulated thyroglobulin (s-Tg) in predicting recurrence in patients with PTC and compare the results with an assessment of response to initial therapy (dynamic risk stratification). Subjects and methods: We retrospectively analyzed 1,611 patients who had undergone total thyroidectomy for PTC, followed in most cases (87.3%) by radioactive iodine (RAI) administration. Clinicopathological features and s-Tg levels obtained 3 months postoperatively were evaluated. The patients were stratified according to ATA risk categories. Nonstimulated thyroglobulin levels and imaging studies obtained during the first year of follow-up were used to restage the patients based on response to initial therapy. Results: After a mean follow-up of 61.5 months (range 12-246 months), tumor recurrence was diagnosed in 99 (6.1%) patients. According to ATA risk, recurrence was identified in 2.3% of the low-risk, 9% of the intermediate-risk, and 25% of the high-risk patients (p < 0.001). Using a receiver operating characteristic curve approach, a postoperative s-Tg level of 10 ng/mL emerged as the ideal cutoff value, with positive and negative predictive values of 24% and 97.8%, respectively (p < 0.001). Patients with low to intermediate ATA risk with postoperative s-Tg levels < 10 ng/mL and excellent response to treatment had a very low recurrence rate (<0.8%). In contrast, higher recurrence rates were observed in intermediate-risk to high-risk patients with postoperative s-Tg ≥ 10 ng/mL and indeterminate response (25%) and in those with incomplete response regardless of ATA category or postoperative s-Tg value (38.5-87.5%). In terms of the proportion of variance explained (PVE), the ATA initial risk assessment alone explained 12.7% of the variance in recurrence, which increased to 29.9% when postoperative s-Tg was added to the logistic regression model and to 49.1% with dynamic risk stratification. Conclusions: The combination of the ATA staging system and postoperative s-Tg can better predict the risk of PTC recurrence. Initial risk estimates can be refined based on dynamic risk assessment following response to therapy, thus providing a useful guide for follow-up recommendations.
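The cutoff selection described above is an ROC exercise; a minimal sketch using simulated s-Tg values (not the study's data) and the Youden index to pick a threshold:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Simulated postoperative stimulated-thyroglobulin values (ng/mL) and
# recurrence labels; purely illustrative.
rng = np.random.default_rng(0)
s_tg = np.concatenate([rng.lognormal(0.5, 1.0, 300),   # no recurrence
                       rng.lognormal(2.5, 1.0, 30)])   # recurrence
y = np.concatenate([np.zeros(300), np.ones(30)])

fpr, tpr, thresholds = roc_curve(y, s_tg)
youden = tpr - fpr
best = thresholds[np.argmax(youden)]
print(f"AUC={roc_auc_score(y, s_tg):.2f}, optimal cutoff ~ {best:.1f} ng/mL")
```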

11.
Int. arch. otorhinolaryngol. (Impr.) ; 28(1): 12-21, 2024. tab, graf
Article in English | LILACS-Express | LILACS | ID: biblio-1558011

ABSTRACT

Abstract Introduction The most common postoperative complication of total thyroidectomy is hypocalcemia, usually monitored using serum parathyroid hormone and calcium values. Objective To identify the most accurate predictors of hypocalcemia, construct a risk assessment algorithm, and analyze the impact of using several calcium correction formulas in practice. Methods A prospective, single-center, non-randomized longitudinal cohort study of 205 patients undergoing total thyroidectomy. Parathyroid hormone, serum calcium, and ionized calcium were sampled after surgery, with the presence of symptomatic or laboratory-verified asymptomatic hypocalcemia designated as the primary outcome measures. Results Parathyroid hormone sampled on the first postoperative day was the most sensitive predictor of symptomatic hypocalcemia (sensitivity 80.22%, cut-off value ≤2.03 pmol/L). A combination of serum calcium and parathyroid hormone concentrations sampled on the first postoperative day predicted the development of hypocalcemia during recovery with the highest sensitivity and specificity (94% sensitivity at a cut-off ≤2.1 mmol/L and 89% specificity at a cut-off ≤1.55 pmol/L, respectively). The use of algorithms and correction formulas did not improve the accuracy of predicting symptomatic or asymptomatic hypocalcemia. Conclusions The most sensitive predictor of symptomatic hypocalcemia present on the fifth postoperative day was PTH sampled on the first postoperative day. The need for algorithms and correction formulas is limited.
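One widely used albumin correction for total calcium is the Payne formula; whether it matches the correction formulas evaluated in this study is not stated, so the sketch below is purely illustrative:

```python
def corrected_calcium_mmol_l(total_calcium_mmol_l, albumin_g_l):
    """Albumin-corrected total calcium (Payne formula, SI units):
    corrected Ca = measured Ca + 0.02 * (40 - albumin)."""
    return total_calcium_mmol_l + 0.02 * (40.0 - albumin_g_l)

# Hypothetical values: total calcium 2.05 mmol/L with albumin 32 g/L
print(round(corrected_calcium_mmol_l(2.05, 32.0), 2))  # 2.21 mmol/L
```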

12.
Rev. bras. cir. cardiovasc ; 39(2): e20230104, 2024. tab
Article in English | LILACS-Express | LILACS | ID: biblio-1535539

ABSTRACT

ABSTRACT Introduction: Along with cardiopulmonary bypass time, aortic cross-clamping time is directly related to the risk of complications after heart surgery. The influence of the time difference between cardiopulmonary bypass and cross-clamping times (TDC-C) remains poorly understood. Objective: To assess the impact of cardiopulmonary bypass time, relative to cross-clamping time, on immediate results after coronary artery bypass grafting in the Registro Paulista de Cirurgia Cardiovascular (REPLICCAR) II. Methods: Analysis of 3,090 patients included in the REPLICCAR II database was performed. The Society of Thoracic Surgeons outcomes were evaluated (mortality, kidney failure, deep wound infection, reoperation, cerebrovascular accident, and prolonged ventilation time). A cutoff point was determined above which an increase in this difference affected each outcome. Results: Based on this cutoff, patients were divided into Group 1 (cardiopulmonary bypass time < 140 min, TDC-C < 30 min), Group 2 (cardiopulmonary bypass time < 140 min, TDC-C > 30 min), Group 3 (cardiopulmonary bypass time > 140 min, TDC-C < 30 min), and Group 4 (cardiopulmonary bypass time > 140 min, TDC-C > 30 min). In univariate logistic regression, Group 2 showed a significant association with reoperation (odds ratio: 1.64, 95% confidence interval: 1.01-2.66), stroke (odds ratio: 3.85, 95% confidence interval: 1.99-7.63), kidney failure (odds ratio: 1.90, 95% confidence interval: 1.32-2.74), and in-hospital mortality (odds ratio: 2.17, 95% confidence interval: 1.30-3.60). Conclusion: TDC-C serves as a predictive factor for complications following coronary artery bypass grafting. We strongly recommend that future studies incorporate this metric to improve the prediction of complications.
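The four-group split used above follows directly from the two cutoffs reported in the abstract (140 min of bypass time and 30 min of TDC-C); a minimal sketch of that assignment:

```python
def replicar_group(cpb_minutes, cross_clamp_minutes):
    """Assign the four groups described above from cardiopulmonary bypass (CPB)
    time and the CPB minus cross-clamp time difference (TDC-C). The 140-min and
    30-min thresholds follow the abstract; boundary handling is illustrative."""
    tdc_c = cpb_minutes - cross_clamp_minutes
    if cpb_minutes < 140:
        return 1 if tdc_c < 30 else 2
    return 3 if tdc_c < 30 else 4

print(replicar_group(cpb_minutes=120, cross_clamp_minutes=80))  # Group 2
```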

13.
Rev. panam. salud pública ; 48: e1, 2024. tab, graf
Article in Portuguese | LILACS-Express | LILACS | ID: biblio-1536669

ABSTRACT



ABSTRACT Objective. To perform a systematic review of scientific publications addressing the use of stratification methods to define risk areas for measles transmission. Method. Articles published in English, Portuguese, and Spanish in journals indexed in the SciELO, PubMed, and LILACS databases were selected. The search terms risk assessment AND measles were used without date limits. Editorials, opinion articles, individual-level observational studies, and publications that did not focus on the application of methods to stratify measles transmission risk areas were excluded. Year of publication, authorship, country where the study was performed, objective, geographic level of analysis, method used, indicators, and limitations were recorded in a data form. Results. Thirteen articles published between 2011 and 2022 in nine countries from the six World Health Organization (WHO) regions were selected. Of these, 10 referred to the Measles Risk Assessment Tool developed by the WHO/Centers for Disease Control and Prevention. Only one study adapted the tool to the local context. The risk stratification indicators used in the selected studies covered a combination of three dimensions: population immunity, quality of surveillance systems, and epidemiologic status. The systematic production of data with adequate coverage and quality stood out as a difficulty for risk stratification. Conclusion. Measles risk stratification strategies appear to have limited dissemination, especially at the local level. The need to train human resources to process and interpret risk analyses as part of the routine of surveillance services is emphasized.



14.
Rev. latinoam. enferm. (Online) ; 31: e3983, Jan.-Dec. 2023. tab, graf
Article in Spanish | LILACS, BDENF | ID: biblio-1515332

ABSTRACT



Objective: to map the instruments for assessing the risk of pressure ulcers in critically ill adults in intensive care units, and to identify the instruments' performance indicators and users' appraisals of their use and limitations. Method: a scoping review, reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews. Searches were carried out with the EBSCOhost search tool in 8 databases, yielding 1846 studies, of which 22 compose the sample. Results: we identified two broad groups of instruments: generalist [Braden, Braden (ALB), Emina, Norton-MI, RAPS, and Waterlow] and specific (CALCULATE, Cubbin & Jackson, EVARUCI, RAPS-ICU, Song & Choi, Suriaidi and Sanada, and the COMHON index). Regarding predictive value, EVARUCI and CALCULATE presented the best performance indicators. Concerning the appraisals and limitations indicated by users, the CALCULATE scale stands out, followed by EVARUCI and RAPS-ICU, although they still need future adjustments. Conclusion: the mapping of the literature showed that the evidence is sufficient to indicate one or more instruments for assessing the risk of pressure ulcers in critically ill adults in intensive care units.




Subjects
Humans, Adult, Risk Assessment/methods, Pressure Ulcer/diagnosis, Intensive Care Units
15.
Rev. latinoam. enferm. (Online) ; 31: e3977, Jan.-Dec. 2023. tab
Article in Spanish | LILACS, BDENF | ID: biblio-1515327

ABSTRACT



Objective: to evaluate the association of the risk classification categories and the Modified Early Warning Score with the outcomes of COVID-19 patients in the emergency service. Method: a cross-sectional study carried out with 372 patients hospitalized with a COVID-19 diagnosis and treated at the Reception and Risk Classification area of the Emergency Room. In this study, the patients' Modified Early Warning Score was categorized as without or with clinical deterioration (scores of 0 to 4 and 5 to 9, respectively). Acute respiratory failure, shock, and cardiopulmonary arrest were considered clinical deterioration. Results: the mean Modified Early Warning Score was 3.34. Regarding clinical deterioration, in 43% of the patients the time to deterioration was less than 24 hours, and 65.9% of the events occurred in the Emergency Room. The most frequent deterioration was acute respiratory failure (69.9%) and the most frequent outcome was hospital discharge (70.3%). Conclusion: COVID-19 patients with a Modified Early Warning Score > 4 were associated with the urgent, very urgent, and emergency risk classification categories, had more clinical deterioration, such as respiratory failure and shock, and more often died, which shows that the Risk Classification Protocol correctly prioritized patients whose lives were at risk.
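The dichotomization used in this study (Modified Early Warning Score 0-4 versus 5-9) can be expressed as a one-line rule; a minimal sketch:

```python
def mews_category(mews_total):
    """Dichotomize the Modified Early Warning Score as in the study above:
    0-4 = without clinical deterioration, 5-9 = with clinical deterioration."""
    if not 0 <= mews_total <= 9:
        raise ValueError("MEWS total expected in the 0-9 range")
    return "with deterioration" if mews_total >= 5 else "without deterioration"

print(mews_category(3))  # without deterioration
print(mews_category(6))  # with deterioration
```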




Subjects
Humans, Clinical Deterioration, Early Warning Score, COVID-19 Testing, COVID-19/diagnosis, Hospitals
16.
Radiol. bras ; 56(5): 229-234, Sept.-Oct. 2023. tab, graf
Article in English | LILACS-Express | LILACS | ID: biblio-1529319

ABSTRACT

Abstract Objective: To evaluate the results obtained with an artificial intelligence-based software for predicting the risk of malignancy in breast masses from ultrasound images. Materials and Methods: This was a retrospective, single-center study evaluating 555 breast masses submitted to percutaneous biopsy at a cancer referral center. Ultrasonographic findings were classified in accordance with the BI-RADS lexicon. The images were analyzed by using Koios DS Breast software and classified as benign, probably benign, low to intermediate suspicion, high suspicion, or probably malignant. The histological classification was considered the reference standard. Results: The mean age of the patients was 51 years, and the mean mass size was 16 mm. The radiologist evaluation had a sensitivity and specificity of 99.1% and 34.0%, respectively, compared with 98.2% and 39.0%, respectively, for the software evaluation. The positive predictive value for malignancy for the BI-RADS categories was similar between the radiologist and software evaluations. Two false-negative results were identified in the radiologist evaluation, the masses in question being classified as suspicious by the software, whereas four false-negative results were identified in the software evaluation, the masses in question being classified as suspicious by the radiologist. Conclusion: In our sample, the performance of artificial intelligence-based software was comparable to that of a radiologist.



17.
ARS med. (Santiago, En línea) ; 48(3): 48-61, 30 sept. 2023.
Article in Spanish | LILACS-Express | LILACS | ID: biblio-1512551

ABSTRACT



Chest pain is a common complaint in emergency departments. Its spectrum of presentations and differential diagnosis are broad, including pathologies associated with high morbidity and mortality, and it is the main symptom in patients with acute coronary syndrome. When acute coronary syndrome is suspected, an initial evaluation focused on stratifying each patient's risk of adverse events is mandatory in order to define the correct treatment and disposition. Objective: to present the elements of the initial evaluation of chest pain with suspected acute coronary syndrome and the clinical tools available for risk stratification to guide disposition from the emergency department. Method: a review of the literature on chest pain risk stratification was performed, looking for current evidence on the diagnostic tools most commonly used in emergency departments. Results: we present a literature review covering generalities of chest pain and its differential diagnoses, the elements to consider in the initial evaluation, and the clinical tools for risk stratification of patients with suspected acute coronary syndrome in the emergency department. Discussion and conclusions: the presentation of acute coronary syndrome is variable in the population. In the presence of atypical chest pain and/or a non-diagnostic electrocardiogram, we recommend using a validated score such as the HEART score / HEART Pathway to reduce the chance of inadequate risk stratification in the emergency department.
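For reference, the HEART score cited above sums five components (History, ECG, Age, Risk factors, Troponin), each scored 0-2, with totals of 0-3 conventionally treated as low risk. The sketch below is a rough illustration of that published scheme, not a clinical implementation:

```python
def heart_score(history_pts, ecg_pts, age_years, n_risk_factors,
                troponin_ratio, known_atherosclerosis=False):
    """Rough HEART score sketch (each component 0-2, total 0-10).
    history_pts and ecg_pts are clinician-assigned; troponin_ratio is the
    measured troponin divided by the assay's upper normal limit."""
    age_pts = 0 if age_years < 45 else (1 if age_years < 65 else 2)
    rf_pts = 2 if (known_atherosclerosis or n_risk_factors >= 3) else (
        1 if n_risk_factors >= 1 else 0)
    trop_pts = 0 if troponin_ratio <= 1 else (1 if troponin_ratio <= 3 else 2)
    total = history_pts + ecg_pts + age_pts + rf_pts + trop_pts
    band = "low" if total <= 3 else ("intermediate" if total <= 6 else "high")
    return total, band

print(heart_score(history_pts=1, ecg_pts=0, age_years=58,
                  n_risk_factors=2, troponin_ratio=0.8))  # (3, 'low')
```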

18.
Acta méd. peru ; 40(3)jul. 2023.
Article in Spanish | LILACS-Express | LILACS | ID: biblio-1527631

ABSTRACT



To establish the discriminative capacity of the Finnish risk score (FINDRISC) for dysglycemia in users of a family medicine unit located in a suburban area of the State of Guerrero, Mexico. We conducted a cross-sectional study from March to December 2021 in a Family Medicine Unit. With prior informed consent, we applied the Finnish risk score for the detection of dysglycemia to 200 people aged 20 to 60 years and obtained somatometric measurements and fasting plasma glucose values. We estimated sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios, and calculated the area under the curve (AUC) to estimate the discriminative capacity of the risk score, with fasting glucose as the reference test. We performed bivariate analysis to identify factors associated with dysglycemia, obtaining odds ratios (OR) and 95% confidence intervals (95%CI). Results: The occurrence of dysglycemia was 26.5% (53/200). The AUC of the ROC curve of the Finnish score for dysglycemia was 0.65 (95%CI 0.57-0.74). The factors associated with diabetes were age ≥40 years (OR 2.1; 95%CI 1.1-3.9), body mass index ≥25 kg/m² (OR 2.8; 95%CI 1.2-6.7), and arterial hypertension (OR 2.2; 95%CI 1.1-4.4). By AUC, the FINDRISC proved to be a poor tool for detecting people at risk of dysglycemia in the population served by this suburban medical unit.

19.
Rev. argent. cardiol ; 91(2): 109-116, jun. 2023. graf
Article in Spanish | LILACS-Express | LILACS | ID: biblio-1529588

ABSTRACT



ABSTRACT Background: Cardiovascular risk scores have limitations related to calibration, discrimination, and low sensitivity. Different "risk modulators" have been identified to improve cardiovascular risk stratification: carotid atherosclerotic plaque (CAP), coronary artery calcium (CAC) score, and lipoprotein(a) [Lp(a)]. Objectives: The aims of this study were: 1) to determine the prevalence of these risk modulators in a primary prevention population; 2) to determine the concordance between the two methods of detecting subclinical atherosclerosis; and 3) to establish what proportion of patients should receive statins according to the initial risk stratification and after being recategorized by screening for risk modulators. Methods: Individuals aged 18 to 79 years who consulted for cardiovascular risk assessment and who were not receiving lipid-lowering treatment were included. The risk score was calculated for each patient using the ASCVD Risk Estimator. The presence of CAP, the CAC score, and the Lp(a) level were evaluated. Results: The cohort was made up of 348 patients; mean age was 55.6 ± 12.2 years and 45.4% were men. In the total population, 29.8%, 36.8%, and 53.2% of patients showed an Lp(a) value ≥50 mg/dL, CAP, or a CAC score >0, respectively. The prevalence of CAP and of a CAC score >0 was progressively higher across cardiovascular risk categories; however, the proportion of low-risk subjects who had risk modulators was considerable (Lp(a) ≥50 mg/dL: 25.7%; CAP: 22%; CAC score >0: 33%). In the 60 subjects <45 years, the prevalence of a CAC score >0 and of CAP was 18.3% and 10%, respectively. The agreement between the two methods for quantifying subclinical atheromatosis was fair (kappa = 0.33). The indication for statin treatment increased by 31.6% after evaluating the presence of modulators. Conclusion: The presence of risk modulators was common in this primary prevention population, even in low-risk subjects and in those <45 years of age. Detection of risk modulators could improve initial stratification and lead to reconsideration of statin treatment.

20.
Rev. argent. cardiol ; 91(2): 138-143, jun. 2023. graf
Article in Spanish | LILACS-Express | LILACS | ID: biblio-1529591

ABSTRACT



ABSTRACT Background: European guidelines for pulmonary arterial hypertension (PAH) stratify risk using clinical characteristics and complementary studies, including the cardiopulmonary exercise test (CPET), from which three parameters are considered: peak O2 consumption (peak VO2), its percentage of the predicted value, and the minute ventilation/carbon dioxide production (VE/VCO2) slope. However, none of the models that validated this way of stratifying risk included the CPET among their variables. Objectives: To determine what proportion of patients with group I PAH considered to be at low risk and who walk >440 meters in the 6-minute walk test (6MWT) have CPET parameters considered to indicate moderate or high risk. Methods: Patients >18 years of age diagnosed with group I PAH at low risk of events, who walked >440 meters in the 6MWT and had an NT-proBNP value <300 pg/dL, were included. A CPET was performed in which peak VO2, its percentage of the predicted VO2, and the VE/VCO2 slope were recorded. The proportion of patients with these parameters in a higher-than-low risk stratum (peak VO2 ≤15 mL/kg/min, percentage of predicted VO2 ≤65%, and VE/VCO2 slope ≥36) was determined. Results: Eighteen patients were included. Despite being low-risk patients with a good functional class, all presented a peak VO2 below 85% of predicted, which indicates at least mild deterioration of functional capacity. A single patient (6%) had all three parameters in the low-risk range, 8 patients (44%) had at least one altered parameter, 7 patients (39%) had 2 altered parameters, and in 2 patients (11%) all parameters were altered. The parameters most frequently altered were the percentage of predicted peak VO2 and the VE/VCO2 slope, in 67% of the cases. Only 4 patients presented a peak VO2 <15 mL/kg/min. No patient had a peak VO2 or percentage of predicted VO2 in the high-risk category. However, 6 patients (33%) presented a VE/VCO2 slope considered high risk. Conclusion: The majority (94%) of the patients considered low risk who walked more than 440 meters in 6 minutes presented at least one CPET variable outside the low-risk profile. The VE/VCO2 slope and the percentage of predicted peak VO2 were the most frequently altered variables, and the VE/VCO2 slope was the only one that showed values considered high risk. CPET could have a place in the precision stratification of low-risk patients. The value of this finding should be evaluated in prospective studies, and it lays the groundwork for hypotheses regarding risk stratification and treatment intensity in patients who appear to be at low risk.
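The risk thresholds quoted in the abstract (peak VO2 ≤15 mL/kg/min, ≤65% of predicted, VE/VCO2 slope ≥36) translate into a simple flagging rule; a minimal sketch:

```python
def cpet_above_low_risk(peak_vo2_ml_kg_min, pct_predicted_vo2, ve_vco2_slope):
    """Flag CPET parameters outside the low-risk range using the cutoffs quoted
    in the abstract; returns the list of flagged parameters."""
    flags = []
    if peak_vo2_ml_kg_min <= 15:
        flags.append("peak VO2")
    if pct_predicted_vo2 <= 65:
        flags.append("% predicted VO2")
    if ve_vco2_slope >= 36:
        flags.append("VE/VCO2 slope")
    return flags

print(cpet_above_low_risk(16.5, 62, 38))  # ['% predicted VO2', 'VE/VCO2 slope']
```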
