1.
One Health; 15: 100439, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36277100

ABSTRACT

The complex, unpredictable nature of pathogen occurrence has required substantial efforts to accurately predict infectious diseases (IDs). With the rising popularity of Machine Learning (ML) and Deep Learning (DL) techniques, combined with their ability to uncover connections in large amounts of diverse data, we conducted a PRISMA systematic review to investigate advances in ID prediction for human and animal diseases using ML and DL. This review catalogued the types of IDs modeled, the ML and DL techniques utilized, geographical distribution, prediction tasks performed, input features utilized, spatial and temporal scales, error metrics used, computational efficiency, uncertainty quantification, and missing-data handling methods. Among 237 relevant articles published between January 2001 and May 2021, highly contagious diseases in humans were most often represented, including COVID-19 (37.1%), influenza/influenza-like illnesses (9.3%), dengue (8.9%), and malaria (5.1%). Of the 37 diseases identified, 51.4% were zoonotic, 37.8% were human-only, and 8.1% were animal-only, with only 1.6% economically significant, non-zoonotic livestock diseases. Despite the number of zoonoses, 86.5% of articles modeled humans, whereas only a few articles (5.1%) contained more than one host species. Eastern Asia (32.5%), North America (17.7%), and Southern Asia (13.1%) were the most represented locations. Frequent approaches included tree-based ML (38.4%) and feed-forward neural networks (26.6%). Articles predicted temporal incidence (66.7%), disease risk (38.0%), and/or spatial movement (31.2%). Fewer than 10% of studies addressed uncertainty quantification, computational efficiency, and missing data, which are essential to operational use and deployment. This study highlights trends and gaps in ML and DL for ID prediction, providing guidelines for future work to better support biopreparedness and response.
To fully utilize ML and DL for improved ID forecasting, models should include the full disease ecology in a One-Health context, important food and agricultural diseases, underrepresented hotspots, and important metrics required for operational deployment.

2.
Pathogens; 11(2), 2022 Jan 29.
Article in English | MEDLINE | ID: mdl-35215129

ABSTRACT

Accurate infectious disease forecasting can inform efforts to prevent outbreaks and mitigate adverse impacts. This study compares the performance of statistical, machine learning (ML), and deep learning (DL) approaches in forecasting infectious disease incidences across different countries and time intervals. We forecasted three diverse diseases: campylobacteriosis, typhoid, and Q-fever, using a wide variety of features (n = 46) from public datasets, e.g., landscape, climate, and socioeconomic factors. We compared autoregressive statistical models to two tree-based ML models (extreme gradient boosted trees [XGB] and random forest [RF]) and two DL models (multi-layer perceptron and encoder-decoder model). The disease models were trained on region-level data from seven different countries from 2009 through 2017. Forecasting performance of all models was assessed using mean absolute error, root mean square error, and Poisson deviance across Australia, Israel, and the United States for the months of January through August of 2018. The overall model results were compared across diseases as well as various data splits, including country, regions with the highest and lowest cases, and the number of months forecasted out (i.e., nowcasting, short-term, and long-term forecasting). Overall, the XGB models performed the best for all diseases and, in general, tree-based ML models performed the best across data splits. There were a few instances where the statistical or DL models had marginally smaller error metrics for specific subsets of typhoid, a disease with very low case counts. Feature importance per disease was measured using four tree-based ML models (i.e., XGB and RF, with and without region name as a feature). The most important feature groups included previous case counts, region name, population counts and density, causes of mortality from the neonatal period to under 5 years of age, sanitation factors, and elevation.
This study demonstrates the power of ML approaches to incorporate a wide range of factors to forecast various diseases, regardless of location, more accurately than traditional statistical approaches.
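The three error metrics named in the abstract (mean absolute error, root mean square error, and Poisson deviance) can be sketched in plain Python; the monthly case counts below are invented for illustration only, not data from the study:

```python
import math

def mae(y_true, y_pred):
    # Mean absolute error: average magnitude of forecast errors.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    # Root mean square error: penalizes large misses more than MAE does.
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def poisson_deviance(y_true, y_pred):
    # Mean Poisson deviance: suited to count data such as case incidence.
    # Uses the convention 0 * log(0) = 0; predictions must be > 0.
    total = 0.0
    for t, p in zip(y_true, y_pred):
        term = t * math.log(t / p) if t > 0 else 0.0
        total += 2 * (term - (t - p))
    return total / len(y_true)

observed = [12, 0, 7, 30, 5]            # hypothetical monthly case counts
forecast = [10.0, 1.0, 8.0, 25.0, 6.0]  # hypothetical model output
print(mae(observed, forecast))           # → 2.0
print(round(rmse(observed, forecast), 3))
print(round(poisson_deviance(observed, forecast), 3))
```

Poisson deviance is the natural choice of the three for incidence data, since it weights errors relative to the expected count rather than in absolute terms.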

3.
Blood Adv; 5(10): 2447-2455, 2021 May 25.
Article in English | MEDLINE | ID: mdl-33988700

ABSTRACT

Inadequate diagnostics compromise cancer care across lower- and middle-income countries (LMICs). We hypothesized that an inexpensive gene expression assay using paraffin-embedded biopsy specimens from LMICs could distinguish lymphoma subtypes without pathologist input. We reviewed all biopsy specimens obtained at the Instituto de Cancerología y Hospital Dr. Bernardo Del Valle in Guatemala City between 2006 and 2018 for suspicion of lymphoma. Diagnoses were established based on the World Health Organization classification and then binned into 9 categories: nonmalignant, aggressive B-cell, diffuse large B-cell, follicular, Hodgkin, mantle cell, marginal zone, natural killer/T-cell, or mature T-cell lymphoma. We established a chemical ligation probe-based assay (CLPA) that quantifies expression of 37 genes by capillary electrophoresis, with a reagent/consumable cost of approximately $10/sample. To assign bins based on gene expression, 13 models were evaluated as candidate base learners, and class probabilities from each model were then used as predictors in an extreme gradient boosting super learner. Cases with call probabilities < 60% were classified as indeterminate. Four (2%) of 194 biopsy specimens in storage < 3 years experienced assay failure. Diagnostic samples were divided into 70% (n = 397) training and 30% (n = 163) validation cohorts. Overall accuracy for the validation cohort was 86% (95% confidence interval [CI]: 80%-91%). After excluding 28 (17%) indeterminate calls, accuracy increased to 94% (95% CI: 89%-97%). Concordance was 97% for a set of high-probability calls (n = 37) assayed by CLPA in both the United States and Guatemala. Accuracy for a cohort of relapsed/refractory biopsy specimens (n = 39) was 79% overall and 88% after excluding indeterminate cases. Machine-learning analysis of gene expression accurately classifies paraffin-embedded lymphoma biopsy specimens and could transform diagnosis in LMICs.
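The abstract's indeterminate-call rule (accept the super learner's top class only when its call probability reaches 60%) can be sketched as follows; `assign_bin` and the example probabilities are illustrative names and values, not the study's actual code:

```python
INDETERMINATE_THRESHOLD = 0.60  # call-probability cutoff stated in the abstract

def assign_bin(class_probs):
    """Assign a diagnostic bin from meta-learner class probabilities.

    class_probs: dict mapping bin name -> probability (summing to ~1).
    Returns the winning bin, or "indeterminate" when the top call
    probability falls below the 60% threshold.
    """
    best_bin = max(class_probs, key=class_probs.get)
    if class_probs[best_bin] < INDETERMINATE_THRESHOLD:
        return "indeterminate"
    return best_bin

print(assign_bin({"diffuse large B-cell": 0.81, "follicular": 0.12,
                  "Hodgkin": 0.07}))                       # → diffuse large B-cell
print(assign_bin({"follicular": 0.45, "marginal zone": 0.40,
                  "nonmalignant": 0.15}))                  # → indeterminate
```

Abstaining below a confidence threshold is what lifts accuracy from 86% to 94% in the study: uncertain cases are routed out rather than misclassified.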


Subjects
Developing Countries, Peripheral T-Cell Lymphoma, Biopsy, Humans
4.
Pain Physician; 20(3): E437-E444, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28339444

ABSTRACT

BACKGROUND: Studies of radiofrequency ablation (RFA) of the genicular nerves have reported significant pain reduction up to 3 months post ablation, but longer-term effects have not been reported. We analyzed long-term pain relief after RFA of the genicular nerves in 31 patients to assess the degree of pain relief past 3 months, culminating at 6 months. STUDY DESIGN: Chart review and study design were approved by the Newark Health Sciences Institutional Review Board (IRB). Chart review and follow-up were performed on all patients who underwent genicular nerve RFA from February 2014 through August 2015. During this inclusion period, 41 genicular nerve RFAs were performed on 31 patients; 5 patients received the procedure in both knees. Patient follow-up was performed via telephone interview or in-office visit at least 3 months and 6 months post RFA. SETTINGS: Procedures were performed in Medical Special Procedures at University Hospital in Newark, NJ, and the Pain Management Center at Overlook Medical Arts Center in Summit, NJ. METHODS: Chart review and study design were approved by the Newark Health Sciences IRB. Chart review was performed from February 2014 through August 2015. Patient follow-up was conducted at 3 and at least 6 months post treatment to gauge the degree of pain relief (0 - none, 100% - complete), the current day's pain score, other treatment modalities tried before RFA, and the medications used. Patients were asked to quantify their satisfaction with procedure length, pre-procedure anxiety, and complications, and whether they would recommend this procedure to others. Primary and secondary goals were the duration of pain relief after RFA, the quality of pain relief, and the efficacy of our approach for RFA of the genicular nerves versus prior published techniques.
RESULTS: At 3-month follow-up, the average pain relief was 67% improvement from baseline knee pain (0% being no relief and 100% being complete relief), and the average 0 - 10 pain score was 2.9. At 6-month follow-up, of those who described pain relief at 3 months, 95% still described pain relief. This group's average pain relief was 64%, and the average 0 - 10 pain score for the day was 3.3. LIMITATIONS: Our study included a retrospective chart-review component followed by prospective follow-up; only 76% of patients were able to participate in the interview process. Furthermore, some patients suffered from other chronic pain ailments, most commonly chronic back pain, which at times disturbed the patient's ability to focus solely on knee pain. CONCLUSIONS: Based on patient interviews and data collection, RFA of the genicular nerves can supply, on average, greater than 60% pain relief in our patient population for as long as 6 months.
Key words: Osteoarthritis, knee osteoarthritis, chronic knee pain, radiofrequency ablation, nerve ablation, genicular nerves, long-term pain relief.


Subjects
Catheter Ablation, Knee Joint/innervation, Knee Osteoarthritis/surgery, Pain Management/methods, Adult, Aged, Aged 80 and Over, Female, Humans, Male, Middle Aged, Retrospective Studies
5.
J Perianesth Nurs; 31(5): 371-80, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27667343

ABSTRACT

BACKGROUND: The lack of a preoperative screening tool to detect obstructive sleep apnea (OSA) may lead to an increase in postoperative complications. AIM: The aim of the study was to implement a prescreening tool to identify diagnosed or undiagnosed OSA before a surgical procedure. SETTING: The study was conducted in the surgical admission center and postanesthesia care unit (PACU) at a military treatment facility in Hawaii. PARTICIPANTS: Participants included military personnel, military family members, veterans, and veteran beneficiaries. METHODS: The STOP-BANG (snore/tired/obstruction/pressure-body mass index/age/neck/gender) tool was used between April and June 2013 to identify and stratify 1,625 patients into low-risk, intermediate-risk, high-risk, and known-OSA categories. RESULTS: The STOP-BANG tool confirmed the diagnosed OSA rate to be 13.48% and increased at-risk OSA detection by 24.69%. Compared with other races and with the intermediate-risk and high-risk categories, Hawaiians/Pacific Islanders were more frequently found to be at risk or to have known OSA, to have complications, and to be transferred to the PACU for a 23-hour extended stay. CONCLUSION: The STOP-BANG tool identified and stratified surgical patients at risk for OSA and standardized OSA assessments.
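The STOP-BANG stratification the abstract describes can be sketched as a simple score over eight yes/no items. The cut-offs below (low 0-2, intermediate 3-4, high 5-8) are the commonly published STOP-BANG thresholds, not details given in the abstract, and the item names are illustrative:

```python
STOP_BANG_ITEMS = (
    "snoring", "tiredness", "observed_apnea", "high_blood_pressure",  # STOP
    "bmi_over_35", "age_over_50", "neck_over_40cm", "male_gender",    # BANG
)

def stratify(answers, known_osa=False):
    """Return a risk category from 8 yes/no STOP-BANG answers.

    answers: dict mapping each item in STOP_BANG_ITEMS to True/False.
    Patients with an existing OSA diagnosis bypass scoring entirely.
    """
    if known_osa:
        return "known OSA"
    score = sum(bool(answers[item]) for item in STOP_BANG_ITEMS)
    if score <= 2:
        return "low risk"
    if score <= 4:
        return "intermediate risk"
    return "high risk"

patient = dict.fromkeys(STOP_BANG_ITEMS, False)
patient.update(snoring=True, tiredness=True, age_over_50=True)  # score = 3
print(stratify(patient))  # → intermediate risk
```

A simple additive score like this is what makes the tool practical as a nurse-administered prescreen: no measurement beyond the eight questions is required.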


Subjects
Military Personnel, Preoperative Care, Obstructive Sleep Apnea/diagnosis, Adolescent, Adult, Aged, Female, Hawaii, Humans, Male, Middle Aged, Young Adult