Results 1 - 20 of 12,600
1.
Clin Oral Investig ; 28(6): 301, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38710794

ABSTRACT

OBJECTIVES: To undertake a cost-effectiveness analysis of restorative treatments for a first permanent molar with severe molar incisor hypomineralization from the perspective of the Brazilian public system. MATERIALS AND METHODS: Two models were constructed: a one-year decision tree and a ten-year Markov model, each simulating a hypothetical cohort of one thousand individuals via Monte Carlo simulation. Eight restorative strategies were evaluated: high-viscosity glass ionomer cement (HVGIC); encapsulated GIC; etch-and-rinse adhesive + composite; self-etch adhesive + composite; preformed stainless steel crown; HVGIC + etch-and-rinse adhesive + composite; HVGIC + self-etch adhesive + composite; and encapsulated GIC + etch-and-rinse adhesive + composite. Effectiveness data were sourced from the literature. Micro-costing was applied using 2022 USD market averages with a 5% variation. The incremental cost-effectiveness ratio (ICER), net monetary benefit (%NMB), and budgetary impact were obtained. RESULTS: Cost-effective treatments included HVGIC (%NMB = 0%/0%), encapsulated GIC (%NMB = 19.4%/19.7%), and encapsulated GIC + etch-and-rinse adhesive + composite (%NMB = 23.4%/24.5%) at 1 year and 10 years, respectively. The benefit gain of encapsulated GIC + etch-and-rinse adhesive + composite over encapsulated GIC was small relative to the cost increase at both 1 year (gain of 3.28% for an increase of USD 24.26) and 10 years (gain of 4% for an increase of USD 15.54). CONCLUSION: Within the horizon and perspective analyzed, the most cost-effective treatment was the encapsulated GIC restoration. CLINICAL RELEVANCE: This study provides information to support decision-making.
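
As a minimal illustration of the health-economic quantities reported here, the sketch below computes an ICER and a baseline-relative %NMB in Python. The costs, effectiveness values, and willingness-to-pay threshold are invented placeholders, not the study's inputs, and %NMB is computed under one plausible reading (change in NMB relative to the baseline strategy's NMB).

```python
# Hedged sketch: ICER and baseline-relative %NMB. All numbers are
# illustrative placeholders, not the study's actual inputs.
WTP = 100.0  # assumed willingness-to-pay per unit of effectiveness (USD)

strategies = {
    # name: (cost in USD, effectiveness) -- invented example values
    "HVGIC": (30.0, 0.70),
    "encapsulated GIC": (45.0, 0.85),
    "encapsulated GIC + adhesive + composite": (69.0, 0.88),
}

def nmb(cost, eff, wtp=WTP):
    """Net monetary benefit: effectiveness valued at WTP, minus cost."""
    return eff * wtp - cost

base_cost, base_eff = strategies["HVGIC"]
base_nmb = nmb(base_cost, base_eff)

for name, (cost, eff) in strategies.items():
    # %NMB relative to the baseline strategy (HVGIC = 0% by construction)
    pct_nmb = 100.0 * (nmb(cost, eff) - base_nmb) / abs(base_nmb)
    line = f"{name}: %NMB vs HVGIC = {pct_nmb:+.1f}%"
    if name != "HVGIC":
        icer = (cost - base_cost) / (eff - base_eff)  # USD per extra unit of effect
        line += f", ICER vs HVGIC = {icer:.2f} USD"
    print(line)
```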


Subject(s)
Cost-Benefit Analysis , Dental Enamel Hypoplasia , Dental Restoration, Permanent , Glass Ionomer Cements , Humans , Brazil , Dental Enamel Hypoplasia/therapy , Dental Restoration, Permanent/methods , Dental Restoration, Permanent/economics , Glass Ionomer Cements/therapeutic use , Decision Trees , Molar , Monte Carlo Method , Markov Chains , Molar Hypomineralization
2.
Sci Rep ; 14(1): 11496, 2024 05 20.
Article in English | MEDLINE | ID: mdl-38769444

ABSTRACT

According to the European Society of Cardiology, the number of patients with heart failure worldwide nearly doubled from 33.5 million in 1990 to 64.3 million in 2017, is projected to increase dramatically further in this decade, and remains a leading cause of morbidity and mortality. One of the most frequently applied heart failure classification systems is the New York Heart Association (NYHA) Functional Classification. Each NYHA class describes a patient's symptoms while performing physical activities, delivering a strong indicator of heart performance. In routine practice, the NYHA class is determined individually, based on the subjective assessment of the treating physician. Such a diagnosis can therefore suffer from bias, ultimately compromising a valid assessment. To tackle this issue, we take a machine learning approach and develop a decision tree, along with a set of decision rules, which can serve as an additional blinded-investigator tool for unbiased assessment. On a dataset containing 434 observations, supervised learning was initially employed to train a Decision Tree model. In a subsequent phase, ensemble learning techniques were used to develop both a Voting Classifier and a Random Forest model. The performance of all models was assessed using 10-fold cross-validation with stratification. The Decision Tree, Random Forest, and Voting Classifier models reported accuracies of 76.28%, 96.77%, and 99.54%, respectively. The Voting Classifier led in classifying NYHA I and III with 98.7% and 100% accuracy. Both Random Forest and Voting Classifier classified NYHA II flawlessly at 100%. For NYHA IV, Random Forest achieved a perfect score, while the Voting Classifier reached 90%. The Decision Tree was the least effective of the models tested. In our opinion, the results are satisfactory in terms of their supporting role in clinical practice. In particular, the use of a machine learning tool could reduce or even eliminate bias in the physician's assessment. Future research should consider testing other variables in different datasets to gain a better understanding of the significant factors affecting heart failure.
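
A minimal sketch of the modelling setup described above: a Decision Tree, a Random Forest, and a soft-voting ensemble scored with stratified 10-fold cross-validation in scikit-learn. Synthetic data stands in for the 434 clinical observations, and all hyperparameters are assumptions.

```python
# Hedged sketch of the ensemble setup: DT, RF, and a soft-voting
# ensemble under stratified 10-fold CV. Synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

# 434 mock observations, 4 classes standing in for NYHA I-IV
X, y = make_classification(n_samples=434, n_features=12, n_classes=4,
                           n_informative=8, random_state=0)

dt = DecisionTreeClassifier(random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0)
vote = VotingClassifier([("dt", dt), ("rf", rf)], voting="soft")

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, model in [("Decision Tree", dt), ("Random Forest", rf), ("Voting", vote)]:
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```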


Subject(s)
Decision Trees , Heart Failure , Machine Learning , Humans , Heart Failure/classification , Heart Failure/diagnosis , Male , Female , Aged
3.
Clin Respir J ; 18(5): e13769, 2024 May.
Article in English | MEDLINE | ID: mdl-38736274

ABSTRACT

BACKGROUND: Lung cancer is the leading cause of cancer-related death worldwide. This study aimed to establish novel multiclass prediction models based on machine learning (ML) to predict the probability of malignancy in pulmonary nodules (PNs) and to compare them with three published models. METHODS: Nine hundred fourteen patients with PNs were collected from four medical institutions (A, B, C and D), and their data were organized into tables containing clinical features, radiologic features and laboratory test features. Patients were divided into benign lesion (BL), precursor lesion (PL) and malignant lesion (ML) groups according to pathological diagnosis. Approximately 80% of patients in A (total/male: 632/269, age: 57.73 ± 11.06) were randomly selected as a training set; the remaining 20% were used as an internal test set; and the patients in B (total/male: 94/53, age: 60.04 ± 11.22), C (total/male: 94/47, age: 59.30 ± 9.86) and D (total/male: 94/61, age: 62.0 ± 11.09) were used as an external validation set. Logistic regression (LR), decision tree (DT), random forest (RF) and support vector machine (SVM) were used to establish the prediction models. Finally, the Mayo model, the Peking University People's Hospital (PKUPH) model and the Brock model were externally validated in our patients. RESULTS: The AUC values of the RF model for MLs, PLs and BLs were 0.80 (95% CI: 0.73-0.88), 0.90 (95% CI: 0.82-0.99) and 0.75 (95% CI: 0.67-0.88), respectively. The weighted average AUC value of the RF model for the external validation set was 0.71 (95% CI: 0.67-0.73), and its AUC values for MLs, PLs and BLs were 0.71 (95% CI: 0.68-0.79), 0.98 (95% CI: 0.88-1.07) and 0.68 (95% CI: 0.61-0.74), respectively. The AUC values of the Mayo model, the PKUPH model and the Brock model were 0.68 (95% CI: 0.62-0.74), 0.64 (95% CI: 0.58-0.70) and 0.57 (95% CI: 0.49-0.65), respectively. CONCLUSIONS: The RF model performed best, and its predictive performance was better than that of the three published models, which may provide a new noninvasive method for the risk assessment of PNs.
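
A hedged sketch of the three-class evaluation (BL/PL/ML) with per-class one-vs-rest AUCs and a weighted-average AUC, as reported for the RF model; synthetic features stand in for the clinical, radiologic, and laboratory data.

```python
# Hedged sketch: per-class one-vs-rest AUC and weighted AUC for an RF
# classifier on three classes. Synthetic stand-in data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=914, n_features=20, n_classes=3,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
proba = rf.predict_proba(X_te)

for cls, label in enumerate(["BL", "PL", "ML"]):
    auc = roc_auc_score((y_te == cls).astype(int), proba[:, cls])
    print(f"{label}: one-vs-rest AUC = {auc:.2f}")

# Support-weighted average mirrors the 'weighted average AUC' in the text
print("weighted AUC =",
      roc_auc_score(y_te, proba, multi_class="ovr", average="weighted"))
```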


Subject(s)
Lung Neoplasms , Machine Learning , Multiple Pulmonary Nodules , Aged , Female , Humans , Male , Middle Aged , Decision Trees , Lung Neoplasms/pathology , Lung Neoplasms/diagnosis , Lung Neoplasms/diagnostic imaging , Multiple Pulmonary Nodules/diagnostic imaging , Multiple Pulmonary Nodules/pathology , Multiple Pulmonary Nodules/diagnosis , Predictive Value of Tests , Retrospective Studies , ROC Curve , Solitary Pulmonary Nodule/diagnostic imaging , Solitary Pulmonary Nodule/pathology , Solitary Pulmonary Nodule/diagnosis , Support Vector Machine , Tomography, X-Ray Computed/methods
4.
BMC Oral Health ; 24(1): 534, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38724990

ABSTRACT

OBJECTIVES: The objectives of this study were to evaluate the cost-effectiveness and cost-benefit of fluoride varnish (FV) interventions for preventing caries in the first permanent molars (FPMs) among children in rural areas in Guangxi, China. METHODS: This study constituted a secondary analysis of data from a randomised controlled trial, analysed from a social perspective. A total of 1,335 children aged 6-8 years in remote rural areas of Guangxi were enrolled in this three-year follow-up controlled study. Children in the experimental group (EG) and the control group (CG) received oral health education and were provided with a toothbrush and toothpaste once every six months. Additionally, FV was applied in the EG. A decision tree model was developed, and single-factor and probabilistic sensitivity analyses were conducted. RESULTS: After three years of intervention, the prevalence of caries in the EG was 50.85%, with an average decayed, missing, and filled teeth (DMFT) index score of 1.12; in the CG it was 59.04%, with a DMFT index score of 1.36. The total cost of caries intervention and post-caries treatment was 42,719.55 USD for the EG and 46,622.13 USD for the CG. The incremental cost-effectiveness ratio (ICER) of the EG was 25.36 USD per caries prevented, and the cost-benefit ratio (CBR) was 1.74 USD of benefit per 1 USD of cost. The sensitivity analyses showed that the increase in the average DMFT index score was the variable with the largest effect on the ICER and CBR. CONCLUSIONS: Compared with oral health education alone, a comprehensive intervention combining FV application with oral health education is more cost-effective and beneficial for preventing caries in the FPMs of children living in economically disadvantaged rural areas. These findings could provide a basis for policy-making and clinical choices to improve children's oral health.
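
A toy sketch of the one-way sensitivity idea reported above: vary the caries-prevention effect and observe how the ICER and cost-benefit ratio respond. The total costs are taken from the abstract, but the effect size, its scaling, and the 50 USD value per caries averted are illustrative assumptions, so the outputs will not match the study's figures.

```python
# Hedged one-way sensitivity sketch. Only the two total costs come from
# the abstract; everything else is an illustrative assumption.
cost_eg, cost_cg = 42719.55, 46622.13              # total costs (USD), from the abstract
caries_prevented_base = (0.5904 - 0.5085) * 1335   # rough effect size, for illustration
VALUE_PER_CARIES = 50.0                            # assumed USD value per caries averted

for scale in (0.5, 1.0, 1.5):                      # vary the DMFT-driven effect
    prevented = caries_prevented_base * scale
    icer = (cost_eg - cost_cg) / prevented          # negative => EG cheaper and better
    benefit = (cost_cg - cost_eg) + prevented * VALUE_PER_CARIES
    cbr = benefit / cost_eg
    print(f"effect x{scale}: ICER = {icer:.2f} USD/caries prevented, CBR = {cbr:.2f}")
```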


Subject(s)
Cariostatic Agents , Cost-Benefit Analysis , DMF Index , Dental Caries , Fluorides, Topical , Humans , Dental Caries/prevention & control , Dental Caries/economics , China , Fluorides, Topical/therapeutic use , Fluorides, Topical/economics , Child , Cariostatic Agents/therapeutic use , Cariostatic Agents/economics , Male , Female , Health Education, Dental/economics , Toothbrushing/economics , Toothpastes/therapeutic use , Toothpastes/economics , Follow-Up Studies , Molar , Decision Trees
5.
PLoS One ; 19(5): e0302947, 2024.
Article in English | MEDLINE | ID: mdl-38728288

ABSTRACT

In recent years, researchers have proven the effectiveness and speed of machine learning-based cancer diagnosis models. However, it is difficult to explain the results generated by machine learning models, especially ones that use complex high-dimensional data like RNA sequencing data. In this study, we propose binarization as a novel way to treat RNA sequencing data and use it to construct explainable cancer prediction models. We tested our proposed data processing technique on five different models, namely neural network, random forest, XGBoost, support vector machine, and decision tree, using four cancer datasets collected from the National Cancer Institute Genomic Data Commons. Since our datasets are imbalanced, we evaluated the performance of all models using metrics designed for imbalanced data, such as geometric mean, Matthews correlation coefficient, F-measure, and area under the receiver operating characteristic curve. Our approach showed comparable performance while relying on fewer features. Additionally, we demonstrated that data binarization offers higher explainability by revealing how each feature affects the prediction. These results demonstrate the potential of the data binarization technique to improve the performance and explainability of RNA sequencing-based cancer prediction models.
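
A minimal sketch of the binarization idea, assuming a per-gene median threshold: expression values become 0/1 indicators before an interpretable tree is fitted. Mock Poisson counts replace the GDC RNA-seq data, and the median cutoff is an assumption rather than the paper's exact rule.

```python
# Hedged sketch: per-gene median binarization of an expression matrix,
# then an interpretable decision tree. Mock data throughout.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.poisson(lam=5.0, size=(200, 30)).astype(float)   # mock expression matrix
y = (X[:, 3] > np.median(X[:, 3])).astype(int)           # mock label tied to one gene

X_bin = (X > np.median(X, axis=0)).astype(int)           # per-gene binarization

X_tr, X_te, y_tr, y_te = train_test_split(X_bin, y, stratify=y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", tree.score(X_te, y_te))

# Binary splits now read directly as 'gene above its median or not'
print(export_text(tree, feature_names=[f"gene_{i}" for i in range(30)]))
```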


Subject(s)
Machine Learning , Neoplasms , Sequence Analysis, RNA , Humans , Neoplasms/genetics , Sequence Analysis, RNA/methods , Neural Networks, Computer , Support Vector Machine , ROC Curve , Decision Trees
6.
J Rehabil Med ; 56: jrm35095, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38712968

ABSTRACT

OBJECTIVE: This study aimed to investigate the functional factors that predict the acquisition of basic activities of daily living during the early stages of stroke rehabilitation, using classification and regression tree (CART) analysis. METHODS: The clinical data of 289 stroke patients who underwent rehabilitation during hospitalization (164 males; mean age: 62.2 ± 13.9 years) were retrospectively collected and analysed. The follow-up period between admission and discharge was approximately 6 weeks. Medical records, including demographic characteristics and various functional assessments with item scores, were extracted. The modified Barthel Index on discharge served as the target outcome for analysis. A "good outcome" was defined as a modified Barthel Index score ≥ 75 on discharge, while a modified Barthel Index score < 75 was classified as a "poor outcome". RESULTS: Two CART models were developed. The first model, predicting activities of daily living outcomes from early motor functions, achieved an accuracy of 92.4%. Among patients with a "good outcome", 70.9% exhibited (i) ≥ 4 points in the "sitting-to-standing" category of the motor assessment scale and (ii) ≥ 32 points on the Berg Balance Scale. The second model, predicting activities of daily living outcomes from early cognitive functions, achieved an accuracy of 82.7%. Within the "poor outcome" group, 52.2% had (i) ≤ 21 points in the "visuomotor organization" category of the Lowenstein Occupational Therapy Cognitive Assessment and (ii) ≤ 1 point in the "time orientation" category of the Mini Mental State Examination. CONCLUSION: The ability to perform "sitting-to-standing" and visuomotor organization functions at the beginning of rehabilitation emerged as the most significant predictors of achieving successful basic activities of daily living on discharge after stroke.
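
The two reported rule sets can be applied prospectively as explicit predicates, sketched below. The thresholds come from the abstract; the function and argument names are illustrative, and reading the Berg Balance cutoff as "≥ 32" is an assumption.

```python
# Hedged sketch: the abstract's two decision rules as explicit predicates.
# Function/argument names are illustrative; the '>= 32' Berg reading is assumed.

def predict_good_outcome(sit_to_stand: float, berg_balance: float) -> bool:
    """Motor-function rule: MAS sitting-to-standing >= 4 and
    Berg Balance Scale >= 32 predicted a good outcome (MBI >= 75)."""
    return sit_to_stand >= 4 and berg_balance >= 32

def predict_poor_outcome(visuomotor: float, time_orientation: float) -> bool:
    """Cognitive-function rule: LOTCA visuomotor organization <= 21 and
    MMSE time orientation <= 1 predicted a poor outcome (MBI < 75)."""
    return visuomotor <= 21 and time_orientation <= 1

print(predict_good_outcome(sit_to_stand=5, berg_balance=40))    # True
print(predict_poor_outcome(visuomotor=18, time_orientation=1))  # True
```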


Subject(s)
Activities of Daily Living , Decision Trees , Stroke Rehabilitation , Humans , Stroke Rehabilitation/methods , Male , Female , Middle Aged , Aged , Retrospective Studies , Stroke/physiopathology , Recovery of Function/physiology , Disability Evaluation , Treatment Outcome , Independent Living
7.
PLoS One ; 19(5): e0302882, 2024.
Article in English | MEDLINE | ID: mdl-38718059

ABSTRACT

Winter wheat is one of the most important crops in the world. Obtaining its planting area in a timely and accurate manner is of great significance for formulating agricultural policies. Due to the limited resolution of single SAR data and the susceptibility of single optical data to weather conditions, it is difficult to accurately obtain the planting area of winter wheat using SAR or optical data alone. To solve the problem of low extraction accuracy when using only optical or SAR images, a decision tree classification method combining time-series SAR backscattering features and the NDVI (Normalized Difference Vegetation Index) was constructed in this paper. The synergistic use of SAR and optical data compensates for their respective shortcomings. First, winter wheat was distinguished from other vegetation by NDVI at the maturity stage, and then extracted using the SAR backscattering features. This approach facilitates the semi-automated extraction of winter wheat. Taking Yucheng City of Shandong Province as the study area, nine Sentinel-1 images and one Sentinel-2 image were taken as the data sources, and the spatial distribution of winter wheat in 2022 was obtained. The results indicate that the overall accuracy (OA) and kappa coefficient (Kappa) of the proposed method are 96.10% and 0.94, respectively. Compared with supervised classification of a multi-temporal composite pseudocolor image and of a single Sentinel-2 image using a Support Vector Machine (SVM) classifier, the OA is improved by 10.69% and 5.66%, respectively. Compared with decision tree classification using only SAR features, the producer accuracy (PA) and user accuracy (UA) for extracting winter wheat are improved by 3.08% and 8.25%, respectively. The proposed method is rapid and accurate, and provides a new technical approach for extracting winter wheat.
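
A hedged sketch of the two-stage decision rule: an NDVI test at the maturity stage separates wheat-like vegetation, and a SAR backscatter test confirms it. The band arrays and both thresholds are illustrative placeholders, not the paper's calibrated values.

```python
# Hedged sketch of the NDVI + SAR two-stage rule. Mock band arrays;
# both thresholds are assumptions, not the paper's calibrated values.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red + 1e-9)

rng = np.random.default_rng(0)
nir, red = rng.random((2, 100, 100))          # mock Sentinel-2 bands
vv_db = rng.uniform(-20, -5, (100, 100))      # mock Sentinel-1 VV backscatter (dB)

veg = ndvi(nir, red) > 0.4                    # assumed maturity-stage NDVI threshold
wheat = veg & (vv_db > -13.0)                 # assumed backscatter threshold
print("winter-wheat pixels:", int(wheat.sum()))
```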


Subject(s)
Decision Trees , Seasons , Triticum , Triticum/growth & development , China , Crops, Agricultural/growth & development
8.
Clin Biomech (Bristol, Avon) ; 115: 106262, 2024 May.
Article in English | MEDLINE | ID: mdl-38744224

ABSTRACT

BACKGROUND: Falls among the elderly are a major societal problem. While observations of medium-distance walking using inertial sensors have identified potential fall predictors, classifying individuals at risk based on single gait cycles remains elusive. This challenge stems from individual variability and step-to-step fluctuations, which make accurate classification difficult. METHODS: We recruited 44 participants, equally divided into high and low fall-risk groups. A smartphone secured over the second sacral spinous process recorded data during indoor walking. Features were extracted at each gait cycle from a 6-dimensional time series (tri-axial angular velocity and tri-axial acceleration) and classified using the gradient-boosted decision tree algorithm. FINDINGS: Mean accuracy across five-fold cross-validation was 0.936. "Age" was the most influential individual feature, while features related to acceleration in the gait direction held the highest total relative importance when aggregated by axis (0.5365). INTERPRETATION: Combining acceleration data, angular velocity data, and the gradient-boosted decision tree algorithm enabled accurate fall-risk classification in the elderly, previously challenging due to a lack of discernible features. We report the first identification of three-dimensional pelvic motion characteristics during single gait cycles in the high-risk group. This method, requiring only one gait cycle, is valuable for individuals with physical limitations that hinder repetitive or long-distance walking, and for use in spaces with limited walking area. Additionally, using readily available smartphones instead of dedicated equipment has the potential to improve the accessibility of gait analysis.
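
A minimal sketch of the per-gait-cycle pipeline, assuming summary statistics per IMU channel as features: a gradient-boosted decision tree is cross-validated, then importances are summed per channel, mirroring the axis-level aggregation in the findings. Data and feature names are synthetic stand-ins.

```python
# Hedged sketch: GBDT on per-cycle IMU features, with importances
# aggregated per channel. Synthetic data and feature names.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
channels = ["acc_x", "acc_y", "acc_z", "gyr_x", "gyr_y", "gyr_z"]
feats = [f"{c}_{stat}" for c in channels for stat in ("mean", "std", "range")]
X = rng.normal(size=(440, len(feats)))        # mock per-gait-cycle features
y = rng.integers(0, 2, size=440)              # high vs low fall risk (mock)

gbdt = GradientBoostingClassifier(random_state=0)
print("5-fold accuracy:", cross_val_score(gbdt, X, y, cv=5).mean())

gbdt.fit(X, y)
for c in channels:                            # total relative importance per channel
    total = sum(imp for f, imp in zip(feats, gbdt.feature_importances_)
                if f.startswith(c))
    print(c, round(total, 3))
```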


Subject(s)
Accidental Falls , Gait , Machine Learning , Humans , Accidental Falls/prevention & control , Aged , Gait/physiology , Female , Male , Algorithms , Walking/physiology , Acceleration , Risk Assessment/methods , Accelerometry/methods , Smartphone , Aged, 80 and over , Biomechanical Phenomena , Decision Trees , Middle Aged
9.
Ann Epidemiol ; 94: 81-90, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38710239

ABSTRACT

PURPOSE: Identifying predictors of opioid overdose following release from prison is critical for opioid overdose prevention. METHODS: We leveraged an individually linked, state-wide database from 2015-2020 to predict the risk of opioid overdose within 90 days of release from Massachusetts state prisons. We developed two decision tree modeling schemes: a model fit on all individuals, with a single weight for those who experienced an opioid overdose, and models stratified by race/ethnicity. We compared the performance of each model using several performance measures and identified the factors most predictive of opioid overdose within racial/ethnic groups and across models. RESULTS: Out of 44,246 prison releases in Massachusetts between 2015-2020, 2,237 (5.1%) resulted in opioid overdose in the 90 days following release. The performance of the two predictive models varied. The single-weight model had high sensitivity (79%) and low specificity (56%) for predicting opioid overdose and was more sensitive for White non-Hispanic individuals (sensitivity = 84%) than for racial/ethnic minority individuals. CONCLUSIONS: Stratified models had better-balanced performance metrics for both White non-Hispanic and racial/ethnic minority groups and identified different predictors of overdose between racial/ethnic groups. Across racial/ethnic groups and models, involuntary commitment (involuntary treatment for alcohol/substance use disorder) was an important predictor of opioid overdose.
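
A hedged sketch of the single-weight scheme and the per-group evaluation: one tree fit on everyone with up-weighted overdose cases, then sensitivity and specificity computed within each racial/ethnic group. The data, the 10x weight, and the group coding are assumptions.

```python
# Hedged sketch: a single weighted tree plus per-group sensitivity and
# specificity. Synthetic data; the 10x case weight is an assumption.
import numpy as np
from sklearn.metrics import confusion_matrix
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 8))
y = rng.binomial(1, 0.05, size=n)             # ~5% overdose within 90 days (mock)
group = rng.integers(0, 2, size=n)            # 0/1 group coding (mock)

weights = np.where(y == 1, 10.0, 1.0)         # single up-weight for overdose cases
tree = DecisionTreeClassifier(max_depth=4, random_state=0)
tree.fit(X, y, sample_weight=weights)

for g in (0, 1):
    m = group == g
    tn, fp, fn, tp = confusion_matrix(y[m], tree.predict(X[m]),
                                      labels=[0, 1]).ravel()
    print(f"group {g}: sensitivity={tp/(tp+fn):.2f}, specificity={tn/(tn+fp):.2f}")
```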


Subject(s)
Decision Trees , Opiate Overdose , Humans , Male , Opiate Overdose/epidemiology , Adult , Female , Massachusetts/epidemiology , Opioid-Related Disorders/epidemiology , Opioid-Related Disorders/ethnology , Prisoners/statistics & numerical data , Prisons/statistics & numerical data , Middle Aged , Analgesics, Opioid/poisoning , Analgesics, Opioid/adverse effects , Ethnicity/statistics & numerical data , Young Adult
10.
Int J Med Inform ; 187: 105468, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38703744

ABSTRACT

PURPOSE: Our research aims to compare the predictive performance of decision tree (DT) and logistic regression (LR) models and to develop a post-thrombotic syndrome (PTS) risk stratification tool. METHODS: We retrospectively collected and analyzed the case records of 618 patients diagnosed with deep vein thrombosis (DVT) from January 2012 to December 2021 in three tertiary hospitals in Jiangxi Province as the modeling group. The records of 212 patients diagnosed with DVT from January 2022 to January 2023 in two tertiary hospitals in Hubei Province and Guangdong Province served as the validation group. We extracted electronic medical record information including general patient data, medical history, laboratory test indicators, and treatment data for analysis. We established DT and LR models and compared their predictive performance using receiver operating characteristic (ROC) curves and confusion matrices. Internal and external validations were conducted. Additionally, we used LR to generate a nomogram, calibration curves, and decision curve analysis (DCA) to assess its predictive accuracy. RESULTS: Both the DT and LR models indicate that Year, Residence, Cancer, Varicose Vein Operation History, DM, and Chronic VTE are risk factors for PTS occurrence. In internal validation, DT outperforms LR (0.962 vs 0.925, z = 3.379, P < 0.001). However, in external validation, there is no significant difference in the area under the ROC curve between the two models (0.963 vs 0.949, z = 0.412, P = 0.680). The calibration curves and DCA demonstrate that LR exhibits good predictive accuracy and clinical effectiveness. A web-based nomogram calculator (https://sunxiaoxuan.shinyapps.io/dynnomapp/) was built to visualize the logistic regression model. CONCLUSIONS: The combination of decision tree and logistic regression models, along with the web-based nomogram calculator, can assist healthcare professionals in accurately assessing the risk of PTS in individual patients with lower-limb DVT.
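
The decision curve analysis mentioned above can be computed directly from the standard net-benefit formula, NB = TP/N - FP/N * pt/(1 - pt), sketched here on synthetic data with an assumed logistic model.

```python
# Hedged DCA sketch: net benefit across threshold probabilities for a
# logistic model. Synthetic stand-in data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=618, n_features=6, random_state=0)
p = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]

n = len(y)
for pt in (0.1, 0.2, 0.3, 0.4):
    pred = p >= pt
    tp = np.sum(pred & (y == 1))
    fp = np.sum(pred & (y == 0))
    nb = tp / n - fp / n * pt / (1 - pt)      # standard net-benefit formula
    print(f"threshold {pt:.1f}: net benefit = {nb:.3f}")
```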


Subject(s)
Postthrombotic Syndrome , Venous Thrombosis , Humans , Venous Thrombosis/diagnosis , Postthrombotic Syndrome/diagnosis , Postthrombotic Syndrome/etiology , Female , Male , Middle Aged , Risk Assessment/methods , Retrospective Studies , Lower Extremity/blood supply , Risk Factors , Logistic Models , Adult , Decision Trees , Aged , ROC Curve , Algorithms , Nomograms
11.
Oral Oncol ; 153: 106834, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38718458

ABSTRACT

OBJECTIVES: To meet the demand for personalized treatment, effective stratification of patients with metastatic nasopharyngeal carcinoma (mNPC) is essential. Our study therefore aimed to establish an M1 subdivision for prognostic prediction and treatment planning in patients with mNPC. MATERIALS AND METHODS: This study included 1,239 patients with mNPC from three medical centers, divided into a synchronous mNPC cohort (smNPC, n = 556) used to establish the M1 subdivision and a metachronous mNPC cohort (mmNPC, n = 683) used to validate it. The primary endpoint was overall survival (OS). Univariate and multivariate Cox analyses identified covariates for the decision-tree model, which proposed the M1 subdivision. Model performance was evaluated using time-dependent receiver operating characteristic curves, Harrell's concordance index, calibration plots, and decision curve analyses. RESULTS: The proposed M1 subdivisions were M1a (≤5 metastatic lesions), M1b (>5 metastatic lesions without liver metastases), and M1c (>5 metastatic lesions with liver metastases), with median OS of 34, 22, and 13 months, respectively (p < 0.001). This M1 subdivision demonstrated discrimination (C-index = 0.698; 3-year AUC = 0.707) and clinical utility superior to existing staging systems. Calibration curves showed satisfactory agreement between predictions and observations. Internal validation and validation in the mmNPC cohort confirmed the subdivision's robustness. Survival benefits from local treatment of metastases were observed in M1a, while immunotherapy improved survival in patients with M1b and M1c disease. CONCLUSION: This novel M1 staging strategy provides a refined approach for prognostic prediction and treatment planning in patients with mNPC, emphasizing the potential benefits of local and immunotherapeutic interventions based on individualized risk stratification.
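
The proposed subdivision reduces to a transparent rule, sketched below with the cutoffs taken directly from the abstract; the function name and signature are illustrative.

```python
# Hedged sketch: the proposed M1 subdivision as an explicit rule.
# Cutoffs are from the abstract; naming is illustrative.

def m1_substage(n_metastatic_lesions: int, liver_metastases: bool) -> str:
    if n_metastatic_lesions <= 5:
        return "M1a"                                  # median OS 34 months
    return "M1c" if liver_metastases else "M1b"       # 13 vs 22 months

print(m1_substage(3, False))   # M1a
print(m1_substage(8, True))    # M1c
```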


Subject(s)
Decision Trees , Nasopharyngeal Carcinoma , Humans , Male , Female , Middle Aged , Nasopharyngeal Carcinoma/pathology , Nasopharyngeal Carcinoma/mortality , Nasopharyngeal Carcinoma/therapy , Retrospective Studies , Adult , Neoplasm Staging , Nasopharyngeal Neoplasms/pathology , Nasopharyngeal Neoplasms/therapy , Nasopharyngeal Neoplasms/mortality , Prognosis , Aged
12.
Sci Rep ; 14(1): 11128, 2024 05 15.
Article in English | MEDLINE | ID: mdl-38750112

ABSTRACT

This study compared distributed learning models with centralized and local models, assessing their efficacy in predicting specific delivery and patient-related outcomes in obstetrics using real-world data. The predictions focus on key moments in the obstetric care process, including discharge and various stages of hospitalization. Our analysis, using six different machine learning methods (Decision Trees, Bayesian methods, Stochastic Gradient Descent, K-nearest neighbors, AdaBoost, and Multi-layer Perceptron) and 19 different variables with various distributions and types, revealed that distributed models were at least equal, and often superior, to their centralized and local versions. We also describe the preprocessing stage thoroughly to help others implement this method in real-world scenarios. The preprocessing steps included cleaning the data, harmonizing and handling missing values, and encoding categorical variables with multisite logic. Although the type of machine learning model and the distribution of the outcome variable can affect the result, the distributed models were superior to their centralized and local counterparts in 66% of cases, rising to 77% against the centralized version with AdaBoost. Our experiments also shed light on the preprocessing steps required to implement distributed models in a real-world scenario. Our results advocate for distributed learning as a promising tool for applying machine learning in clinical settings, particularly when privacy and data security are paramount, thus offering a robust solution for privacy-concerned clinical applications.
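
One way to realize such a distributed setup, sketched under the assumption that only fitted models (not raw records) leave each site: per-site AdaBoost models whose predicted probabilities are averaged. This illustrates the idea rather than reproducing the paper's exact pipeline.

```python
# Hedged sketch: per-site models combined by soft voting, so no raw
# data is pooled. Synthetic site data; not the paper's exact pipeline.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
sites = []
for seed in range(3):                                # three hospitals (mock)
    Xs = rng.normal(loc=seed * 0.1, size=(300, 10))  # mild site shift
    ys = (Xs[:, 0] + rng.normal(size=300) > 0).astype(int)
    sites.append((Xs, ys))

local_models = [AdaBoostClassifier(random_state=0).fit(Xs, ys)
                for Xs, ys in sites]

def distributed_predict(x):
    """Average the per-site predicted probabilities (models travel, data stays)."""
    probs = np.mean([m.predict_proba(x) for m in local_models], axis=0)
    return probs.argmax(axis=1)

X_test = rng.normal(size=(100, 10))
y_test = (X_test[:, 0] > 0).astype(int)
print("distributed accuracy:", (distributed_predict(X_test) == y_test).mean())
```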


Subject(s)
Machine Learning , Obstetrics , Humans , Female , Pregnancy , Bayes Theorem , Decision Trees
13.
Sci Rep ; 14(1): 10445, 2024 05 07.
Article in English | MEDLINE | ID: mdl-38714774

ABSTRACT

Conventional endoscopy is widely used in the diagnosis of early gastric cancers (EGCs), but its visual features are loosely defined and dependent on the endoscopist's experience. We aimed to establish a more accurate model for predicting the infiltration depth of early gastric cancer that incorporates a standardized colorimetric system, with promising clinical implications. A retrospective study of 718 EGC cases was performed. Clinical and pathological characteristics were included, and the Commission Internationale de l'Eclairage (CIE) standard colorimetric system was used to evaluate the chromaticity of lesions. The prediction models were established in the derivation set using multivariate backward stepwise logistic regression, a decision tree model, and a random forest model. Logistic regression shows that location, macroscopic type, length, marked margin elevation, WLI color difference, and histological type are significantly and independently associated with infiltration depth. In the decision tree model, margin elevation, location in the lower third of the stomach, WLI a* color value, b* color value, and abnormal thickness on enhanced CT were selected, achieving an AUROC of 0.810. A random forest model was established, presenting the importance of each feature, with an accuracy of 0.80 and an AUROC of 0.844. Quantified color metrics can improve diagnostic precision regarding the invasion depth of EGC. We developed a nomogram model using logistic regression; machine learning algorithms were also explored and proved helpful in the decision-making process.
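
A minimal sketch of the chromaticity feature in CIE L*a*b* space: the white-light-imaging colour difference between lesion and background mucosa summarized as the CIE76 ΔE (Euclidean distance). The colour coordinates below are invented examples.

```python
# Hedged sketch: CIE76 colour difference (Delta E) in L*a*b* space.
# Coordinates are invented examples, not study data.
import math

def delta_e(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in L*a*b* space."""
    return math.dist(lab1, lab2)

lesion = (52.0, 18.5, 21.0)        # (L*, a*, b*) of a lesion under WLI (mock)
background = (58.0, 14.0, 17.5)    # surrounding mucosa (mock)
print(f"WLI colour difference Delta E = {delta_e(lesion, background):.2f}")
```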


Subject(s)
Machine Learning , Neoplasm Invasiveness , Stomach Neoplasms , Stomach Neoplasms/pathology , Stomach Neoplasms/diagnosis , Humans , Male , Female , Middle Aged , Retrospective Studies , Aged , Color , Gastric Mucosa/pathology , Gastric Mucosa/diagnostic imaging , Early Detection of Cancer/methods , Logistic Models , Gastroscopy/methods , Decision Trees
14.
BMC Geriatr ; 24(1): 405, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38714934

ABSTRACT

BACKGROUND: Cognitive dysfunction is one of the leading causes of disability and dependence in older adults and is a major economic burden on the public health system. The aim of this study was to investigate the risk factors for cognitive dysfunction and their predictive value in older adults in Northwest China. METHODS: A cross-sectional study was conducted using a multistage sampling method. The questionnaires were distributed through the Elderly Disability Monitoring Platform to older adults aged 60 years and above in Northwest China, who were divided into cognitive dysfunction and normal cognitive function groups. In addition to univariate analyses, logistic regression and decision tree modelling were used to construct a model to identify factors that can predict the occurrence of cognitive dysfunction in older adults. RESULTS: A total of 12,494 valid questionnaires were collected, including 2,617 from participants in the cognitive dysfunction group and 9,877 from participants in the normal cognitive function group. Univariate analysis revealed that ethnicity, BMI, age, educational attainment, marital status, type of residence, residency status, current work status, main economic source, type of chronic disease, long-term use of medication, alcohol consumption, participation in social activities, exercise status, social support, total scores on the Balanced Test Assessment, total scores on the Gait Speed Assessment, and activities of daily living (ADL) were significantly different between the two groups (all P < 0.05). According to logistic regression analyses, ethnicity, BMI, educational attainment, marital status, residency, main source of income, chronic diseases, annual medical examination, alcohol consumption, exercise status, total scores on the Balanced Test Assessment, and activities of daily living (ADLs) were found to influence cognitive dysfunction in older adults (all P < 0.05). In the decision tree model, the ability to perform activities of daily living was the root node, followed by total scores on the Balanced Test Assessment, marital status, educational attainment, age, annual medical examination, and ethnicity. CONCLUSIONS: Traditional risk factors (including BMI, literacy, and alcohol consumption) and potentially modifiable risk factors (including balance function, ability to care for oneself in daily life, and widowhood) have a significant impact on the risk of cognitive dysfunction in older adults in Northwest China. Decision tree models can help health care workers better assess cognitive function in older adults and develop personalized interventions. Further research could provide deeper insight into the mechanisms of cognitive dysfunction and open new avenues for prevention and intervention.


Subject(s)
Decision Trees , Humans , Male , Female , China/epidemiology , Aged , Cross-Sectional Studies , Middle Aged , Aged, 80 and over , Logistic Models , Risk Factors , Cognition Disorders/epidemiology , Cognition Disorders/psychology , Cognition Disorders/diagnosis , Cognitive Dysfunction/epidemiology , Cognitive Dysfunction/diagnosis , Cognitive Dysfunction/psychology , Surveys and Questionnaires , Activities of Daily Living
15.
Orthod Fr ; 95(1): 19-33, 2024 05 03.
Article in French | MEDLINE | ID: mdl-38699915

ABSTRACT

Introduction: Common temporomandibular disorders (TMD) involve the masticatory muscles, the temporomandibular joints, and/or their associated structures. Clinical manifestations vary, including sounds (clicking, crepitus), pain, and/or dyskinesias, most often corresponding to a limitation of mandibular movements. Signs or symptoms of muscular or joint disorders of the masticatory system may be present before the initiation of orthodontic treatment, emerge during treatment, or worsen to the point of calling the treatment into question. How, then, should one screen for common TMD in the context of orthodontic treatment? Materials and Methods: The main elements of the interview and clinical examination for screening common TMD in the context of orthodontic treatment are clarified and illustrated with photographs. The use of complementary examinations is also detailed. Results: A clinical screening form for common TMD is proposed, together with a concise decision tree to aid in TMD screening. Conclusion: In the context of orthodontic treatment, the screening examination for common TMD includes gathering information (interview), a clinical evaluation, and possibly complementary investigations. The orthodontist is supported in this approach by a dedicated clinical form and a concise decision tree for TMD screening. Systematically screening for common TMD before initiating orthodontic treatment allows the orthodontist to propose additional diagnostic measures, implement appropriate therapeutic interventions, and/or refer to a specialist in the field if necessary.


Subject(s)
Temporomandibular Joint Disorders , Humans , Temporomandibular Joint Disorders/diagnosis , Temporomandibular Joint Disorders/therapy , Orthodontics/methods , Physical Examination/methods , Mass Screening/methods , Decision Trees
16.
Zhonghua Wei Zhong Bing Ji Jiu Yi Xue ; 36(4): 345-352, 2024 Apr.
Article in Chinese | MEDLINE | ID: mdl-38813626

ABSTRACT

OBJECTIVE: To construct and validate the best model for predicting the 28-day risk of death in patients with septic shock, based on different supervised machine learning algorithms. METHODS: Patients with septic shock meeting the Sepsis-3 criteria were selected from the Medical Information Mart for Intensive Care-IV v2.0 (MIMIC-IV v2.0). Following random allocation, 70% of these patients were used as the training set and 30% as the validation set. Relevant predictive variables were extracted covering three aspects: demographic characteristics and basic vital signs; serum indicators within 24 hours of intensive care unit (ICU) admission and complications possibly affecting those indicators; and functional scoring and advanced life support. The predictive efficacy of models constructed using five mainstream machine learning algorithms, namely the classification and regression tree (CART), random forest (RF), support vector machine (SVM), logistic regression (LR), and a super learner [SL; combining CART, RF and extreme gradient boosting (XGBoost)], for 28-day death in patients with septic shock was compared, and the best algorithm models were selected. The optimal predictive variables were determined by intersecting the results from LASSO regression, RF, and XGBoost, and a predictive model was constructed. The predictive efficacy of the model was validated with receiver operating characteristic (ROC) curves, its accuracy was assessed using calibration curves, and its practicality was verified through decision curve analysis (DCA). RESULTS: A total of 3,295 patients with septic shock were included, of whom 2,164 survived and 1,131 died within 28 days, a mortality of 34.32%. Of these, 2,307 were in the training set (792 deaths within 28 days, a mortality of 34.33%) and 988 in the validation set (339 deaths within 28 days, a mortality of 34.31%). Five machine learning models were established from the training set data. After including variables from all three aspects, the areas under the ROC curve (AUC) of the RF, SVM, and LR models for predicting 28-day death in the validation set were 0.823 [95% confidence interval (95% CI): 0.795-0.849], 0.823 (95% CI: 0.796-0.849), and 0.810 (95% CI: 0.782-0.838), respectively, higher than those of the CART model (AUC = 0.750, 95% CI: 0.717-0.782) and the SL model (AUC = 0.756, 95% CI: 0.724-0.789). These three were therefore determined to be the best algorithm models. After integrating variables from the three aspects, 16 optimal predictive variables were identified through the intersection of LASSO regression, RF, and XGBoost: the highest pH value, the highest albumin (Alb), the highest body temperature, the lowest lactic acid (Lac), the highest Lac, the highest serum creatinine (SCr), the highest Ca2+, the lowest hemoglobin (Hb), the lowest white blood cell count (WBC), age, simplified acute physiology score III (SAPS III), the highest WBC, acute physiology score III (APS III), the lowest Na+, body mass index (BMI), and the shortest activated partial thromboplastin time (APTT), all within 24 hours of ICU admission. ROC curve analysis showed that the logistic regression model constructed with the above 16 optimal predictive variables was the best predictive model, with an AUC of 0.806 (95% CI: 0.778-0.835) in the validation set. The calibration curve and DCA showed that this model had high accuracy and a maximum net benefit of 0.3, significantly outperforming traditional models based on a single functional score [APS III, SAPS III, and sequential organ failure assessment (SOFA) scores], whose AUCs (95% CI) were 0.746 (0.715-0.778), 0.765 (0.734-0.796), and 0.625 (0.589-0.661), respectively. CONCLUSIONS: The logistic regression model constructed using the 16 optimal predictive variables, including pH value, Alb, body temperature, Lac, SCr, Ca2+, Hb, WBC, SAPS III score, APS III score, Na+, BMI, and APTT, is the best predictive model for the 28-day risk of death in patients with septic shock. Its performance is stable, with high discriminative ability and accuracy.
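
A hedged sketch of the variable-selection step: keep only the features that LASSO, a random forest, and a gradient-boosted model all rank highly, then fit logistic regression on the intersection. Synthetic data; scikit-learn's GradientBoostingClassifier stands in for XGBoost, and fitting LassoCV to the 0/1 outcome is a common shortcut rather than the paper's exact procedure.

```python
# Hedged sketch: intersect LASSO, RF, and gradient-boosting feature
# rankings, then fit logistic regression on the selected subset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LassoCV, LogisticRegression

X, y = make_classification(n_samples=800, n_features=30, n_informative=6,
                           random_state=0)

lasso = LassoCV(cv=5, random_state=0).fit(X, y)      # shortcut: Lasso on 0/1 labels
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
gb = GradientBoostingClassifier(random_state=0).fit(X, y)  # XGBoost stand-in

def top(scores, k=16):
    """Indices of the k highest-magnitude scores."""
    return set(np.argsort(np.abs(scores))[-k:])

selected = top(lasso.coef_) & top(rf.feature_importances_) & top(gb.feature_importances_)
print("selected features:", sorted(selected))

cols = sorted(selected)
logit = LogisticRegression(max_iter=1000).fit(X[:, cols], y)
print("training accuracy:", logit.score(X[:, cols], y))
```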


Subject(s)
Algorithms , Shock, Septic , Supervised Machine Learning , Support Vector Machine , Humans , Shock, Septic/mortality , Shock, Septic/diagnosis , Female , Prognosis , Intensive Care Units , Male , Middle Aged , Machine Learning , Decision Trees
17.
Article in German | MEDLINE | ID: mdl-38701797

ABSTRACT

OBJECTIVE: Four parameters of a decision tree for selective dry cow treatment (SDCT), examined in a previous study, were analyzed regarding their efficacy in identifying cows for dry cow treatment (DCT, the use of intramammary antimicrobials). This study set out to review whether all parameters (somatic cell count [SCC ≥ 200,000 SC/ml in the 3 monthly milk yield recordings prior to dry-off (DO)], clinical mastitis history during lactation [≥ 1 CM], culturing [14 d prior to DO, detection of major pathogens] and California Mastitis Test [CMT, > grade 1/+ at DO]) are necessary for accurate decision making, whether there are possible alternatives to replace culturing, and whether a simplified model could replace the decision tree. MATERIAL AND METHODS: Records of 18 Bavarian dairy farms from June 2015 to August 2017 were processed. Data analysis was carried out by means of descriptive statistics as well as a binary cost-sensitive classification tree and logit models. For the statistical analyses, the outcomes of the full 4-parameter decision tree were taken as ground truth. RESULTS: 848 drying-off procedures in 739 dairy cows (CDO) were included. SCC and CMT selected 88.1%, and in combination with CM 95.6%, of the cows that received DCT (n = 494). Without culturing, 22 CDO (4.4%) infected with major pathogens (8x Staphylococcus [S.] aureus) would have been misclassified as not needing DCT. The average geometric mean SCC (within 100 d prior to DO) was < 100,000 SC/ml milk for CDO with negative culture results, 100,000-150,000 SC/ml for CDO infected with minor pathogens, and ≥ 150,000 SC/ml for CDO infected with major pathogens (excluding S. aureus). Using SCC during lactation (at least once > 200,000 SC/ml) and a positive CMT to select CDO for DCT, contrary to the decision tree, 37 CDO (4.4%) would have been treated "incorrectly without" and 43 CDO (5.1%) "unnecessarily with" DCT. Modifications were identified, such as SCC < 131,000 SC/ml within 100 d prior to DO for detecting CDO with no growth or minor pathogens in culturing. The best model for classifying CDO for or against DCT (CDO without CM and SCC < 200,000 SC/ml [last 3 months prior to DO]) had metrics of AUC = 0.74, accuracy = 0.778, balanced accuracy = 0.63, sensitivity = 0.92 and specificity = 0.33. CONCLUSIONS: Combining the decision tree's parameters SCC, CMT and CM yields suitable selection criteria under the conditions of this study. When omitting culturing, lower SCC thresholds should be considered for each farm individually to select CDO for DCT. Nonetheless, even the most accurate simplified model could not replace the full decision tree.


Subject(s)
Dairying , Decision Trees , Mastitis, Bovine , Animals , Cattle , Female , Mastitis, Bovine/microbiology , Mastitis, Bovine/diagnosis , Dairying/methods , Germany , Milk/cytology , Milk/microbiology , Lactation/physiology
18.
Sensors (Basel) ; 24(10)2024 May 17.
Article in English | MEDLINE | ID: mdl-38794052

ABSTRACT

Recently, explainability in machine and deep learning has become an important area of research and interest, owing both to the increasing use of artificial intelligence (AI) methods and to the need to understand the decisions models make. Explainable artificial intelligence (XAI) is driven by a growing awareness of, among other things, data mining, error elimination, and the learning performance of various AI algorithms. Moreover, XAI makes the decisions models reach more transparent as well as more effective. In this study, 'glass box' models (including Decision Tree) and 'black box' models (including Random Forest) were proposed to understand the identification of selected types of currant powders. These models were trained and assessed with indicators such as accuracy, precision, recall, and F1-score, and their behavior was visualized using Local Interpretable Model-Agnostic Explanations (LIME) to predict the effectiveness of identifying specific types of blackcurrant powders based on texture descriptors such as entropy, contrast, correlation, dissimilarity, and homogeneity. Bagging (Bagging_100), Decision Tree (DT0), and Random Forest (RF7_gini) proved to be the most effective models for currant powder interpretability. For Bagging_100, accuracy, precision, recall, and F1-score all reached approximately 0.979; DT0 reached 0.968, 0.972, 0.968, and 0.969, and RF7_gini reached 0.963, 0.964, 0.963, and 0.963, respectively. These models achieved classifier performance measures of greater than 96%. In the future, model-agnostic XAI can be an important additional tool for analyzing data, including food-product data, even online.
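
A minimal sketch of a LIME explanation for a single prediction of a 'black box' classifier, assuming the lime package is installed (pip install lime) and using synthetic texture-descriptor features in place of the real powder data.

```python
# Hedged LIME sketch: explain one prediction of a Random Forest using
# the lime package. Synthetic texture-descriptor features.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

feature_names = ["entropy", "contrast", "correlation", "dissimilarity", "homogeneity"]
X, y = make_classification(n_samples=300, n_features=5, n_informative=4,
                           n_redundant=0, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["powder A", "powder B"],
                                 mode="classification")
exp = explainer.explain_instance(X[0], rf.predict_proba, num_features=5)
print(exp.as_list())   # (feature condition, weight) pairs for this sample
```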


Subject(s)
Algorithms , Artificial Intelligence , Machine Learning , Powders , Ribes , Powders/chemistry , Ribes/chemistry , Decision Trees
19.
J Int AIDS Soc ; 27(5): e26275, 2024 May.
Article in English | MEDLINE | ID: mdl-38801731

ABSTRACT

INTRODUCTION: In 2018, the Mozambique Ministry of Health launched guidelines for implementing differentiated service delivery models (DSDMs) to optimize HIV service delivery, improve retention in care, and ultimately reduce HIV-associated mortality. The models were fast-track, three-month antiretroviral dispensing, community antiretroviral therapy groups, adherence clubs, the family approach, and three one-stop-shop models: adolescent-friendly health services, maternal and child health, and tuberculosis. We conducted a cost-effectiveness analysis and a budget impact analysis comparing these models to conventional services. METHODS: We constructed a decision tree model based on the percentage of enrolment in each model and the probability of the outcome (12-month retention in treatment) for each year of the study period: three years for the cost-effectiveness analysis (2019-2021) and three for the budget impact analysis (2022-2024). Costs for these analyses were primarily estimated per client-year from the health system perspective. A secondary cost-effectiveness analysis was conducted from the societal perspective. Budget impact analysis costs included antiretrovirals, laboratory tests, and service provision interactions. The cost-effectiveness analysis additionally included start-up, training, and clients' opportunity costs. Effectiveness was estimated using an uncontrolled interrupted time series analysis comparing the outcome before and after the implementation of the differentiated models. A one-way sensitivity analysis was conducted to identify drivers of uncertainty. RESULTS: After implementation of the DSDMs, there was a mean increase of 14.9 percentage points (95% CI: 12.2-17.8) in 12-month retention, from 47.6% (95% CI: 44.9-50.2) to 62.5% (95% CI: 60.9-64.1). The mean cost difference between DSDMs and conventional care was US$ -6 million (173,391,277 vs. 179,461,668) from the health system perspective and US$ -32.5 million (394,705,618 vs. 433,232,289) from the societal perspective; DSDMs therefore dominated conventional care. Results were most sensitive to conventional-care interaction costs in the one-way sensitivity analysis. For a population of 1.5 million, the base-case 3-year financial cost associated with the DSDMs was US$550 million, compared with US$564 million for conventional care. CONCLUSIONS: DSDMs were less expensive and more effective in retaining clients 12 months after antiretroviral therapy initiation and were estimated to save approximately US$14 million for the health system from 2022 to 2024.


Subject(s)
Cost-Benefit Analysis , HIV Infections , Mozambique , Humans , HIV Infections/drug therapy , HIV Infections/economics , Delivery of Health Care/economics , Female , Anti-HIV Agents/therapeutic use , Anti-HIV Agents/economics , Decision Trees , Adolescent , Male
20.
PLoS Comput Biol ; 20(4): e1011985, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38626220

ABSTRACT

Animal psychophysics can generate rich behavioral datasets, often comprising many thousands of trials for an individual subject. Gradient-boosted models are a promising machine learning approach for analyzing such data, partly because of the tools that allow users to gain insight into how the model makes predictions. We trained ferrets to report a target word's presence, timing, and lateralization within a stream of consecutively presented non-target words. To assess the animals' ability to generalize across pitch, we manipulated the fundamental frequency (F0) of the speech stimuli across trials, and to assess the contribution of pitch to streaming, we roved the F0 from word token to token. We then implemented gradient-boosted regression and decision trees on the trial outcome and reaction time data to understand the behavioral factors behind the ferrets' decision-making. We visualized model contributions using SHAP feature importance and partial dependence plots. While ferrets could accurately perform the task across all pitch-shifted conditions, our models reveal subtle effects of shifting F0 on performance, with within-trial pitch shifting elevating false alarms and extending reaction times. Our models identified a subset of non-target words to which animals commonly false alarmed. Follow-up analysis demonstrated that the spectrotemporal similarity of target and non-target words, rather than similarity in duration or amplitude waveform, was the strongest predictor of the likelihood of false alarming. Finally, we compared the results with those obtained with traditional mixed-effects models, finding equivalent or better performance for the gradient-boosted models.
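
A hedged sketch of the SHAP step: TreeExplainer attributions for a gradient-boosted classifier of trial outcomes, with mean |SHAP| per feature standing in for a feature-importance plot. The feature names and data are invented, and the shap package is assumed installed (pip install shap).

```python
# Hedged SHAP sketch: per-feature attributions for a gradient-boosted
# classifier. Invented feature names; synthetic stand-in data.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

feature_names = ["target_F0_shift", "within_trial_rove", "word_duration",
                 "trial_number", "side"]
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           n_redundant=0, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)          # per-trial feature attributions

# Mean |SHAP| per feature mirrors a feature-importance bar plot
for name, v in zip(feature_names, np.abs(shap_values).mean(axis=0)):
    print(f"{name}: {v:.3f}")
```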


Subject(s)
Decision Trees , Ferrets , Animals , Computational Biology , Acoustic Stimulation , Auditory Perception/physiology , Behavior, Animal/physiology , Reaction Time/physiology , Male , Machine Learning , Female , Decision Making/physiology , Speech Perception/physiology