Results 1 - 11 of 11
1.
Eur J Dent ; 2024 May 14.
Article in English | MEDLINE | ID: mdl-38744326

ABSTRACT

OBJECTIVE: The 5-year survival rate is a key predictor for assessing oral cancer prognosis. The purpose of this study is to analyze oral cancer data to discover and rank the prognostic factors associated with oral cancer 5-year survival using the association rule mining (ARM) technique. MATERIALS AND METHODS: This study is a retrospective analysis of 897 oral cancer patients from a regional cancer center between 2011 and 2017. The 5-year survival rate was assessed. Multivariable Cox proportional hazards analysis was performed to determine prognostic factors. ARM was applied to clinicopathologic and treatment-modality data to identify and rank the prognostic factors associated with oral cancer 5-year survival. RESULTS: The 5-year overall survival rate was 35.1%. Multivariable Cox proportional hazards analysis showed that tumor (T) stage, lymph node metastasis, surgical margin, extranodal extension, recurrence, and distant metastasis were significantly associated with the overall survival rate (p < 0.05). The top-ranked rules associated with death within 5 years were positive extranodal extension, positive perineural invasion, and positive lymphovascular invasion, with confidence levels of 0.808, 0.808, and 0.804, respectively. CONCLUSION: This study has shown that extranodal extension, perineural invasion, and lymphovascular invasion were the top-ranking and most lethal prognostic factors affecting the 5-year survival of oral cancer.
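
The study applies association rule mining (ARM) to rank prognostic factors. As a minimal illustration of the support/confidence calculation that underlies such rules (not the Apriori algorithm or the authors' pipeline), the sketch below scores single-antecedent rules whose consequent is death within 5 years; the indicator columns and patient records are hypothetical placeholders.

```python
# Minimal association-rule sketch: support and confidence for single-antecedent
# rules whose consequent is death within 5 years. The indicator columns and
# records are hypothetical placeholders, not the study's dataset or pipeline.
import pandas as pd

records = pd.DataFrame(
    {
        "extranodal_extension":    [1, 0, 1, 0, 1, 0],
        "perineural_invasion":     [1, 0, 1, 1, 0, 0],
        "lymphovascular_invasion": [0, 0, 1, 1, 1, 0],
        "death_within_5y":         [1, 0, 1, 1, 1, 0],
    }
)

consequent = "death_within_5y"
rules = []
for antecedent in records.columns.drop(consequent):
    both_support = ((records[antecedent] == 1) & (records[consequent] == 1)).mean()
    antecedent_support = (records[antecedent] == 1).mean()
    rules.append(
        {
            "rule": f"{antecedent} -> {consequent}",
            "support": both_support,
            "confidence": both_support / antecedent_support,  # P(death | factor present)
        }
    )

print(pd.DataFrame(rules).sort_values("confidence", ascending=False))
```
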

2.
BMC Oral Health ; 24(1): 519, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38698358

ABSTRACT

BACKGROUND: Oral cancer is a deadly disease and a major cause of morbidity and mortality worldwide. The purpose of this study was to develop a fuzzy deep learning (FDL)-based model to estimate survival time from clinicopathologic data of oral cancer. METHODS: Electronic medical records of 581 oral squamous cell carcinoma (OSCC) patients, treated with surgery with or without radiochemotherapy, were collected retrospectively from the Oral and Maxillofacial Surgery Clinic and the Regional Cancer Center from 2011 to 2019. The deep learning (DL) model was trained to classify survival time classes based on clinicopathologic data. Fuzzy logic was integrated into the DL model and trained to create FDL-based models to estimate the survival time classes. RESULTS: The performance of the models was evaluated on a test dataset. The DL and FDL models estimated survival time with an accuracy of 0.74 and 0.97 and an area under the receiver operating characteristic curve (AUC) of 0.84-1.00 and 1.00, respectively. CONCLUSIONS: Integrating fuzzy logic into DL models could improve the accuracy of survival time estimation based on clinicopathologic data of oral cancer.


Subject(s)
Deep Learning , Fuzzy Logic , Mouth Neoplasms , Humans , Mouth Neoplasms/pathology , Mouth Neoplasms/mortality , Retrospective Studies , Female , Male , Middle Aged , Carcinoma, Squamous Cell/pathology , Carcinoma, Squamous Cell/mortality , Carcinoma, Squamous Cell/therapy , Survival Analysis , Aged , Survival Rate , Adult
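
The abstract does not specify the fuzzy deep learning architecture. As a rough, hedged illustration of one way to combine fuzzy logic with a neural classifier, the sketch below fuzzifies a numeric feature with triangular membership functions and feeds the memberships to a small PyTorch network that outputs survival-time classes; the membership breakpoints, feature choice, class count, and layer sizes are assumptions, not the authors' model.

```python
# Hedged sketch: fuzzified tabular features feeding a small classifier.
# Membership breakpoints, feature names, and network sizes are assumptions,
# not the architecture reported in the paper.
import torch
import torch.nn as nn

def triangular(x: torch.Tensor, a: float, b: float, c: float) -> torch.Tensor:
    """Triangular fuzzy membership with feet at a, c and peak at b."""
    left = (x - a) / (b - a + 1e-9)
    right = (c - x) / (c - b + 1e-9)
    return torch.clamp(torch.minimum(left, right), 0.0, 1.0)

def fuzzify_age(age: torch.Tensor) -> torch.Tensor:
    """Map age (years) to [young, middle-aged, elderly] memberships."""
    return torch.stack(
        [triangular(age, 0, 30, 50), triangular(age, 40, 55, 70), triangular(age, 60, 80, 120)],
        dim=-1,
    )

classifier = nn.Sequential(                        # tiny MLP over fuzzified inputs
    nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 3)  # 3 survival-time classes (assumed)
)

ages = torch.tensor([35.0, 58.0, 72.0])
logits = classifier(fuzzify_age(ages))
print(logits.softmax(dim=-1))
```
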
3.
BMC Oral Health ; 24(1): 212, 2024 Feb 10.
Article in English | MEDLINE | ID: mdl-38341571

ABSTRACT

BACKGROUND: Oral cancer is a life-threatening malignancy that affects the survival rate and quality of life of patients. The aim of this systematic review was to review deep learning (DL) studies on the diagnosis and prognostic prediction of oral cancer. METHODS: This systematic review was conducted following the PRISMA guidelines. Databases (Medline via PubMed, Google Scholar, Scopus) were searched for relevant studies published from January 2000 to June 2023. RESULTS: Fifty-four studies qualified for inclusion, comprising diagnostic (n = 51) and prognostic prediction (n = 3) studies. Thirteen studies showed a low risk of bias in all domains, and 40 studies showed low concern regarding applicability. The reported performance of the DL models was an accuracy of 85.0-100%, an F1-score of 79.31-89.0%, a Dice coefficient of 76.0-96.3%, and a concordance index of 0.78-0.95 for classification, object detection, segmentation, and prognostic prediction, respectively. The pooled diagnostic odds ratio was 2549.08 (95% CI 410.77-4687.39) for the classification studies. CONCLUSIONS: The number of DL studies in oral cancer is increasing, with a diverse range of architectures. The reported accuracy shows promising DL performance in oral cancer studies and suggests potential utility for improving informed clinical decision-making in oral cancer.


Subject(s)
Deep Learning , Mouth Neoplasms , Humans , Quality of Life , Mouth Neoplasms/diagnosis , Clinical Decision-Making , Databases, Factual
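
For readers unfamiliar with the pooled diagnostic odds ratio reported above, the sketch below shows how a single study's DOR and its log-scale 95% confidence interval are computed from a 2x2 confusion table; the counts are hypothetical and do not come from any included study.

```python
# Diagnostic odds ratio (DOR) with a log-scale 95% CI for one 2x2 table.
# The counts below are hypothetical, not taken from any included study.
import math

tp, fp, fn, tn = 90, 5, 10, 95  # illustrative confusion-matrix counts

dor = (tp * tn) / (fp * fn)
se_log_dor = math.sqrt(1 / tp + 1 / fp + 1 / fn + 1 / tn)
ci_low = math.exp(math.log(dor) - 1.96 * se_log_dor)
ci_high = math.exp(math.log(dor) + 1.96 * se_log_dor)
print(f"DOR = {dor:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```
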
4.
Stud Health Technol Inform ; 310: 1495-1496, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269713

ABSTRACT

Temporomandibular joint (TMJ) disorders can be misinterpreted owing to various normal TMJ features, leading to treatment failure. This study assessed two deep learning algorithms, DenseNet-121 and InceptionV3, for multi-class classification of TMJ normal variations and disorders in 1,710 panoramic radiographs. The overall accuracies of DenseNet-121 and InceptionV3 were 0.99 and 0.95, respectively. The AUC ranged from 0.99 to 1.00, indicating high performance for TMJ disorder classification in panoramic radiographs.


Subject(s)
Deep Learning , Temporomandibular Joint Disorders , Humans , Algorithms , Temporomandibular Joint Disorders/diagnostic imaging
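
A common way to build the kind of multi-class radiograph classifier described above is to re-head a pretrained DenseNet-121 from torchvision. The sketch below shows that pattern with an assumed class count and a dummy training step; it is not the study's actual configuration.

```python
# Hedged sketch: re-heading a pretrained DenseNet-121 for multi-class
# classification of panoramic radiographs. The number of classes and the
# training-loop details are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # assumed: normal variations plus several disorder classes

model = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 224x224 RGB images
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"dummy-batch loss: {loss.item():.4f}")
```
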
5.
Stud Health Technol Inform ; 310: 1497-1498, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269714

ABSTRACT

This study deployed deep learning-based object detection algorithms to detect midfacial fractures in computed tomography (CT) images. Object detection models were created using faster R-CNN and RetinaNet from 2,000 CT images. The best detection model, faster R-CNN, yielded an average precision of 0.79 and an area under the curve (AUC) of 0.80. In conclusion, the faster R-CNN model has good potential for detecting midfacial fractures in CT images.


Subject(s)
Deep Learning , Fractures, Bone , Humans , Algorithms , Area Under Curve
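
The sketch below shows generic bounding-box inference with torchvision's Faster R-CNN, the detector family used above. It relies on COCO-pretrained weights and a random tensor as stand-ins, because the authors' fracture-trained model and CT data are not available.

```python
# Hedged sketch: bounding-box inference with torchvision's Faster R-CNN.
# COCO-pretrained weights and a random tensor stand in for the authors'
# fracture-trained model and preprocessed CT slices.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

image = torch.rand(3, 512, 512)           # placeholder for a preprocessed CT slice
with torch.no_grad():
    prediction = model([image])[0]        # list of images in, list of dicts out

keep = prediction["scores"] > 0.5         # simple confidence threshold
print(prediction["boxes"][keep], prediction["labels"][keep])
```
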
6.
Sci Rep ; 13(1): 3434, 2023 03 01.
Article in English | MEDLINE | ID: mdl-36859660

ABSTRACT

The purpose of this study was to evaluate the performance of convolutional neural network-based models for the detection and classification of maxillofacial fractures in computed tomography (CT) maxillofacial bone window images. A total of 3407 CT images, 2407 of which contained maxillofacial fractures, were retrospectively obtained from the regional trauma center from 2016 to 2020. Multiclass image classification models were created by using DenseNet-169 and ResNet-152. Multiclass object detection models were created by using faster R-CNN and YOLOv5. DenseNet-169 and ResNet-152 were trained to classify maxillofacial fractures into frontal, midface, mandibular and no fracture classes. Faster R-CNN and YOLOv5 were trained to automate the placement of bounding boxes to specifically detect fracture lines in each fracture class. The performance of each model was evaluated on an independent test dataset. The overall accuracy of the best multiclass classification model, DenseNet-169, was 0.70. The mean average precision of the best multiclass detection model, faster R-CNN, was 0.78. In conclusion, DenseNet-169 and faster R-CNN have potential for the detection and classification of maxillofacial fractures in CT images.


Subject(s)
Fractures, Bone , Humans , Retrospective Studies , Face , Neural Networks, Computer , Tomography, X-Ray Computed
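
To train a detector on custom fracture classes such as those above, a standard torchvision pattern is to swap Faster R-CNN's box-predictor head for one with the desired number of classes. The sketch below illustrates that pattern under the assumption of three fracture classes plus background; it is not the authors' code.

```python
# Hedged sketch: adapting Faster R-CNN to custom fracture classes by swapping
# the box-predictor head (standard torchvision pattern). The class count is an
# assumption (frontal, midface, mandibular + background), not the paper's code.
from torchvision.models.detection import fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 4  # 3 assumed fracture classes + background

model = fasterrcnn_resnet50_fpn(weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# The model can now be fine-tuned on (image, {"boxes", "labels"}) targets.
print(model.roi_heads.box_predictor)
```
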
7.
BMC Med Res Methodol ; 22(1): 281, 2022 11 01.
Article in English | MEDLINE | ID: mdl-36316659

ABSTRACT

BACKGROUND: The aim of this study was to evaluate the most effective combination of the autoregressive integrated moving average (ARIMA) time series model and association rule mining (ARM) techniques to identify meaningful prognostic factors and predict the number of cases for efficient COVID-19 crisis management. METHODS: The 3685 COVID-19 patients admitted to Thailand's first university field hospital across the four waves of infections from March 2020 to August 2021 were analyzed using the autoregressive integrated moving average (ARIMA), its extension with exogenous variables (ARIMAX), and association rule mining (ARM). RESULTS: The ARIMA (2, 2, 2) model with an optimized parameter set predicted the number of COVID-19 cases admitted to the hospital with acceptable error scores (R2 = 0.5695, RMSE = 29.7605, MAE = 27.5102). Key features from ARM (symptoms, age, and underlying diseases) were selected to build an ARIMAX (1, 1, 1) model, which yielded better performance in predicting the number of admitted cases (R2 = 0.5695, RMSE = 27.7508, MAE = 23.4642). The association analysis revealed that hospital stays of more than 14 days were related to healthcare-worker patients and to patients who presented with underlying diseases. Worsening cases that required referral to a hospital ward were associated with patients admitted with symptoms, pregnancy, metabolic syndrome, or age greater than 65 years. CONCLUSIONS: This study demonstrated that the ARIMAX model has the potential to predict the number of COVID-19 cases by incorporating the most strongly associated prognostic factors, identified by the ARM technique, into the ARIMA model, which could be used for preparation and optimal management of hospital resources during pandemics.


Subject(s)
COVID-19 , Humans , Aged , COVID-19/epidemiology , Time Factors , Models, Statistical , Pandemics , Forecasting , Data Mining
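
The ARIMA(2, 2, 2) and ARIMAX(1, 1, 1) models described above can be reproduced in outline with statsmodels' SARIMAX class. The sketch below uses a synthetic admission series and a synthetic exogenous regressor in place of the hospital data.

```python
# Hedged sketch: ARIMA and ARIMAX with statsmodels. The daily admission series
# and exogenous counts below are synthetic placeholders, not the hospital data.
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(0)
admissions = np.cumsum(rng.poisson(5, size=120)).astype(float)   # synthetic cumulative cases
symptomatic = rng.poisson(3, size=120).astype(float)             # synthetic exogenous regressor

# ARIMA(2, 2, 2) on the series alone
arima = SARIMAX(admissions, order=(2, 2, 2)).fit(disp=False)

# ARIMAX(1, 1, 1) with an exogenous variable (selected via ARM in the paper)
arimax = SARIMAX(admissions, exog=symptomatic, order=(1, 1, 1)).fit(disp=False)

future_exog = rng.poisson(3, size=7).astype(float).reshape(-1, 1)
print(arima.forecast(steps=7))
print(arimax.forecast(steps=7, exog=future_exog))
```
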
8.
PLoS One ; 17(8): e0273508, 2022.
Article in English | MEDLINE | ID: mdl-36001628

ABSTRACT

Artificial intelligence (AI) applications in oncology have developed rapidly, with reported successes in recent years. This work aims to evaluate the performance of deep convolutional neural network (CNN) algorithms for the classification and detection of oral potentially malignant disorders (OPMDs) and oral squamous cell carcinoma (OSCC) in oral photographic images. A dataset comprising 980 oral photographic images was divided into 365 images of OSCC, 315 images of OPMDs and 300 non-pathological images. Multiclass image classification models were created by using DenseNet-169, ResNet-101, SqueezeNet and Swin-S. Multiclass object detection models were created by using faster R-CNN, YOLOv5, RetinaNet and CenterNet2. The AUC of the best multiclass image classification model, DenseNet-169, was 1.00 and 0.98 on OSCC and OPMDs, respectively. The AUC of the best multiclass CNN-based object detection model, faster R-CNN, was 0.88 and 0.64 on OSCC and OPMDs, respectively. DenseNet-169 thus yielded the best multiclass image classification performance, with AUCs of 1.00 and 0.98 on OSCC and OPMDs, respectively; these values were in line with the performance of experts and superior to those of general practitioners (GPs). In conclusion, CNN-based models have potential for the identification of OSCC and OPMDs in oral photographic images and are expected to serve as a diagnostic tool to assist GPs in the early detection of oral cancer.


Subject(s)
Carcinoma, Squamous Cell , Mouth Neoplasms , Oral Ulcer , Artificial Intelligence , Carcinoma, Squamous Cell/diagnostic imaging , Carcinoma, Squamous Cell/pathology , Early Detection of Cancer , Humans , Mouth Neoplasms/diagnostic imaging , Mouth Neoplasms/pathology , Neural Networks, Computer
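
The per-class AUCs reported above are one-vs-rest areas under the ROC curve. The sketch below shows how such values can be computed with scikit-learn; the labels and softmax outputs are fabricated placeholders, not the study's predictions.

```python
# Hedged sketch: one-vs-rest AUC per class, the kind of value reported for
# OSCC and OPMDs. Labels and predicted probabilities are fabricated placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

classes = ["non_pathological", "OPMD", "OSCC"]
y_true = np.array([0, 1, 2, 2, 1, 0, 2, 1])                # illustrative ground truth
y_prob = np.array([                                         # illustrative softmax outputs
    [0.8, 0.1, 0.1], [0.2, 0.6, 0.2], [0.1, 0.2, 0.7], [0.1, 0.1, 0.8],
    [0.3, 0.5, 0.2], [0.7, 0.2, 0.1], [0.2, 0.3, 0.5], [0.1, 0.7, 0.2],
])

y_bin = label_binarize(y_true, classes=[0, 1, 2])
for idx, name in enumerate(classes):
    auc = roc_auc_score(y_bin[:, idx], y_prob[:, idx])
    print(f"AUC ({name} vs rest): {auc:.2f}")
```
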
9.
Article in English | MEDLINE | ID: mdl-34886359

ABSTRACT

This study aims to analyze the patient characteristics and factors related to clinical outcomes in the crisis management of the COVID-19 pandemic in a field hospital. We conducted a retrospective analysis of patient clinical data from March 2020 to August 2021 at the first university-based field hospital in Thailand. Multivariable logistic regression models were used to evaluate the factors associated with the field hospital discharge destination. Of a total of 3685 COVID-19 patients, 53.6% were women, with a median age of 30 years. General workers accounted for 97.5% of patients, while 2.5% were healthcare workers. Most patients (84.6%) were exposed to the coronavirus in the community. At the study end point, no patients had died, 97.7% had been discharged home, and 2.3% had been transferred to designated higher-level hospitals because their condition had worsened. In multivariable logistic regression analysis, older patients with one or more underlying diseases who showed symptoms of COVID-19 and whose chest X-rays showed signs of pneumonia were in a worse condition than other patients. In conclusion, the university-based field hospital has the potential to fill acute gaps and prevent public agencies from being overwhelmed during crisis events.


Subject(s)
COVID-19 , Adult , Female , Health Personnel , Humans , Pandemics , Retrospective Studies , SARS-CoV-2
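
The multivariable logistic regression described above can be sketched with statsmodels, reporting odds ratios and 95% confidence intervals. The predictor names and records below are synthetic stand-ins, not the field-hospital data.

```python
# Hedged sketch: multivariable logistic regression with odds ratios, the kind
# of model used to relate patient factors to discharge destination. The data
# and predictor names below are synthetic, not the field-hospital records.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "underlying_disease": rng.integers(0, 2, n),
    "symptomatic": rng.integers(0, 2, n),
    "pneumonia_on_cxr": rng.integers(0, 2, n),
})
# Synthetic outcome: probability of transfer rises with age and comorbidity
logit = df["age"] * 0.03 + df["underlying_disease"] + df["pneumonia_on_cxr"] - 4
df["transferred"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["age", "underlying_disease", "symptomatic", "pneumonia_on_cxr"]])
fit = sm.Logit(df["transferred"], X).fit(disp=0)

odds_ratios = np.exp(fit.params)
conf_int = np.exp(fit.conf_int())
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```
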
10.
J Oral Pathol Med ; 50(9): 911-918, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34358372

ABSTRACT

BACKGROUND: Oral cancer is a deadly disease and among the most common malignant tumors worldwide, and it has become an increasingly important public health problem in developing and low-to-middle-income countries. This study aims to use convolutional neural network (CNN) deep learning algorithms to develop an automated classification and detection model for oral cancer screening. METHODS: The study included 700 clinical oral photographs, collected retrospectively from the oral and maxillofacial center, which were divided into 350 images of oral squamous cell carcinoma and 350 images of normal oral mucosa. The classification and detection models were created by using DenseNet121 and faster R-CNN, respectively. Four hundred and ninety images were randomly selected as training data. In addition, 70 and 140 images were assigned as validation and testing data, respectively. RESULTS: The DenseNet121 classification model achieved a precision of 99%, a recall of 100%, an F1 score of 99%, a sensitivity of 98.75%, a specificity of 100%, and an area under the receiver operating characteristic curve of 99%. The faster R-CNN detection model achieved a precision of 76.67%, a recall of 82.14%, an F1 score of 79.31%, and an area under the precision-recall curve of 0.79. CONCLUSION: The DenseNet121 and faster R-CNN algorithms showed acceptable potential for the classification and detection of cancerous lesions in oral photographic images.


Subject(s)
Carcinoma, Squamous Cell , Deep Learning , Mouth Neoplasms , Algorithms , Carcinoma, Squamous Cell/diagnostic imaging , Humans , Mouth Neoplasms/diagnostic imaging , Retrospective Studies
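
The classification metrics quoted above (precision, recall/sensitivity, F1, specificity, AUC) can be computed from model predictions as in the scikit-learn sketch below; the labels and scores are fabricated examples, not the study's test set.

```python
# Hedged sketch: the binary metrics reported for the classification model
# (precision, recall/sensitivity, F1, specificity, ROC AUC), computed from
# fabricated example predictions rather than the study's test set.
import numpy as np
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])       # 1 = OSCC, 0 = normal mucosa
y_score = np.array([0.9, 0.2, 0.8, 0.7, 0.4, 0.1, 0.6, 0.3, 0.95, 0.05])
y_pred = (y_score >= 0.5).astype(int)

precision, recall, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="binary")
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
specificity = tn / (tn + fp)
auc = roc_auc_score(y_true, y_score)

print(f"precision={precision:.2f} recall={recall:.2f} f1={f1:.2f} "
      f"specificity={specificity:.2f} auc={auc:.2f}")
```
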
11.
Eur J Dent ; 15(4): 812-816, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34428837

ABSTRACT

A variety of black-pigmented lesions of the oral cavity can be found, ranging from harmless benign lesions such as melanotic macule, smoker's melanosis, amalgam/graphite tattoos, and pigmented nevus to life-threatening oral malignant melanoma. Oral melanoma is a rare and aggressive malignant tumor that originates from the proliferation of melanocytes and accounts for only 0.5% of all oral malignancies. The etiology is unknown. Most oral melanomas occur on the palate and the upper alveolar ridge, whereas occurrences on the buccal mucosa, the lower alveolar ridge, and the lip are rare, with only a few reports in the literature. The diagnosis is confirmed by biopsy. The prognosis is poor, with a 5-year survival rate of ~20%. In this report, we present a case of a large oral melanoma of the right buccal mucosa involving the right lower alveolar ridge and lip commissure, which are relatively unusual locations for oral melanoma. In addition, immunohistochemical markers used for diagnostic, therapeutic, and prognostic decision-making in oral melanoma are also discussed.
