1.
BMC Oral Health ; 24(1): 387, 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38532414

ABSTRACT

OBJECTIVE: Panoramic radiographs (PRs) provide a comprehensive view of the oral and maxillofacial region and are used routinely to assess dental and osseous pathologies. Artificial intelligence (AI) can be used to improve the diagnostic accuracy of PRs compared to bitewings and periapical radiographs. This study aimed to evaluate the advantages and challenges of using publicly available datasets in dental AI research, focusing on the novel task of simultaneously predicting tooth segmentations, FDI numbers, and tooth diagnoses.

MATERIALS AND METHODS: Datasets from the OdontoAI platform (tooth instance segmentations) and the DENTEX challenge (tooth bounding boxes with associated diagnoses) were combined to develop a two-stage AI model. The first stage performed tooth instance segmentation with FDI numbering and extracted a region of interest around each tooth segmentation, after which the second stage performed multi-label classification to detect dental caries, impacted teeth, and periapical lesions in PRs. The performance of the automated tooth segmentation algorithm was evaluated using a free-response receiver operating characteristic (FROC) curve and mean average precision (mAP) metrics. The diagnostic accuracy of detection and classification of dental pathology was evaluated with ROC curves and F1 and AUC metrics.

RESULTS: The two-stage AI model achieved high accuracy in tooth segmentation, with an FROC score of 0.988 and an mAP of 0.848. High accuracy was also achieved in the diagnostic classification of impacted teeth (F1 = 0.901, AUC = 0.996), whereas moderate accuracy was achieved in the diagnostic classification of deep caries (F1 = 0.683, AUC = 0.960), early caries (F1 = 0.662, AUC = 0.881), and periapical lesions (F1 = 0.603, AUC = 0.974). The model's performance correlated positively with the quality of the annotations in the public datasets used. Selected samples from the DENTEX dataset revealed cases of missing (false-negative) and incorrect (false-positive) diagnoses, which negatively influenced the performance of the AI model.

CONCLUSIONS: The use and pooling of public datasets in dental AI research can significantly accelerate the development of new AI models and enable fast exploration of novel tasks. However, standardized quality assurance is essential before using such datasets, to ensure reliable outcomes and limit potential biases.
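A minimal sketch of the two-stage design described above, assuming a tooth instance-segmentation model followed by multi-label classification of per-tooth crops. The models (torchvision's Mask R-CNN and ResNet-50 as stand-ins for the authors' networks), the finding list, crop margin, and thresholds are illustrative assumptions, not the published implementation.

```python
import torch
import torchvision
from torchvision.transforms.functional import crop, resize

FINDINGS = ["deep_caries", "early_caries", "impacted", "periapical_lesion"]

# Stage 1: instance segmentation with one class per FDI tooth number
# (32 permanent teeth + background); Mask R-CNN is a stand-in backbone.
seg_model = torchvision.models.detection.maskrcnn_resnet50_fpn(num_classes=33)
seg_model.eval()

# Stage 2: multi-label classifier over cropped tooth regions (ResNet stand-in).
cls_model = torchvision.models.resnet50(num_classes=len(FINDINGS))
cls_model.eval()

@torch.no_grad()
def predict(panoramic: torch.Tensor, margin: int = 16, score_thr: float = 0.5):
    """panoramic: float tensor [3, H, W] in [0, 1]."""
    det = seg_model([panoramic])[0]
    results = []
    for box, label, score in zip(det["boxes"], det["labels"], det["scores"]):
        if score < score_thr:
            continue
        x1, y1, x2, y2 = box.int().tolist()
        # Expand the tooth box by a small margin before classification.
        roi = crop(panoramic, max(y1 - margin, 0), max(x1 - margin, 0),
                   (y2 - y1) + 2 * margin, (x2 - x1) + 2 * margin)
        roi = resize(roi, [224, 224]).unsqueeze(0)
        # Sigmoid gives independent per-finding probabilities (multi-label).
        probs = torch.sigmoid(cls_model(roi)).squeeze(0)
        results.append({"fdi_class": int(label), "box": [x1, y1, x2, y2],
                        "findings": dict(zip(FINDINGS, probs.tolist()))})
    return results
```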


Subject(s)
Dental Caries , Tooth, Impacted , Tooth , Humans , Artificial Intelligence , Radiography, Panoramic , Bone and Bones
2.
J Dent ; 143: 104886, 2024 04.
Article in English | MEDLINE | ID: mdl-38342368

ABSTRACT

OBJECTIVE: Secondary caries lesions adjacent to restorations, a leading cause of restoration failure, require accurate diagnostic methods to ensure an optimal treatment outcome. Traditional diagnostic strategies rely on visual inspection complemented by radiographs. Recent advancements in artificial intelligence (AI), particularly deep learning, offer potential improvements in caries detection. This study aimed to develop a convolutional neural network (CNN)-based algorithm for detecting primary caries and secondary caries around restorations on bitewings.

METHODS: Clinical data from 7 general dental practices in the Netherlands, comprising 425 bitewings of 383 patients, were utilized. The study used the Mask R-CNN architecture for instance segmentation, supported by a Swin Transformer backbone. After data augmentation, model training was performed through ten-fold cross-validation. The diagnostic accuracy of the algorithm was evaluated by calculating the area under the free-response receiver operating characteristic (FROC) curve, sensitivity, precision, and F1 scores.

RESULTS: The model achieved areas under the FROC curve of 0.806 and 0.804, and F1 scores of 0.689 and 0.719 for primary and secondary caries detection, respectively.

CONCLUSION: An accurate CNN-based automated system was developed to detect primary and secondary caries lesions on bitewings, highlighting a significant advancement in automated caries diagnostics.

CLINICAL SIGNIFICANCE: An accurate algorithm that integrates the detection of both primary and secondary caries will permit the development of automated systems to aid clinicians in their daily clinical practice.
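For context, a small sketch of how detection-level sensitivity, precision, and F1 (the metrics reported per caries type above) can be derived by greedy IoU matching of predicted and ground-truth boxes. The 0.5 IoU threshold and the greedy matching rule are assumptions; the paper's exact matching criterion may differ.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # x1, y1, x2, y2

def iou(a: Box, b: Box) -> float:
    """Intersection over union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def detection_scores(preds: List[Box], gts: List[Box], thr: float = 0.5):
    """Greedily match each prediction to the best unmatched ground truth."""
    matched, tp = set(), 0
    for p in preds:
        unmatched = [i for i in range(len(gts)) if i not in matched]
        best = max(unmatched, key=lambda i: iou(p, gts[i]), default=None)
        if best is not None and iou(p, gts[best]) >= thr:
            matched.add(best)
            tp += 1
    fp, fn = len(preds) - tp, len(gts) - tp
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return sensitivity, precision, f1
```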


Subject(s)
Deep Learning , Dental Caries , Humans , Artificial Intelligence , Dental Caries Susceptibility , Neural Networks, Computer , ROC Curve , Dental Caries/therapy
3.
Sci Rep ; 13(1): 2296, 2023 02 09.
Article in English | MEDLINE | ID: mdl-36759684

ABSTRACT

Oral squamous cell carcinoma (OSCC) is among the most common malignancies, with an estimated 377,000 new cases and 177,000 deaths worldwide. The interval between the onset of symptoms and the start of adequate treatment is directly related to tumor stage and patients' 5-year survival rates. Early detection is therefore crucial for efficient cancer therapy. This study aims to detect OSCC on clinical photographs (CPs) automatically. A total of 1406 CPs were manually annotated and labeled as a reference. A deep-learning approach based on the Swin Transformer was trained and validated on 1265 CPs. Subsequently, the trained algorithm was applied to a test set consisting of 141 CPs. The classification accuracy and the area under the curve (AUC) were calculated. The proposed method achieved a classification accuracy of 0.986 and an AUC of 0.99 for classifying OSCC on clinical photographs. Deep learning-based assistance of clinicians may raise the rate of early detection of oral cancer and hence the survival rate and quality of life of patients.
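A minimal sketch of how the two reported metrics (classification accuracy and AUC) are typically computed from a trained photo classifier's probabilities. The timm model name, the two-class head, and the 0.5 decision threshold are assumptions standing in for the authors' Swin-based classifier.

```python
import numpy as np
import timm
import torch
from sklearn.metrics import accuracy_score, roc_auc_score

# Generic Swin Transformer classifier with a binary head (benign vs. OSCC).
model = timm.create_model("swin_base_patch4_window7_224", num_classes=2)
model.eval()

@torch.no_grad()
def evaluate(images: torch.Tensor, labels: np.ndarray):
    """images: [N, 3, 224, 224] float tensor; labels: 0 = benign, 1 = OSCC."""
    probs = torch.softmax(model(images), dim=1)[:, 1].numpy()  # P(OSCC)
    preds = (probs >= 0.5).astype(int)
    return accuracy_score(labels, preds), roc_auc_score(labels, probs)
```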


Subject(s)
Carcinoma, Squamous Cell , Head and Neck Neoplasms , Mouth Neoplasms , Humans , Carcinoma, Squamous Cell/diagnosis , Carcinoma, Squamous Cell/pathology , Mouth Neoplasms/diagnosis , Mouth Neoplasms/pathology , Squamous Cell Carcinoma of Head and Neck , Quality of Life
4.
Sci Rep ; 12(1): 19596, 2022 11 15.
Article in English | MEDLINE | ID: mdl-36379971

ABSTRACT

Mandibular fractures are among the most frequent facial traumas in oral and maxillofacial surgery, accounting for 57% of cases. An accurate diagnosis and an appropriate treatment plan are vital for achieving optimal re-establishment of occlusion, function, and facial aesthetics. This study aims to detect mandibular fractures on panoramic radiographs (PRs) automatically. A total of 1624 PRs with fractures were manually annotated and labelled as a reference. A deep learning approach based on Faster R-CNN and the Swin Transformer was trained and validated on 1640 PRs with and without fractures. Subsequently, the trained algorithm was applied to a test set consisting of 149 PRs with and 171 PRs without fractures. The detection accuracy and the area under the curve (AUC) were calculated. The proposed method achieved an F1 score of 0.947 and an AUC of 0.977. Deep learning-based assistance of clinicians may reduce misdiagnosis and hence severe complications.
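A rough sketch of fracture detection on a panoramic radiograph with a two-class (background / fracture) detector, using torchvision's Faster R-CNN as a stand-in for the Swin-backed detector described above. The weights and score threshold are illustrative assumptions.

```python
import torch
import torchvision

# Two classes: background and mandibular fracture.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(num_classes=2)
detector.eval()

@torch.no_grad()
def detect_fractures(pr: torch.Tensor, score_thr: float = 0.5):
    """pr: float tensor [3, H, W] in [0, 1] (grayscale replicated to 3 channels)."""
    out = detector([pr])[0]
    keep = out["scores"] >= score_thr
    return out["boxes"][keep], out["scores"][keep]

# Example usage on a dummy radiograph-sized tensor:
# boxes, confidences = detect_fractures(torch.rand(3, 1000, 2000))
```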


Subject(s)
Deep Learning , Mandibular Fractures , Humans , Radiography, Panoramic/methods , Mandibular Fractures/diagnostic imaging , Algorithms , Area Under Curve