1.
Clin Oral Investig; 28(7): 364, 2024 Jun 08.
Article in English | MEDLINE | ID: mdl-38849649

ABSTRACT

OBJECTIVES: Diagnosing oral potentially malignant disorders (OPMD) is critical to prevent oral cancer. This study aims to automatically detect and classify the most common pre-malignant oral lesions, such as leukoplakia and oral lichen planus (OLP), and distinguish them from oral squamous cell carcinomas (OSCC) and healthy oral mucosa on clinical photographs using vision transformers. METHODS: 4,161 photographs of healthy mucosa, leukoplakia, OLP, and OSCC were included. Findings were annotated pixel-wise and reviewed by three clinicians. The photographs were divided into 3,337 for training and validation and 824 for testing. The training and validation images were further divided into five folds with stratification. A Mask R-CNN with a Swin Transformer was trained five times with cross-validation, and the held-out test split was used to evaluate the model performance. The precision, F1-score, sensitivity, specificity, and accuracy were calculated. The area under the receiver operating characteristics curve (AUC) and the confusion matrix of the most effective model were presented. RESULTS: The detection of OSCC with the employed model yielded an F1 of 0.852 and AUC of 0.974. The detection of OLP had an F1 of 0.825 and AUC of 0.948. For leukoplakia the F1 was 0.796 and the AUC was 0.938. CONCLUSIONS: OSCC were effectively detected with the employed model, whereas the detection of OLP and leukoplakia was moderately effective. CLINICAL RELEVANCE: Oral cancer is often detected in advanced stages. The demonstrated technology may support the detection and observation of OPMD to lower the disease burden and identify malignant oral cavity lesions earlier.
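As a rough illustration of the evaluation setup described in this abstract (a stratified five-fold split of the training/validation photographs plus one-vs-rest F1 and AUC per lesion class), a minimal Python sketch using scikit-learn follows; the class encoding, counts and random labels are placeholders, not the study's data or code.

```python
# Minimal sketch of a stratified five-fold split and per-class metrics,
# loosely following the abstract above; all data here are placeholders.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score, roc_auc_score

# Assumed class encoding: 0 = healthy, 1 = leukoplakia, 2 = OLP, 3 = OSCC.
image_ids = np.arange(3337)                      # training/validation images
labels = np.random.randint(0, 4, size=3337)      # placeholder annotations

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(skf.split(image_ids, labels)):
    # Each fold would train one Mask R-CNN + Swin Transformer model;
    # the detection training itself is omitted here.
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val images")

def per_class_metrics(y_true, y_pred, y_score, positive_class):
    """One-vs-rest F1 and AUC for a single lesion class on the test split."""
    f1 = f1_score(y_true == positive_class, y_pred == positive_class)
    auc = roc_auc_score(y_true == positive_class, y_score[:, positive_class])
    return f1, auc
```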


Subject(s)
Leukoplakia, Oral , Lichen Planus, Oral , Mouth Neoplasms , Precancerous Conditions , Humans , Mouth Neoplasms/diagnosis , Precancerous Conditions/diagnosis , Lichen Planus, Oral/diagnosis , Leukoplakia, Oral/diagnosis , Sensitivity and Specificity , Photography , Diagnosis, Differential , Carcinoma, Squamous Cell/diagnosis , Male , Female , Photography, Dental , Image Interpretation, Computer-Assisted/methods
2.
Sci Rep; 13(1): 2296, 2023 Feb 09.
Article in English | MEDLINE | ID: mdl-36759684

ABSTRACT

Oral squamous cell carcinoma (OSCC) is amongst the most common malignancies, with an estimated 377,000 new cases and 177,000 deaths worldwide. The interval between the onset of symptoms and the start of adequate treatment is directly related to tumor stage and the 5-year survival rate of patients. Early detection is therefore crucial for efficient cancer therapy. This study aims to detect OSCC on clinical photographs (CPs) automatically. 1,406 CPs were manually annotated and labeled as a reference. A deep-learning approach based on the Swin Transformer was trained and validated on 1,265 CPs. Subsequently, the trained algorithm was applied to a test set consisting of 141 CPs. The classification accuracy and the area under the curve (AUC) were calculated. The proposed method achieved a classification accuracy of 0.986 and an AUC of 0.99 for classifying OSCC on clinical photographs. Deep learning-based assistance of clinicians may raise the rate of early detection of oral cancer and hence the survival rate and quality of life of patients.
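For orientation only, a minimal fine-tuning sketch for a binary OSCC-vs-healthy photograph classifier with a Swin Transformer backbone is given below; it uses the timm library, and the chosen model variant, optimizer and training step are assumptions rather than the authors' published configuration.

```python
# Hedged sketch of binary classification of clinical photographs with a
# Swin Transformer backbone (via timm); not the published implementation.
import torch
import timm

model = timm.create_model(
    "swin_base_patch4_window7_224",  # assumed backbone variant
    pretrained=True,
    num_classes=2,                   # OSCC vs. non-OSCC
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

def train_step(images, targets):
    # images: (N, 3, 224, 224) photograph batch; targets: 0/1 class labels.
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```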


Subject(s)
Carcinoma, Squamous Cell , Head and Neck Neoplasms , Mouth Neoplasms , Humans , Carcinoma, Squamous Cell/diagnosis , Carcinoma, Squamous Cell/pathology , Mouth Neoplasms/diagnosis , Mouth Neoplasms/pathology , Squamous Cell Carcinoma of Head and Neck , Quality of Life
3.
Sci Rep; 12(1): 19596, 2022 Nov 15.
Article in English | MEDLINE | ID: mdl-36379971

ABSTRACT

Mandibular fractures are among the most frequent facial traumas in oral and maxillofacial surgery, accounting for 57% of cases. An accurate diagnosis and an appropriate treatment plan are vital to achieving optimal re-establishment of occlusion, function and facial aesthetics. This study aims to detect mandibular fractures on panoramic radiographs (PRs) automatically. 1,624 PRs with fractures were manually annotated and labelled as a reference. A deep-learning approach based on Faster R-CNN and the Swin Transformer was trained and validated on 1,640 PRs with and without fractures. Subsequently, the trained algorithm was applied to a test set consisting of 149 PRs with and 171 PRs without fractures. The detection accuracy and the area under the curve (AUC) were calculated. The proposed method achieved an F1 score of 0.947 and an AUC of 0.977. Deep learning-based assistance of clinicians may reduce misdiagnoses and hence severe complications.
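The sketch below shows how a single-class fracture detector can be set up with torchvision's Faster R-CNN as a stand-in; the published model used a Swin Transformer backbone (typically configured in a detection framework such as MMDetection), so the ResNet-50 FPN backbone, image size and box coordinates here are illustrative assumptions only.

```python
# Illustrative single-class (fracture) detector; not the authors' model.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
# Replace the box head for two classes: background + mandibular fracture.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

# A training forward pass expects image tensors plus per-image target dicts
# with "boxes" (x1, y1, x2, y2) and "labels"; values below are placeholders.
model.train()
images = [torch.rand(3, 512, 1024)]  # stand-in for a panoramic radiograph
targets = [{"boxes": torch.tensor([[100.0, 200.0, 300.0, 280.0]]),
            "labels": torch.tensor([1])}]
loss_dict = model(images, targets)   # dict of classification/regression losses
```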


Subject(s)
Deep Learning , Mandibular Fractures , Humans , Radiography, Panoramic/methods , Mandibular Fractures/diagnostic imaging , Algorithms , Area Under Curve
4.
Bioinformatics; 36(21): 5255-5261, 2021 Jan 29.
Article in English | MEDLINE | ID: mdl-32702106

ABSTRACT

MOTIVATION: The development of deep, bidirectional transformers such as Bidirectional Encoder Representations from Transformers (BERT) has led to substantial improvements on several Natural Language Processing (NLP) benchmarks. In radiology in particular, large amounts of free-text data are generated in the daily clinical workflow. These report texts could be of particular use for the generation of labels in machine learning, especially for image classification. However, as report texts are mostly unstructured, advanced NLP methods are needed to enable accurate text classification. While neural networks can be used for this purpose, they must first be trained on large amounts of manually labelled data to achieve good results. In contrast, BERT models can be pre-trained on unlabelled data and then require fine-tuning on only a small amount of manually labelled data to achieve even better results. RESULTS: Using BERT to identify the most important findings in intensive care chest radiograph reports, we achieve areas under the receiver operating characteristic curve of 0.98 for congestion, 0.97 for effusion, 0.97 for consolidation and 0.99 for pneumothorax, surpassing the accuracy of previous approaches with comparatively little annotation effort. Our approach could therefore help to improve information extraction from free-text medical reports. AVAILABILITY AND IMPLEMENTATION: We make the source code for fine-tuning the BERT models freely available at https://github.com/fast-raidiology/bert-for-radiology. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
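The authors' code is in the linked repository; purely as a hedged illustration of the general recipe (multi-label fine-tuning of a BERT model on report text), a short sketch with the Hugging Face transformers library follows. The checkpoint name, label ordering and example report are assumptions, not taken from the paper.

```python
# Hedged sketch of multi-label report classification with BERT;
# see the linked repository for the authors' actual implementation.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

LABELS = ["congestion", "effusion", "consolidation", "pneumothorax"]
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")  # stand-in model
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # BCE-with-logits loss
)

report = "Bilateral pleural effusions, no pneumothorax."    # placeholder text
inputs = tokenizer(report, return_tensors="pt", truncation=True, max_length=512)
targets = torch.tensor([[0.0, 1.0, 0.0, 0.0]])               # multi-hot labels

outputs = model(**inputs, labels=targets)
probabilities = torch.sigmoid(outputs.logits)                # per-finding scores
```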


Subject(s)
Deep Learning , Humans , Information Storage and Retrieval , Machine Learning , Natural Language Processing , Neural Networks, Computer
5.
Comput Biol Med; 103: 161-166, 2018 Dec 01.
Article in English | MEDLINE | ID: mdl-30384174

ABSTRACT

BACKGROUND: To evaluate whether Canon's Single-Energy Metal Artifact Reduction (SEMAR) algorithm can significantly improve subjective and objective image quality of patients with nonremovable dental hardware undergoing CT imaging of the oral cavity and oropharynx. MATERIALS AND METHODS: SEMAR was reconstructed from routine Adaptive Iterative Dose Reduction (AIDR) images in 154 patients (46 females and 108 males; mean age 66.3 ± 10.5 years). Subjective SEMAR and AIDR image quality of the mouth floor, sublingual glands, lymphatic ring and overall impression were evaluated by two independent radiologists on a 6-point scale (1 = very good image quality, 6 = poor image quality) and compared to ratings of an oral and maxillofacial surgeon. Interrater agreement was assessed using the intraclass correlation coefficient (ICC). Objective image analysis was performed by placing regions of interest (ROIs) on the mouth floor and measuring CT attenuation in Hounsfield units (HU) and standard deviation (SD). RESULTS: SEMAR significantly improved subjective image quality in all evaluated structures for all raters (p < 0.001). Furthermore, SEMAR significantly reduced objective metal artifacts and image noise (p < 0.001). CONCLUSION: SEMAR significantly improved diagnostic quality of CT images of the oral cavity and oropharynx by reducing artifacts caused by dental hardware.
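As a small illustration of the objective part of this analysis (mean CT attenuation and noise within a region of interest), a Python sketch using pydicom and NumPy is shown below; the file names, ROI position and radius are placeholders.

```python
# Illustrative ROI measurement of mean HU and noise (SD) on an axial CT slice.
import numpy as np
import pydicom

def roi_stats(dicom_path, center_row, center_col, radius_px):
    ds = pydicom.dcmread(dicom_path)
    # Convert stored pixel values to Hounsfield units.
    hu = ds.pixel_array * ds.RescaleSlope + ds.RescaleIntercept
    rows, cols = np.ogrid[:hu.shape[0], :hu.shape[1]]
    mask = (rows - center_row) ** 2 + (cols - center_col) ** 2 <= radius_px ** 2
    return float(hu[mask].mean()), float(hu[mask].std())

# Placeholder file names for the AIDR and SEMAR reconstructions of one slice.
mean_aidr, sd_aidr = roi_stats("aidr_slice.dcm", 256, 260, 10)
mean_semar, sd_semar = roi_stats("semar_slice.dcm", 256, 260, 10)
print(f"AIDR noise: {sd_aidr:.1f} HU, SEMAR noise: {sd_semar:.1f} HU")
```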


Subject(s)
Artifacts , Dental Prosthesis , Image Processing, Computer-Assisted/methods , Metals/chemistry , Tomography, X-Ray Computed , Adult , Aged , Aged, 80 and over , Algorithms , Female , Humans , Male , Middle Aged , Tomography, X-Ray Computed/methods , Tomography, X-Ray Computed/standards