J Dent. 2023 Aug;135:104588.
Article in English | MEDLINE | ID: mdl-37348642

ABSTRACT

OBJECTIVES: Periapical radiographs are often taken in series to display all teeth present in the oral cavity. Our aim was to automatically assemble such a series of periapical radiographs into an anatomically correct status using a multi-modal deep learning model.

METHODS: 4,707 periapical images from 387 patients (on average, 12 images per patient) were used. Radiographs were labeled according to their field of view, and the dataset was split into training, validation, and test sets, stratified by patient. In addition to the radiograph, the timestamp of image generation was extracted and abstracted as follows: a matrix containing the normalized timestamps of all images of a patient was constructed, representing the order in which the images were taken and providing temporal context to the deep learning model. Using the image data together with the time sequence data, a multi-modal deep learning model consisting of two residual convolutional neural networks (ResNet-152 for image data, ResNet-50 for time data) was trained. Additionally, two uni-modal models were trained on image data and time data, respectively. A custom scoring technique was used to measure model performance.

RESULTS: Multi-modal deep learning outperformed both uni-modal image-based learning (p<0.001) and uni-modal time-based learning (p<0.05). The multi-modal model predicted tooth labels with an F1-score, sensitivity, and precision of 0.79 each, and an accuracy of 0.99. 37 out of 77 patient datasets were assembled fully correctly by multi-modal learning; in the remaining ones, usually only one image was incorrectly labeled.

CONCLUSIONS: Multi-modal modeling allowed automated assembly of periapical radiographs and outperformed both uni-modal models. Dental machine learning models can benefit from additional data modalities.

CLINICAL SIGNIFICANCE: Like humans, deep learning models may profit from multiple data sources for decision-making. We demonstrate how multi-modal learning can assist in assembling periapical radiographs into an anatomically correct status. Multi-modal learning should be considered for more complex tasks, as a wealth of clinical data is usually available and could be leveraged.
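The timestamp abstraction described in METHODS can be pictured with a short sketch. The following is a minimal illustration, assuming each patient's acquisition times are available as a list of numbers; the function name, the min-max normalization, and the row-replicated matrix layout are our assumptions, not details given in the abstract.

```python
import numpy as np

def timestamp_matrix(acquisition_times):
    """Normalize a patient's image timestamps to [0, 1] and return a
    matrix encoding the order in which the images were taken.

    A sketch of the abstract's "matrix containing the normalized
    timestamps of all images of a patient"; the exact layout used in
    the paper is not specified here.
    """
    t = np.asarray(sorted(acquisition_times), dtype=float)
    # Map the first image to 0 and the last to 1 (min-max normalization).
    span = t[-1] - t[0]
    norm = (t - t[0]) / span if span > 0 else np.zeros_like(t)
    # One row per image; each row carries the full normalized sequence,
    # so every image sees its temporal position relative to its siblings.
    return np.tile(norm, (len(norm), 1))

# Example: three images taken 0 s, 60 s, and 300 s into the exam.
print(timestamp_matrix([0, 60, 300]))
```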

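A minimal sketch of the two-branch architecture named in METHODS (ResNet-152 for image data, ResNet-50 for time data), built on torchvision backbones. The late fusion by feature concatenation, the single-channel stem for the time branch, and the linear classification head are assumptions; the abstract does not describe how the branches are combined or how the time matrix is fed to the network.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultiModalAssembler(nn.Module):
    """Hypothetical two-branch model: image features from ResNet-152,
    temporal features from ResNet-50, fused by concatenation."""

    def __init__(self, num_labels):
        super().__init__()
        self.image_branch = models.resnet152(weights=None)
        self.image_branch.fc = nn.Identity()  # expose 2048-d image features
        self.time_branch = models.resnet50(weights=None)
        # Assumption: the timestamp matrix enters as a 1-channel "image",
        # so the stem is swapped for a single-channel convolution.
        self.time_branch.conv1 = nn.Conv2d(
            1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        self.time_branch.fc = nn.Identity()   # expose 2048-d time features
        self.head = nn.Linear(2048 + 2048, num_labels)

    def forward(self, image, time_matrix):
        f_img = self.image_branch(image)        # (B, 2048)
        f_time = self.time_branch(time_matrix)  # (B, 2048)
        return self.head(torch.cat([f_img, f_time], dim=1))

# Example shapes: a batch of 4 radiographs with matching time matrices
# for a patient with 12 images (matrix size is illustrative).
model = MultiModalAssembler(num_labels=32)
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 1, 12, 12))
```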

Subject(s)
Deep Learning; Humans; Radiography; Neural Networks, Computer; Mouth; Diagnosis, Oral