Results 1 - 12 of 12
1.
Dentomaxillofac Radiol ; 52(5): 20220413, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37192044

ABSTRACT

OBJECTIVES: Lingual mandibular bone depression (LMBD) is a developmental bony defect on the lingual aspect of the mandible that does not require surgical treatment. On panoramic radiography it is sometimes confused with a cyst or another radiolucent pathological lesion, so it is important to differentiate LMBD from true pathological radiolucent lesions requiring treatment. This study aimed to develop a deep learning model for the fully automatic differential diagnosis of LMBD from true pathological radiolucent cysts or tumors on panoramic radiographs, without any manual processing, and to evaluate the model's performance using a test dataset that reflected real clinical practice. METHODS: A deep learning model based on the EfficientDet algorithm was developed with training and validation datasets (443 images) consisting of 83 LMBD patients and 360 patients with true pathological radiolucent lesions. The test dataset (1500 images) consisted of 8 LMBD patients, 53 patients with pathological radiolucent lesions, and 1439 healthy patients, composed according to the clinical prevalence of these conditions to simulate real-world conditions; the model was evaluated in terms of accuracy, sensitivity, and specificity on this test dataset. RESULTS: The model's accuracy, sensitivity, and specificity were all more than 99.8%, and only 10 of the 1500 test images were predicted erroneously. CONCLUSION: The proposed model showed excellent performance on a test set in which the number of patients in each group reflected the prevalence seen in real-world clinical practice. The model can help dental clinicians make accurate diagnoses and avoid unnecessary examinations in real clinical settings.


Subject(s)
Cysts , Deep Learning , Humans , Radiography, Panoramic , Depression , Mandible/diagnostic imaging
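The accuracy, sensitivity, and specificity reported in studies like the one above derive from confusion-matrix counts. A minimal Python sketch; the counts below are illustrative, not the study's actual tallies:

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)  # true-positive rate (recall)
    specificity = tn / (tn + fp)  # true-negative rate
    return accuracy, sensitivity, specificity

# Illustrative counts only -- not the study's confusion matrix.
acc, sens, spec = binary_metrics(tp=45, fp=10, tn=40, fn=5)
```

With a prevalence-matched test set such as the one described above, overall accuracy is dominated by specificity, because healthy cases vastly outnumber lesions.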
2.
Dentomaxillofac Radiol ; 52(5): 20230007, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37129509

ABSTRACT

OBJECTIVE: We aimed to develop and assess the clinical usefulness of a generative adversarial network (GAN) model for improving image quality in panoramic radiography. METHODS: Panoramic radiographs obtained at Yonsei University Dental Hospital were randomly selected for study inclusion (n = 100). Datasets with degraded image quality (n = 400) were prepared using four different processing methods: blur, noise, blur with noise, and blur in the anterior teeth region. The images were distributed to the training and test datasets in a ratio of 9:1 for each group. The Pix2Pix GAN model was trained using pairs of the original and degraded image datasets for 100 epochs. The peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) were obtained for the test dataset, and two oral and maxillofacial radiologists rated the quality of clinical images. RESULTS: Among the degraded images, the GAN model enabled the greatest improvement in those with blur in the region of the anterior teeth but was least effective in improving images exhibiting blur with noise (PSNR, 36.27 > 32.74; SSIM, 0.90 > 0.82). While the mean clinical image quality score of the original radiographs was 44.6 out of 46.0, the highest and lowest predicted scores were observed in the blur (45.2) and noise (36.0) groups. CONCLUSION: The GAN model developed in this study has the potential to improve panoramic radiographs with degraded image quality, both quantitatively and qualitatively. As the model performs better in refining blurred images, further research is required to identify the most effective methods for handling noisy images.


Subject(s)
Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Humans , Radiography, Panoramic , Tomography, X-Ray Computed/methods , Signal-To-Noise Ratio , Image Processing, Computer-Assisted/methods
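PSNR, one of the two quantitative measures above, compares a restored image against its reference via mean squared error. A pure-Python sketch for 8-bit images, given as flat pixel lists (SSIM is omitted, since it requires windowed local statistics):

```python
import math

def psnr(reference, restored, max_val=255.0):
    """Peak signal-to-noise ratio between two same-sized 8-bit images,
    each given as a flat list of pixel values."""
    mse = sum((r - s) ** 2 for r, s in zip(reference, restored)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Toy example: a 4-pixel "image" and a slightly degraded copy.
ref = [10, 50, 200, 255]
deg = [12, 48, 199, 255]
quality = psnr(ref, deg)
```

Higher PSNR means the restored image is closer to the reference, which is why the blur-with-noise group's 32.74 dB indicates a weaker restoration than the anterior-blur group's 36.27 dB.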
3.
Sci Rep ; 13(1): 2734, 2023 02 15.
Article in English | MEDLINE | ID: mdl-36792647

ABSTRACT

The evaluation of the maxillary sinus is very important in dental practice, such as tooth extraction and implant placement, because of its proximity to the teeth; however, it is difficult to evaluate on panoramic radiographs because of overlapping structures such as the maxilla and the zygoma. When dome-shaped retention pseudocysts are observed in the sinus on panoramic radiographs, they are often misdiagnosed as cysts or tumors, and additional computed tomography is performed, resulting in unnecessary radiation exposure and cost. The purpose of this study was to develop a deep learning model that automatically classifies retention pseudocysts in the maxillary sinuses on panoramic radiographs. A total of 426 maxillary sinuses from panoramic radiographs of 213 patients were included in this study: 86 sinuses with retention pseudocysts, 261 healthy sinuses, and 79 sinuses with cysts or tumors. An EfficientDet model (first introduced by Tan et al.) was developed for detecting and classifying the maxillary sinuses. The developed model was trained 200 times on the training and validation datasets (342 sinuses), and its performance was evaluated in terms of accuracy, sensitivity, and specificity on the test dataset (21 retention pseudocysts, 43 healthy sinuses, and 20 cysts or tumors). The accuracy of the model for classifying retention pseudocysts was 81%, and it showed higher accuracy for classifying healthy sinuses and cysts or tumors (98% and 90%, respectively). One of the 21 retention pseudocysts in the test dataset was misdiagnosed as a cyst or tumor. The proposed model for automatically classifying retention pseudocysts in the maxillary sinuses on panoramic radiographs showed excellent diagnostic performance and could help clinicians automatically diagnose the maxillary sinuses on panoramic radiographs.


Subject(s)
Cysts , Maxillary Sinus , Humans , Maxillary Sinus/diagnostic imaging , Maxillary Sinus/pathology , Radiography, Panoramic , Neural Networks, Computer , Tomography, X-Ray Computed , Cysts/diagnostic imaging , Cysts/pathology
4.
PLoS One ; 18(1): e0280523, 2023.
Article in English | MEDLINE | ID: mdl-36656878

ABSTRACT

Legal age estimation of living individuals is a critically important issue, and radiomics is an emerging research field that extracts quantitative data from medical images. However, no reports have proposed age-related radiomics features of the condylar head or an age classification model using those features. This study aimed to introduce a radiomics approach for various classifications of legal age (18, 19, 20, and 21 years old) based on cone-beam computed tomography (CBCT) images of the mandibular condylar head, and to evaluate the usefulness of the radiomics features selected by machine learning models as imaging biomarkers. CBCT images from 85 subjects were divided into eight age groups for four legal age classifications: ≤17 and ≥18 years old groups (18-year age classification), ≤18 and ≥19 years old groups (19-year age classification), ≤19 and ≥20 years old groups (20-year age classification) and ≤20 and ≥21 years old groups (21-year age classification). The condylar heads were manually segmented by an expert. In total, 127 radiomics features were extracted from the segmented area of each condylar head. The random forest (RF) method was utilized to select features and develop the age classification model for four legal ages. After sorting features in descending order of importance, the top 10 extracted features were used. The 21-year age classification model showed the best performance, with an accuracy of 91.18%, sensitivity of 80%, and specificity of 95.83%. Radiomics features of the condylar head using CBCT showed the possibility of age estimation, and the selected features were useful as imaging biomarkers.


Subject(s)
Cone-Beam Computed Tomography , Mandibular Condyle , Humans , Adolescent , Young Adult , Adult , Pilot Projects , Cone-Beam Computed Tomography/methods , Mandibular Condyle/diagnostic imaging , Machine Learning , Retrospective Studies
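The step of sorting radiomics features by random-forest importance and keeping the top 10 can be sketched as follows. The feature names here are hypothetical placeholders; in practice the importances would come from a fitted model, e.g. scikit-learn's `RandomForestClassifier.feature_importances_`:

```python
def top_k_features(importances, k=10):
    """Feature names sorted by descending importance, truncated to k."""
    ranked = sorted(importances.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, _ in ranked][:k]

# Hypothetical importances for a few radiomics-style features.
importances = {"shape_volume": 0.20, "glcm_contrast": 0.12,
               "firstorder_mean": 0.05, "glrlm_run_entropy": 0.09}
selected = top_k_features(importances, k=2)
```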
5.
Sci Rep ; 12(1): 15402, 2022 09 13.
Article in English | MEDLINE | ID: mdl-36100696

ABSTRACT

This study aimed to develop deep learning models that automatically detect impacted mesiodens on periapical radiographs of primary and mixed dentition using the YOLOv3, RetinaNet, and EfficientDet-D3 algorithms and to compare their performance. Periapical radiographs of 600 pediatric patients (age range, 3-13 years) with mesiodens were used as a training and validation dataset. Deep learning models based on the YOLOv3, RetinaNet, and EfficientDet-D3 algorithms for detecting mesiodens were developed, and each model was trained 300 times using training (540 images) and validation datasets (60 images). The performance of each model was evaluated based on accuracy, sensitivity, and specificity using 120 test images (60 periapical radiographs with mesiodens and 60 periapical radiographs without mesiodens). The accuracy of the YOLOv3, RetinaNet, and EfficientDet-D3 models was 97.5%, 98.3%, and 99.2%, respectively. The sensitivity was 100% for both the YOLOv3 and RetinaNet models and 98.3% for the EfficientDet-D3 model. The specificity was 100%, 96.7%, and 95.0% for the EfficientDet-D3, RetinaNet, and YOLOv3 models, respectively. The proposed models using three deep learning algorithms to detect mesiodens on periapical radiographs showed good performance. The EfficientDet-D3 model showed the highest accuracy for detecting mesiodens on periapical radiographs.


Subject(s)
Deep Learning , Adolescent , Algorithms , Child , Child, Preschool , Humans , Radiography
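Object-detection models such as the three above are commonly scored by counting a prediction as correct when its intersection-over-union (IoU) with the ground-truth box exceeds a threshold; the 0.5 threshold below is a common convention, not a detail stated in the abstract:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

def is_hit(pred, truth, threshold=0.5):
    """Count a predicted box as a detection if IoU meets the threshold."""
    return iou(pred, truth) >= threshold
```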
6.
Sci Rep ; 12(1): 14009, 2022 08 17.
Article in English | MEDLINE | ID: mdl-35978086

ABSTRACT

The detection of the maxillary sinus wall is important in dental fields such as implant surgery, tooth extraction, and the diagnosis of odontogenic disease, and accurate segmentation of the maxillary sinus is a cornerstone for diagnosis and treatment planning. This study proposes a deep learning-based method for fully automatic segmentation of the maxillary sinus, in both clear and hazy states, on cone-beam computed tomography (CBCT) images. A model for segmentation of the maxillary sinuses was developed using U-Net, a convolutional neural network, with a total of 19,350 CBCT images from 90 maxillary sinuses (34 clear sinuses, 56 hazy sinuses). Post-processing to eliminate prediction errors in the U-Net segmentation results increased the accuracy. The average U-Net prediction results were a Dice similarity coefficient (DSC) of 0.9090 ± 0.1921 and a Hausdorff distance (HD) of 2.7013 ± 4.6154. After post-processing, the average results improved to a DSC of 0.9099 ± 0.1914 and an HD of 2.1470 ± 2.2790. The proposed deep learning model with post-processing showed good performance for both clear and hazy maxillary sinus segmentation. This model has the potential to help dental clinicians with maxillary sinus segmentation, yielding equivalent accuracy across a variety of cases.


Subject(s)
Deep Learning , Maxillary Sinus , Cone-Beam Computed Tomography/methods , Image Processing, Computer-Assisted/methods , Maxillary Sinus/diagnostic imaging , Neural Networks, Computer
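The two segmentation metrics reported above can be written out directly: Dice on foreground-pixel sets and the symmetric Hausdorff distance on boundary point sets. A pure-Python illustration, not the study's implementation:

```python
import math

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two sets of foreground pixels."""
    inter = len(mask_a & mask_b)
    return 2 * inter / (len(mask_a) + len(mask_b))

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets."""
    def directed(src, dst):
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(points_a, points_b), directed(points_b, points_a))
```

Dice rewards bulk overlap, while the Hausdorff distance penalizes the single worst boundary error, which is why the post-processing step above improved HD much more than DSC.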
7.
Imaging Sci Dent ; 52(2): 219-224, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35799970

ABSTRACT

Purpose: This study aimed to evaluate the performance of transfer learning in a deep convolutional neural network for classifying implant fixtures. Materials and Methods: Periapical radiographs of implant fixtures obtained using the Superline (Dentium Co. Ltd., Seoul, Korea), TS III (Osstem Implant Co. Ltd., Seoul, Korea), and Bone Level Implant (Institut Straumann AG, Basel, Switzerland) systems were selected from patients who underwent dental implant treatment. The total dataset comprised 355 implant fixtures, each annotated with the name of its system, and was split into a training dataset and a test dataset at a ratio of 8 to 2. YOLOv3 (You Only Look Once version 3, available at https://pjreddie.com/darknet/yolo/), a deep convolutional neural network pretrained with a large image dataset of objects, was used to train the model to classify fixtures in periapical images, in a process called transfer learning. This network was trained with the training dataset for 100, 200, and 300 epochs. Using the test dataset, the performance of the network was evaluated in terms of sensitivity, specificity, and accuracy. Results: When YOLOv3 was trained for 200 epochs, the sensitivity, specificity, accuracy, and confidence score were the highest for all systems, with overall results of 94.4%, 97.9%, 96.7%, and 0.75, respectively. The network showed the best performance in classifying Bone Level Implant fixtures, with 100.0% sensitivity, specificity, and accuracy. Conclusion: Through transfer learning, high performance could be achieved with YOLOv3, even using a small amount of data.

8.
Quant Imaging Med Surg ; 12(3): 1909-1918, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35284273

ABSTRACT

Background: Temporomandibular joint disorder (TMD), a broad category encompassing disc displacement, is a common condition with an increasing prevalence. This study aimed to develop an automated movement tracing algorithm for mouth opening and closing videos, and to quantitatively analyze the relationship between the results obtained using this system and disc position on magnetic resonance imaging (MRI). Methods: Mouth opening and closing videos were obtained with a digital camera from 91 subjects, who also underwent MRI. Before video acquisition, an 8.0-mm-diameter circular sticker was attached to the center of the subject's upper and lower lips. The automated mouth opening tracing system based on computer vision was developed in two parts: (I) automated landmark detection of the upper and lower lips in the acquired videos, and (II) graphical presentation of the tracing results for the detected landmarks, with an automatically calculated graph height (mouth opening length) and width (sideways value). The graph paths were divided into three types: straight, sideways-skewed, and limited-straight line graphs. All traced results were evaluated according to disc position groups determined using MRI. Graph height and width were compared between groups using analysis of variance (SPSS version 25.0; IBM Corp., Armonk, NY, USA). Results: Subjects with a normal disc position predominantly (85.72%) showed straight line graphs. The other two types (sideways-skewed or limited-straight line graphs) were found in 85.0% of the anterior disc displacement with reduction group and in 89.47% of the anterior disc displacement without reduction group, reflecting a statistically significant correlation (χ2=38.113, P<0.001). A statistically significant difference in graph height was found between the normal group and the anterior disc displacement without reduction group (44.90±9.61 and 35.78±10.24 mm, respectively; P<0.05).
Conclusions: The developed mouth opening tracing system was reliable. It presented objective and quantitative information about different trajectories from those associated with a normal disc position in mouth opening and closing movements. This system will be helpful to clinicians when it is difficult to obtain information through MRI.
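The graph height (mouth opening length) and width (sideways value) described above reduce to the vertical and horizontal ranges of the traced lower-lip landmark. A simplified sketch, assuming coordinates already calibrated to millimetres:

```python
def trace_height_width(trace):
    """Vertical range (opening length) and horizontal range (sideways
    excursion) of a lower-lip landmark trace, given as (x, y) points in mm."""
    xs = [x for x, _ in trace]
    ys = [y for _, y in trace]
    return max(ys) - min(ys), max(xs) - min(xs)

# Toy trace: a mostly vertical opening path with slight sideways deviation.
trace = [(0.0, 0.0), (1.2, 20.0), (2.0, 44.0), (1.0, 21.0), (0.1, 0.5)]
height, width = trace_height_width(trace)
```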

9.
Imaging Sci Dent ; 52(4): 393-398, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36605858

ABSTRACT

Purpose: This study proposed a generative adversarial network (GAN) model for T2-weighted image (WI) synthesis from proton density (PD)-WI in a temporomandibular joint (TMJ) magnetic resonance imaging (MRI) protocol. Materials and Methods: From January to November 2019, MRI scans of the TMJ were reviewed and 308 imaging sets were collected. For training, 277 pairs of PD- and T2-WI sagittal TMJ images were used. Transfer learning of the pix2pix GAN model was utilized to generate T2-WI from PD-WI. Model performance was evaluated with the structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) indices for 31 predicted T2-WI (pT2). The disc position was clinically diagnosed as anterior disc displacement with or without reduction, and joint effusion as present or absent. The true T2-WI-based diagnosis was regarded as the gold standard, to which pT2-based diagnoses were compared using Cohen's κ coefficient. Results: The mean SSIM and PSNR values were 0.4781 (±0.0522) and 21.30 (±1.51) dB, respectively. The pT2 protocol showed almost perfect agreement (κ=0.81) with the gold standard for disc position. The number of discordant cases was higher for normal disc position (17%) than for anterior displacement with reduction (2%) or without reduction (10%). The effusion diagnosis also showed almost perfect agreement (κ=0.88), with higher concordance for the presence (85%) than for the absence (77%) of effusion. Conclusion: The application of pT2 images in a TMJ MRI protocol was useful for diagnosis, although the image quality of pT2 was not fully satisfactory. Further research is expected to enhance pT2 quality.
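Cohen's κ, used above to compare pT2-based diagnoses against the true-T2 gold standard, can be computed from paired label sequences; a minimal sketch:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two equal-length label sequences: observed
    agreement corrected for the agreement expected by chance."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected = sum(counts_a[k] * counts_b.get(k, 0) for k in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)
```

By the common Landis-Koch convention, values above 0.81 (like the 0.81 and 0.88 reported above) are read as "almost perfect" agreement.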

10.
Dentomaxillofac Radiol ; 51(4): 20210383, 2022 May 01.
Article in English | MEDLINE | ID: mdl-34826252

ABSTRACT

OBJECTIVES: This study aimed to develop a fully automated human identification method based on a convolutional neural network (CNN) with a large-scale dental panoramic radiograph (DPR) data set. METHODS: In total, 2760 DPRs from 746 subjects who had 2-17 DPRs with various changes in image characteristics due to various dental treatments (tooth extraction, oral surgery, prosthetics, orthodontics, or tooth development) were collected. The test data set included the latest DPR of each subject (746 images) and the other DPRs (2014 images) were used for model training. A modified VGG16 model with two fully connected layers was applied for human identification. The proposed model was evaluated with rank-1, -3, and -5 accuracies, running time, and gradient-weighted class activation mapping (Grad-CAM)-applied images. RESULTS: This model had rank-1, -3, and -5 accuracies of 82.84%, 89.14%, and 92.23%, respectively. All rank-1 accuracy values of the proposed model were above 80% regardless of changes in image characteristics. The average running time to train the proposed model was 60.9 s per epoch, and the prediction time for 746 test DPRs was short (3.2 s/image). The Grad-CAM technique verified that the model automatically identified humans by focusing on identifiable dental information. CONCLUSION: The proposed model showed good performance in fully automatic human identification despite the differing image characteristics of DPRs acquired from the same patients. Our model is expected to assist experts in fast and accurate identification by comparing large numbers of images and proposing identification candidates at high speed.


Subject(s)
Forensic Anthropology , Tooth , Humans , Neural Networks, Computer , Radiography , Radiography, Panoramic
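Rank-k accuracy, the evaluation used above, counts a query as correct when the true subject appears among the model's top-k candidates; the subject IDs below are hypothetical:

```python
def rank_k_accuracy(predictions, truths, k):
    """predictions: per-query lists of candidate IDs sorted best-first;
    truths: the correct ID for each query."""
    hits = sum(truth in ranked[:k] for ranked, truth in zip(predictions, truths))
    return hits / len(truths)

# Three hypothetical queries with ranked candidate lists.
ranked = [["s1", "s2", "s3"], ["s9", "s4", "s2"], ["s7", "s5", "s6"]]
truths = ["s1", "s2", "s8"]
```

Rank-k accuracy is monotonically non-decreasing in k, which is why the reported values rise from 82.84% (rank-1) to 92.23% (rank-5).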
11.
Sci Rep ; 11(1): 23061, 2021 11 29.
Article in English | MEDLINE | ID: mdl-34845320

ABSTRACT

This study aimed to develop an artificial intelligence model that can detect mesiodens on panoramic radiographs of various dentition groups. Panoramic radiographs of 612 patients were used for training. A convolutional neural network (CNN) model based on YOLOv3 for detecting mesiodens was developed. The model performance according to three dentition groups (primary, mixed, and permanent dentition) was evaluated, both internally (130 images) and externally (118 images), using a multi-center dataset. To investigate the effect of image preprocessing, contrast-limited adaptive histogram equalization (CLAHE) was applied to the original images. The accuracy for the internal test dataset was 96.2% and that for the external test dataset was 89.8% on the original images. For the primary, mixed, and permanent dentition, the accuracy for the internal test dataset was 96.7%, 97.5%, and 93.3%, respectively, and the accuracy for the external test dataset was 86.7%, 95.3%, and 86.7%, respectively. The CLAHE images yielded less accurate results than the original images in both test datasets. The proposed model showed good performance on the internal and external test datasets and has the potential for clinical use to detect mesiodens on panoramic radiographs of all dentition types. The CLAHE preprocessing had a negligible effect on model performance.
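CLAHE extends plain histogram equalization with per-tile histograms and a clip limit; the global version below sketches the underlying remapping for an 8-bit image given as a flat pixel list. This is a simplification, not the study's preprocessing code, which would typically use a library routine such as OpenCV's `createCLAHE`:

```python
def equalize(pixels, levels=256):
    """Global histogram equalization for 8-bit grayscale pixel values.
    CLAHE applies the same remapping per tile, with histogram clipping."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for count in hist:               # cumulative distribution function
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    if n == cdf_min:                 # constant image: nothing to stretch
        return list(pixels)
    lut = [round((c - cdf_min) / (n - cdf_min) * (levels - 1)) for c in cdf]
    return [lut[p] for p in pixels]
```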

12.
Imaging Sci Dent ; 51(3): 299-306, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34621657

ABSTRACT

PURPOSE: This study aimed to propose a fully automatic landmark identification model based on a deep learning algorithm using real clinical data and to verify its accuracy considering inter-examiner variability. MATERIALS AND METHODS: In total, 950 lateral cephalometric images from Yonsei Dental Hospital were used. Two calibrated examiners manually identified the 13 most important landmarks to set as references. The proposed deep learning model has a 2-step structure (a region-of-interest machine and a detection machine), each step consisting of 8 convolution layers, 5 pooling layers, and 2 fully connected layers. The distance errors of detection between the 2 examiners were used as a clinically acceptable range for performance evaluation. RESULTS: The 13 landmarks were automatically detected using the proposed model. Inter-examiner agreement for all landmarks indicated excellent reliability based on the 95% confidence interval. The average clinically acceptable range for all 13 landmarks was 1.24 mm. The mean radial error between the reference values assigned by 1 expert and the proposed model was 1.84 mm, exhibiting a successful detection rate of 36.1%. The A-point, the incisal tips of the maxillary and mandibular incisors, and the ANS showed lower mean radial error than the calibrated expert variability. CONCLUSION: This experiment demonstrated that the proposed deep learning model can perform fully automatic identification of cephalometric landmarks and achieve better results than examiners for some landmarks. It is meaningful to consider between-examiner variability for clinical applicability when evaluating the performance of deep learning methods in cephalometric landmark identification.
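The mean radial error and successful detection rate above form the standard cephalometric evaluation pair: the mean Euclidean distance between predicted and reference landmarks, and the fraction of predictions falling within a tolerance. The 2.0 mm tolerance below is a common convention, not a value stated in the abstract (the study used its own clinically acceptable range, averaging 1.24 mm):

```python
import math

def mean_radial_error(pred, ref):
    """Mean Euclidean distance (mm) between paired landmark coordinates."""
    return sum(math.dist(p, r) for p, r in zip(pred, ref)) / len(pred)

def success_rate(pred, ref, tol_mm=2.0):
    """Fraction of predicted landmarks within tol_mm of the reference."""
    hits = sum(math.dist(p, r) <= tol_mm for p, r in zip(pred, ref))
    return hits / len(pred)
```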
