1.
Angle Orthod ; 94(2): 207-215, 2024 Mar 01.
Article in English | MEDLINE | ID: mdl-37913813

ABSTRACT

OBJECTIVES: To compare facial growth prediction models based on partial least squares (PLS) and artificial intelligence (AI). MATERIALS AND METHODS: Serial longitudinal lateral cephalograms were collected from 410 patients who had not undergone orthodontic treatment but had serial cephalograms taken between January 2002 and December 2022. On every image, 46 skeletal and 32 soft-tissue landmarks were identified manually. Growth prediction models were constructed using multivariate PLS regression and a deep learning method based on the TabNet deep neural network, each incorporating 161 predictor and 156 response variables. The prediction accuracy of the two methods was compared. RESULTS: On average, AI showed 2.11 mm less prediction error than PLS. Among the 78 landmarks, AI was more accurate for 63, whereas PLS was more accurate for nine, including cranial base landmarks; the remaining six showed no statistically significant difference between the two methods. Overall, soft-tissue landmarks, landmarks in the mandible, and growth in the vertical direction showed greater prediction errors than hard-tissue landmarks, landmarks in the maxilla, and growth in the horizontal direction, respectively. CONCLUSIONS: Both PLS and AI appear to be valuable tools for growth prediction. PLS accurately predicted landmarks with low variability in the cranial base. In general, however, AI outperformed PLS, particularly for landmarks in the maxilla and mandible. Applying AI for growth prediction may be more advantageous when uncertainty is considerable.


Subject(s)
Artificial Intelligence , Face , Humans , Least-Squares Analysis , Face/diagnostic imaging , Mandible , Maxilla/diagnostic imaging
2.
Imaging Sci Dent ; 52(4): 351-357, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36605863

ABSTRACT

Purpose: Convolutional neural networks (CNNs) have rapidly emerged as one of the most promising artificial intelligence methods in medical and dental research. CNNs can provide an effective diagnostic methodology for detecting early-stage disease. This study therefore evaluated the performance of a deep CNN algorithm for apical lesion segmentation on panoramic radiographs. Materials and Methods: A total of 1,000 panoramic images showing apical lesions were separated into training (n=800, 80%), validation (n=100, 10%), and test (n=100, 10%) datasets. Performance in identifying apical lesions was evaluated by calculating precision, recall, and the F1-score. Results: Of the 180 apical lesions in the test set, 147 were segmented from panoramic radiographs at an intersection over union (IoU) threshold of 0.3. The F1-scores were 0.828, 0.815, and 0.742 at IoU thresholds of 0.3, 0.4, and 0.5, respectively. Conclusion: This study demonstrated the potential utility of a deep learning-guided approach for segmenting apical lesions. The deep CNN algorithm, based on U-Net, showed considerably high performance in detecting apical lesions.
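The IoU-thresholded evaluation described above can be illustrated with a minimal sketch (an assumed, generic implementation, not the study's code): a predicted lesion mask counts as a true positive when its overlap with a ground-truth mask meets the IoU threshold, and the F1-score combines the resulting precision and recall.

```python
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection over union of two boolean segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union else 0.0

def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy example: two 4x4 lesion masks that partially overlap.
gt = np.zeros((10, 10), dtype=bool)
gt[2:6, 2:6] = True        # 16-pixel ground-truth lesion
pred = np.zeros((10, 10), dtype=bool)
pred[3:7, 3:7] = True      # 16-pixel predicted lesion

# intersection = 3x3 = 9 px, union = 16 + 16 - 9 = 23 px
print(round(iou(gt, pred), 3))  # -> 0.391

# At an IoU threshold of 0.3 this prediction counts as a true
# positive; at a threshold of 0.5 it would count as a miss, which is
# why the reported F1 drops as the threshold rises.
```

Raising the IoU threshold makes the match criterion stricter, converting borderline detections into false positives/negatives, consistent with the F1 decline from 0.828 (IoU 0.3) to 0.742 (IoU 0.5) reported above.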
