1.
J Esthet Restor Dent ; 2024 May 17.
Article in English | MEDLINE | ID: mdl-38757761

ABSTRACT

OBJECTIVES: To provide an overview of current artificial intelligence (AI)-based applications for assisting digital data acquisition and implant planning procedures. OVERVIEW: A review of the main AI-based applications integrated into digital data acquisition technologies (facial scanners (FS), intraoral scanners (IOSs), cone beam computed tomography (CBCT) devices, and jaw trackers) and computer-aided static implant planning programs is provided. CONCLUSIONS: The main AI-based application integrated into some FS programs is the automatic alignment of facial and intraoral scans for virtual patient integration. The AI-based applications integrated into IOS programs include scan cleaning, scanning assistance, and automatic alignment of the implant scan body with its corresponding CAD object while scanning. The AI-based applications most frequently integrated into the programs of CBCT units involve positioning assistance, noise and artifact reduction, identification and segmentation of structures, airway analysis, and alignment of facial, intraoral, and CBCT scans. Some computer-aided static implant planning programs incorporate AI for managing the patient's digital files, identifying, labeling, and segmenting anatomical structures, tracing the mandibular nerve, placing implants automatically, and designing the surgical implant guide.
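
The review does not detail how these alignment functions are implemented; as a rough illustration of the underlying idea, the sketch below computes a rigid (rotation plus translation) fit between two sets of corresponding landmarks using the Kabsch/SVD method. The landmark coordinates and variable names are hypothetical.

import numpy as np

def rigid_align(source, target):
    """Best-fit rotation and translation mapping source points onto target points (Kabsch)."""
    src_c = source - source.mean(axis=0)    # center both point sets
    tgt_c = target - target.mean(axis=0)
    H = src_c.T @ tgt_c                     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t

# Hypothetical corresponding landmarks picked on an intraoral scan and a facial scan (mm).
intraoral = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 8.0, 0.0], [2.0, 3.0, 5.0]])
facial = (intraoral @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]).T) + np.array([4.0, -2.0, 1.5])
R, t = rigid_align(intraoral, facial)
aligned = intraoral @ R.T + t               # should coincide with the facial-scan landmarks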

2.
J Oral Rehabil ; 2024 May 17.
Article in English | MEDLINE | ID: mdl-38757865

ABSTRACT

BACKGROUND AND OBJECTIVE: The accurate diagnosis of temporomandibular disorders (TMDs) continues to be a challenge, despite the existence of internationally agreed-upon diagnostic criteria. The purpose of this study was to review applications of deep learning models in the diagnosis of temporomandibular joint (TMJ) arthropathies. MATERIALS AND METHODS: An electronic search was conducted on PubMed, Scopus, Embase, Google Scholar, IEEE, arXiv, and medRxiv up to June 2023. Studies that reported the efficacy (outcome) of prediction, object detection, or classification of TMJ arthropathies by deep learning models (intervention) in humans with joint-based or arthrogenous TMDs (population), in comparison with a reference standard (comparison), were included. To evaluate the risk of bias, included studies were critically appraised using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Diagnostic odds ratios (DORs) were calculated. Forest and funnel plots were created using STATA 17 and MetaDiSc. RESULTS: Full-text review was performed on 46 of the 1056 identified studies; 21 studies met the eligibility criteria and were included in the systematic review. Four studies were graded as having a low risk of bias for all QUADAS-2 domains. The accuracy of all included studies ranged from 74% to 100%. Sensitivity ranged from 54% to 100%, specificity from 85% to 100%, Dice coefficient from 85% to 98%, and AUC from 77% to 99%. The data were then pooled based on the sensitivity, specificity, and dataset size of the seven studies that qualified for meta-analysis. The pooled sensitivity was 95% (85%-99%), specificity 92% (86%-96%), and AUC 97% (96%-98%). The pooled DOR was 232 (74-729). According to Deeks' funnel plot and statistical evaluation (p = .49), publication bias was not present. CONCLUSION: Deep learning models can detect TMJ arthropathies with high sensitivity and specificity. Clinicians, especially those not specialized in orofacial pain, may benefit from this methodology for assessing TMDs, as it provides a rigorous and evidence-based framework, objective measurements, and advanced analysis techniques, ultimately enhancing diagnostic accuracy.
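
The diagnostic odds ratio used above summarizes a study-level 2x2 table. As an illustrative sketch (the counts below are hypothetical, not taken from the review), a DOR and its 95% confidence interval can be computed as follows:

import numpy as np

def diagnostic_odds_ratio(tp, fp, fn, tn, correction=0.5):
    """DOR = (TP*TN)/(FP*FN), with a continuity correction if any cell is zero."""
    if 0 in (tp, fp, fn, tn):
        tp, fp, fn, tn = (x + correction for x in (tp, fp, fn, tn))
    dor = (tp * tn) / (fp * fn)
    se_log = np.sqrt(1/tp + 1/fp + 1/fn + 1/tn)   # standard error on the log scale
    lo, hi = np.exp(np.log(dor) - 1.96 * se_log), np.exp(np.log(dor) + 1.96 * se_log)
    return dor, (lo, hi)

# Hypothetical 2x2 counts for one TMJ-arthropathy study
dor, ci = diagnostic_odds_ratio(tp=90, fp=8, fn=5, tn=92)
print(f"DOR = {dor:.0f}, 95% CI {ci[0]:.0f}-{ci[1]:.0f}")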

3.
Saudi Dent J ; 36(5): 765-769, 2024 May.
Article in English | MEDLINE | ID: mdl-38766280

ABSTRACT

Background: The objective of this study was to compare the cytotoxicity of TDV and Rebase II hard denture liners on human gingival fibroblasts, addressing issues associated with incomplete polymerization and free monomers that affect material properties. Methods: Seventy-two specimens (24 each of TDV, Rebase II, and controls) were prepared under aseptic conditions according to the manufacturers' instructions. Cytotoxicity was determined using the MTT test, with methyl tetrazolium salt added to the cell culture medium. After incubation, mitochondrial activity was measured using Multiscan spectrophotometry (570 nm), and the results were evaluated with a two-way ANOVA and a post-hoc Tukey test. Results: There were significant differences in cell viability between the groups after 24 hours (P < 0.001), with TDV showing significantly higher viability than Rebase II (P = 0.001). The difference between Rebase II and TDV was not significant at 48 and 96 hours (P > 0.131). Conclusion: Because monomer release is greatest in the early hours of incubation, cytotoxicity decreased with increasing incubation time.
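
As a hedged illustration of the statistical workflow described above (two-way ANOVA followed by a post-hoc Tukey test), the Python sketch below analyzes simulated viability data; the material names match the abstract, but all values, group sizes, and effect sizes are hypothetical.

import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
rows = []
# Hypothetical MTT-derived viability (%) for each material x incubation time combination
for material, base in [("TDV", 85), ("RebaseII", 70), ("Control", 100)]:
    for hours in (24, 48, 96):
        for _ in range(8):
            rows.append({"material": material, "hours": hours,
                         "viability": base + 5 * (hours > 24) + rng.normal(0, 5)})  # synthetic time effect
df = pd.DataFrame(rows)

# Two-way ANOVA: material, incubation time, and their interaction
model = ols("viability ~ C(material) * C(hours)", data=df).fit()
print(anova_lm(model, typ=2))

# Post-hoc Tukey HSD on the material factor
print(pairwise_tukeyhsd(df["viability"], df["material"]))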

4.
J Prosthodont ; 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38655727

ABSTRACT

PURPOSE: Smile design software increasingly relies on artificial intelligence (AI). However, using AI for smile design raises numerous technical and ethical concerns. This study aimed to evaluate these ethical issues. METHODS: An international consortium of experts specialized in AI, dentistry, and smile design was engaged to enumerate and assess the ethical challenges raised by the use of AI for smile design. An e-Delphi protocol was used to seek the agreement of the ITU-WHO group on well-established ethical principles regarding the use of AI (wellness, respect for autonomy, privacy protection, solidarity, governance, equity, diversity, expertise/prudence, accountability/responsibility, sustainability, and transparency). Each principle included examples of ethical challenges that users might encounter when using AI for smile design. RESULTS: In the first round of the e-Delphi exercise, participants agreed that seven items should be considered in smile design (diversity, transparency, wellness, privacy protection, prudence, law and governance, and sustainable development), whereas the remaining four items (equity, accountability and responsibility, solidarity, and respect for autonomy) were rejected and had to be reformulated. After a second round, participants agreed on all items to be considered when using AI for smile design. CONCLUSIONS: AI development and deployment for smile design should abide by the ethical principles of wellness, respect for autonomy, privacy protection, solidarity, governance, equity, diversity, expertise/prudence, accountability/responsibility, sustainability, and transparency.
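
A minimal sketch of how round-wise e-Delphi agreement might be tallied is shown below; the panel size, items, votes, and the 75% consensus threshold are all hypothetical and are not taken from the study.

import pandas as pd

# Hypothetical round-one votes (1 = agree, 0 = reject) from 10 panelists on four items
votes = pd.DataFrame({
    "diversity":    [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
    "transparency": [1, 1, 1, 1, 1, 1, 1, 1, 0, 1],
    "equity":       [1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    "solidarity":   [0, 1, 0, 1, 0, 1, 0, 0, 1, 0],
})
CONSENSUS = 0.75                      # hypothetical agreement threshold
agreement = votes.mean()              # proportion of panelists agreeing per item
accepted = agreement[agreement >= CONSENSUS].index.tolist()
reformulate = agreement[agreement < CONSENSUS].index.tolist()
print("accepted:", accepted, "| to reformulate in round two:", reformulate)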

5.
Article in English | MEDLINE | ID: mdl-38553304

ABSTRACT

OBJECTIVES: In this study, we assessed the responses of 6 different artificial intelligence (AI) chatbots (Bing, GPT-3.5, GPT-4, Google Bard, Claude, Sage) to controversial and difficult questions in oral pathology, oral medicine, and oral radiology. STUDY DESIGN: The chatbots' answers were evaluated by board-certified specialists using a modified version of the global quality score on a 5-point Likert scale. The quality and validity of the chatbots' citations were also evaluated. RESULTS: Claude had the highest mean score (4.341 ± 0.582) for oral pathology and medicine, and Bing the lowest (3.447 ± 0.566). In oral radiology, GPT-4 had the highest mean score (3.621 ± 1.009) and Bing the lowest (2.379 ± 0.978). GPT-4 achieved the highest mean score across all disciplines (4.066 ± 0.825). Of the 349 citations generated by the chatbots, 82 (23.50%) were fabricated. CONCLUSIONS: GPT-4 was the chatbot that provided the highest-quality information on controversial topics across the dental disciplines evaluated. Although most chatbots performed well, developers of AI medical chatbots should incorporate scientific citation authenticators to validate generated citations, given the relatively high number of fabricated citations.


Subject(s)
Artificial Intelligence , Oral Medicine , Humans , Radiology , Pathology, Oral
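
As an illustrative sketch of the scoring workflow above, the snippet below summarizes hypothetical 5-point Likert ratings per chatbot and reproduces the fabricated-citation share reported in the abstract (82 of 349); the individual ratings are invented for illustration.

import pandas as pd

# Hypothetical 5-point global-quality scores given by reviewers to two chatbots
scores = pd.DataFrame({
    "chatbot": ["Claude"] * 5 + ["Bing"] * 5,
    "score":   [5, 4, 4, 5, 4,   3, 4, 3, 3, 4],
})
print(scores.groupby("chatbot")["score"].agg(["mean", "std"]))

# Share of fabricated citations, as reported in the abstract above
fake, total = 82, 349
print(f"fabricated citations: {fake / total:.1%}")
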
6.
J Dent ; 144: 104938, 2024 May.
Article in English | MEDLINE | ID: mdl-38499280

ABSTRACT

OBJECTIVES: Artificial intelligence has applications such as large language models (LLMs), which simulate human-like conversations. The potential of LLMs in healthcare has not been fully evaluated. This pilot study assessed the accuracy and consistency of chatbots and clinicians in answering common questions in pediatric dentistry. METHODS: Two expert pediatric dentists developed thirty true-or-false questions covering different aspects of pediatric dentistry. Publicly accessible chatbots (Google Bard, ChatGPT 4, ChatGPT 3.5, Llama, Sage, Claude 2 100k, Claude-instant, Claude-instant-100k, and Google Palm) were employed to answer the questions in 3 independent new conversations. Three groups of clinicians (general dentists, pediatric specialists, and students; n = 20/group) also answered. Responses were graded by two pediatric dentistry faculty members, along with a third independent pediatric dentist. Resulting accuracies (percentage of correct responses) were compared using analysis of variance (ANOVA), and post-hoc pairwise group comparisons were corrected using Tukey's HSD method. Cronbach's alpha was calculated to determine consistency. RESULTS: Pediatric dentists were significantly more accurate (mean ± SD: 96.67% ± 4.3%) than the other clinicians and the chatbots (p < 0.001). General dentists (88.0% ± 6.1%) also demonstrated significantly higher accuracy than the chatbots (p < 0.001), followed by students (80.8% ± 6.9%). ChatGPT showed the highest accuracy among the chatbots (78% ± 3%). All chatbots except ChatGPT 3.5 showed acceptable consistency (Cronbach's alpha > 0.7). CLINICAL SIGNIFICANCE: Based on this pilot study, chatbots may be valuable adjuncts for educational purposes and for distributing information to patients. However, they are not yet ready to serve as substitutes for human clinicians in diagnostic decision-making. CONCLUSION: In this pilot study, chatbots showed lower accuracy than dentists and may not yet be recommended for clinical pediatric dentistry.


Subject(s)
Dentists , Pediatric Dentistry , Humans , Pilot Projects , Dentists/psychology , Artificial Intelligence , Communication , Surveys and Questionnaires , Child
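
Cronbach's alpha, used above to quantify consistency across the three repeated conversations, can be computed directly; in the hedged sketch below, the grading matrix is hypothetical (rows are questions, columns are repeated conversations treated as items).

import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (questions x repetitions) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                           # number of repeated conversations
    item_vars = scores.var(axis=0, ddof=1).sum()  # sum of per-repetition variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of the summed scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical grades (1 = correct, 0 = wrong) for 10 questions over 3 conversations
grades = np.array([[1, 1, 1], [1, 1, 0], [0, 0, 0], [1, 1, 1], [1, 0, 1],
                   [1, 1, 1], [0, 0, 0], [1, 1, 1], [1, 1, 1], [0, 0, 1]])
print(f"Cronbach's alpha = {cronbach_alpha(grades):.2f}")
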
7.
Pediatr Dent ; 46(1): 27-35, 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38449036

ABSTRACT

Purpose: To systematically evaluate artificial intelligence (AI) applications for diagnostic and treatment planning in pediatric dentistry. Methods: PubMed®, EMBASE®, Scopus, Web of Science™, IEEE, medRxiv, arXiv, and Google Scholar were searched using specific search queries. The Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) checklist was used to assess the risk of bias of the included studies. Results: Based on the initial screening, 33 eligible studies were included (from 3,542 identified). Eleven studies appeared to have a low risk of bias across all QUADAS-2 domains. Most applications focused on early childhood caries diagnosis and prediction, tooth identification, oral health evaluation, and supernumerary tooth identification. Six studies evaluated AI tools for identifying mesiodens or supernumerary teeth on radiographs, four for identifying and/or numbering primary teeth, seven for detecting caries on radiographs, and 12 for predicting early childhood caries. For these four tasks, the reported accuracy of AI varied from 60 percent to 99 percent, sensitivity from 20 percent to 100 percent, specificity from 49 percent to 100 percent, F1-score from 60 percent to 97 percent, and area under the curve from 87 percent to 100 percent. Conclusions: The overall body of evidence regarding artificial intelligence applications in pediatric dentistry does not allow for firm conclusions. For a wide range of applications, AI shows promising accuracy. Future studies should compare AI against the standard of care and employ a set of standardized outcomes and metrics to allow comparison across studies.


Subject(s)
Artificial Intelligence , Pediatric Dentistry , Child , Child, Preschool , Humans , Dental Caries/diagnostic imaging , Dental Caries/therapy , Oral Health , Tooth, Supernumerary
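
The accuracy, sensitivity, specificity, F1-score, and area-under-the-curve figures reported above all derive from confusion-matrix counts; the sketch below shows the standard formulas on hypothetical counts (AUC additionally requires ranked prediction scores and is omitted here).

def binary_metrics(tp, fp, fn, tn):
    """Common diagnostic-accuracy metrics from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)                  # also called recall
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return dict(sensitivity=sensitivity, specificity=specificity,
                precision=precision, accuracy=accuracy, f1=f1)

# Hypothetical counts for an early-childhood-caries prediction model
print(binary_metrics(tp=45, fp=10, fn=5, tn=140))
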
8.
Dentomaxillofac Radiol ; 53(1): 5-21, 2024 Jan 11.
Article in English | MEDLINE | ID: mdl-38183164

ABSTRACT

OBJECTIVES: Improved tools based on deep learning can be used to accurately number and identify teeth. This study aims to review the use of deep learning for tooth numbering and identification. METHODS: An electronic search was performed through October 2023 on PubMed, Scopus, Cochrane, Google Scholar, IEEE, arXiv, and medRxiv. Studies that used deep learning models for segmentation, object detection, or classification tasks to identify and number teeth on human dental radiographs were included. For risk of bias assessment, included studies were critically appraised using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. MetaDiSc and STATA 17 (StataCorp LP, College Station, TX, USA) were used to generate the meta-analysis plots, and pooled diagnostic odds ratios (DORs) were calculated. RESULTS: The initial search yielded 1618 studies, of which 29 were eligible based on the inclusion criteria. Five studies were found to have a low risk of bias across all QUADAS-2 domains. Deep learning was reported to achieve an accuracy of 81.8%-99% in tooth identification and numbering and a precision of 84.5%-99.94%. Furthermore, sensitivity was reported as 82.7%-98%, and F1-scores ranged from 87% to 98%. Sensitivity ranged from 75.5% to 98% and specificity from 79.9% to 99%. Only 6 studies found the deep learning model to be less than 90% accurate. For the pooled data set, the average DOR was 1612, sensitivity 89%, specificity 99%, and area under the curve 96%. CONCLUSION: Deep learning models can successfully detect, identify, and number teeth on dental radiographs. Deep learning-powered tooth numbering systems can enhance complex automated processes, such as accurately reporting which teeth have caries, thus aiding clinicians in making informed decisions during clinical practice.


Subject(s)
Deep Learning , Dental Caries , Tooth , Humans , Radiography, Dental , Tooth/diagnostic imaging
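
Tooth numbering systems of the kind reviewed above typically combine a quadrant prediction with a tooth-position prediction into a standard notation; as a hedged sketch (the mapping from model outputs is assumed, not taken from any included study), FDI two-digit numbers can be derived as follows:

def fdi_number(quadrant, position, primary=False):
    """FDI two-digit tooth number: quadrant digit (1-4, or 5-8 for primary teeth)
    followed by the tooth position counted from the midline."""
    if quadrant not in (1, 2, 3, 4):
        raise ValueError("quadrant must be 1-4")
    max_pos = 5 if primary else 8
    if not 1 <= position <= max_pos:
        raise ValueError(f"position must be 1-{max_pos}")
    return (quadrant + 4 if primary else quadrant) * 10 + position

# Mapping a detector's (quadrant, position-from-midline) output to FDI numbers (illustrative only)
print(fdi_number(1, 6))                 # 16: permanent upper-right first molar
print(fdi_number(3, 4, primary=True))   # 74: primary lower-left first molar
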
9.
Clin Oral Investig ; 28(1): 88, 2024 Jan 13.
Article in English | MEDLINE | ID: mdl-38217733

ABSTRACT

OBJECTIVE: This study aimed to review and synthesize studies using artificial intelligence (AI) for classifying, detecting, or segmenting oral mucosal lesions on photographs. MATERIALS AND METHODS: Inclusion criteria were (1) studies employing AI to (2) classify, detect, or segment oral mucosal lesions (3) on oral photographs of human subjects. Included studies were assessed for risk of bias using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. A PubMed, Scopus, Embase, Web of Science, IEEE, arXiv, medRxiv, and grey literature (Google Scholar) search was conducted up to June 2023, without language limitation. RESULTS: After the initial search, 36 eligible studies (from 8734 identified records) were included. Based on QUADAS-2, only 7% of studies were at low risk of bias for all domains. Studies employed different AI models and reported a wide range of outcomes and metrics. The accuracy of AI for detecting oral mucosal lesions ranged from 74% to 100%, while that of clinicians unaided by AI ranged from 61% to 98%. The pooled diagnostic odds ratio for studies that evaluated AI for diagnosing or discriminating potentially malignant lesions was 155 (95% confidence interval 23-1019), while that for cancerous lesions was 114 (59-221). CONCLUSIONS: AI may assist in oral mucosal lesion screening, although the expected accuracy gains and further health benefits remain unclear so far. CLINICAL RELEVANCE: Artificial intelligence may assist oral mucosal lesion screening and foster more targeted testing and referral, for example in the hands of non-specialist providers. So far, it remains unclear whether accuracy gains over specialist assessment can be realized.


Subject(s)
Artificial Intelligence , Mouth Mucosa , Humans , Referral and Consultation
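
The pooled diagnostic odds ratios reported above are usually obtained with bivariate or HSROC models; as a simplified, hedged illustration only, the sketch below pools study-level log DORs by fixed-effect inverse-variance weighting, using hypothetical 2x2 tables.

import numpy as np

def pool_log_dor(studies):
    """Fixed-effect inverse-variance pooling of study-level diagnostic odds ratios.
    Each study is a (tp, fp, fn, tn) tuple; real reviews usually fit bivariate models instead."""
    log_dors, weights = [], []
    for tp, fp, fn, tn in studies:
        tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))   # continuity correction
        log_dors.append(np.log(tp * tn / (fp * fn)))
        weights.append(1.0 / (1/tp + 1/fp + 1/fn + 1/tn))      # inverse variance of log DOR
    log_dors, weights = np.array(log_dors), np.array(weights)
    pooled = np.average(log_dors, weights=weights)
    se = np.sqrt(1.0 / weights.sum())
    return np.exp(pooled), np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)

# Hypothetical 2x2 tables from three lesion-detection studies
dor, lo, hi = pool_log_dor([(40, 5, 6, 60), (75, 10, 4, 88), (30, 3, 7, 50)])
print(f"pooled DOR {dor:.0f} (95% CI {lo:.0f}-{hi:.0f})")
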
10.
Oral Radiol ; 40(1): 1-20, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37855976

ABSTRACT

PURPOSE: This study aims to review deep learning applications for detecting head and neck cancer (HNC) using magnetic resonance imaging (MRI) and radiographic data. METHODS: A search of PubMed, Scopus, Embase, Google Scholar, IEEE, and arXiv was carried out through January 2023. The inclusion criteria were studies applying segmentation, object detection, or classification deep learning models for head and neck cancers to head and neck medical images (computed tomography (CT), positron emission tomography (PET), MRI, planar scans, and panoramic X-rays) of human subjects. The risk of bias was rated with the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. For the meta-analysis, the diagnostic odds ratio (DOR) was calculated. Deeks' funnel plot was used to assess publication bias. The MIDAS and Metandi packages were used to analyze diagnostic test accuracy in STATA. RESULTS: Of 1967 studies, 32 were found eligible after the search and screening procedures. According to the QUADAS-2 tool, 7 included studies had a low risk of bias for all domains. Across the included studies, accuracy varied from 82.6% to 100%, sensitivity from 74% to 99.68%, and specificity from 66.6% to 90.1%. Fourteen studies that provided sufficient data were included in the meta-analysis. The pooled sensitivity was 90% (95% CI 0.82-0.94) and the pooled specificity 92% (95% CI 0.87-0.96). The pooled DOR was 103 (27-251). Publication bias was not detected (p = 0.75). CONCLUSION: Deep learning models can enhance head and neck cancer screening processes with high sensitivity and specificity.


Subject(s)
Deep Learning , Head and Neck Neoplasms , Humans , Sensitivity and Specificity , Head and Neck Neoplasms/diagnostic imaging , Magnetic Resonance Imaging/methods , Positron-Emission Tomography/methods
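
Deeks' funnel-plot asymmetry test, used above to assess publication bias, is commonly implemented as a weighted regression of the log DOR on the inverse square root of the effective sample size; the sketch below is a simplified illustration with hypothetical 2x2 tables, not a reproduction of the MIDAS/Metandi analysis.

import numpy as np
import statsmodels.api as sm

def deeks_test(studies):
    """Deeks' funnel-plot asymmetry test (sketch): weighted regression of log DOR on
    1/sqrt(effective sample size); a slope p-value well above 0.10 is usually read as
    showing no evidence of publication bias."""
    y, x, w = [], [], []
    for tp, fp, fn, tn in studies:
        tp, fp, fn, tn = (v + 0.5 for v in (tp, fp, fn, tn))
        n_dis, n_nondis = tp + fn, fp + tn
        ess = 4 * n_dis * n_nondis / (n_dis + n_nondis)   # effective sample size
        y.append(np.log(tp * tn / (fp * fn)))
        x.append(1 / np.sqrt(ess))
        w.append(ess)
    fit = sm.WLS(np.array(y), sm.add_constant(np.array(x)), weights=np.array(w)).fit()
    return fit.params[1], fit.pvalues[1]                  # slope and its p-value

# Hypothetical 2x2 tables from four included studies
slope, p = deeks_test([(40, 5, 6, 60), (75, 10, 4, 88), (30, 3, 7, 50), (55, 6, 9, 70)])
print(f"slope = {slope:.2f}, p = {p:.2f}")
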
11.
J Prosthet Dent ; 2023 Jul 18.
Article in English | MEDLINE | ID: mdl-37474386

ABSTRACT

STATEMENT OF PROBLEM: Removable partial dentures (RPDs) can be fabricated with conventional casting procedures or computer-aided design and computer-aided manufacturing (CAD-CAM) technologies; however, the differences in manufacturing accuracy and internal discrepancy among these methods remain uncertain. PURPOSE: The purpose of this systematic review and meta-analysis was to assess the influence of the fabrication method (casting, milling, or additive manufacturing) on the accuracy and internal discrepancy of RPDs. MATERIAL AND METHODS: An electronic search of the literature was performed in 6 databases: PubMed/Medline, Embase, Web of Science, Scopus, Cochrane, and Google Scholar. Studies that assessed the accuracy and internal discrepancy of RPDs fabricated by casting, milling, or additive manufacturing were included. Studies reporting mean gaps and standard deviations were included in the meta-analysis. Publication bias was assessed using funnel plot asymmetry and the Egger test. RESULTS: A total of 25 articles were included. The internal discrepancy ranged from 14.4 to 511 µm for additively manufactured RPDs and from 7 to 419 µm for conventionally fabricated RPDs. For the milling method, horizontal discrepancies of 20 to 66 µm and vertical discrepancies of 17 to 59 µm were reported. The Egger tests indicated no publication bias among the studies included in the meta-analysis. Four included studies reported gaps exceeding the clinically acceptable limit (311 µm) for the CAD-CAM method. Independently of the manufacturing method, the greatest internal discrepancies were observed under the major connectors. RPDs fabricated using CAD-CAM techniques required fewer clinical appointments, their design was easier to reproduce, and laboratory time was shorter than with conventional procedures. However, the reviewed studies described several disadvantages, including limited RPD design programs, difficulties in defining the occlusal plane, expensive materials, and increased laboratory cost. CONCLUSIONS: Additive and subtractive technologies provide accurate methods for RPD fabrication; however, challenges such as limited design software programs have not yet been overcome, and casting is still needed when the framework pattern is milled or printed.
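
The Egger test used above checks funnel-plot asymmetry by regressing the standardized effect on precision and testing the intercept; the sketch below illustrates this with hypothetical gap means and standard errors, not data from the included studies.

import numpy as np
import statsmodels.api as sm

def egger_test(effects, ses):
    """Egger's regression test (sketch): regress the standardized effect (effect/SE)
    on precision (1/SE); an intercept far from zero suggests funnel-plot asymmetry."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    snd, precision = effects / ses, 1.0 / ses
    fit = sm.OLS(snd, sm.add_constant(precision)).fit()
    return fit.params[0], fit.pvalues[0]                  # intercept and its p-value

# Hypothetical mean internal gaps (µm) and their standard errors from five studies
intercept, p = egger_test(effects=[103, 120, 95, 150, 88], ses=[6, 9, 7, 12, 5])
print(f"intercept = {intercept:.2f}, p = {p:.3f}")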

12.
J Dent ; 135: 104593, 2023 08.
Article in English | MEDLINE | ID: mdl-37355089

ABSTRACT

OBJECTIVE: Artificial intelligence (AI) refers to the ability of machines to perform cognitive and intellectual human tasks. In dentistry, AI offers the potential to enhance diagnostic accuracy, improve patient outcomes, and streamline workflows. The present study provides a framework and a checklist to evaluate AI applications in dentistry from this perspective. METHODS: Drawing on existing guidance documents, an initial draft of the checklist and an explanatory paper were derived and discussed among the group's members. RESULTS: The checklist was agreed upon in an anonymous voting process by 29 members of the Topic Group Dental Diagnostics and Digital Dentistry of the ITU/WHO Focus Group AI on Health. Overall, 11 principles were identified (diversity, transparency, wellness, privacy protection, solidarity, equity, prudence, law and governance, sustainable development, accountability and responsibility, and respect of autonomy and decision-making). CONCLUSIONS: Providers, patients, researchers, industry, and other stakeholders should consider these principles when developing, implementing, or receiving AI applications in dentistry. CLINICAL SIGNIFICANCE: While AI has become increasingly commonplace in dentistry, there are ethical concerns around its usage; users (providers, patients, and other stakeholders) as well as the industry should consider these concerns, within a comprehensive framework, when developing, implementing, or receiving AI applications.


Subject(s)
Artificial Intelligence , Checklist , Humans , Focus Groups , Privacy , Dentistry
13.
Maxillofac Plast Reconstr Surg ; 45(1): 14, 2023 Mar 13.
Article in English | MEDLINE | ID: mdl-36913002

ABSTRACT

Artificial intelligence (AI) refers to the use of technologies that simulate human cognition to solve specific problems. The rapid development of AI in the health sector has been attributed to improvements in computing speed, the exponential increase in data production, and routine data collection. In this paper, we review the current applications of AI in oral and maxillofacial (OMF) cosmetic surgery to provide surgeons with the fundamental technical elements needed to understand its potential. AI plays an increasingly important role in OMF cosmetic surgery in various settings, and its usage may raise ethical issues. In addition to machine learning algorithms (a subtype of AI), convolutional neural networks (a subtype of deep learning) are widely used in OMF cosmetic surgery. Depending on their complexity, these networks can extract and process the elementary characteristics of an image and are therefore commonly used in the diagnostic process for medical images and facial photographs. AI algorithms have been used to assist surgeons with diagnosis, therapeutic decisions, preoperative planning, and outcome prediction and evaluation. AI algorithms complement human skills while minimizing shortcomings through their capacity to learn, classify, predict, and detect. These algorithms should, however, be rigorously evaluated clinically, and a systematic ethical reflection should be conducted regarding data protection, diversity, and transparency. 3D simulation models and AI models have the potential to revolutionize the practice of functional and aesthetic surgery: planning, decision-making, and evaluation during and after surgery can be improved with simulation systems, and a surgical AI model can also perform time-consuming or challenging tasks for surgeons.
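
As a hedged illustration of the convolutional networks mentioned above, the PyTorch sketch below defines a deliberately tiny image classifier; the architecture, input size, and number of diagnostic classes are hypothetical and far simpler than models used in practice.

import torch
import torch.nn as nn

class TinyFacialCNN(nn.Module):
    """Minimal convolutional classifier: stacked convolution blocks extract low-level
    image features, and a pooled linear head maps them to diagnostic classes."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyFacialCNN()
logits = model(torch.randn(4, 3, 224, 224))   # random tensors standing in for 4 RGB facial photos
print(logits.shape)                           # torch.Size([4, 3])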

14.
Maxillofac Plast Reconstr Surg ; 45(1): 11, 2023 Feb 17.
Article in English | MEDLINE | ID: mdl-36800048
15.
J Dent ; 130: 104430, 2023 03.
Article in English | MEDLINE | ID: mdl-36682721

ABSTRACT

OBJECTIVES: Despite deep learning's wide adoption in dental artificial intelligence (AI) research, researchers from other dental fields and, even more so, dental professionals may find it challenging to understand and interpret deep learning studies, their methods, and their outcomes. The objective of this primer is to explain the basic concepts of deep learning, lay out the commonly used terms, and describe different deep learning approaches, their methods, and their outcomes. METHODS: This primer draws on recent review studies and medical primers as well as state-of-the-art research on AI and deep learning. RESULTS: A basic understanding of deep learning models and of various approaches to deep learning is presented. An overview of data management strategies for deep learning projects is given, including data collection, curation, annotation, and preprocessing. Additionally, a step-by-step guide for completing a real-world project is provided. CONCLUSION: Researchers and clinicians can benefit from this study by gaining insight into deep learning; it can be used to critically appraise existing work or to plan new deep learning projects. CLINICAL SIGNIFICANCE: This study may be useful to dental researchers and professionals who are assessing and appraising deep learning studies within the field of dentistry.


Subject(s)
Artificial Intelligence , Deep Learning , Humans , Dentists
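
As a hedged illustration of one data-management step covered by primers such as this (splitting data without leakage), the sketch below partitions hypothetical image records by patient ID; the record fields and split ratios are assumptions.

import random
from collections import defaultdict

def patient_level_split(image_records, train=0.7, val=0.15, seed=42):
    """Split image records into train/val/test by patient ID so that images from the
    same patient never appear in more than one subset (avoids data leakage)."""
    by_patient = defaultdict(list)
    for rec in image_records:
        by_patient[rec["patient_id"]].append(rec)
    patients = sorted(by_patient)
    random.Random(seed).shuffle(patients)
    n = len(patients)
    cut1, cut2 = int(n * train), int(n * (train + val))
    subsets = {"train": patients[:cut1], "val": patients[cut1:cut2], "test": patients[cut2:]}
    return {name: [r for p in ids for r in by_patient[p]] for name, ids in subsets.items()}

# Hypothetical annotated records: three images per patient
records = [{"patient_id": i // 3, "file": f"img_{i}.png", "label": i % 2} for i in range(60)]
splits = patient_level_split(records)
print({k: len(v) for k, v in splits.items()})
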
16.
Dent Res J (Isfahan) ; 20: 116, 2023.
Article in English | MEDLINE | ID: mdl-38169618

ABSTRACT

Background: Dentists begin diagnosis by identifying and enumerating teeth. Panoramic radiographs are widely used for tooth identification because of their large field of view and low exposure dose. The automatic numbering of teeth in panoramic radiographs can help clinicians avoid errors. Deep learning has emerged as a promising tool for automating such tasks. Our goal was to evaluate the accuracy of a two-step deep learning method for tooth identification and enumeration in panoramic radiographs. Materials and Methods: In this retrospective observational study, 1007 panoramic radiographs were labeled by three experienced dentists. Labeling involved drawing bounding boxes in two distinct ways: one set for teeth and one for quadrants. All images were preprocessed using the contrast-limited adaptive histogram equalization (CLAHE) method. Panoramic images were first passed to a quadrant detection model, and the outputs of this model were provided to the tooth numbering models. A faster region-based convolutional neural network (Faster R-CNN) model was used in each step. Results: Average precision (AP) was calculated at different intersection-over-union (IoU) thresholds. The AP50 of quadrant detection and tooth enumeration was 100% and 95%, respectively. Conclusion: We obtained promising results, with a high level of AP, using our two-step deep learning framework for automatic tooth enumeration on panoramic radiographs. Further research should be conducted on diverse datasets and in real-life settings.
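
Two elements of the pipeline above are easy to illustrate: CLAHE preprocessing and the intersection-over-union criterion behind AP50. The sketch below uses OpenCV's CLAHE implementation on a stand-in image and a plain IoU function; the clip limit, tile size, and box coordinates are hypothetical.

import cv2
import numpy as np

# CLAHE preprocessing (parameter values hypothetical)
image = np.random.randint(0, 256, (512, 1024), dtype=np.uint8)  # stand-in for a panoramic radiograph
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(image)

def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A detection counts toward AP50 when its IoU with the ground-truth box is at least 0.5
print(iou((10, 10, 60, 60), (20, 15, 70, 65)) >= 0.5)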

17.
J Prosthet Dent ; 2022 May 18.
Article in English | MEDLINE | ID: mdl-35597606

ABSTRACT

STATEMENT OF PROBLEM: The conventional method of fabricating removable partial denture (RPD) patterns is a time-consuming, expensive, and complex process, and the success of the treatment depends on the fit of the framework. Questions remain as to whether the 3D-printing method is an acceptable procedure compared with the conventional method. PURPOSE: The purpose of this in vitro study was to compare the fit of RPDs cast from 3D-printed frameworks with that of conventionally fabricated RPDs, based on the gaps between the framework and the reference model. MATERIAL AND METHODS: A metal reference model was made from a Kennedy class III modification 1 maxillary typodont. For the conventional group (n=9), impressions were made from the metal cast, and cobalt-chromium frameworks were cast with the conventional method. For the digital group (n=9), the metal cast was scanned with a laboratory scanner, and the RPD was designed on the 3Shape platform. The standard tessellation language (STL) file of the design was sent to a 3D printer (Hunter DLP), and 9 resin frameworks were printed. These frameworks were invested and cast in the same dental laboratory as the first group. Gaps were measured vertically with a superimposition software program (Geomagic Control X), with additional measurements under the rests, the reciprocal arms, and a 2.2-mm box under the major connector. The independent t test was used for statistical comparisons between groups, and the paired t test for comparisons within groups (α=.05 for all tests). RESULTS: No significant differences (P>.05) in overall fit were observed between the gaps of the conventional group (mean ±standard deviation, 103 ±18 µm) and those of the digital group (109 ±21 µm). The largest gap (poorest fit) was observed in the 2.2-mm box under the major connector (115 ±6 µm). CONCLUSIONS: Both the conventional and the 3D-printing methods showed clinically acceptable fits. Further clinical studies with larger specimen sizes and long-term follow-up are needed.
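
As a hedged illustration of the statistical comparisons described above, the sketch below applies an independent t test between groups and a paired t test within a group to simulated gap data; all values are hypothetical and only loosely scaled to the means reported in the abstract.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical vertical gap measurements (µm) for n = 9 frameworks per group
conventional = rng.normal(103, 18, 9)
digital = rng.normal(109, 21, 9)

# Between-group comparison of overall fit
t, p = stats.ttest_ind(conventional, digital)
print(f"independent t test: t = {t:.2f}, p = {p:.3f}")

# Within-group comparison of two measurement sites (e.g., rests vs major-connector box)
rests = rng.normal(95, 10, 9)
connector_box = rng.normal(115, 6, 9)
t, p = stats.ttest_rel(rests, connector_box)
print(f"paired t test: t = {t:.2f}, p = {p:.3f}")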

18.
J Dent ; 122: 104115, 2022 07.
Article in English | MEDLINE | ID: mdl-35367318

ABSTRACT

OBJECTIVES: Detecting caries lesions is challenging for dentists, and deep learning models may help practitioners increase accuracy and reliability. We aimed to systematically review deep learning studies on caries detection. DATA: We selected diagnostic accuracy studies that used deep learning models on dental imagery (including radiographs, photographs, optical coherence tomography images, and near-infrared light transillumination images). The latest version of the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool was used for risk of bias assessment. Meta-analysis was not performed due to heterogeneity in the studies' methods and performance measurements. SOURCES: Databases (Medline via PubMed, Google Scholar, Scopus, Embase) and a repository (arXiv) were screened for publications published after 2010, without any language limitation. STUDY SELECTION: From 252 potentially eligible references, 48 studies were assessed in full text and 42 were included, using classification (n = 26), object detection (n = 6), or segmentation models (n = 10). A wide range of performance metrics was used; image, object, or pixel accuracy ranged between 68% and 99%. A minority of studies (n = 11) showed a low risk of bias in all domains, and 13 studies (31.0%) showed low concern regarding applicability. The accuracy of caries classification models varied: 71% to 96% on intraoral photographs, 82% to 99.2% on periapical radiographs, 87.6% to 95.4% on bitewing radiographs, 68.0% to 78.0% on near-infrared transillumination images, 88.7% to 95.2% on optical coherence tomography images, and 86.1% to 96.1% on panoramic radiographs. Pooled diagnostic odds ratios varied from 2.27 to 32,767. For detection and segmentation models, heterogeneity in reporting did not allow useful pooling. CONCLUSION: An increasing number of studies have investigated caries detection using deep learning, with diverse architectures being employed. Reported accuracy seems promising, while study and reporting quality are currently low. CLINICAL SIGNIFICANCE: Deep learning models can be considered an assistant for decisions regarding the presence or absence of carious lesions.


Subject(s)
Deep Learning , Dental Caries , Dental Caries/diagnostic imaging , Dental Caries Susceptibility , Humans , Reproducibility of Results , Sensitivity and Specificity
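
Segmentation models such as those included above are commonly scored with the Dice coefficient (2|A∩B| / (|A| + |B|)); the sketch below computes it for two hypothetical binary masks.

import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """Dice similarity between two binary masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

# Hypothetical 2D masks: model-predicted vs annotated carious pixels
truth = np.zeros((64, 64), dtype=bool); truth[20:40, 20:40] = True
pred = np.zeros((64, 64), dtype=bool); pred[22:42, 22:42] = True
print(f"Dice = {dice_coefficient(pred, truth):.2f}")
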