Results 1 - 7 of 7
1.
Int J Oral Maxillofac Surg; 51(11): 1488-1494, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35397969

ABSTRACT

The aim of this study was to develop automated models for the identification and detection of mandibular fractures in panoramic radiographs using convolutional neural network (CNN) algorithms. A total of 1710 panoramic radiograph images from the years 2016 to 2020, including 855 images containing mandibular fractures, were obtained retrospectively from the regional trauma centre. CNN-based classification models, DenseNet-169 and ResNet-50, were trained to identify fractures in the radiographic images. The CNN-based object detection models Faster R-CNN and YOLOv5 were trained to place bounding boxes automatically around fractures in the radiographic images. The performance of the models was evaluated on a hold-out test set and also by comparison with residents in oral and maxillofacial surgery and oral and maxillofacial surgeons (experts) on a 100-image subset. The binary classification models achieved promising results, with an area under the receiver operating characteristic curve (AUC), sensitivity, and specificity of 100%. The detection models achieved an AUC of approximately 90%. Compared with the clinician observers, the models outperformed even expert-level classification accuracy. In conclusion, CNN-based models identified mandibular fractures above expert-level performance. It is expected that these models will be used as an aid to improve clinician performance, with aided resident performance approximating expert level.
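The classification results above are reported as AUC, sensitivity, and specificity. As a minimal, dependency-free sketch (not the authors' code; the labels and scores in the usage example are hypothetical), these metrics can be computed from binary ground-truth labels and model scores:

```python
def roc_auc(labels, scores):
    """Rank-based AUC: probability a random positive case outranks a random negative one."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sensitivity_specificity(labels, scores, threshold=0.5):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)
```

An AUC of 100%, as reported here, means every fracture image received a higher score than every non-fracture image.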


Subject(s)
Deep Learning; Mandibular Fractures; Humans; Radiography, Panoramic; Mandibular Fractures/diagnostic imaging; Retrospective Studies; Neural Networks, Computer
2.
Int J Oral Maxillofac Surg; 51(5): 699-704, 2022 May.
Article in English | MEDLINE | ID: mdl-34548194

ABSTRACT

Oral potentially malignant disorders (OPMDs) are a group of conditions that can transform into oral cancer. The purpose of this study was to evaluate convolutional neural network (CNN) algorithms for classifying and detecting OPMDs in oral photographs. In this study, 600 oral photograph images were collected retrospectively and grouped into 300 images of OPMDs and 300 images of normal oral mucosa. CNN-based classification models were created using DenseNet-121 and ResNet-50. The detection models were created using Faster R-CNN and YOLOv4. The image data were randomly assigned to training, validation, and testing sets. The testing data were used to compare the performance of the CNN models with the diagnoses made by oral and maxillofacial surgeons. DenseNet-121 and ResNet-50 achieved high diagnostic performance for OPMDs, with an area under the receiver operating characteristic curve (AUC) of 95%. Faster R-CNN yielded the highest detection performance, with an AUC of 74.34%. For the CNN-based classification model, the sensitivity and specificity were 100% and 90%, respectively. For the oral and maxillofacial surgeons, these values were 91.73% and 92.27%, respectively. In conclusion, the DenseNet-121, ResNet-50, and Faster R-CNN models have potential for the classification and detection of OPMDs in oral photographs.
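Detection models such as Faster R-CNN and YOLO are typically scored by comparing predicted bounding boxes against ground-truth boxes via intersection over union (IoU). A minimal sketch, assuming axis-aligned boxes given as (x1, y1, x2, y2) corner coordinates (the coordinate convention is an assumption for illustration, not taken from the paper):

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # zero if boxes do not overlap
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```

A detection is conventionally counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5; curves of such decisions underlie the detection AUCs reported above.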


Subject(s)
Mouth Neoplasms; Neural Networks, Computer; Algorithms; Humans; Mouth Neoplasms/diagnostic imaging; Retrospective Studies
3.
Oper Dent; 43(3): E110-E118, 2018.
Article in English | MEDLINE | ID: mdl-29513643

ABSTRACT

This work presents a multilayered caries model with a visuo-tactile virtual reality simulator and a randomized controlled trial protocol to determine the effectiveness of the simulator in training for minimally invasive caries removal. A three-dimensional, multilayered caries model was reconstructed from 10 micro-computed tomography (micro-CT) images of deeply carious extracted human teeth, acquired before and after caries removal. Within the full grey scale of 0-255, median grey scale values of 0-9, 10-18, 19-25, 26-52, and 53-80 corresponded to dental pulp, infected carious dentin, affected carious dentin, normal dentin, and normal enamel, respectively. The simulator was connected to two haptic devices, one for a handpiece and one for a mouth mirror. The visuo-tactile feedback during the operation varied depending on the grey scale. Sixth-year dental students underwent a pre-training assessment of caries removal on extracted teeth. The students were then randomly assigned to train on either the simulator (n=16) or conventional extracted teeth (n=16) for 3 days, after which the assessment was repeated. Post-training performance of caries removal improved compared with pre-training in both groups (Wilcoxon, p<0.05). The equivalence test for proportional differences (two one-sided t-tests) with a 0.2 margin confirmed that the participants in both groups had equivalent post-training performance scores (95% CI=0.92, 1; p=0.00). In conclusion, training on the micro-CT multilayered caries model with the visuo-tactile virtual reality simulator and training on conventional extracted teeth had equivalent effects on improving performance of minimally invasive caries removal.
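The equivalence test above follows the two one-sided tests (TOST) logic: equivalence within a margin is declared only when the observed difference is significantly greater than -margin and significantly less than +margin. A rough sketch using a normal approximation for two proportions (a simplification of the t-test procedure the authors report; all inputs in the usage example are hypothetical, not the study's data):

```python
import math

def tost_two_proportions(p1, n1, p2, n2, margin):
    """TOST for equivalence of two proportions, normal approximation.

    Returns the larger of the two one-sided p-values; equivalence is
    concluded when this value falls below the chosen alpha.
    """
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)  # unpooled standard error
    phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))   # standard normal CDF
    p_lower = 1 - phi((diff + margin) / se)  # test of H0: diff <= -margin
    p_upper = phi((diff - margin) / se)      # test of H0: diff >= +margin
    return max(p_lower, p_upper)
```

For example, two groups with identical success proportions and reasonably large n yield a small TOST p-value (equivalence), while widely separated proportions do not.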


Subject(s)
Dental Caries/surgery; Education, Dental/methods; Models, Dental; User-Computer Interface; Dental Caries/diagnostic imaging; Dental Cavity Preparation/methods; Humans; Minimally Invasive Surgical Procedures/education; Minimally Invasive Surgical Procedures/methods; Students, Dental; X-Ray Microtomography
4.
Int Endod J; 45(7): 627-32, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22288913

ABSTRACT

AIM: To design and evaluate the impact of virtual reality (VR) pre-surgical practice on the performance of actual endodontic microsurgery. METHODOLOGY: The VR system runs on a laptop with a 1.6-GHz Intel processor and 2 GB of main memory. Volumetric cone-beam computed tomography (CBCT) data were acquired from a fresh cadaveric porcine mandible prior to endodontic microsurgery. Ten inexperienced endodontic trainees were randomized as to whether they performed endodontic microsurgery with or without virtual pre-surgical practice. The VR simulator provides microinstruments for performing surgical procedures under magnification. After the initial endodontic microsurgery, all participants served as their own controls by performing another procedure with or without virtual pre-surgical practice. All procedures were videotaped and assessed by two independent observers using an endodontic competency rating scale (from 6 to 30). RESULTS: A significant difference was observed between the scores for endodontic microsurgery on molar teeth completed with virtual pre-surgical practice and those completed without it: median 24.5 (range 17-28) versus median 18.75 (range 14-26.5), P = 0.041. A significant difference was also observed between the scores for osteotomy on a molar tooth completed with and without virtual pre-surgical practice: median 4.5 (range 3.5-4.5) versus median 3 (range 2-4), P = 0.042. CONCLUSIONS: Pre-surgical practice in a virtual environment, using the 3D computerized model generated from the original CBCT image data, improved endodontic microsurgery performance.
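With only ten participants serving as their own controls, paired rank-based tests are the natural choice for rating-scale scores like these. The abstract does not name its test, so the following is an illustration rather than the authors' analysis: an exact two-sided Wilcoxon signed-rank p-value computed by enumerating every sign assignment, feasible for small n.

```python
from itertools import product

def signed_rank_p(x, y):
    """Exact two-sided Wilcoxon signed-rank p-value for paired samples (small n)."""
    d = [a - b for a, b in zip(x, y) if a != b]        # drop zero differences
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(order):                              # average ranks over ties in |d|
        j = i
        while j + 1 < len(order) and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    total = sum(ranks)
    w_obs = min(sum(r for r, di in zip(ranks, d) if di > 0),
                sum(r for r, di in zip(ranks, d) if di < 0))
    count = 0
    for signs in product((0, 1), repeat=len(d)):       # all 2^n sign patterns
        wp = sum(r for s, r in zip(signs, ranks) if s)
        if min(wp, total - wp) <= w_obs:
            count += 1
    return count / 2 ** len(d)
```

With ten paired scores this enumerates only 2^10 = 1024 sign patterns, so the exact computation is instantaneous.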


Subject(s)
Computer Simulation; Cone-Beam Computed Tomography; Endodontics/education; Microsurgery/education; User-Computer Interface; Adult; Animals; Bicuspid/diagnostic imaging; Bicuspid/surgery; Clinical Competence; Computer-Assisted Instruction; Cross-Over Studies; Female; Humans; Male; Molar/diagnostic imaging; Molar/surgery; Retrograde Obturation; Sus scrofa
5.
Int Endod J; 44(11): 983-9, 2011 Nov.
Article in English | MEDLINE | ID: mdl-21623838

ABSTRACT

AIM: To evaluate the effectiveness of haptic virtual reality (VR) simulator training using micro-computed tomography (micro-CT) tooth models in minimizing procedural errors during endodontic access preparation. METHODOLOGY: Fourth-year dental students underwent a pre-training assessment of access cavity preparation on an extracted maxillary molar tooth mounted on a phantom head. Students were then randomized to train for 3 days on either the micro-CT tooth models with a haptic VR simulator (n = 16) or extracted teeth in a phantom head (n = 16), after which the assessment was repeated. The main outcome measure was procedural errors assessed by an expert blinded to trainee and training status. The secondary outcome measures were tooth mass loss and task completion time. The Wilcoxon test was used to examine the differences between pre-training and post-training error scores within each group. The Mann-Whitney test was used to detect differences between the haptic VR training and phantom head training groups. The independent t-test was used to compare tooth mass removed and task completion time between the two groups. RESULTS: Post-training error scores improved compared with pre-training scores in both groups (P < 0.05). However, the error score reduction did not differ significantly between the haptic VR simulator and conventional training groups (P > 0.05). The VR simulator group showed a significant decrease (P < 0.05) in the volume of hard tissue lost in the post-training exercise. Task completion time did not differ significantly between the groups (P > 0.05). CONCLUSIONS: Training on the haptic VR simulator and on the conventional phantom head had equivalent effects on minimizing procedural errors in endodontic access cavity preparation.
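The between-group comparison above uses the Mann-Whitney test. For samples of this size an exact two-sided p-value can be obtained by brute force, enumerating every relabelling of the pooled observations (a sketch with hypothetical data, not the authors' statistics code):

```python
from itertools import combinations

def mann_whitney_p(x, y):
    """Exact two-sided Mann-Whitney U p-value via full enumeration (small samples)."""
    def u_stat(a, b):
        # U counts pairs where a beats b; ties count half.
        return sum(1.0 if xi > yi else 0.5 if xi == yi else 0.0 for xi in a for yi in b)

    u_obs = min(u_stat(x, y), u_stat(y, x))
    pooled = x + y
    count = total = 0
    for idx in combinations(range(len(pooled)), len(x)):  # every way to pick group x
        chosen = set(idx)
        a = [pooled[i] for i in idx]
        b = [pooled[i] for i in range(len(pooled)) if i not in chosen]
        total += 1
        if min(u_stat(a, b), u_stat(b, a)) <= u_obs:      # at least as extreme
            count += 1
    return count / total
```

With n = 16 per group the 32-choose-16 enumerations become expensive, which is why standard software switches to a normal approximation at that size; the exact version above is practical only for very small samples.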


Subject(s)
Computer Simulation; Computer-Assisted Instruction/methods; Education, Dental/methods; Endodontics/education; Root Canal Preparation/methods; Computer-Assisted Instruction/instrumentation; Humans; Maxilla; Models, Dental; Molar; Program Evaluation; Prospective Studies; Single-Blind Method; Statistics, Nonparametric; Students, Dental; User-Computer Interface; Vibration; X-Ray Microtomography
7.
Methods Inf Med; 49(4): 396-405, 2010.
Article in English | MEDLINE | ID: mdl-20582388

ABSTRACT

OBJECTIVES: We present a dental training system with a haptic interface that allows dental students or experts to practice dental procedures in a virtual environment. The simulator is able to monitor an operator's performance and classify it into novice or expert categories. The intelligent training module allows a student to simultaneously and proactively follow the correct dental procedures demonstrated by an intelligent tutor. METHODS: The virtual reality (VR) simulator reproduces the tooth preparation procedure both graphically and haptically, using a video display and a haptic device. We evaluated user performance using hidden Markov models (HMMs) built on various data collected by the simulator. We implemented an intelligent training module that records and replays a procedure performed by an expert, allowing students to follow the correct steps and proactively apply force themselves while reproducing the procedure. RESULTS: Dentists evaluated the level of graphics and haptics fidelity as acceptable. The objective performance assessment using HMMs achieved an encouraging 100% accuracy. CONCLUSIONS: The simulator can simulate realistic tooth surface exploration and cutting. The accuracy of the automatic performance assessment system using HMMs is also acceptable on relatively small data sets. The intelligent training module allows skill transfer in a proactive manner, which is an advantage over the passive method used in traditional training. We will soon conduct experiments with more participants and implement a variety of training strategies.
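Classifying an operator as novice or expert from an observation sequence, as described above, amounts to scoring the sequence under one HMM per class and picking the more likely model. A minimal log-space forward-algorithm sketch (the toy one-state models and their parameters in the usage example are invented for illustration, not taken from the paper):

```python
import math

def _logsum(xs):
    """log(sum(exp(x))) computed stably."""
    xs = list(xs)
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def log_likelihood(obs, pi, A, B):
    """Forward algorithm in log space: log P(obs | HMM with init pi, transitions A, emissions B)."""
    states = range(len(pi))
    alpha = [math.log(pi[s]) + math.log(B[s][obs[0]]) for s in states]
    for o in obs[1:]:
        alpha = [math.log(B[s][o]) +
                 _logsum(alpha[r] + math.log(A[r][s]) for r in states)
                 for s in states]
    return _logsum(alpha)

def classify(obs, models):
    """Return the name of the model under which the observation sequence is most likely."""
    return max(models, key=lambda name: log_likelihood(obs, *models[name]))
```

In the paper's setting, one HMM would be trained per skill level on force/position features logged by the simulator; at assessment time the operator receives the label of the best-scoring model.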


Subject(s)
Clinical Competence; Computer Simulation; Dentistry/standards; Education, Dental/methods; Teaching/methods; User-Computer Interface; Artificial Intelligence; Dentistry/methods; Education, Dental/standards; Health Knowledge, Attitudes, Practice; Humans; Markov Chains; Students, Dental; Task Performance and Analysis; Thailand