Results 1 - 20 of 29
1.
Sci Rep ; 14(1): 6463, 2024 03 18.
Article in English | MEDLINE | ID: mdl-38499700

ABSTRACT

Three-dimensional facial stereophotogrammetry provides a detailed representation of craniofacial soft tissue without the use of ionizing radiation. While manual annotation of landmarks serves as the current gold standard for cephalometric analysis, it is a time-consuming process and is prone to human error. The aim of this study was to develop and evaluate an automated cephalometric annotation method using a deep learning-based approach. Ten landmarks were manually annotated on 2897 3D facial photographs. The automated landmarking workflow involved two successive DiffusionNet models. The dataset was randomly divided into a training and test dataset. The precision of the workflow was evaluated by calculating the Euclidean distances between the automated and manual landmarks and compared to the intra-observer and inter-observer variability of manual annotation and a semi-automated landmarking method. The workflow was successful in 98.6% of all test cases. The deep learning-based landmarking method achieved precise and consistent landmark annotation. The mean precision of 1.69 ± 1.15 mm was comparable to the inter-observer variability (1.31 ± 0.91 mm) of manual annotation. Automated landmark annotation on 3D photographs was achieved with the DiffusionNet-based approach. The proposed method allows quantitative analysis of large datasets and may be used in diagnosis, follow-up, and virtual surgical planning.
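The precision metric described above (mean ± SD of Euclidean distances between automated and manual landmarks) can be sketched in a few lines of Python. The coordinates below are illustrative, not data from the study.

```python
import math

def landmark_precision(auto_pts, manual_pts):
    """Mean and SD of Euclidean distances between paired 3D landmarks (mm)."""
    dists = [math.dist(a, m) for a, m in zip(auto_pts, manual_pts)]
    mean = sum(dists) / len(dists)
    sd = math.sqrt(sum((d - mean) ** 2 for d in dists) / len(dists))
    return mean, sd

# Hypothetical coordinates for three landmarks (mm)
auto = [(0.0, 0.0, 0.0), (10.0, 5.0, 2.0), (3.0, 4.0, 0.0)]
manual = [(1.0, 0.0, 0.0), (10.0, 5.0, 3.0), (0.0, 0.0, 0.0)]
mean_mm, sd_mm = landmark_precision(auto, manual)
```

Reporting mean ± SD in this way makes the automated results directly comparable to intra- and inter-observer variability computed with the same function.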


Subject(s)
Anatomic Landmarks , Imaging, Three-Dimensional , Humans , Imaging, Three-Dimensional/methods , Reproducibility of Results , Face/diagnostic imaging , Cephalometry/methods
2.
Cancers (Basel) ; 16(5)2024 Feb 28.
Article in English | MEDLINE | ID: mdl-38473338

ABSTRACT

In this retrospective study, the clinical and economic implications of microvascular reconstruction of the mandible were assessed, comparing immediate versus delayed surgical approaches. Utilizing data from two German university departments for oral and maxillofacial surgery, the study included patients who underwent mandibular reconstruction following continuity resection. The data assessed included demographic information, reconstruction details, medical history, dental rehabilitation status, and flap survival rates. In total, 177 cases (131 males and 46 females; mean age: 59 years) of bony free flap reconstruction (72 immediate and 105 delayed) were included. Most patients received adjuvant treatment (81% with radiotherapy and 51% combined radiochemotherapy), primarily for tumor resection. Flap survival was not significantly influenced by the timing of reconstruction, radiotherapy status, or the mean interval (14.5 months) between resection and reconstruction. However, immediate reconstruction consumed significantly fewer resources. The rate of implant-supported masticatory rehabilitation was only 18% overall. This study suggests that immediate jaw reconstruction is economically advantageous without impacting flap survival rates. It emphasizes patient welfare as paramount over financial aspects in clinical decisions. Furthermore, this study highlights the need for improved pathways for masticatory rehabilitation, as evidenced by only 18% of patients with implant-supported dentures, to enhance quality of life and social integration.

3.
BMC Oral Health ; 24(1): 387, 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38532414

ABSTRACT

OBJECTIVE: Panoramic radiographs (PRs) provide a comprehensive view of the oral and maxillofacial region and are used routinely to assess dental and osseous pathologies. Artificial intelligence (AI) can be used to improve the diagnostic accuracy of PRs compared to bitewings and periapical radiographs. This study aimed to evaluate the advantages and challenges of using publicly available datasets in dental AI research, focusing on solving the novel task of predicting tooth segmentations, FDI numbers, and tooth diagnoses, simultaneously. MATERIALS AND METHODS: Datasets from the OdontoAI platform (tooth instance segmentations) and the DENTEX challenge (tooth bounding boxes with associated diagnoses) were combined to develop a two-stage AI model. The first stage implemented tooth instance segmentation with FDI numbering and extracted regions of interest around each tooth segmentation, whereafter the second stage implemented multi-label classification to detect dental caries, impacted teeth, and periapical lesions in PRs. The performance of the automated tooth segmentation algorithm was evaluated using a free-response receiver-operating-characteristics (FROC) curve and mean average precision (mAP) metrics. The diagnostic accuracy of detection and classification of dental pathology was evaluated with ROC curves and F1 and AUC metrics. RESULTS: The two-stage AI model achieved high accuracy in tooth segmentations with a FROC score of 0.988 and a mAP of 0.848. High accuracy was also achieved in the diagnostic classification of impacted teeth (F1 = 0.901, AUC = 0.996), whereas moderate accuracy was achieved in the diagnostic classification of deep caries (F1 = 0.683, AUC = 0.960), early caries (F1 = 0.662, AUC = 0.881), and periapical lesions (F1 = 0.603, AUC = 0.974). The model's performance correlated positively with the quality of annotations in the used public datasets. 
Selected samples from the DENTEX dataset revealed cases of missing (false-negative) and incorrect (false-positive) diagnoses, which negatively influenced the performance of the AI model. CONCLUSIONS: The use and pooling of public datasets in dental AI research can significantly accelerate the development of new AI models and enable fast exploration of novel tasks. However, standardized quality assurance is essential before using the datasets to ensure reliable outcomes and limit potential biases.
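The per-diagnosis F1 scores reported above are derived from true-positive, false-positive, and false-negative counts. A minimal sketch of that computation, with hypothetical counts rather than the study's data:

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from raw counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical per-label counts for a multi-label classifier on PRs
counts = {"impacted": (90, 10, 10), "deep_caries": (70, 35, 30)}
f1_per_label = {label: f1_score(*c) for label, c in counts.items()}
```

Computing F1 per label, as here, mirrors the multi-label setup of the second stage: each diagnosis (caries, impaction, periapical lesion) is scored independently.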


Subject(s)
Dental Caries , Tooth, Impacted , Tooth , Humans , Artificial Intelligence , Radiography, Panoramic , Bone and Bones
4.
J Dent ; 143: 104886, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38342368

ABSTRACT

OBJECTIVE: Secondary caries lesions adjacent to restorations, a leading cause of restoration failure, require accurate diagnostic methods to ensure an optimal treatment outcome. Traditional diagnostic strategies rely on visual inspection complemented by radiographs. Recent advancements in artificial intelligence (AI), particularly deep learning, provide potential improvements in caries detection. This study aimed to develop a convolutional neural network (CNN)-based algorithm for detecting primary caries and secondary caries around restorations using bitewings. METHODS: Clinical data from 7 general dental practices in the Netherlands, comprising 425 bitewings of 383 patients, were utilized. The study used the Mask-RCNN architecture for instance segmentation, supported by the Swin Transformer backbone. After data augmentation, model training was performed through a ten-fold cross-validation. The diagnostic accuracy of the algorithm was evaluated by calculating the area under the Free-Response Receiver Operating Characteristics curve, sensitivity, precision, and F1 scores. RESULTS: The model achieved areas under FROC curves of 0.806 and 0.804, and F1-scores of 0.689 and 0.719 for primary and secondary caries detection, respectively. CONCLUSION: An accurate CNN-based automated system was developed to detect primary and secondary caries lesions on bitewings, highlighting a significant advancement in automated caries diagnostics. CLINICAL SIGNIFICANCE: An accurate algorithm that integrates the detection of both primary and secondary caries will permit the development of automated systems to aid clinicians in their daily clinical practice.


Subject(s)
Deep Learning , Dental Caries , Humans , Artificial Intelligence , Dental Caries Susceptibility , Neural Networks, Computer , ROC Curve , Dental Caries/therapy
5.
J Clin Med ; 12(20)2023 Oct 20.
Article in English | MEDLINE | ID: mdl-37892787

ABSTRACT

BACKGROUND: Virtual surgical planning allows surgeons to meticulously define surgical procedures by creating a digital replica of patients' anatomy. This enables precise preoperative assessment, facilitating the selection of optimal surgical approaches and the customization of treatment plans. In neck surgery, virtual planning has been significantly underreported compared to craniofacial surgery, due to a multitude of factors, including the predominance of soft tissues, the unavailability of intraoperative navigation and the complexity of segmenting such areas. Augmented reality represents the most innovative approach to translate virtual planning for real patients, as it merges the digital world with the surgical field in real time. Surgeons can access patient-specific data directly within their field of view, through dedicated visors. In head and neck surgical oncology, augmented reality systems overlay critical anatomical information onto the surgeon's visual field. This aids in locating and preserving vital structures, such as nerves and blood vessels, during complex procedures. In this paper, the authors examine a series of patients undergoing complex neck surgical oncology procedures with prior virtual surgical planning analysis. For each patient, the surgical plan was imported into a HoloLens headset to allow for intraoperative augmented reality visualization. The authors discuss the results of this preliminary investigation, tracing the conceptual framework for increasing AR implementation in complex head and neck surgical oncology procedures.

6.
BMC Oral Health ; 23(1): 643, 2023 09 05.
Article in English | MEDLINE | ID: mdl-37670290

ABSTRACT

OBJECTIVE: Intra-oral scans and gypsum cast scans (OS) are widely used in orthodontics, prosthetics, implantology, and orthognathic surgery to plan patient-specific treatments, which require teeth segmentations with high accuracy and resolution. Manual teeth segmentation, the gold standard up until now, is time-consuming, tedious, and observer-dependent. This study aims to develop an automated teeth segmentation and labeling system using deep learning. MATERIAL AND METHODS: As a reference, 1750 OS were manually segmented and labeled. A deep-learning approach based on PointCNN and 3D U-net in combination with a rule-based heuristic algorithm and a combinatorial search algorithm was trained and validated on 1400 OS. Subsequently, the trained algorithm was applied to a test set consisting of 350 OS. The intersection over union (IoU), as a measure of accuracy, was calculated to quantify the degree of similarity between the annotated ground truth and the model predictions. RESULTS: The model achieved accurate teeth segmentations with a mean IoU score of 0.915. The FDI labels of the teeth were predicted with a mean accuracy of 0.894. Optical inspection showed excellent positional agreement between the automatically and manually segmented tooth components. Minor flaws were mostly seen at the edges. CONCLUSION: The proposed method forms a promising foundation for time-effective and observer-independent teeth segmentation and labeling on intra-oral scans. CLINICAL SIGNIFICANCE: Deep learning may assist clinicians in virtual treatment planning in orthodontics, prosthetics, implantology, and orthognathic surgery. The impact of using such models in clinical practice should be explored.
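The IoU metric used above compares a predicted segmentation against the ground truth as the ratio of their overlap to their combined extent. A minimal sketch, treating segmentations as sets of element IDs (e.g. mesh faces); the IDs below are illustrative:

```python
def iou(pred, truth):
    """Intersection over union of two sets of element IDs (e.g. mesh faces)."""
    pred, truth = set(pred), set(truth)
    union = pred | truth
    return len(pred & truth) / len(union) if union else 1.0

# Hypothetical face IDs belonging to one tooth
truth_faces = {1, 2, 3, 4, 5, 6, 7, 8}
pred_faces = {2, 3, 4, 5, 6, 7, 8, 9}
score = iou(pred_faces, truth_faces)  # 7 shared / 9 in the union
```

An IoU of 1.0 means a perfect match; the study's mean of 0.915 indicates that prediction and ground truth overlap almost completely for most teeth.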


Subject(s)
Deep Learning , Humans , Algorithms , Calcium Sulfate , Dental Care , Physical Examination
7.
Maxillofac Plast Reconstr Surg ; 45(1): 27, 2023 Aug 09.
Article in English | MEDLINE | ID: mdl-37556073

ABSTRACT

BACKGROUND: This study aimed to compare the skeletal structures between mandibular prognathism and retrognathism among patients with facial asymmetry. RESULTS: Patients who had mandibular asymmetry with retrognathism (Group A) in The Netherlands were compared with those with deviated mandibular prognathism (Group B) in Korea. All the data were obtained from 3D-reformatted cone-beam computed tomography images from each institute. The right and left condylar heads were located more posteriorly, inferiorly, and medially in Group B than in Group A. The deviated side of Group A and the contralateral side of Group B showed similar condylar width and height, ramus-proper height, and ramus height. Interestingly, there were no inter-group differences in the ramus-proper heights. Asymmetric mandibular body length was most strongly correlated with chin asymmetry in retrognathic asymmetry patients, whereas asymmetric elongation of the condylar process was the most important factor for chin asymmetry in deviated mandibular prognathism. CONCLUSION: Considering the 3D positional difference of the gonion and large individual variations of frontal ramal inclination, the significant structural deformation in deviated mandibular prognathism needs to be considered in asymmetric prognathism patients. Therefore, individually planned surgical procedures that also correct the malpositioning of the mandibular ramus are recommended, especially in patients with asymmetric prognathism.

8.
Sci Rep ; 13(1): 12082, 2023 07 26.
Article in English | MEDLINE | ID: mdl-37495645

ABSTRACT

Field-driven design is a novel approach that allows geometrical entities, known as implicit bodies, to be defined through equations. This technology does not rely upon conventional geometry subunits, such as polygons or edges; rather, it represents spatial shapes through mathematical functions within a geometrical field. The advantages in terms of computational speed and automation are conspicuous, and well acknowledged in engineering, especially for lattice structures. Moreover, field-driven design amplifies the possibilities for generative design, facilitating the creation of shapes generated by the software on the basis of user-defined constraints. Given such potential, this paper suggests the possibility to use the software nTopology, which is currently the only software for field-driven generative design, in the context of patient-specific implant creation for maxillofacial surgery. Clinical scenarios of applicability, including trauma and orthognathic surgery, are discussed, as well as the integration of this new technology with current workflows of virtual surgical planning. This paper represents the first application of field-driven design in maxillofacial surgery and, although its results are very preliminary, as it is limited to considering only the distance field elaborated from specific points of the reconstructed anatomy, it introduces the importance of this new technology for the future of personalized implant design in surgery.
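The core idea of an implicit body is that a shape is a function f(x, y, z) whose sign tells you whether a point is inside (negative), on the surface (zero), or outside (positive). A toy sketch of this, using signed distance fields; this is an illustration of the concept, not nTopology's actual API:

```python
import math

def sphere_sdf(cx, cy, cz, r):
    """Signed distance field of a sphere: negative inside, zero on the surface."""
    def f(x, y, z):
        return math.dist((x, y, z), (cx, cy, cz)) - r
    return f

def union(f, g):
    """Boolean union of two implicit bodies: pointwise minimum of their fields."""
    return lambda x, y, z: min(f(x, y, z), g(x, y, z))

# Two overlapping spheres merged into one implicit body
body = union(sphere_sdf(0, 0, 0, 1.0), sphere_sdf(1.5, 0, 0, 1.0))
inside = body(0, 0, 0) < 0      # centre of the first sphere
outside = body(0, 0, 5.0) > 0   # far from both spheres
```

Because booleans, offsets, and lattices all become cheap function compositions rather than mesh operations, this representation is what gives field-driven tools their speed and automation advantages.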


Subject(s)
Orthognathic Surgery , Orthognathic Surgical Procedures , Surgery, Computer-Assisted , Surgery, Oral , Humans , Surgery, Computer-Assisted/methods , Software , Orthognathic Surgical Procedures/methods , Imaging, Three-Dimensional/methods
9.
Head Face Med ; 19(1): 23, 2023 Jun 22.
Article in English | MEDLINE | ID: mdl-37349791

ABSTRACT

The use of artificial intelligence (AI) in dentistry is rapidly evolving and could play a major role in a variety of dental fields. This study assessed patients' perceptions and expectations regarding AI use in dentistry. An 18-item questionnaire survey focused on demographics, expectancy, accountability, trust, interaction, advantages and disadvantages was responded to by 330 patients; 265 completed questionnaires were included in this study. Frequencies and differences between age groups were analysed using a two-sided chi-squared or Fisher's exact tests with Monte Carlo approximation. Patients' perceived top three disadvantages of AI use in dentistry were (1) the impact on workforce needs (37.7%), (2) new challenges on doctor-patient relationships (36.2%) and (3) increased dental care costs (31.7%). Major expected advantages were improved diagnostic confidence (60.8%), time reduction (48.3%) and more personalised and evidence-based disease management (43.0%). Most patients expected AI to be part of the dental workflow in 1-5 (42.3%) or 5-10 (46.8%) years. Older patients (> 35 years) expected higher AI performance standards than younger patients (18-35 years) (p < 0.05). Overall, patients showed a positive attitude towards AI in dentistry. Understanding patients' perceptions may allow professionals to shape AI-driven dentistry in the future.


Subject(s)
Artificial Intelligence , Dental Care , Humans , Artificial Intelligence/trends , Perception , Adolescent , Young Adult , Adult , Dental Care/methods , Dental Care/psychology , Dental Care/trends
10.
J Dent ; 133: 104519, 2023 06.
Article in English | MEDLINE | ID: mdl-37061117

ABSTRACT

OBJECTIVE: The aim of this study is to automatically assess the positional relationship between lower third molars (M3i) and the mandibular canal (MC) based on panoramic radiographs (PRs). MATERIAL AND METHODS: A total of 1444 M3s were manually annotated and labeled on 863 PRs as a reference. A deep-learning approach, based on MobileNet-V2 in combination with a skeletonization algorithm and a signed distance method, was trained and validated on 733 PRs with 1227 M3s to classify the positional relationship between M3i and MC into three categories. Subsequently, the trained algorithm was applied to a test set consisting of 130 PRs (217 M3s). Accuracy, precision, sensitivity, specificity, negative predictive value, and F1-score were calculated. RESULTS: The proposed method achieved a weighted accuracy of 0.951, precision of 0.943, sensitivity of 0.941, specificity of 0.800, negative predictive value of 0.865 and an F1-score of 0.938. CONCLUSION: AI-enhanced assessment of PRs can objectively, accurately, and reproducibly determine the positional relationship between M3i and MC. CLINICAL SIGNIFICANCE: The use of such an explainable AI system can assist clinicians in the intuitive positional assessment of lower third molars and mandibular canals. Further research is required to automatically assess the risk of alveolar nerve injury on panoramic radiographs.
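The sensitivity, specificity, and negative predictive value reported above all derive from a confusion matrix. A minimal sketch of those formulas; the counts are hypothetical, not the study's:

```python
def binary_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, and negative predictive value from a confusion matrix."""
    sensitivity = tp / (tp + fn)   # how many true contacts were found
    specificity = tn / (tn + fp)   # how many non-contacts were correctly cleared
    npv = tn / (tn + fn)           # how trustworthy a negative call is
    return sensitivity, specificity, npv

# Hypothetical counts for one class, e.g. "M3 in contact with the canal"
sens, spec, npv = binary_metrics(tp=47, fp=5, fn=3, tn=45)
```

For a three-category problem such as this one, these metrics are typically computed one-vs-rest per category and then weighted by class frequency, which is presumably what "weighted accuracy" refers to.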


Subject(s)
Mandibular Canal , Molar, Third , Molar, Third/diagnostic imaging , Cone-Beam Computed Tomography , Artificial Intelligence , Radiography, Panoramic , Deep Learning , Mandibular Nerve/diagnostic imaging , Mandibular Canal/diagnostic imaging
11.
J Dent ; 132: 104475, 2023 05.
Article in English | MEDLINE | ID: mdl-36870441

ABSTRACT

OBJECTIVE: Quantitative analysis of the volume and shape of the temporomandibular joint (TMJ) using cone-beam computed tomography (CBCT) requires accurate segmentation of the mandibular condyles and the glenoid fossae. This study aimed to develop and validate an automated segmentation tool based on a deep learning algorithm for accurate 3D reconstruction of the TMJ. MATERIALS AND METHODS: A three-step deep-learning approach based on a 3D U-net was developed to segment the condyles and glenoid fossae on CBCT datasets. Three 3D U-Nets were utilized for region of interest (ROI) determination, bone segmentation, and TMJ classification. The AI-based algorithm was trained and validated on 154 manually segmented CBCT images. Two independent observers and the AI algorithm segmented the TMJs of a test set of 8 CBCTs. The time required for the segmentation and accuracy metrics (intersection over union, Dice, etc.) was calculated to quantify the degree of similarity between the manual segmentations (ground truth) and the performances of the AI models. RESULTS: The AI segmentation achieved an intersection over union (IoU) of 0.955 and 0.935 for the condyles and glenoid fossa, respectively. The IoU of the two independent observers for manual condyle segmentation were 0.895 and 0.928, respectively (p<0.05). The mean time required for the AI segmentation was 3.6 s (SD 0.9), whereas the two observers needed 378.9 s (SD 204.9) and 571.6 s (SD 257.4), respectively (p<0.001). CONCLUSION: The AI-based automated segmentation tool segmented the mandibular condyles and glenoid fossae with high accuracy, speed, and consistency. Potential limited robustness and generalizability are risks that cannot be ruled out, as the algorithms were trained on scans from orthognathic surgery patients derived from just one type of CBCT scanner.
CLINICAL SIGNIFICANCE: The incorporation of the AI-based segmentation tool into diagnostic software could facilitate 3D qualitative and quantitative analysis of TMJs in a clinical setting, particularly for the diagnosis of TMJ disorders and longitudinal follow-up.
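The Dice coefficient mentioned among the accuracy metrics above is closely related to IoU; both compare overlapping voxel sets, and either can be derived from the other. A minimal sketch with hypothetical voxel IDs:

```python
def dice(pred, truth):
    """Dice similarity coefficient of two voxel-ID sets."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# Hypothetical voxel IDs for a predicted vs. ground-truth condyle mask
pred = {1, 2, 3, 4}
truth = {3, 4, 5, 6}
dsc = dice(pred, truth)     # 2 * 2 / (4 + 4) = 0.5
iou = dsc / (2 - dsc)       # Dice and IoU are interconvertible: IoU = DSC / (2 - DSC)
```

Because DSC = 2·IoU/(1 + IoU), a reported IoU of 0.955 corresponds to a Dice score of about 0.977; the two metrics rank segmentations identically.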


Subject(s)
Deep Learning , Temporomandibular Joint Disorders , Humans , Temporomandibular Joint/diagnostic imaging , Mandibular Condyle/diagnostic imaging , Mandibular Condyle/surgery , Temporomandibular Joint Disorders/diagnostic imaging , Cone-Beam Computed Tomography/methods , Image Processing, Computer-Assisted/methods
12.
Diagnostics (Basel) ; 13(5)2023 Mar 06.
Article in English | MEDLINE | ID: mdl-36900140

ABSTRACT

Using super-resolution (SR) algorithms, an image with a low resolution can be converted into a high-quality image. Our objective was to compare deep learning-based SR models to a conventional approach for improving the resolution of dental panoramic radiographs. A total of 888 dental panoramic radiographs were obtained. Our study involved five state-of-the-art deep learning-based SR approaches, including SR convolutional neural networks (SRCNN), SR generative adversarial network (SRGAN), U-Net, Swin for image restoration (SwinIr), and local texture estimator (LTE). Their results were compared with one another and with conventional bicubic interpolation. The performance of each model was evaluated using the metrics of mean squared error (MSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and mean opinion score by four experts (MOS). Among all the models evaluated, the LTE model presented the highest performance, with MSE, PSNR, SSIM, and MOS results of 7.42 ± 0.44, 39.74 ± 0.17, 0.919 ± 0.003, and 3.59 ± 0.54, respectively. Additionally, compared with low-resolution images, the output of all the used approaches showed significant improvements in MOS evaluation. A significant enhancement in the quality of panoramic radiographs can be achieved by SR. The LTE model outperformed the other models.
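PSNR is a direct function of MSE, which is why the two metrics move together in the results above. A minimal sketch of the standard formula, assuming 8-bit images (peak value 255):

```python
import math

def psnr(mse, max_val=255.0):
    """Peak signal-to-noise ratio in dB for a given mean squared error."""
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

# An MSE of about 7.42 on 8-bit images gives a PSNR of roughly 39.4 dB,
# consistent with the ~39.7 dB the study reports for the LTE model.
value = psnr(7.42)
```

Higher PSNR means less pixel-wise error, but because it ignores structure, studies such as this one pair it with SSIM and human mean opinion scores.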

13.
Sci Rep ; 13(1): 2296, 2023 02 09.
Article in English | MEDLINE | ID: mdl-36759684

ABSTRACT

Oral squamous cell carcinoma (OSCC) is amongst the most common malignancies, with an estimated 377,000 new cases and 177,000 deaths worldwide. The interval between the onset of symptoms and the start of adequate treatment is directly related to tumor stage and 5-year-survival rates of patients. Early detection is therefore crucial for efficient cancer therapy. This study aims to detect OSCC on clinical photographs (CPs) automatically. 1406 CPs were manually annotated and labeled as a reference. A deep-learning approach based on Swin-Transformer was trained and validated on 1265 CPs. Subsequently, the trained algorithm was applied to a test set consisting of 141 CPs. The classification accuracy and the area-under-the-curve (AUC) were calculated. The proposed method achieved a classification accuracy of 0.986 and an AUC of 0.99 for classifying OSCC on clinical photographs. Deep learning-based assistance of clinicians may raise the rate of early detection of oral cancer and hence the survival rate and quality of life of patients.


Subject(s)
Carcinoma, Squamous Cell , Head and Neck Neoplasms , Mouth Neoplasms , Humans , Carcinoma, Squamous Cell/diagnosis , Carcinoma, Squamous Cell/pathology , Mouth Neoplasms/diagnosis , Mouth Neoplasms/pathology , Squamous Cell Carcinoma of Head and Neck , Quality of Life
14.
J Endod ; 49(3): 248-261.e3, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36563779

ABSTRACT

INTRODUCTION: The aim of this systematic review and meta-analysis was to investigate the overall accuracy of deep learning models in detecting periapical (PA) radiolucent lesions in dental radiographs, when compared to expert clinicians. METHODS: Electronic databases of Medline (via PubMed), Embase (via Ovid), Scopus, Google Scholar, and arXiv were searched. Quality of eligible studies was assessed by using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. Quantitative analyses were conducted using hierarchical logistic regression for meta-analyses on diagnostic accuracy. Subgroup analyses on different image modalities (PA radiographs, panoramic radiographs, and cone beam computed tomographic images) and on different deep learning tasks (classification, segmentation, object detection) were conducted. Certainty of evidence was assessed by using the Grading of Recommendations Assessment, Development, and Evaluation system. RESULTS: A total of 932 studies were screened. Eighteen studies were included in the systematic review, out of which 6 studies were selected for quantitative analyses. Six studies had low risk of bias. Twelve studies had risk of bias. Pooled sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio of included studies (all image modalities; all tasks) were 0.925 (95% confidence interval [CI], 0.862-0.960), 0.852 (95% CI, 0.810-0.885), 6.261 (95% CI, 4.717-8.311), 0.087 (95% CI, 0.045-0.168), and 71.692 (95% CI, 29.957-171.565), respectively. No publication bias was detected (Egger's test, P = .82). Grading of Recommendations Assessment, Development, and Evaluation showed a "high" certainty of evidence for the studies included in the meta-analyses. CONCLUSION: Compared to expert clinicians, deep learning showed highly accurate results in detecting PA radiolucent lesions in dental radiographs. Most studies had risk of bias. There was a lack of prospective studies.
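The likelihood ratios and diagnostic odds ratio pooled above follow directly from sensitivity and specificity. A minimal sketch of those relationships, using the meta-analysis's pooled point estimates as inputs:

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive/negative likelihood ratios and diagnostic odds ratio."""
    lr_pos = sensitivity / (1 - specificity)   # how much a positive result raises the odds
    lr_neg = (1 - sensitivity) / specificity   # how much a negative result lowers the odds
    dor = lr_pos / lr_neg                      # overall discriminative power
    return lr_pos, lr_neg, dor

# Pooled sensitivity 0.925 and specificity 0.852 from the meta-analysis above
lr_pos, lr_neg, dor = likelihood_ratios(0.925, 0.852)
```

Plugging in the pooled point estimates reproduces values close to the reported 6.261, 0.087, and 71.692 (the small differences arise because the pooled ratios were estimated directly by the hierarchical model, not recomputed from rounded sensitivity and specificity).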


Subject(s)
Deep Learning , Cone-Beam Computed Tomography/methods , Radiography, Panoramic , Diagnostic Tests, Routine , Sensitivity and Specificity
15.
Sci Rep ; 12(1): 19596, 2022 11 15.
Article in English | MEDLINE | ID: mdl-36379971

ABSTRACT

Mandibular fractures are among the most frequent facial traumas in oral and maxillofacial surgery, accounting for 57% of cases. An accurate diagnosis and appropriate treatment plan are vital in achieving optimal re-establishment of occlusion, function and facial aesthetics. This study aims to detect mandibular fractures on panoramic radiographs (PR) automatically. 1624 PR with fractures were manually annotated and labelled as a reference. A deep learning approach based on Faster R-CNN and Swin-Transformer was trained and validated on 1640 PR with and without fractures. Subsequently, the trained algorithm was applied to a test set consisting of 149 PR with and 171 PR without fractures. The detection accuracy and the area-under-the-curve (AUC) were calculated. The proposed method achieved an F1 score of 0.947 and an AUC of 0.977. Deep learning-based assistance of clinicians may reduce misdiagnosis and hence severe complications.


Subject(s)
Deep Learning , Mandibular Fractures , Humans , Radiography, Panoramic/methods , Mandibular Fractures/diagnostic imaging , Algorithms , Area Under Curve
16.
Data Brief ; 45: 108739, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36426089

ABSTRACT

In this work, we present a publicly available, expert-segmented representative dataset of 158 3.0 Tesla biparametric MRIs [1]. There is an increasing number of studies investigating prostate and prostate carcinoma segmentation using deep learning (DL) with 3D architectures [2], [3], [4], [5], [6], [7]. The development of robust and data-driven DL models for prostate segmentation and assessment is currently limited by the availability of openly available expert-annotated datasets [8], [9], [10]. The dataset contains 3.0 Tesla MRI images of the prostate of patients with suspected prostate cancer. Patients over 50 years of age who had a 3.0 Tesla MRI scan of the prostate that met PI-RADS version 2.1 technical standards were included. All patients received a subsequent biopsy or surgery so that the MRI diagnosis could be verified/matched with the histopathologic diagnosis. For patients who had undergone multiple MRIs, the last MRI, which was less than six months before biopsy/surgery, was included. All patients were examined at a German university hospital (Charité Universitätsmedizin Berlin) between 02/2016 and 01/2020. All MRIs were acquired with two 3.0 Tesla MRI scanners (Siemens VIDA and Skyra, Siemens Healthineers, Erlangen, Germany). Axial T2W sequences and axial diffusion-weighted sequences (DWI) with apparent diffusion coefficient maps (ADC) were included in the data set. T2W sequences and ADC maps were annotated by two board-certified radiologists with 6 and 8 years of experience, respectively. For T2W sequences, the central gland (central zone and transitional zone) and peripheral zone were segmented. If areas of suspected prostate cancer (PI-RADS score of ≥ 4) were identified on examination, they were segmented in both the T2W sequences and ADC maps. Because restricted diffusion is best seen in DWI images with high b-values, only these images were selected and all images with low b-values were discarded.
Data were then anonymized and converted to NIfTI (Neuroimaging Informatics Technology Initiative) format.

17.
Diagnostics (Basel) ; 12(8)2022 Aug 14.
Article in English | MEDLINE | ID: mdl-36010318

ABSTRACT

The detection and classification of cystic lesions of the jaw is of high clinical relevance and represents a topic of interest in medical artificial intelligence research. The human clinical diagnostic reasoning process uses contextual information, including the spatial relation of the detected lesion to other anatomical structures, to establish a preliminary classification. Here, we aimed to emulate clinical diagnostic reasoning step by step by using a combined object detection and image segmentation approach on panoramic radiographs (OPGs). We used a multicenter training dataset of 855 OPGs (all positives) and an evaluation set of 384 OPGs (240 negatives). We further compared our models to an international human control group of ten dental professionals from seven countries. The object detection model achieved an average precision of 0.42 (intersection over union (IoU): 0.50, maximal detections: 100) and an average recall of 0.394 (IoU: 0.50-0.95, maximal detections: 100). The classification model achieved a sensitivity of 0.84 for odontogenic cysts and 0.56 for non-odontogenic cysts as well as a specificity of 0.59 for odontogenic cysts and 0.84 for non-odontogenic cysts (IoU: 0.30). The human control group achieved a sensitivity of 0.70 for odontogenic cysts, 0.44 for non-odontogenic cysts, and 0.56 for OPGs without cysts as well as a specificity of 0.62 for odontogenic cysts, 0.95 for non-odontogenic cysts, and 0.76 for OPGs without cysts. Taken together, our results show that a combined object detection and image segmentation approach is feasible in emulating the human clinical diagnostic reasoning process in classifying cystic lesions of the jaw.
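The average-precision figures above are computed against an IoU threshold on bounding boxes: a detection only counts as correct if its box overlaps the ground-truth box sufficiently (here, IoU ≥ 0.50 or 0.30). A minimal sketch of axis-aligned box IoU; the coordinates are illustrative:

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

# Hypothetical predicted vs. ground-truth lesion boxes (pixels)
pred = (10, 10, 50, 50)
truth = (20, 20, 60, 60)
score = box_iou(pred, truth)
hit_at_050 = score >= 0.50   # this overlap would NOT count at IoU 0.50
hit_at_030 = score >= 0.30   # but it would at the looser 0.30 threshold
```

This also shows why the classification model was evaluated at IoU 0.30: a looser threshold accepts rougher localizations, which matters when lesion boundaries are inherently fuzzy.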

18.
Medicina (Kaunas) ; 58(8)2022 Aug 05.
Article in English | MEDLINE | ID: mdl-36013526

ABSTRACT

Background: Applications of artificial intelligence (AI) in medicine and dentistry have been on the rise in recent years. In dental radiology, deep learning approaches have improved diagnostics, outperforming clinicians in accuracy and efficiency. This study aimed to provide information on clinicians' knowledge and perceptions regarding AI. Methods: A 21-item questionnaire was used to study the views of dentistry professionals on AI use in clinical practice. Results: In total, 302 questionnaires were answered and assessed. Most of the respondents rated their knowledge of AI as average (37.1%), below average (22.2%) or very poor (23.2%). The participants were largely convinced that AI would improve and bring about uniformity in diagnostics (mean Likert ± standard deviation 3.7 ± 1.27). Among the most serious concerns were the responsibility for machine errors (3.7 ± 1.3), data security or privacy issues (3.5 ± 1.24) and the divestment of healthcare to large technology companies (3.5 ± 1.28). Conclusions: Within the limitations of this study, insights into the acceptance and use of AI in dentistry are revealed for the first time.


Subject(s)
Artificial Intelligence , Surgery, Oral , Humans , Surveys and Questionnaires
19.
Front Digit Health ; 4: 919985, 2022.
Article in English | MEDLINE | ID: mdl-35990014

ABSTRACT

The COVID-19 pandemic has put a strain on the entire global healthcare infrastructure and has necessitated the re-invention, re-organization, and transformation of the healthcare system. The resurgence of new COVID-19 virus variants in several countries and the infection of larger communities necessitate a rapid strategic shift. Governments, non-profits, and other healthcare organizations have all proposed various digital solutions, but it is not yet clear whether these solutions are adaptable, functional, effective, or reliable. As the disease becomes more prevalent, many countries are seeking assistance with the implementation of digital technologies to combat COVID-19. This paper discusses digital health technologies for COVID-19 pandemic management, surveillance, contact tracing, diagnosis, treatment, and prevention to ensure that healthcare is delivered effectively. Artificial intelligence (AI), big data, telemedicine, robotic solutions, the Internet of Things (IoT), digital communication platforms (DC), computer vision, computer audition (CA), digital data management solutions (blockchain), and digital imaging are emerging to assist healthcare workers (HCWs) with solutions that include case-based surveillance, information dissemination, disinfection, and remote consultations, among many other interventions.

20.
Comput Biol Med ; 148: 105817, 2022 09.
Article in English | MEDLINE | ID: mdl-35841780

ABSTRACT

BACKGROUND: The development of deep learning (DL) models for prostate segmentation on magnetic resonance imaging (MRI) depends on expert-annotated data and reliable baselines, which are often not publicly available. This limits both reproducibility and comparability. METHODS: Prostate158 consists of 158 expert annotated biparametric 3T prostate MRIs comprising T2w sequences and diffusion-weighted sequences with apparent diffusion coefficient maps. Two U-ResNets trained for segmentation of anatomy (central gland, peripheral zone) and suspicious lesions for prostate cancer (PCa) with a PI-RADS score of ≥4 served as baseline algorithms. Segmentation performance was evaluated using the Dice similarity coefficient (DSC), the Hausdorff distance (HD), and the average surface distance (ASD). The Wilcoxon test with Bonferroni correction was used to evaluate differences in performance. The generalizability of the baseline model was assessed using the open datasets Medical Segmentation Decathlon and PROSTATEx. RESULTS: Compared to Reader 1, the models achieved a DSC/HD/ASD of 0.88/18.3/2.2 for the central gland, 0.75/22.8/1.9 for the peripheral zone, and 0.45/36.7/17.4 for PCa. Compared with Reader 2, the DSC/HD/ASD were 0.88/17.5/2.6 for the central gland, 0.73/33.2/1.9 for the peripheral zone, and 0.4/39.5/19.1 for PCa. Interrater agreement measured in DSC/HD/ASD was 0.87/11.1/1.0 for the central gland, 0.75/15.8/0.74 for the peripheral zone, and 0.6/18.8/5.5 for PCa. Segmentation performances on the Medical Segmentation Decathlon and PROSTATEx were 0.82/22.5/3.4; 0.86/18.6/2.5 for the central gland, and 0.64/29.2/4.7; 0.71/26.3/2.2 for the peripheral zone. CONCLUSIONS: We provide an openly accessible, expert-annotated 3T dataset of prostate MRI and a reproducible benchmark to foster the development of prostate segmentation algorithms.
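The Dice similarity coefficient reported above measures the overlap between a predicted and a reference segmentation mask, ranging from 0 (no overlap) to 1 (perfect agreement). A minimal sketch for binary masks (our own illustration, not the Prostate158 evaluation code):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |P ∩ T| / (|P| + |T|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    # Convention: two empty masks agree perfectly.
    return 2.0 * intersection / total if total > 0 else 1.0
```

Unlike the Hausdorff distance and average surface distance, which measure boundary disagreement in millimetres (lower is better), the DSC is a unitless volume-overlap score (higher is better), which is why the paper reports all three.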


Subject(s)
Prostate , Prostatic Neoplasms , Algorithms , Humans , Magnetic Resonance Imaging , Male , Reproducibility of Results , Retrospective Studies