Results 1 - 5 of 5
1.
BMC Oral Health ; 24(1): 344, 2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38494481

ABSTRACT

BACKGROUND: Dental caries diagnosis requires the manual inspection of diagnostic bitewing images of the patient, followed by a visual inspection and probing of the identified teeth with potential lesions. Yet the use of artificial intelligence, and in particular deep learning, has the potential to aid in the diagnosis by providing a quick and informative analysis of the bitewing images. METHODS: A dataset of 13,887 bitewings from the HUNT4 Oral Health Study was annotated individually by six different experts and used to train three different object-detection deep-learning architectures: RetinaNet (ResNet50), YOLOv5 (M size), and EfficientDet (D0 and D1 sizes). A consensus dataset of 197 images, annotated jointly by the same six dental clinicians, was used for evaluation. A five-fold cross-validation scheme was used to evaluate the performance of the AI models. RESULTS: The trained models show an increase in average precision and F1-score, and a decrease in false negative rate, with respect to the dental clinicians. The YOLOv5 model shows the largest improvement, reporting a mean average precision of 0.647, a mean F1-score of 0.548, and a mean false negative rate of 0.149, whereas the best annotator on each of these metrics reported 0.299, 0.495, and 0.164, respectively. CONCLUSION: Deep-learning models have shown the potential to assist dental professionals in the diagnosis of caries. Yet the task remains challenging due to the artifacts inherent to bitewing images.
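The F1-score and false negative rate reported above are derived by matching predicted boxes to ground-truth annotations at an intersection-over-union (IoU) threshold. A minimal sketch of that evaluation, assuming `(x1, y1, x2, y2)` box coordinates, a 0.5 IoU threshold, and a simple greedy matching strategy (not the study's actual evaluation code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def detection_metrics(preds, truths, thr=0.5):
    """Greedily match predictions to ground truth at an IoU threshold,
    then derive F1-score and false negative rate from TP/FP/FN counts."""
    matched = set()
    tp = 0
    for p in preds:
        best, best_iou = None, thr
        for i, t in enumerate(truths):
            if i not in matched and iou(p, t) >= best_iou:
                best, best_iou = i, iou(p, t)
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(preds) - tp
    fn = len(truths) - tp
    precision = tp / (tp + fp) if preds else 0.0
    recall = tp / (tp + fn) if truths else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    fnr = fn / (tp + fn) if truths else 0.0
    return f1, fnr
```

Greedy matching is a simplification; benchmark evaluators typically sort predictions by confidence before matching.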


Subject(s)
Deep Learning , Dental Caries , Humans , Dental Caries/diagnostic imaging , Dental Caries/pathology , Oral Health , Artificial Intelligence , Dental Caries Susceptibility , X-Rays , Radiography, Bitewing
2.
PLoS One ; 18(2): e0282110, 2023.
Article in English | MEDLINE | ID: mdl-36827289

ABSTRACT

PURPOSE: This study aims to explore training strategies to improve convolutional neural network-based image-to-image deformable registration for abdominal imaging. METHODS: Different training strategies, loss functions, and transfer learning schemes were considered. Furthermore, an augmentation layer that generates artificial training image pairs on the fly was proposed, in addition to a loss layer that enables dynamic loss weighting. RESULTS: Guiding registration using segmentations in the training step proved beneficial for deep-learning-based image registration. Fine-tuning the model pretrained on the brain MRI dataset to the abdominal CT dataset further improved performance on the latter application, removing the need for a large dataset to yield satisfactory performance. Dynamic loss weighting also marginally improved performance, all without impacting inference runtime. CONCLUSION: Using simple concepts, we improved the performance of a commonly used deep image registration architecture, VoxelMorph. In future work, our framework, DDMR, should be validated on different datasets to further assess its value.
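One common form of dynamic loss weighting, sketched below purely as a generic illustration (the abstract does not specify DDMR's scheme), rescales each loss term inversely to its recent magnitude so that no single term dominates the total loss:

```python
import numpy as np

def dynamic_weights(loss_history, eps=1e-8):
    """Given per-term loss histories (lists of recent values), return
    weights inversely proportional to each term's mean magnitude,
    normalized to sum to one. Generic illustration, not DDMR's scheme."""
    means = np.array([np.mean(h) for h in loss_history])
    inv = 1.0 / (means + eps)
    return inv / inv.sum()

# A term whose loss runs twice as large receives half the weight,
# so both terms contribute comparably to the weighted total.
```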


Subject(s)
Image Processing, Computer-Assisted , Neural Networks, Computer , Image Processing, Computer-Assisted/methods , Magnetic Resonance Imaging , Neuroimaging , Tomography, X-Ray Computed
3.
Artif Intell Med ; 130: 102331, 2022 08.
Article in English | MEDLINE | ID: mdl-35809970

ABSTRACT

Deep learning-based methods, in particular convolutional neural networks and fully convolutional networks, are now widely used in the medical image analysis domain. This review focuses on deep-learning-based analysis of focal liver lesions, with special interest in hepatocellular carcinoma and metastatic cancer, and of structures such as the parenchyma and the vascular system. Here, we address several neural network architectures used for analyzing the anatomical structures and lesions in the liver from various imaging modalities, such as computed tomography, magnetic resonance imaging, and ultrasound. Image analysis tasks like segmentation, object detection, and classification for the liver, liver vessels, and liver lesions are discussed. Based on the qualitative search, 91 papers were selected for the survey, including journal publications and conference proceedings. The papers reviewed in this work are grouped into eight categories based on the methodologies used. Comparing the evaluation metrics shows that hybrid models performed best for both the liver and the lesion segmentation tasks, ensemble classifiers performed best for the vessel segmentation tasks, and combined approaches performed best for both the lesion classification and detection tasks. Performance was measured using the Dice score for segmentation and accuracy for classification and detection, which are the most commonly used metrics.


Subject(s)
Deep Learning , Liver Neoplasms , Humans , Image Processing, Computer-Assisted/methods , Liver Neoplasms/diagnostic imaging , Neural Networks, Computer
4.
Minim Invasive Ther Allied Technol ; 30(4): 229-238, 2021 Aug.
Article in English | MEDLINE | ID: mdl-32134342

ABSTRACT

PURPOSE: This study aims to evaluate the accuracy of point-based registration (PBR) when used for augmented reality (AR) in laparoscopic liver resection surgery. MATERIAL AND METHODS: The study was conducted in three different scenarios with decreasing accuracy of the targets sampled for PBR: an assessment phantom with machined divot holes, a patient-specific liver phantom with markers visible in computed tomography (CT) scans, and an in vivo setting relying on the surgeon's anatomical understanding to perform annotations. Target registration error (TRE) and fiducial registration error (FRE) were computed using five randomly selected positions for image-to-patient registration. RESULTS: AR with intra-operative CT scanning showed a mean TRE of 6.9 mm for the machined phantom, 7.9 mm for the patient-specific phantom, and 13.4 mm in the in vivo study. CONCLUSIONS: Both TRE and FRE increased throughout the experimental studies, showing that AR is not robust to the sampling accuracy of the targets used to compute the image-to-patient registration. Moreover, an influence of the size of the volume to be registered was observed. Hence, it is advisable to reduce both the annotation errors and the size of the registration volumes, which can otherwise cause large errors in AR systems.
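The TRE and FRE figures above rest on a least-squares rigid point-based registration between corresponding fiducial sets. A minimal sketch using the standard SVD (Kabsch/Arun) solution, offered as a generic reference rather than the authors' implementation:

```python
import numpy as np

def point_based_registration(src, dst):
    """Least-squares rigid registration (Kabsch/Arun): find R, t such
    that R @ src + t best fits dst for N corresponding points (N x 3)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    # Correct an improper rotation (reflection) via the determinant sign.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def fre(src, dst, R, t):
    """Root-mean-square fiducial registration error of the fit."""
    resid = (R @ src.T).T + t - dst
    return np.sqrt((resid ** 2).sum(axis=1).mean())
```

FRE measures the residual at the fiducials themselves; TRE is the same residual evaluated at separate target points not used to compute the fit, which is why the two can diverge.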


Subject(s)
Augmented Reality , Laparoscopy , Surgery, Computer-Assisted , Algorithms , Humans , Imaging, Three-Dimensional , Phantoms, Imaging
5.
Int J Comput Assist Radiol Surg ; 13(12): 1927-1936, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30074134

ABSTRACT

PURPOSE: To test the feasibility of the novel Single Landmark image-to-patient registration method for use in the operating room in future clinical trials. The algorithm is implemented in the open-source platform CustusX, a computer-aided intervention research platform dedicated to intraoperative navigation and ultrasound, with an interface for laparoscopic ultrasound probes. METHODS: The Single Landmark method is compared to the fiducial landmark registration method (FLRM) on an IOUSFAN (Kyoto Kagaku Co., Ltd., Japan) soft-tissue abdominal phantom and T2 magnetic resonance scans of it. RESULTS: The experiments show that the Single Landmark registration error is low close to the registered point and increases with distance from it (12.4 mm error at 60 mm from the registered point). At the registered point, the registration accuracy is mainly determined by the precision of the user when clicking on the ultrasound image. In the presented set-up, the time required to perform the Single Landmark registration is 40% less than for the FLRM. CONCLUSION: The Single Landmark registration is suitable for integration into a laparoscopic workflow. The statistical analysis shows robustness against translational displacements of the patient and improvements in terms of time. The proposed method allows the clinician to accurately register lesions intraoperatively by clicking on them in the ultrasound image provided by the transducer. The Single Landmark registration method can be further combined with other, more accurate registration approaches, improving the registration at relevant points defined by the clinicians.
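A single landmark can only fix the translation between image and patient space, which explains the distance-dependent error pattern above: any residual rotational misalignment contributes nothing at the registered point but an error that grows with distance from it. A sketch of this effect, where the 2-degree residual rotation is an illustrative assumption, not a value from the study:

```python
import numpy as np

def single_landmark_translation(image_pt, patient_pt):
    """Translation-only registration from one corresponding landmark pair."""
    return np.asarray(patient_pt, float) - np.asarray(image_pt, float)

# Assume a small residual rotation (2 degrees about z) remains between
# the two spaces; translation-only registration cannot correct it.
theta = np.deg2rad(2.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta), np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])

landmark = np.zeros(3)
t = single_landmark_translation(landmark, Rz @ landmark)  # zero at the landmark

far_point = np.array([60.0, 0.0, 0.0])  # a point 60 mm from the landmark
error = np.linalg.norm(Rz @ far_point + t - far_point)
# error grows roughly linearly with distance from the registered point
```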


Subject(s)
Algorithms , Imaging, Three-Dimensional , Laparoscopy/methods , Microsurgery/methods , Phantoms, Imaging , Surgery, Computer-Assisted/methods , Ultrasonography/methods , Anatomic Landmarks , Humans