Results 1 - 20 of 72
1.
Rev. bras. oftalmol ; 83: e0006, 2024. tab, graf
Article in Portuguese | LILACS | ID: biblio-1535603

ABSTRACT

RESUMO Objective: To obtain fundoscopy images with portable, low-cost equipment and, using artificial intelligence, to assess the presence of diabetic retinopathy. Methods: Fundus images of diabetic patients' eyes were obtained with a smartphone coupled to a device with a 20 D lens; using artificial intelligence, the presence of diabetic retinopathy was classified by a binary algorithm. Results: A total of 97 ocular fundoscopy images were evaluated (45 normal and 52 with diabetic retinopathy). With the aid of artificial intelligence, diagnostic accuracy in classifying the presence of diabetic retinopathy was approximately 70% to 100%. Conclusion: The approach using a low-cost portable device showed satisfactory efficacy in screening diabetic patients with or without diabetic retinopathy, making it useful for settings lacking infrastructure.


ABSTRACT Objective: To obtain fundoscopy images with portable, low-cost equipment and to use artificial intelligence (AI) to assess the presence of diabetic retinopathy (DR). Methods: Fundus images of diabetic patients' eyes were obtained using a smartphone coupled to a device with a 20 D lens. Using AI, the presence of DR was classified by a binary algorithm. Results: 97 ocular fundoscopy images were evaluated (45 normal and 52 with DR). With AI, diagnostic accuracy was approximately 70% to 100% in classifying the presence of DR. Conclusion: The approach using a low-cost portable device showed satisfactory efficacy in screening diabetic patients with or without diabetic retinopathy, making it useful for settings lacking infrastructure.
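The performance of a binary screening classifier like the one described above is conventionally summarized by sensitivity, specificity, and accuracy computed from confusion counts. A minimal sketch; the per-class counts below are illustrative assumptions, not figures reported in the article:

```python
def screening_metrics(tp, fn, tn, fp):
    """Return (sensitivity, specificity, accuracy) from confusion counts."""
    sens = tp / (tp + fn)           # true-positive rate among diseased cases
    spec = tn / (tn + fp)           # true-negative rate among normal cases
    acc = (tp + tn) / (tp + fn + tn + fp)
    return sens, spec, acc

# Hypothetical split of the 97 fundus images (45 normal, 52 with DR);
# the individual tp/fn/tn/fp counts are made up for illustration.
sens, spec, acc = screening_metrics(tp=44, fn=8, tn=38, fp=7)
```

For a screening tool, sensitivity is usually the metric to maximize, since a missed case of retinopathy is costlier than a false referral.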


Subject(s)
Humans , Male , Female , Adolescent , Adult , Middle Aged , Aged , Algorithms , Artificial Intelligence , Diabetic Retinopathy/diagnostic imaging , Photography/instrumentation , Fundus Oculi , Ophthalmoscopy/methods , Retina/diagnostic imaging , Mass Screening , Neural Networks, Computer , Diagnostic Techniques, Ophthalmological/instrumentation , Machine Learning , Smartphone , Deep Learning
2.
Acta Medica Philippina ; : 1-8, 2024.
Article in English | WPRIM | ID: wpr-1013409

ABSTRACT

Background and Objectives: The Philippines faces challenges in tuberculosis (TB) screening, among them a shortage of health workers who are trained and authorized to screen for TB. Deep learning neural networks (DLNNs) have shown potential in the TB screening process using chest radiographs (CXRs). However, local studies on AI-based TB screening are limited. This study evaluated the diagnostic performance of the qXR 3.0 technology for TB screening in Filipino adults aged 15 and older. Specifically, we evaluated the sensitivity and specificity of qXR 3.0 against radiologists' impressions and determined whether it meets World Health Organization (WHO) standards. Methods: A prospective cohort design was used to compare the screening and diagnostic accuracy of qXR 3.0 and two radiologists' gradings, in accordance with the Standards for Reporting Diagnostic Accuracy (STARD). Subjects seeking consultation at the time of the study at two clinics in Metro Manila equipped with qXR 3.0 were invited to participate and to have CXRs and sputum collected. Radiologists' and qXR 3.0 readings and impressions were compared against the reference standard, the Xpert MTB/RIF assay, and diagnostic accuracy measures were calculated. Results: With 82 participants, qXR 3.0 demonstrated 100% sensitivity and 72.7% specificity with respect to the reference standard. There was strong agreement between qXR 3.0 and the radiologists' readings, as shown by concordance indices of 0.7895 (qXR 3.0 vs. CXRs read by at least one radiologist), 0.9362 (qXR 3.0 vs. CXRs read by both radiologists), and 0.9403 (qXR 3.0 vs. CXRs read as not suggestive of TB by at least one radiologist). Conclusions: qXR 3.0 demonstrated high sensitivity in identifying the presence of TB among patients and meets the WHO standard of at least 70% specificity for detecting true TB infection.
This shows immense potential for the tool to supplement the shortage of radiologists for TB screening in the country. Future research may consider larger sample sizes to confirm these findings and explore the economic value of mainstream adoption of qXR 3.0 for TB screening.
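The WHO threshold cited in the conclusion can be checked mechanically. A minimal sketch; the 90% minimum-sensitivity default reflects the commonly cited WHO target product profile for TB triage tests and is an assumption on my part, since the abstract names only the 70% specificity standard:

```python
def meets_who_screening_target(sensitivity, specificity,
                               min_sens=0.90, min_spec=0.70):
    """Check reported metrics against WHO triage-test thresholds.

    min_sens/min_spec default to the commonly cited 90%/70% targets;
    adjust them if a different WHO profile applies.
    """
    return sensitivity >= min_sens and specificity >= min_spec

# Values reported in the abstract: 100% sensitivity, 72.7% specificity.
print(meets_who_screening_target(1.00, 0.727))  # True
```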


Subject(s)
Tuberculosis , Diagnostic Imaging , Deep Learning
3.
Journal of Zhejiang University. Science. B ; (12): 83-90, 2024.
Article in English | WPRIM | ID: wpr-1010599

ABSTRACT

Hepatocellular carcinoma (HCC) is one of the most common malignancies and a major cause of cancer-related mortality worldwide (Forner et al., 2018; He et al., 2023). Sarcopenia is a syndrome characterized by an accelerated loss of skeletal muscle (SM) mass that may be age-related or the result of malnutrition in cancer patients (Cruz-Jentoft and Sayer, 2019). Preoperative sarcopenia in HCC patients treated with hepatectomy or liver transplantation is an independent risk factor for poor survival (Voron et al., 2015; van Vugt et al., 2016). Previous studies have used various criteria to define sarcopenia, including muscle area and density; however, the lack of standardized diagnostic methods limits their clinical use. In 2018, the European Working Group on Sarcopenia in Older People (EWGSOP) updated its consensus definition of sarcopenia: low muscle strength, loss of muscle quantity, and poor physical performance (Cruz-Jentoft et al., 2019). Radiological imaging-based measurement of muscle quantity or mass is most commonly used to evaluate the degree of sarcopenia. The gold standard is to measure the SM and/or psoas muscle (PM) area on abdominal computed tomography (CT) at the third lumbar vertebra (L3), as it correlates linearly with whole-body SM mass (van Vugt et al., 2016). According to a "North American Expert Opinion Statement on Sarcopenia," the SM index (SMI) is the preferred measure of sarcopenia (Carey et al., 2019). The variability between morphometric muscle indexes indicates that they have different clinical relevance and are generally not applicable to broader populations (Esser et al., 2019).
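The SMI mentioned above is conventionally computed by normalizing the L3 cross-sectional muscle area to height squared. A minimal sketch; the input values are illustrative, and any diagnostic cutoff would be cohort- and sex-specific rather than taken from this article:

```python
def skeletal_muscle_index(l3_muscle_area_cm2, height_m):
    """SMI = cross-sectional skeletal muscle area at L3 (cm^2) / height^2 (m^2)."""
    return l3_muscle_area_cm2 / height_m ** 2

# Illustrative values only; sarcopenia cutoffs vary between studies.
smi = skeletal_muscle_index(140.0, 1.70)  # ~48.4 cm^2/m^2
```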


Subject(s)
Humans , Aged , Sarcopenia/diagnostic imaging , Carcinoma, Hepatocellular/diagnostic imaging , Muscle, Skeletal/diagnostic imaging , Deep Learning , Prognosis , Radiomics , Liver Neoplasms/diagnostic imaging , Retrospective Studies
4.
Journal of Southern Medical University ; (12): 1010-1016, 2023.
Article in Chinese | WPRIM | ID: wpr-987015

ABSTRACT

OBJECTIVE: To propose a deep learning-based algorithm for automatic prediction of dose distribution in radiotherapy planning for head and neck cancer. METHODS: We propose a novel beam dose decomposition learning (BDDL) method built on a cascade network. The beam delivery through the planning target volume (PTV) was fitted with the pre-defined beam angles and served as input to a convolutional neural network (CNN). The output of the network was decomposed into multiple sub-fractions of the dose distribution along the beam directions, so that a complex task is carried out as multiple simpler sub-tasks, allowing the model to focus on extracting local features. The sub-fraction dose distribution maps were merged into a single distribution map using the proposed multi-voting mechanism. We also introduced dose distribution features of the regions of interest (ROIs) and a boundary map into the loss function during the training phase, to constrain the network when extracting features of the ROIs and of the dose-boundary areas. Public datasets of radiotherapy planning for head and neck cancer were used to measure the dose distribution accuracy of the BDDL method and to conduct an ablation study. RESULTS: The BDDL method achieved a Dose score of 2.166 and a DVH score of 1.178 (P < 0.05), demonstrating prediction accuracy superior to that of current state-of-the-art (SOTA) methods. Compared with the C3D method, which took first place in the OpenKBP-2020 Challenge, the BDDL method improved the Dose score and DVH score by 26.3% and 30%, respectively. The results of the ablation study also demonstrated the effectiveness of each key component of the BDDL method. CONCLUSION: The BDDL method utilizes prior knowledge of beam delivery and of the dose distribution in the ROIs to establish a dose prediction model.
Compared with existing methods, the proposed method is interpretable and reliable, and can potentially be applied in clinical radiotherapy.
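The DVH score above compares predicted and reference dose-volume histograms. As background, a cumulative DVH reports, for each dose level, the fraction of ROI voxels receiving at least that dose. A minimal sketch on toy 1D data (not the BDDL scoring itself; a real plan would use a 3D dose grid and per-ROI masks):

```python
import numpy as np

def cumulative_dvh(dose, roi_mask, bins):
    """Fraction of ROI voxels receiving at least each dose level in `bins`."""
    roi_dose = dose[roi_mask]
    return np.array([(roi_dose >= b).mean() for b in bins])

# Toy example: 5 voxels, 4 of them inside the ROI.
dose = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
mask = np.array([False, True, True, True, True])
dvh = cumulative_dvh(dose, mask, bins=[10.0, 25.0, 45.0])
# dvh -> [1.0, 0.5, 0.0]
```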


Subject(s)
Humans , Deep Learning , Head and Neck Neoplasms/radiotherapy , Algorithms , Neural Networks, Computer
5.
Journal of Biomedical Engineering ; (6): 1027-1032, 2023.
Article in Chinese | WPRIM | ID: wpr-1008930

ABSTRACT

In recent years, the incidence of thyroid disease has increased significantly, and ultrasound is the first-choice examination for diagnosing thyroid disease. At the same time, deep learning-based medical image analysis has improved rapidly: ultrasound image analysis has seen a series of milestone breakthroughs, and deep learning algorithms have shown strong performance in medical image segmentation and classification. This article first describes the application of deep learning algorithms to thyroid ultrasound image segmentation, feature extraction, and classification. It then summarizes deep learning algorithms for processing multimodal ultrasound images. Finally, it points out current problems in thyroid ultrasound image diagnosis and looks ahead to future directions. This review can promote the application of deep learning in clinical ultrasound diagnosis of the thyroid and provide a reference for doctors diagnosing thyroid disease.


Subject(s)
Humans , Algorithms , Deep Learning , Image Processing, Computer-Assisted/methods , Thyroid Diseases/diagnostic imaging , Ultrasonography
6.
Journal of Biomedical Engineering ; (6): 903-911, 2023.
Article in Chinese | WPRIM | ID: wpr-1008915

ABSTRACT

Magnetic resonance imaging (MRI) can obtain multi-modal images with different contrasts, providing rich information for clinical diagnosis. However, some contrast images are not scanned, or the acquired images cannot meet diagnostic requirements, because of patients' difficulty cooperating or limitations of the scanning conditions. Image synthesis techniques have become a way to compensate for such missing images, and in recent years deep learning has been widely used in MRI synthesis. In this paper, a synthesis network based on multi-modal fusion is proposed: a feature encoder first encodes the features of multiple unimodal images separately, a feature fusion module then fuses the features of the different modalities, and the network finally generates the target modal image. The similarity between the target image and the predicted image is improved by introducing a dynamically weighted combined loss function based on the spatial domain and the k-space domain. Experimental validation and quantitative comparison show that the proposed multi-modal fusion deep learning network can effectively synthesize high-quality MRI fluid-attenuated inversion recovery (FLAIR) images. In summary, the proposed method can reduce patients' MRI scanning time and address the clinical problem of FLAIR images that are missing or of insufficient diagnostic quality.
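The combined spatial/k-space loss described above can be sketched as a weighted sum of L1 errors in the image domain and after a 2D FFT. This is an illustration under fixed weights; the paper's dynamic weighting scheme is not specified in the abstract, so `w_spatial`/`w_kspace` here are assumptions:

```python
import numpy as np

def combined_loss(pred, target, w_spatial=0.5, w_kspace=0.5):
    """Weighted sum of L1 error in the image domain and in k-space (2D FFT)."""
    spatial = np.mean(np.abs(pred - target))
    kspace = np.mean(np.abs(np.fft.fft2(pred) - np.fft.fft2(target)))
    return w_spatial * spatial + w_kspace * kspace

rng = np.random.default_rng(0)
pred = rng.random((8, 8))
target = rng.random((8, 8))
loss = combined_loss(pred, target)  # zero only when pred == target
```

In a training loop these arrays would be network outputs and ground-truth FLAIR slices, and the loss would be computed with a differentiable FFT rather than NumPy's.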


Subject(s)
Humans , Deep Learning , Magnetic Resonance Imaging/methods , Image Processing, Computer-Assisted/methods
7.
Journal of Forensic Medicine ; (6): 129-136, 2023.
Article in English | WPRIM | ID: wpr-981846

ABSTRACT

OBJECTIVES: To investigate the reliability and accuracy of deep learning for automatic sex estimation using 3D reconstructed computed tomography (CT) images from the Chinese Han population. METHODS: Pelvic CT images of 700 individuals (350 males and 350 females) of the Chinese Han population, aged 20 to 85 years, were collected and reconstructed into 3D virtual skeletal models. Feature-region images of the medial aspect of the ischiopubic ramus (MIPR) were extracted. Inception v4 was adopted as the image recognition model and trained in two ways: initial learning (training from scratch) and transfer learning. Eighty percent of the individuals' images were randomly selected as the training and validation dataset, and the remainder were used as the test dataset. The left- and right-side MIPR images were trained both separately and in combination. The models' performance was then evaluated by overall accuracy, female accuracy, male accuracy, etc. RESULTS: When the two sides were trained separately with initial learning, the right-side model achieved an overall accuracy of 95.7%, with female and male accuracies both 95.7%; the left-side model achieved an overall accuracy of 92.1%, with a female accuracy of 88.6% and a male accuracy of 95.7%. When the left and right MIPR images were trained together with initial learning, the overall accuracy was 94.6%, the female accuracy 92.1%, and the male accuracy 97.1%. When the left and right MIPR images were trained together with transfer learning, the model achieved an overall accuracy of 95.7%, with female and male accuracies both 95.7%. CONCLUSIONS: An Inception v4 model with transfer learning, trained on pelvic MIPR images from the Chinese Han population, estimates sex with high accuracy and good generalizability for human remains and can effectively estimate the sex of adults.


Subject(s)
Adult , Female , Humans , Male , Young Adult , Middle Aged , Aged , Aged, 80 and over , Deep Learning , Imaging, Three-Dimensional , Pelvis , Reproducibility of Results , Tomography, X-Ray Computed
8.
Journal of Biomedical Engineering ; (6): 373-377, 2023.
Article in Chinese | WPRIM | ID: wpr-981552

ABSTRACT

Heart failure is a disease that seriously threatens human health and has become a global public health problem. Diagnostic and prognostic analysis of heart failure based on medical imaging and clinical data can reveal disease progression and reduce patients' risk of death, and therefore has important research value. Traditional analysis methods based on statistics and machine learning suffer from insufficient model capacity, poor accuracy due to dependence on prior assumptions, and poor adaptability. In recent years, with the development of artificial intelligence, deep learning has gradually been applied to clinical data analysis in heart failure, offering a new perspective. This paper reviews the main progress, methods, and achievements of deep learning in heart failure diagnosis and in predicting heart failure mortality and readmission, summarizes open problems, and presents prospects for related research, with the aim of promoting the clinical application of deep learning in heart failure research.


Subject(s)
Humans , Artificial Intelligence , Deep Learning , Heart Failure/diagnosis , Machine Learning , Diagnostic Imaging
9.
Acta Academiae Medicinae Sinicae ; (6): 416-421, 2023.
Article in Chinese | WPRIM | ID: wpr-981285

ABSTRACT

Objective To evaluate the impact of a deep learning reconstruction algorithm on image quality in head and neck CT angiography (CTA) at 100 kVp. Methods CT scanning was performed at 100 kVp in 37 patients who underwent head and neck CTA at PUMC Hospital from March to April 2021. Four image sets were reconstructed with three-dimensional adaptive iterative dose reduction (AIDR 3D) and the Advanced intelligent Clear-IQ Engine (AiCE) at low, medium, and high intensity, respectively. The average CT value, standard deviation (SD), signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) of the region of interest in the transverse images were calculated. In addition, the four sets of sagittal maximum intensity projection images of the anterior cerebral artery were scored (1 point: poor; 5 points: excellent). Results SNR and CNR differed between the images reconstructed with AiCE (low, medium, and high intensity) and AIDR 3D (all P<0.01). The quality scores of images reconstructed with AiCE (low, medium, and high intensity) and AIDR 3D were 4.78±0.41, 4.92±0.27, 4.97±0.16, and 3.92±0.27, respectively, showing statistically significant differences (all P<0.001). Conclusion AiCE outperformed AIDR 3D in reconstructing head and neck CTA images at 100 kVp; it can improve image quality and is applicable to clinical examinations.
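The SNR and CNR metrics above are simple ratios computed from ROI statistics. A minimal sketch using one common definition (definitions vary between studies; the HU values below are illustrative, not measurements from this article):

```python
def snr(mean_roi, sd_roi):
    """Signal-to-noise ratio of a region of interest (ROI)."""
    return mean_roi / sd_roi

def cnr(mean_roi, mean_ref, sd_ref):
    """Contrast-to-noise ratio of an ROI against a reference region."""
    return abs(mean_roi - mean_ref) / sd_ref

# Illustrative HU values: an enhanced artery vs. adjacent soft tissue.
print(snr(450.0, 12.0))        # 37.5
print(cnr(450.0, 60.0, 12.0))  # 32.5
```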


Subject(s)
Humans , Computed Tomography Angiography/methods , Radiation Dosage , Deep Learning , Radiographic Image Interpretation, Computer-Assisted/methods , Signal-To-Noise Ratio , Algorithms
10.
Acta Academiae Medicinae Sinicae ; (6): 273-279, 2023.
Article in Chinese | WPRIM | ID: wpr-981263

ABSTRACT

Objective To evaluate the accuracy of different convolutional neural networks (CNNs), representative deep learning models, in the differential diagnosis of ameloblastoma and odontogenic keratocyst, and to compare the models' results with those of oral radiologists. Methods A total of 1 000 digital panoramic radiographs were retrospectively collected from patients with ameloblastoma (500 radiographs) or odontogenic keratocyst (500 radiographs) in the Department of Oral and Maxillofacial Radiology, Peking University School of Stomatology. Eight CNNs, including ResNet (18, 50, 101), VGG (16, 19), and EfficientNet (b1, b3, b5), were selected to distinguish ameloblastoma from odontogenic keratocyst. Transfer learning was employed to train on the 800 panoramic radiographs in the training set through 5-fold cross-validation, and the 200 panoramic radiographs in the test set were used for differential diagnosis. The Chi-square test was performed to compare performance among the CNNs. Furthermore, 7 oral radiologists (2 senior and 5 junior) made diagnoses on the 200 test-set radiographs, and the results were compared between the CNNs and the radiologists. Results The eight models showed diagnostic accuracies ranging from 82.50% to 87.50%, of which EfficientNet b1 had the highest accuracy at 87.50%. There was no significant difference in accuracy among the CNN models (P=0.998, P=0.905). The average diagnostic accuracy of the oral radiologists was (70.30±5.48)%, with no statistical difference between senior and junior radiologists (P=0.883). The diagnostic accuracy of the CNN models was higher than that of the oral radiologists (P<0.001). Conclusion Deep learning CNNs can accurately differentiate ameloblastoma from odontogenic keratocyst on panoramic radiographs, with higher diagnostic accuracy than oral radiologists.


Subject(s)
Humans , Ameloblastoma/diagnostic imaging , Deep Learning , Diagnosis, Differential , Radiography, Panoramic , Retrospective Studies , Odontogenic Cysts/diagnostic imaging , Odontogenic Tumors
11.
West China Journal of Stomatology ; (6): 218-224, 2023.
Article in English | WPRIM | ID: wpr-981115

ABSTRACT

OBJECTIVES: This study aims to predict the risk of pulp exposure in deep caries from radiographic images using convolutional neural network models, to compare the models' predictions with those of a senior dentist, to evaluate the models' usefulness for teaching and training dental students and young dentists, and to help dentists define treatment plans and conduct good doctor-patient communication before surgery. METHODS: A total of 206 cases of pulpitis caused by deep caries, treated at the Stomatological Hospital of Tianjin Medical University from 2019 to 2022, were selected according to the inclusion and exclusion criteria: in 104 cases the pulp was exposed during caries removal, and in 102 cases it was not. The 206 radiographic images collected were randomly divided into a training set (126 images), a validation set (40 images), and a test set (40 images). Three convolutional neural networks, visual geometry group network (VGG), residual network (ResNet), and dense convolutional network (DenseNet), were trained on the training set, and the validation set was used to tune the hyperparameters. Finally, the 40 test-set images were used to evaluate the performance of the three models. A senior dentist specializing in dental pulp predicted, for the same 40 test-set images, whether the deep caries would expose the pulp. The gold standard was whether the pulp was actually exposed after caries removal during the clinical procedure.
The predictions of the three models (VGG, ResNet, and DenseNet) and the senior dentist on the 40 test-set images were compared using the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and F1 score, to select the best model. RESULTS: The best model was DenseNet, with an AUC of 0.97. The AUC values of the ResNet model, the VGG model, and the senior dentist were 0.89, 0.78, and 0.87, respectively. Accuracy did not differ statistically between the senior dentist (0.850) and the DenseNet model (0.850) (P>0.05). The Kappa consistency test showed moderate reliability (Kappa=0.6>0.4, P<0.05). CONCLUSIONS: Among the three convolutional neural network models, DenseNet best predicts whether deep caries will expose the pulp on radiographs; its predictive performance is comparable to that of a senior dentist specializing in dental pulp.
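The AUC values reported above can be understood through the Mann-Whitney formulation: the AUC equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case (ties counting half). A minimal sketch with toy scores, not the study's data:

```python
def auc_from_scores(pos_scores, neg_scores):
    """AUC via the Mann-Whitney U statistic: P(pos score > neg score),
    with ties contributing 0.5."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy classifier scores for 3 exposed and 3 non-exposed cases.
auc = auc_from_scores([0.9, 0.8, 0.4], [0.7, 0.3, 0.2])
```

The O(n·m) double loop is fine for small test sets like the 40 images here; library implementations use sorting for larger data.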


Subject(s)
Humans , Deep Learning , Neural Networks, Computer , Pulpitis/diagnostic imaging , Reproducibility of Results , ROC Curve , Random Allocation
12.
Biomedical and Environmental Sciences ; (12): 431-440, 2023.
Article in English | WPRIM | ID: wpr-981071

ABSTRACT

OBJECTIVE: To develop a few-shot learning (FSL) approach for classifying optical coherence tomography (OCT) images of patients with inherited retinal disorders (IRDs). METHODS: An FSL model based on a student-teacher learning framework was designed to classify images. A total of 2 317 images from 189 participants were included: 1 126 images showed IRDs, 533 were normal samples, and 658 were control samples. RESULTS: The FSL model achieved a total accuracy of 0.974-0.983, total sensitivity of 0.934-0.957, total specificity of 0.984-0.990, and total F1 score of 0.935-0.957, superior to the baseline model's total accuracy of 0.943-0.954, total sensitivity of 0.866-0.886, total specificity of 0.962-0.971, and total F1 score of 0.859-0.885. Most subclassifications also performed better, and the FSL model had higher areas under the receiver operating characteristic (ROC) curves (AUC) in most subclassifications. CONCLUSION: This study demonstrates the effective use of the FSL model for classifying OCT images from patients with IRDs, normal participants, and control participants with a smaller volume of data. The general principle and similar network architectures can also be applied to other retinal diseases with low prevalence.


Subject(s)
Humans , Tomography, Optical Coherence , Deep Learning , Retinal Diseases/diagnostic imaging , Retina/diagnostic imaging , ROC Curve
13.
Chinese Medical Journal ; (24): 967-973, 2023.
Article in English | WPRIM | ID: wpr-980909

ABSTRACT

BACKGROUND: Sarcopenia is an age-related, progressive skeletal muscle disorder involving loss of muscle mass or strength and of physiological function. Efficient and precise AI algorithms may play a significant role in its diagnosis. In this study, we aimed to develop a machine learning model for sarcopenia diagnosis using clinical characteristics and laboratory indicators of aging cohorts. METHODS: We developed models of sarcopenia using baseline data from the West China Health and Aging Trend (WCHAT) study; for external validation, we used the Xiamen Aging Trend (XMAT) cohort. We compared support vector machine (SVM), random forest (RF), eXtreme Gradient Boosting (XGB), and Wide and Deep (W&D) models. The area under the receiver operating curve (AUC) and accuracy (ACC) were used to evaluate the diagnostic efficiency of the models. RESULTS: The WCHAT cohort, with a total of 4 057 participants, provided the training and testing datasets, and the XMAT cohort, with 553 participants, served as the external validation dataset. Among the four models, W&D performed best in the training dataset (AUC = 0.916 ± 0.006, ACC = 0.882 ± 0.006), followed by SVM (AUC = 0.907 ± 0.004, ACC = 0.877 ± 0.006), XGB (AUC = 0.877 ± 0.005, ACC = 0.868 ± 0.005), and RF (AUC = 0.843 ± 0.031, ACC = 0.836 ± 0.024). In the testing dataset, the diagnostic efficiency of the models, in descending order, was W&D (AUC = 0.881, ACC = 0.862), XGB (AUC = 0.858, ACC = 0.861), RF (AUC = 0.843, ACC = 0.836), and SVM (AUC = 0.829, ACC = 0.857). In the external validation dataset, W&D again performed best (AUC = 0.970, ACC = 0.911), followed by RF (AUC = 0.830, ACC = 0.769), SVM (AUC = 0.766, ACC = 0.738), and XGB (AUC = 0.722, ACC = 0.749). CONCLUSIONS: The W&D model not only had excellent diagnostic performance for sarcopenia but also showed good economic efficiency and timeliness.
It could be widely used in primary health care institutions or developing areas with aging populations. TRIAL REGISTRATION: Chictr.org, ChiCTR 1800018895.


Subject(s)
Humans , Aged , Sarcopenia/diagnosis , Deep Learning , Aging , Algorithms , Biomarkers
14.
Chinese Journal of Stomatology ; (12): 533-539, 2023.
Article in Chinese | WPRIM | ID: wpr-986121

ABSTRACT

Artificial intelligence, represented by deep learning, has received increasing attention in oral and maxillofacial medical imaging and has been widely studied for image analysis and image quality improvement. This narrative review provides insight into the following applications of deep learning in oral and maxillofacial imaging: detection, recognition, and segmentation of teeth and other anatomical structures; detection and diagnosis of oral and maxillofacial diseases; and forensic personal identification. In addition, the limitations of the existing studies and directions for future development are summarized.


Subject(s)
Artificial Intelligence , Deep Learning , Diagnostic Imaging , Radiography , Image Processing, Computer-Assisted
15.
Chinese Journal of Stomatology ; (12): 561-568, 2023.
Article in Chinese | WPRIM | ID: wpr-986111

ABSTRACT

Objective: To develop a multi-classification orthodontic image recognition system using the SqueezeNet deep learning model for automatic classification of orthodontic image data. Methods: A total of 35 000 clinical orthodontic images were collected in the Department of Orthodontics, Capital Medical University School of Stomatology, from October to November 2020 and June to July 2021. The images came from 490 orthodontic patients with a male-to-female ratio of 49∶51 and an age range of 4 to 45 years. After data cleaning based on inclusion and exclusion criteria, the final image dataset included 17 453 face images (frontal, smiling, 90° right, 90° left, 45° right, and 45° left), 8 026 intraoral images [frontal occlusion, right occlusion, left occlusion, upper occlusal view (original and flipped), lower occlusal view (original and flipped), and coverage of the occlusal relationship], 4 115 X-ray images [lateral skull X-ray from the left side, lateral skull X-ray from the right side, frontal skull X-ray, cone-beam CT (CBCT), and wrist bone X-ray], and 684 other, non-orthodontic images. A labeling team composed of orthodontic doctoral students, associate professors, and professors used image labeling tools to classify the orthodontic images into 20 categories: 6 face image categories, 8 intraoral image categories, 5 X-ray image categories, and other images. The data for each label were randomly divided into training, validation, and test sets in an 8∶1∶1 ratio using the random function of the Python programming language. The improved SqueezeNet deep learning model was used for training, and 13 000 natural images from the open-source ImageNet dataset were used as additional non-orthodontic images to optimize the handling of anomalous data. A multi-classification orthodontic image recognition system based on the deep learning model was thus constructed.
The accuracy of the orthodontic image classification was evaluated with precision, recall, F1 score, and a confusion matrix based on the test-set predictions, and the reliability of the model's classification logic was verified with heat maps generated by the gradient-weighted class activation mapping (Grad-CAM) method. Results: After data cleaning and labeling, 30 278 orthodontic images were included in the dataset. On the test set, the precision, recall, and F1 scores of most classification labels were 100%, with only 5 of 3 047 images misclassified, for a system accuracy of 99.84% (3 042/3 047). The precision of anomaly data processing was 100% (10 500/10 500). The heat maps showed that the basis for the SqueezeNet model's classification decisions was largely consistent with human judgment. Conclusions: This study developed a multi-classification orthodontic image recognition system that automatically classifies 20 types of orthodontic images based on an improved SqueezeNet deep learning model. The system exhibited good accuracy in orthodontic image classification.
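The 8∶1∶1 train/validation/test split described above (applied per label in the study) can be sketched as a seeded shuffle followed by slicing; the seed and helper name are illustrative choices, not the authors' exact code:

```python
import random

def split_8_1_1(items, seed=42):
    """Shuffle a label's items and split them train/val/test at an 8:1:1 ratio."""
    items = list(items)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n = len(items)
    n_train, n_val = int(n * 0.8), int(n * 0.1)
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# In the study this would be run once per classification label.
train, val, test = split_8_1_1(range(1000))
# lengths -> 800, 100, 100
```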


Subject(s)
Humans , Male , Female , Child, Preschool , Child , Adolescent , Young Adult , Adult , Middle Aged , Deep Learning , Reproducibility of Results , Radiography , Algorithms , Cone-Beam Computed Tomography
16.
Chinese Journal of Stomatology ; (12): 547-553, 2023.
Article in Chinese | WPRIM | ID: wpr-986109

ABSTRACT

Objective: To establish a comprehensive, artificial intelligence (AI)-based diagnostic classification model for lateral cephalograms, to provide a reference for orthodontic diagnosis. Methods: A total of 2 894 lateral cephalograms were collected in the Department of Orthodontics, Capital Medical University School of Stomatology, from January 2015 to December 2021 to construct a dataset covering 1 351 males and 1 543 females with a mean age of (26.4±7.4) years. First, 2 orthodontists (with 5 and 8 years of orthodontic experience, respectively) performed manual annotation and calculated measurements for the primary classification; then 2 senior orthodontists (each with more than 20 years of experience) verified the 8 diagnostic classifications, covering skeletal and dental indices. The data were randomly divided into training, validation, and test sets in a 7∶2∶1 ratio. The open-source DenseNet121 was used to construct the model, and its performance was evaluated by classification accuracy, precision, sensitivity, specificity, and area under the curve (AUC). The model's regions of interest were visualized with class activation heat maps. Results: The automatic classification model for lateral cephalograms was successfully established. It took 0.012 s on average to make 8 diagnoses on one lateral cephalogram. The accuracy of 5 classifications was 80%-90%: sagittal and vertical skeletal facial pattern, mandibular growth, inclination of upper incisors, and protrusion of lower incisors. The accuracy of 3 classifications was 70%-80%: maxillary growth, inclination of lower incisors, and protrusion of upper incisors. The average AUC of each classification was ≥0.90. The class activation heat maps of successfully classified cephalograms showed that the model's activation regions were distributed over the relevant anatomical structures.
Conclusions: In this study, an automatic classification model for lateral cephalograms was established based on the DenseNet121 to achieve rapid classification of eight commonly used clinical diagnostic items.
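The evaluation protocol reported in this abstract — a random 7∶2∶1 split and per-item accuracy, sensitivity and specificity — can be sketched as below. The split ratio and metric definitions come from the abstract; the function and variable names are illustrative and not taken from the paper.

```python
import random

def split_dataset(items, ratios=(0.7, 0.2, 0.1), seed=42):
    """Randomly divide items into training/validation/test sets (7:2:1)."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return items[:n_train], items[n_train:n_train + n_val], items[n_train + n_val:]

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall) and specificity for one diagnostic item."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn) if tp + fn else 0.0,
        "specificity": tn / (tn + fp) if tn + fp else 0.0,
    }
```

In the study each of the 8 diagnostic items would be scored this way on the held-out test split; the DenseNet121 classifier itself is omitted here.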


Subject(s)
Male , Female , Humans , Young Adult , Adult , Artificial Intelligence , Deep Learning , Cephalometry , Maxilla , Mandible/diagnostic imaging
17.
Chinese Journal of Stomatology ; (12): 540-546, 2023.
Article in Chinese | WPRIM | ID: wpr-986108

ABSTRACT

Objective: To construct a neural network for eliminating metal artifacts in CT images by training a generative adversarial network (GAN) model, so as to provide a reference for clinical practice. Methods: The CT data of patients treated in the Department of Radiology, West China Hospital of Stomatology, Sichuan University from January 2017 to June 2022 were collected. A total of 1 000 cases of artifact-free CT data and 620 cases of metal-artifact CT data were obtained, covering 5 types of metal restorative materials: fillings, crowns, titanium plates and screws, orthodontic brackets and metal foreign bodies. Four hundred metal-artifact CT data and 1 000 artifact-free CT data were used for simulation synthesis, constructing 1 000 pairs of simulated metal-artifact images and corresponding artifact-free images (200 pairs of each type). With the data for the five types of metal artifacts kept equal, the entire data set was randomly (computer random) divided into a training set (800 pairs) and a test set (200 pairs). The former was used to train the GAN model, and the latter was used to evaluate its performance. The test set was evaluated quantitatively with two indexes: root-mean-square error (RMSE) and structural similarity index measure (SSIM). The trained GAN model was then employed to eliminate the metal artifacts from the remaining 220 clinical cases of metal-artifact CT data, and the elimination results were evaluated by two senior attending doctors using a modified Likert scale. Results: The RMSE values for artifact elimination of fillings, crowns, titanium plates and screws, orthodontic brackets and metal foreign bodies in the test set were 0.018±0.004, 0.023±0.007, 0.015±0.003, 0.019±0.004 and 0.024±0.008, respectively (F=1.29, P=0.274). The SSIM values were 0.963±0.023, 0.961±0.023, 0.965±0.013, 0.958±0.022 and 0.957±0.026, respectively (F=2.22, P=0.069).
The intraclass correlation coefficient between the 2 evaluators was 0.972. For the 220 clinical cases, the overall score on the modified Likert scale was 3.73±1.13, indicating satisfactory performance. The modified Likert scale scores for fillings, crowns, titanium plates and screws, orthodontic brackets and metal foreign bodies were 3.68±1.13, 3.67±1.16, 3.97±1.03, 3.83±1.14 and 3.33±1.12, respectively (F=1.44, P=0.145). Conclusions: The metal-artifact-reduction GAN model constructed in this study can effectively remove the interference of metal artifacts and improve image quality.
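The two quantitative indexes used in this study, RMSE and SSIM, can be sketched as follows. This is a simplified single-window (global) SSIM over flattened images with intensities in [0, 1]; production implementations are typically windowed (e.g. a sliding Gaussian window), and the constants follow the common defaults rather than anything stated in the abstract.

```python
import math

def rmse(x, y):
    """Root-mean-square error between two equally sized, flattened images."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / len(x))

def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM for intensities in [0, 1]; real pipelines usually
    average SSIM over local windows instead of one global window."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

An RMSE near 0 and an SSIM near 1 (as in the reported 0.015-0.024 and 0.957-0.965 ranges) indicate that the GAN output is close to the artifact-free reference.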


Subject(s)
Humans , Tomography, X-Ray Computed/methods , Deep Learning , Titanium , Neural Networks, Computer , Metals , Image Processing, Computer-Assisted/methods , Algorithms
18.
Neuroscience Bulletin ; (6): 893-910, 2023.
Article in English | WPRIM | ID: wpr-982439

ABSTRACT

Accurate and efficient methods for identifying and tracking each animal in a group are needed to study complex behaviors and social interactions. Traditional tracking methods (e.g., marking each animal with dye or surgically implanting microchips) can be invasive and may have an impact on the social behavior being measured. To overcome these shortcomings, video-based methods for tracking unmarked animals, such as fruit flies and zebrafish, have been developed. However, tracking individual mice in a group remains a challenging problem because of their flexible body and complicated interaction patterns. In this study, we report the development of a multi-object tracker for mice that uses the Faster region-based convolutional neural network (R-CNN) deep learning algorithm with geometric transformations in combination with multi-camera/multi-image fusion technology. The system successfully tracked every individual in groups of unmarked mice and was applied to investigate chasing behavior. The proposed system constitutes a step forward in the noninvasive tracking of individual mice engaged in social behavior.
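The tracker described above links Faster R-CNN detections of each mouse across frames. The detection network itself is not reproduced here; the sketch below shows only an illustrative greedy frame-to-frame association step based on bounding-box overlap (IoU). The function names and the 0.3 threshold are assumptions for illustration, not details from the paper.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def match_tracks(prev_boxes, new_boxes, min_iou=0.3):
    """Greedy association: link each existing track to the best-overlapping
    unused detection in the new frame, highest IoU first."""
    pairs = sorted(((iou(p, n), i, j)
                    for i, p in enumerate(prev_boxes)
                    for j, n in enumerate(new_boxes)), reverse=True)
    used_prev, used_new, matches = set(), set(), {}
    for score, i, j in pairs:
        if score < min_iou:
            break  # remaining candidates overlap too little to be the same mouse
        if i not in used_prev and j not in used_new:
            matches[i] = j
            used_prev.add(i); used_new.add(j)
    return matches
```

A full multi-camera system would additionally fuse matches across views and resolve identity switches during close interactions, which is where the paper's geometric transformations come in.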


Subject(s)
Animals , Mice , Deep Learning , Zebrafish , Algorithms , Neural Networks, Computer , Social Behavior
19.
Rev. bras. med. esporte ; 29(spe1): e2022_0197, 2023. tab, graf
Article in English | LILACS | ID: biblio-1394845

ABSTRACT

ABSTRACT Introduction The recent development of deep learning, a multilayer neural network machine learning approach, has reduced the tendency of traditional training algorithms to become trapped in local minima and has become a prominent direction in the machine learning field. Objective To design and validate an artificial intelligence model for deep learning of the impacts of weekly training load on students' biological systems. Methods Based on the physiological and biochemical indices of athletes during training, this paper analyzes actual training-load data from the annual preparation period and discusses the characteristics of athletes' training load in that period. The value, significance, composition factors, arrangement principles, and the method of calculating and determining weekly load density with the deep learning algorithm are discussed. Results The results showed that the daily 24-hour randomly sampled load consisted of moderate-intensity, low-intensity, and high-intensity training, and that it enhanced the physical-motor system and neural reactivity. Conclusion The research shows that physical education and sports training can encompass the two activities of "teaching" and "training". Sports biology monitoring proves to be a growth point of sports training research with great potential for future expansion. Level of evidence II; Therapeutic studies - investigation of treatment outcomes.




Subject(s)
Humans , Algorithms , Computational Biology/methods , Athletic Performance/physiology , Deep Learning , Physical Education and Training/methods
20.
Rev. bras. med. esporte ; 29(spe1): e2022_0199, 2023. tab, graf
Article in English | LILACS | ID: biblio-1394846

ABSTRACT

ABSTRACT Introduction Nowadays, more people are concerned with physical exercise, and swimming competitions, as major sporting events, have become a focus of attention. Such competitions require special attention to their athletes, and computational algorithms assist in this task. Objective To design and validate an algorithm, based on combined learning, for evaluating changes in athletes' vital capacity and blood markers after swimming matches. Methods A data-integration algorithm was used to analyze changes in vital capacity and blood acid after a combined-learning swimming competition, followed by the construction of an information-system model to calculate and process this algorithm. Results Comparative experiments show that the neural network algorithm reduces the computation time relative to the original baseline; in the most recent tests, the calculation completed in about 10 seconds, greatly reducing the total computation time. Conclusion Building a computational model according to the requirements of the designed algorithm proved helpful in practice, and the algorithm can be optimized and selected according to the calculation model and the realities of the application. Level of evidence II; Therapeutic studies - investigation of treatment outcomes.




Subject(s)
Humans , Swimming/physiology , Algorithms , Biomarkers/analysis , Deep Learning , Athletic Performance/physiology , Athletes