Results 1 - 20 of 120
1.
J Cancer Res Ther ; 2024 Apr; 20(2): 625-632
Article | IMSEAR | ID: sea-238036

ABSTRACT

Objective: To establish a multimodal model for distinguishing benign and malignant breast lesions. Materials and Methods: Clinical data, mammography, and MRI images (including T2WI, diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC), and DCE-MRI images) of 132 patients with benign breast lesions or breast cancer were analyzed retrospectively. The region of interest (ROI) in each image was marked and segmented using MATLAB software. The mammography, T2WI, DWI, ADC, and DCE-MRI models based on the ResNet34 network were trained. Using an ensemble learning method, the five models were used as base models, and a voting method was used to construct a multimodal model. The dataset was divided into a training set and a prediction set. The accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of the model were calculated. The diagnostic efficacy of each model was analyzed using the receiver operating characteristic (ROC) curve and the area under the curve (AUC). The diagnostic value was determined by the DeLong test, with statistical significance set at P < 0.05. Results: We evaluated the ability of the model to classify benign and malignant tumors using the test set. The AUC values of the multimodal, mammography, T2WI, DWI, ADC, and DCE-MRI models were 0.943, 0.645, 0.595, 0.905, 0.900, and 0.865, respectively. The diagnostic ability of the multimodal model was significantly higher than that of the mammography and T2WI models; however, it did not differ significantly from that of the DWI, ADC, and DCE-MRI models. Conclusion: Our deep learning model based on multimodal image training has practical value for the diagnosis of benign and malignant breast lesions.
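The voting step of such a five-model ensemble can be sketched in a few lines. This is an illustrative toy, not the authors' code; the model names, label strings, and per-model outputs are assumptions:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-model class predictions by simple majority voting.

    predictions: dict mapping a base-model name to its predicted label
    ("benign" or "malignant") for a single lesion.
    """
    votes = Counter(predictions.values())
    # With five base models and two classes, a tie cannot occur.
    label, _ = votes.most_common(1)[0]
    return label

# Hypothetical per-model outputs for one lesion:
preds = {
    "mammography": "benign",
    "T2WI": "malignant",
    "DWI": "malignant",
    "ADC": "malignant",
    "DCE-MRI": "benign",
}
print(majority_vote(preds))  # -> malignant
```

In practice the base models would emit class probabilities, and soft voting (averaging probabilities) is a common alternative to the hard voting shown here.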

2.
Article | IMSEAR | ID: sea-236716

ABSTRACT

The Internet of Things (IoT) empowers precise organization and intelligent coordination for industrial facilities and smart farming, enhancing agricultural efficiency. Sugar production relies on various auxiliary elements, but in labor-intensive smart agriculture, creating accurate forecasts is a formidable challenge. Machine learning emerges as a potential solution, as current convolutional neural network-based phase recognition techniques struggle with long-range dependencies. To address this, a temporal-based Swin Transformer network (TSTN) is introduced, comprising a Swin Transformer and long short-term memory (LSTM). The Swin Transformer employs attention mechanisms for expressive representations, while LSTM excels at extracting temporal data with long-range dependencies. The nutcracker optimizer algorithm (NOA) fine-tunes the LSTM weights. TSTN effectively blends these components, providing spatiotemporal data with enhanced context. This model outperforms competitors in accuracy, as demonstrated through testing with data from Uttar Pradesh. The integration of IoT and TSTN marks a significant advancement in optimizing agricultural operations for increased productivity and efficiency. In the comparative analysis, the proposed TSTN-NOA model achieves better performance and results than other existing models.
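The attention mechanism that the Swin Transformer builds on can be illustrated with a minimal scaled dot-product attention in plain Python. This is a generic sketch of the underlying component, not the TSTN implementation; the query/key/value matrices here are arbitrary examples:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d)) V, over lists of vectors."""
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# One query attends over two key/value pairs:
out = scaled_dot_product_attention([[1.0, 0.0]],
                                   [[1.0, 0.0], [0.0, 1.0]],
                                   [[1.0, 0.0], [0.0, 1.0]])
```

The Swin Transformer additionally restricts this computation to shifted local windows, which is what keeps its cost manageable on large inputs.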

3.
China Oncology ; (12): 306-315, 2024.
Article in Chinese | WPRIM | ID: wpr-1023818

ABSTRACT

Pathology is the gold standard for the diagnosis of neoplastic diseases. Whole slide imaging turns traditional slides into digital images, and artificial intelligence, especially deep learning models, has shown great potential in pathological image analysis. The application of artificial intelligence to whole slide imaging of lung cancer involves many aspects, such as histopathological classification, tumor microenvironment analysis, and efficacy and survival prediction, and is expected to assist clinical decision-making for precise treatment. Limitations in this field include the lack of precisely annotated data and slide quality that varies among institutions. Here we summarize recent research in lung cancer pathology image analysis leveraging artificial intelligence and propose several future directions.

4.
Article in Chinese | WPRIM | ID: wpr-1026219

ABSTRACT

To address issues such as the decline in diagnostic performance of deep learning models caused by imbalanced data distribution in psoriasis vulgaris, a VGG13-based deep convolutional neural network model is proposed. It integrates the processing capability of an improved fuzzy K-means clustering algorithm for highly clustered, complex data with the predictive capability of the VGG13 deep convolutional neural network. The model is applied to the diagnosis of psoriasis vulgaris, and the experimental results indicate that, compared with VGG13 and ResNet18, the proposed approach based on deep learning and improved fuzzy K-means is better suited to identifying psoriasis features.
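The membership update at the heart of fuzzy K-means (fuzzy c-means) can be sketched for the one-dimensional case. This is a generic illustration of the standard formula, not the paper's improved algorithm; the point, centers, and fuzzifier value are hypothetical:

```python
def fuzzy_memberships(point, centers, m=2.0):
    """Fuzzy c-means membership of one 1-D point to each cluster center:
    u_i = 1 / sum_k (d_i / d_k)^(2/(m-1)), where d_i is the distance to center i."""
    dists = [max(abs(point - c), 1e-12) for c in centers]  # avoid division by zero
    memberships = []
    for d_i in dists:
        denom = sum((d_i / d_k) ** (2.0 / (m - 1.0)) for d_k in dists)
        memberships.append(1.0 / denom)
    return memberships

# A point at 1.0 belongs mostly, but not exclusively, to the nearer center:
u = fuzzy_memberships(1.0, [0.0, 4.0])  # -> [0.9, 0.1]
```

Unlike hard K-means, every point keeps a graded membership in every cluster, which is what makes the method tolerant of overlapping, imbalanced data.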

5.
Article in Chinese | WPRIM | ID: wpr-1026234

ABSTRACT

Objective: To establish a hybrid deep learning lung sound classification model based on a convolutional neural network (CNN) and long short-term memory (LSTM) for electronic auscultation. Methods: Wavelet transform was used to extract features from the dataset, transforming lung sound signals into energy entropy, peak value, and other features. On this basis, a classification model based on a hybrid algorithm incorporating CNN and LSTM neural networks was constructed. The features extracted by wavelet transform were input into the CNN module to obtain the spatial features of the data, and the temporal features were then detected through the LSTM module. The fusion of the two types of features enabled the model to classify lung sounds, thereby assisting in the diagnosis of pulmonary diseases. Results: The accuracy and F1 score of the CNN-LSTM hybrid model were significantly higher than those of the single models, reaching 0.948 and 0.950, respectively. Conclusion: The proposed CNN-LSTM hybrid model demonstrates higher accuracy and more precise classification, showing broad potential application value in intelligent auscultation.
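The energy-entropy feature mentioned in the Methods can be computed as the Shannon entropy of the normalized energy distribution across wavelet sub-bands. A minimal sketch, with hypothetical sub-band energy values (not the authors' feature pipeline):

```python
import math

def energy_entropy(subband_energies):
    """Shannon entropy of the normalized energy distribution across
    wavelet sub-bands: H = -sum_i p_i * ln(p_i), p_i = E_i / sum(E)."""
    total = sum(subband_energies)
    probs = [e / total for e in subband_energies]
    return -sum(p * math.log(p) for p in probs if p > 0)

# Energy concentrated in one band gives lower entropy than a spread-out signal:
print(energy_entropy([4.0, 2.0, 1.0, 1.0]))  # ≈ 1.2130
```

A flat energy distribution maximizes the entropy (ln of the number of bands), so this feature captures how spread out the signal's energy is across scales.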

6.
Article in Chinese | WPRIM | ID: wpr-1026236

ABSTRACT

To address the problem of low accuracy in multi-class recognition of motor imagery electroencephalogram (EEG) signals, a recognition method based on differential entropy and a convolutional neural network is proposed for 4-class classification of motor imagery. The EEG signals are filtered into 4 frequency bands (Alpha, Beta, Theta, and Gamma), followed by the computation of differential entropy for each frequency band. According to the spatial layout of the scalp electrodes, the data are reconstructed into a three-dimensional EEG feature cube, which is input into the convolutional neural network for 4-class classification. The method achieves an accuracy of 95.88% on the BCI Competition IV-2a public dataset. Additionally, a 4-class motor imagery dataset established in the laboratory was processed in the same way, and an accuracy of 94.50% was obtained. The test results demonstrate that the proposed method exhibits superior recognition performance.
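Under a Gaussian assumption, the differential entropy of a band-filtered EEG segment reduces to a closed form, h = 0.5·ln(2πe·σ²), which is the quantity typically computed per band. A minimal sketch (illustrative, not the authors' code; the sample values are arbitrary):

```python
import math

def differential_entropy(band_samples):
    """Differential entropy of a band-filtered EEG segment, assuming the
    samples are approximately Gaussian: h = 0.5 * ln(2 * pi * e * var)."""
    n = len(band_samples)
    mean = sum(band_samples) / n
    var = sum((x - mean) ** 2 for x in band_samples) / n
    return 0.5 * math.log(2 * math.pi * math.e * var)

# A unit-variance segment gives the Gaussian entropy constant ~1.4189:
h = differential_entropy([-1.0, 1.0, -1.0, 1.0])
```

Computing this value per band and per electrode, then arranging the results by electrode position, yields the 3D feature cube described above.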

7.
Article in Chinese | WPRIM | ID: wpr-1039116

ABSTRACT

Objective: Direct continuous monitoring of arterial blood pressure is invasive, and continuous monitoring cannot be achieved by traditional cuffed indirect blood pressure measurement methods. Previously, continuous non-invasive arterial blood pressure monitoring was achieved by using photoplethysmography (PPG), but this yields discrete systolic and diastolic values rather than the continuous values that constitute the arterial blood pressure wave. This study aimed to reconstruct the arterial blood pressure wave signal from PPG using a CNN-LSTM model to achieve continuous non-invasive arterial blood pressure monitoring. Methods: A CNN-LSTM hybrid neural network model was constructed, and synchronously recorded PPG and arterial blood pressure wave signals from the Medical Information Mart for Intensive Care (MIMIC) were used. The PPG signals were input to this model after noise reduction, normalization, and sliding window segmentation. The corresponding arterial blood pressure waves were reconstructed from PPG using the CNN-LSTM hybrid model. Results: When using the CNN-LSTM neural network with a window length of 312, the error between the reconstructed and actual arterial blood pressure values was minimal: the mean absolute error (MAE) and root mean square error (RMSE) were 2.79 mmHg and 4.24 mmHg, respectively, and the cosine similarity was optimal. The reconstructed arterial blood pressure values were highly correlated with the actual values and met the Association for the Advancement of Medical Instrumentation (AAMI) standards. Conclusion: A CNN-LSTM hybrid neural network can reconstruct the arterial blood pressure wave signal from PPG to achieve continuous non-invasive arterial blood pressure monitoring.
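The sliding-window segmentation and the MAE/RMSE error metrics described above can be sketched as follows. This is an illustrative sketch; the step size and the toy signals are hypothetical (the study's window length was 312 samples):

```python
import math

def sliding_windows(signal, window_len, step):
    """Segment a 1-D signal into (possibly overlapping) fixed-length windows."""
    return [signal[i:i + window_len]
            for i in range(0, len(signal) - window_len + 1, step)]

def mae_rmse(pred, true):
    """Mean absolute error and root mean square error between two sequences."""
    errs = [p - t for p, t in zip(pred, true)]
    mae = sum(abs(e) for e in errs) / len(errs)
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    return mae, rmse

windows = sliding_windows(list(range(10)), window_len=4, step=2)
mae, rmse = mae_rmse([120.0, 80.0, 95.0], [118.0, 82.0, 95.0])
```

Each window of the denoised, normalized PPG is fed to the model, and the reconstructed pressure wave is scored against the reference with exactly these two metrics.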

8.
International Eye Science ; (12): 453-457, 2024.
Article in Chinese | WPRIM | ID: wpr-1011400

ABSTRACT

The advancement of computers and the data explosion have ushered in the third wave of artificial intelligence (AI). AI is an interdisciplinary field that encompasses new ideas, theories, and technologies. AI has brought convenience to ophthalmic applications and promoted their intelligent, precise, and minimally invasive development. At present, AI is widely applied in various fields of ophthalmology, especially in oculoplastic surgery. AI has made rapid progress in image detection, facial recognition, and related tasks, and its performance and accuracy have even surpassed humans in some respects. This article reviews the relevant research and applications of AI in oculoplastic surgery, including ptosis, single eyelid, eye pouch, eyelid mass, and exophthalmos; discusses the challenges and opportunities faced by AI in oculoplastic surgery; and provides prospects for its future development, aiming to offer new ideas for the development of AI in oculoplastic surgery.

9.
Article in Chinese | WPRIM | ID: wpr-1021941

ABSTRACT

BACKGROUND: With the continuous improvement and progress of artificial intelligence technology in the treatment of spinal deformity, a large number of studies have been devoted to this field, but the main research status, hot spots, and development trends remain unclear. OBJECTIVE: To visually analyze the literature related to artificial intelligence in the field of spinal deformities using bibliometrics, identify the research hotspots and shortcomings in this field, and provide references for future research. METHODS: The Web of Science Core Collection was used to search for articles related to artificial intelligence in the field of spinal deformities published from inception to 2023. The data were visually analyzed with CiteSpace 5.6.R5 and VOSviewer 1.6.19. RESULTS AND CONCLUSION: (1) A total of 165 papers were included, and the number of papers published in this field showed a fluctuating upward trend. The author with the largest number of articles is Lafage V, and the country with the largest number of articles is China. (2) Keyword analysis shows that adolescent scoliosis, deep learning, classification, precision, and robot are the main keywords. (3) In-depth analysis of co-cited and highly cited articles shows that artificial intelligence has three hotspots in the field of spinal deformities: the use of U-shaped architectures (a mature form of deep learning convolutional neural networks) to automatically measure imaging parameters (the Cobb angle and accurate segmentation of paraspinal muscles), multi-view correlation network architectures (i.e., spine curvature assessment frameworks), and robot-guided spinal surgery. (4) In the field of artificial intelligence treatment of spinal deformities, mechanistic research such as genomics is very weak. In the future, unsupervised hierarchical clustering and other machine learning techniques can be used to study the basic mechanisms of susceptibility genes in the field of spinal deformities through genome-wide association analysis and other genomics research methods.

10.
Article in Chinese | WPRIM | ID: wpr-1022728

ABSTRACT

Objective: To evaluate the relationship between diabetic nephropathy (DN) and diabetic retinopathy (DR) in patients with type 2 diabetes mellitus (T2DM) based on imaging and clinical testing data. Methods: A total of 600 T2DM patients who visited the First People's Hospital of Ziyang from March 2021 to December 2022 were included. Fundus photography and fundus fluorescein angiography were performed on all patients, and their age, gender, T2DM duration, cardiovascular diseases, cerebrovascular disease, hypertension, smoking history, drinking history, body mass index, systolic blood pressure, diastolic blood pressure, and other clinical data were collected. The levels of fasting blood glucose (FPG), triglyceride (TG), total cholesterol (TC), high-density lipoprotein cholesterol (HDL-C), low-density lipoprotein cholesterol (LDL-C), glycosylated hemoglobin (HbA1c), 24 h urinary albumin (UAlb), urinary albumin to creatinine ratio (ACR), serum creatinine (Scr), and blood urea nitrogen (BUN) were measured. Logistic regression was used to analyze the risk factors associated with DR. DR staging was performed according to fundus images, and a convolutional neural network (CNN) algorithm was used as the image analysis method to explore the correlation between DR and DN based on emission computed tomography (ECT) and clinical testing data. Results: The average lesion area rates of DR and DN detected by the CNN in the non-DR, mild non-proliferative DR (NPDR), moderate NPDR, severe NPDR, and proliferative DR (PDR) groups were higher than those obtained by the traditional algorithm (TCM). As DR worsened, the Scr, BUN, 24 h UAlb, and ACR gradually increased. The incidence of DN in the non-DR, mild NPDR, moderate NPDR, severe NPDR, and PDR groups was 1.67%, 8.83%, 16.16%, 22.16%, and 30.83%, respectively. Logistic regression analysis showed that the duration of T2DM, smoking history, HbA1c, TC, TG, HDL-C, LDL-C, 24 h UAlb, Scr, BUN, ACR, and glomerular filtration rate (GFR) were independent risk factors for DR. Renal dynamic ECT analysis demonstrated that with the aggravation of DR, renal blood flow perfusion gradually decreased, resulting in diminished renal filtration. Conclusion: The application of CNN to early-stage DR and DN image analysis in T2DM patients improves the diagnostic accuracy of DR and DN lesion areas. DN worsens as DR is aggravated.

11.
Article in Chinese | WPRIM | ID: wpr-1027398

ABSTRACT

Objective: To investigate the effectiveness and feasibility of a 3D U-Net combined with a three-phase CT image segmentation model for the automatic segmentation of GTVnx and GTVnd in nasopharyngeal carcinoma. Methods: A total of 645 sets of computed tomography (CT) images were retrospectively collected from 215 nasopharyngeal carcinoma cases, covering three phases: plain scan (CT), contrast-enhanced CT (CTC), and delayed CT (CTD). The dataset was divided into a training set of 172 cases and a test set of 43 cases using the random number table method. Six experimental groups were established: A1, A2, A3, A4, B1, and B2. The former four groups used only CT, only CTC, only CTD, and all three phases, respectively; the B1 and B2 groups used phase fine-tuning CTC models. The Dice similarity coefficient (DSC) and the 95% Hausdorff distance (HD95) served as quantitative evaluation indicators. Results: Compared with monophasic CT (groups A1/A2/A3), triphasic CT (group A4) yielded better results in the automatic segmentation of GTVnd (DSC: 0.67 vs. 0.61, 0.64, 0.64; t = 7.48, 3.27, 4.84, P < 0.01; HD95: 36.45 mm vs. 79.23, 59.55, 65.17 mm; t = 5.24, 2.99, 3.89, P < 0.01), with statistically significant differences (P < 0.01). However, triphasic CT (group A4) showed no significant improvement in the automatic segmentation of GTVnx compared with monophasic CT (groups A1/A2/A3) (DSC: 0.73 vs. 0.74, 0.74, 0.73; HD95: 14.17 mm vs. 8.06, 8.11, 8.10 mm), with no statistically significant difference (P > 0.05). For the automatic segmentation of GTVnd, groups B1/B2 showed higher segmentation accuracy than group A1 (DSC: 0.63, 0.63 vs. 0.61, t = 4.10, 3.03, P < 0.01; HD95: 58.11, 50.31 mm vs. 79.23 mm, t = 2.75, 3.10, P < 0.01). Conclusions: Triphasic CT scanning can improve the automatic segmentation of the GTVnd in nasopharyngeal carcinoma. Additionally, phase fine-tuning models can enhance the automatic segmentation accuracy of the GTVnd on plain CT images.
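The Dice similarity coefficient used as the evaluation indicator above is defined as DSC = 2|A∩B| / (|A| + |B|). A minimal sketch over sets of voxel coordinates (illustrative only, not the study's evaluation code):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as
    iterables of voxel coordinates: DSC = 2|A ∩ B| / (|A| + |B|)."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(a & b) / (len(a) + len(b))

# Two masks overlapping in 2 of their 3 voxels each:
d = dice_coefficient([(0, 0), (0, 1), (1, 0)], [(0, 1), (1, 0), (1, 1)])  # -> 2/3
```

DSC rewards volume overlap, while HD95 (the 95th-percentile Hausdorff distance) penalizes boundary outliers, which is why the two are usually reported together.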

12.
Article in Chinese | WPRIM | ID: wpr-973239

ABSTRACT

Objective: Artificial intelligence (AI)-based whole-smear automated diatom detection can perform forensic diatom testing for drowning more quickly and efficiently than human experts. However, this technique has so far been used only in conjunction with the strong acid digestion method, which has a low diatom extraction rate. In this study, we propose using the more efficient proteinase K tissue digestion method (hereinafter referred to as the enzyme digestion method) for diatom extraction, to investigate the generalization ability and feasibility of this technique with other extraction methods. Methods: Lung tissues from 6 drowned cadavers were collected, digested with proteinase K, and made into smears. The smears were digitized using the digital image matrix cutting method, and a diatom and background database was established accordingly. The dataset was divided into training, validation, and test sets at a ratio of 3:1:1, and convolutional neural network (CNN) models were trained, internally validated, and externally tested on the basis of ImageNet pre-training. Results: The accuracy of the best model in the external test was 97.65%, and the regions from which the model extracted features coincided with the areas where the diatoms were located. In practice, the best CNN model achieved a precision of more than 80% for diatom detection in drowned corpses. Conclusion: The AI-based automated diatom detection technique combining the CNN model with the enzyme digestion method can efficiently identify diatoms and can serve as an auxiliary method for diatom testing in drowning identification.

13.
Article in Chinese | WPRIM | ID: wpr-973695

ABSTRACT

Objective To develop an intelligent recognition model based on deep learning algorithms of unmanned aerial vehicle (UAV) images, and to preliminarily explore the value of this model for remote identification, monitoring and management of cattle, a source of Schistosoma japonicum infection. Methods Oncomelania hupensis snail-infested marshlands around the Poyang Lake area were selected as the study area. Image datasets of the study area were captured by aerial photography with UAV and subjected to augmentation. Cattle in the sample database were annotated with the annotation software VGG Image Annotator to create the morphological recognition labels for cattle. A model was created for intelligent recognition of livestock based on deep learning-based Mask R-convolutional neural network (CNN) algorithms. The performance of the model for cattle recognition was evaluated with accuracy, precision, recall, F1 score and mean precision. Results A total of 200 original UAV images were obtained, and 410 images were yielded following data augmentation. A total of 2 860 training samples of cattle recognition were labeled. The created deep learning-based Mask R-CNN model converged following 200 iterations, with an accuracy of 88.01%, precision of 92.33%, recall of 94.06%, F1 score of 93.19%, and mean precision of 92.27%, and the model was effective to detect and segment the morphological features of cattle. Conclusion The deep learning-based Mask R-CNN model is highly accurate for recognition of cattle based on UAV images, which is feasible for remote intelligent recognition, monitoring, and management of the source of S. japonicum infection.
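The evaluation metrics reported above follow the standard definitions from confusion-matrix counts. A minimal sketch (the counts in the example are hypothetical, not the study's data):

```python
def detection_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 score from confusion-matrix counts:
    tp/fp/fn/tn = true/false positives and false/true negatives."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical detection counts:
acc, prec, rec, f1 = detection_metrics(tp=9, fp=1, fn=1, tn=9)
```

For a detector like Mask R-CNN, precision measures how many reported cattle are real and recall how many real cattle are found; F1 balances the two.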

14.
Article in Chinese | WPRIM | ID: wpr-972255

ABSTRACT

Facial symmetry evaluation has always been a topic of concern for doctors engaged in the study of facial aesthetics in disciplines such as orthodontics, dentistry, and plastic surgery. Although scholars at home and abroad have carried out much research on the evaluation of facial symmetry with a variety of emerging technologies and methods, there is still a lack of unified standards for the evaluation of facial asymmetry due to the complexity of the content and methods and individual subjectivity. Facial asymmetry involves changes in the length, width, and height of the face. It is a complex dental and maxillofacial malformation for which early identification and accurate evaluation are particularly important. Clinically, in addition to the necessary dental and maxillofacial examinations, facial asymmetry must also be evaluated with the help of corresponding auxiliary methods. This paper summarizes the commonly used three-dimensional evaluation methods. The evaluation methods of facial asymmetry can be divided into 5 categories: qualitative analysis, quantitative analysis, dynamic analysis, mathematical analysis, and artificial intelligence analysis. After analyzing and summarizing the characteristics, advantages, and limitations of each method in clinical applications, we find that although these methods vary in accuracy, evaluation scope, diagnostic nature, and calculation method, three-dimensional evaluation methods are more objective, accurate, and convenient, and will become the mainstream approach to evaluating facial asymmetry as three-dimensional measurement technologies develop further.

15.
Zhongguo Zhong Yao Za Zhi ; (24): 829-834, 2023.
Article in Chinese | WPRIM | ID: wpr-970553

ABSTRACT

In the digital transformation of Chinese pharmaceutical industry, how to efficiently govern and analyze industrial data and excavate the valuable information contained therein to guide the production of drug products has always been a research hotspot and application difficulty. Generally, the Chinese pharmaceutical technique is relatively extensive, and the consistency of drug quality needs to be improved. To address this problem, we proposed an optimization method combining advanced calculation tools(e.g., Bayesian network, convolutional neural network, and Pareto multi-objective optimization algorithm) with lean six sigma tools(e.g., Shewhart control chart and process performance index) to dig deeply into historical industrial data and guide the continuous improvement of pharmaceutical processes. Further, we employed this strategy to optimize the manufacturing process of sporoderm-removal Ganoderma lucidum spore powder. After optimization, we preliminarily obtained the possible interval combination of critical parameters to ensure the P_(pk) values of the critical quality properties including moisture, fineness, crude polysaccharide, and total triterpenes of the sporoderm-removal G. lucidum spore powder to be no less than 1.33. The results indicate that the proposed strategy has an industrial application value.
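The process performance index P_pk cited above is conventionally computed as min((USL − μ)/3σ, (μ − LSL)/3σ), using the overall (sample) standard deviation. A minimal sketch; the measurement values and specification limits below are hypothetical:

```python
import math

def ppk(samples, lsl, usl):
    """Process performance index:
    Ppk = min((USL - mean) / (3 * sigma), (mean - LSL) / (3 * sigma)),
    with sigma the overall sample standard deviation (n - 1 denominator)."""
    n = len(samples)
    mu = sum(samples) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in samples) / (n - 1))
    return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

# A centered process with hypothetical spec limits:
value = ppk([4.0, 5.0, 6.0, 5.0], lsl=2.0, usl=8.0)
```

The study's target of P_pk ≥ 1.33 corresponds to the process mean sitting at least 4σ inside the nearer specification limit.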


Subject(s)
Bayes Theorem , Data Mining , Drug Industry , Powders , Reishi , Spores, Fungal
16.
Zhongguo yi xue ke xue yuan xue bao ; (6): 273-279, 2023.
Article in Chinese | WPRIM | ID: wpr-981263

ABSTRACT

Objective: To evaluate the accuracy of different convolutional neural networks (CNN), representative deep learning models, in the differential diagnosis of ameloblastoma and odontogenic keratocyst, and to compare the diagnostic results between the models and oral radiologists. Methods: A total of 1000 digital panoramic radiographs were retrospectively collected from patients with ameloblastoma (500 radiographs) or odontogenic keratocyst (500 radiographs) in the Department of Oral and Maxillofacial Radiology, Peking University School of Stomatology. Eight CNN, including ResNet (18, 50, 101), VGG (16, 19), and EfficientNet (b1, b3, b5), were selected to distinguish ameloblastoma from odontogenic keratocyst. Transfer learning was employed to train on the 800 panoramic radiographs in the training set through 5-fold cross validation, and the 200 panoramic radiographs in the test set were used for differential diagnosis. The Chi-square test was performed to compare performance among the different CNN. Furthermore, 7 oral radiologists (2 seniors and 5 juniors) made diagnoses on the 200 panoramic radiographs in the test set, and the results were compared between the CNN and the oral radiologists. Results: The eight neural network models showed diagnostic accuracy ranging from 82.50% to 87.50%, of which EfficientNet b1 had the highest accuracy of 87.50%. There was no significant difference in diagnostic accuracy among the CNN models (P=0.998, P=0.905). The average diagnostic accuracy of the oral radiologists was (70.30±5.48)%, with no statistical difference in accuracy between senior and junior oral radiologists (P=0.883). The diagnostic accuracy of the CNN models was higher than that of the oral radiologists (P<0.001). Conclusion: Deep learning CNN can achieve accurate differential diagnosis between ameloblastoma and odontogenic keratocyst on panoramic radiographs, with higher diagnostic accuracy than oral radiologists.
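The 5-fold cross-validation scheme used for training can be sketched as a simple index split, where each fold serves once as the validation set. This is illustrative only; the actual pipeline, shuffling, and stratification strategy are not specified in the abstract:

```python
def k_fold_indices(n_samples, k=5):
    """Split sample indices into k folds; return (train, val) index lists,
    with each fold serving once as the validation set."""
    indices = list(range(n_samples))
    folds = [indices[i::k] for i in range(k)]  # round-robin assignment
    splits = []
    for i in range(k):
        val = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        splits.append((train, val))
    return splits

# 800 training radiographs -> five 640/160 train/validation splits:
splits = k_fold_indices(800, k=5)
```

Every radiograph is validated against exactly once, so the cross-validated accuracy uses all 800 training images without test-set leakage.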


Subject(s)
Humans , Ameloblastoma/diagnostic imaging , Deep Learning , Diagnosis, Differential , Radiography, Panoramic , Retrospective Studies , Odontogenic Cysts/diagnostic imaging , Odontogenic Tumors
17.
Article in Chinese | WPRIM | ID: wpr-982772

ABSTRACT

Objective: To evaluate the diagnostic accuracy of a convolutional neural network (CNN) in diagnosing nasopharyngeal carcinoma using endoscopic narrow-band imaging. Methods: A total of 834 cases with nasopharyngeal lesions were collected from the People's Hospital of Guangxi Zhuang Autonomous Region between 2014 and 2016. We trained the DenseNet201 model to classify the endoscopic images, evaluated its performance on the test dataset, and compared the results with those of two independent endoscopic experts. Results: The area under the ROC curve of the CNN in diagnosing nasopharyngeal carcinoma was 0.98. The sensitivity and specificity of the CNN were 91.90% and 94.69%, respectively. The sensitivities of the two expert assessments were 92.08% and 91.06%, and the specificities were 95.58% and 92.79%, respectively. There was no significant difference between the diagnostic accuracy of the CNN and the expert assessments (P=0.282, P=0.085). Moreover, there was no significant difference in accuracy in discriminating early-stage from late-stage nasopharyngeal carcinoma (P=0.382). The CNN model could rapidly distinguish nasopharyngeal carcinoma from benign lesions, with an image recognition time of 0.1 s per image. Conclusion: The CNN model can quickly distinguish nasopharyngeal carcinoma from benign nasopharyngeal lesions, which can aid endoscopists in diagnosing nasopharyngeal lesions and reduce the rate of nasopharyngeal biopsy.


Subject(s)
Humans , Nasopharyngeal Carcinoma , Narrow Band Imaging , China , Neural Networks, Computer , Nasopharyngeal Neoplasms/diagnostic imaging
18.
Chinese Journal of Medical Physics ; (6): 1477-1485, 2023.
Article in Chinese | WPRIM | ID: wpr-1026167

ABSTRACT

In view of the numerous subtle features in fundus disease images, small sample sizes, and diagnostic difficulty, deep learning and medical imaging technologies are combined to develop a fundus disease diagnosis model that integrates multi-scale features and a hybrid domain attention mechanism. The ResNet50 network is taken as the baseline and modified in this study. The method uses a parallel multi-branch architecture to extract fundus disease features under different receptive fields, effectively improving feature extraction ability and computational efficiency, and adopts a hybrid domain attention mechanism to select the information most critical to the current task, effectively enhancing classification performance. Testing on the ODIR dataset shows that the proposed method achieves a diagnostic accuracy of 93.2% for different fundus diseases, 5.2% higher than the baseline network, demonstrating good diagnostic performance.

19.
Article in Chinese | WPRIM | ID: wpr-1038393

ABSTRACT

Objective: To develop an endoscopic automatic detection system for early gastric cancer (EGC) based on a region-based convolutional neural network (Mask R-CNN). Methods: A total of 3 579 and 892 white light imaging (WLI) images of EGC were obtained from the First Affiliated Hospital of Anhui Medical University for training and testing, respectively. Then, 10 WLI videos were obtained prospectively to test the dynamic performance of the system. In addition, 400 WLI images were randomly selected for comparison between the Mask R-CNN system and endoscopists. Diagnostic ability was assessed by accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). Results: The accuracy, sensitivity, and specificity of the Mask R-CNN system in diagnosing EGC on WLI images were 90.25%, 91.06%, and 89.01%, respectively, with no statistically significant difference from the results of pathological diagnosis. On the WLI real-time videos, the diagnostic accuracy was 90.27%, and the test videos were processed in real time at up to 35 frames/s. In the controlled experiment, the sensitivity of the Mask R-CNN system was higher than that of the experts (93.00% vs 80.20%, χ2 = 7.059, P < 0.001), its specificity was higher than that of the juniors (82.67% vs 71.87%, χ2 = 9.955, P < 0.001), and its overall accuracy was higher than that of the seniors (85.25% vs 78.00%, χ2 = 7.009, P < 0.001). Conclusion: The Mask R-CNN system shows excellent performance in detecting EGC under WLI and has great potential for practical clinical application.
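The χ² comparisons reported above are Pearson chi-square tests on 2×2 tables (e.g. correct vs. incorrect diagnoses for the system and a reader group). A minimal sketch of the statistic without continuity correction (illustrative, not the authors' analysis code; the example counts are hypothetical):

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for the
    2x2 table [[a, b], [c, d]]:
    chi2 = n * (a*d - b*c)^2 / ((a+b)(c+d)(a+c)(b+d))."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical correct/incorrect counts for two readers:
stat = chi_square_2x2(20, 10, 10, 20)
```

The statistic is then compared against the χ² distribution with 1 degree of freedom to obtain the P value (e.g. via `scipy.stats.chi2_contingency` in practice).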

20.
Article in Chinese | WPRIM | ID: wpr-1022924

ABSTRACT

Objective: To propose a brain age prediction method based on deep convolutional generative adversarial networks (DCGAN) for objective assessment of brain health status. Methods: The DCGAN model was extended from 2D to 3D and improved by integrating the concept of the residual block to enhance feature extraction. The classifiers were pre-trained with unsupervised adversarial learning and fine-tuned with transfer learning to eliminate the overfitting of the 3D convolutional neural network (CNN) caused by the small sample size. To verify the effectiveness of the improved model, comparison analyses based on the UK Biobank (UKB) database were carried out against the least absolute shrinkage and selection operator (LASSO) model, a machine learning model, a 3D CNN model, and a graph convolutional network model, using the mean absolute error (MAE) as the evaluation metric. Results: The proposed model outperformed the LASSO, machine learning, 3D CNN, and graph convolutional network models in predicting brain age, with a MAE of 2.896 years. Conclusion: The proposed method performs well on large-scale datasets, predicting brain age accurately and assessing brain health status objectively.
