ABSTRACT
In recent years, the incidence of dry eye disease has increased year by year due to environmental changes and overuse of the eyes in some people. As the main type of dry eye disease, hyperevaporative dry eye is mostly caused by meibomian gland dysfunction (MGD) resulting from abnormal quality or quantity of the tear-film lipid layer. Owing to differences in diagnosis and classification, there is currently no unified standard for the treatment of this disease, which limits clinicians' judgment of diagnosis, treatment effect, and follow-up management. With the availability of big data and improvements in computer graphics processing and mathematical models, artificial intelligence (AI) is widely used in the medical field. AI systems can use technologies such as machine learning and deep learning to exert advanced problem-solving capabilities, making diagnosis more objective and improving diagnostic and treatment efficiency. The application of AI in ophthalmology is mainly based on the auxiliary diagnosis of ocular images and the screening of eye diseases, which reduces the medical system's dependence on manual labor, makes the screening and diagnosis of eye diseases faster, more convenient, and more consistent, and alleviates the medical burden, thereby significantly improving the efficiency and cost-effectiveness of medical services. At present, the application of AI in cataract, glaucoma, diabetic retinopathy, and other fields is maturing, and research in the field of MGD-related dry eye has also made progress. This article reviews the application status and progress of AI in MGD-related dry eye.
ABSTRACT
Land use describes the actual form of the land surface, such as forest or open water, and its classification based on human utilization. A land use map provides information about the current landscape of an area. In this study, land use and land cover in the Lower Bhavani basin were classified using GIS platforms and Landsat 8 satellite data. The platforms used were the Semi-Automated Plugin (SAP) in QGIS and the random forest method in Google Earth Engine (GEE). The findings suggest that both platforms performed efficiently and displayed comparable percentages of land covered by the various land use classes. The accuracy of the resulting land use maps was evaluated against a Google Earth image; SAP and GEE achieved overall accuracies of 91.8% and 92.6%, respectively. This study aids in evaluating land use classification trends across different Geographic Information System platforms.
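The accuracy assessment described above reduces to comparing each platform's class label against a reference label at sampled points. A minimal sketch (the labels and class codes are hypothetical, not the study's data) of overall accuracy and class-area percentages:

```python
from collections import Counter

def overall_accuracy(predicted, reference):
    """Fraction of sample points where the classified map agrees with the reference."""
    assert len(predicted) == len(reference)
    hits = sum(p == r for p, r in zip(predicted, reference))
    return hits / len(reference)

def class_percentages(labels):
    """Percentage of points covered by each land use class."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {cls: 100.0 * n / total for cls, n in counts.items()}

# hypothetical reference points: W = water, F = forest, C = cropland, B = built-up
ref = ["W", "F", "C", "C", "B", "F", "W", "C"]
sap = ["W", "F", "C", "C", "B", "C", "W", "C"]   # one disagreement with reference
print(overall_accuracy(sap, ref))                 # 0.875
```

In practice the reference labels would come from interpreted Google Earth points, and the same routine would be run once per platform to obtain figures comparable to the 91.8% and 92.6% reported.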
ABSTRACT
Arrhythmia is a significant cardiovascular disease that poses a threat to human health, and its primary diagnosis relies on electrocardiogram (ECG). Implementing computer technology to achieve automatic classification of arrhythmia can effectively avoid human error, improve diagnostic efficiency, and reduce costs. However, most automatic arrhythmia classification algorithms focus on one-dimensional temporal signals, which lack robustness. Therefore, this study proposed an arrhythmia image classification method based on Gramian angular summation field (GASF) and an improved Inception-ResNet-v2 network. Firstly, the data was preprocessed using variational mode decomposition, and data augmentation was performed using a deep convolutional generative adversarial network. Then, GASF was used to transform one-dimensional ECG signals into two-dimensional images, and an improved Inception-ResNet-v2 network was utilized to implement the five arrhythmia classifications recommended by the AAMI (N, V, S, F, and Q). The experimental results on the MIT-BIH Arrhythmia Database showed that the proposed method achieved an overall classification accuracy of 99.52% and 95.48% under the intra-patient and inter-patient paradigms, respectively. The arrhythmia classification performance of the improved Inception-ResNet-v2 network in this study outperforms other methods, providing a new approach for deep learning-based automatic arrhythmia classification.
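The GASF step itself is compact: the signal is rescaled to [-1, 1], mapped to polar angles phi = arccos(x), and image entry (i, j) is cos(phi_i + phi_j). A minimal numpy sketch of that transform alone (illustrative, not the paper's full pipeline; the sine wave stands in for a real ECG beat):

```python
import numpy as np

def gasf(signal):
    """Gramian angular summation field of a 1-D signal."""
    x = np.asarray(signal, dtype=float)
    # rescale to [-1, 1] so arccos is defined
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    phi = np.arccos(np.clip(x, -1.0, 1.0))       # polar encoding
    return np.cos(phi[:, None] + phi[None, :])   # G[i, j] = cos(phi_i + phi_j)

beat = np.sin(np.linspace(0, 2 * np.pi, 64))     # stand-in for one ECG beat
img = gasf(beat)                                 # 64 x 64 image fed to the CNN
```

Since cos(phi_i + phi_j) = x_i x_j - sqrt(1 - x_i^2) sqrt(1 - x_j^2), the resulting image encodes pairwise temporal correlations of the beat, which is what makes a 2-D CNN applicable.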
Subject(s)
Humans , Arrhythmias, Cardiac/diagnostic imaging , Cardiovascular Diseases , Algorithms , Databases, Factual , Electrocardiography
ABSTRACT
Objective To construct a COVID-19 CT image classification model based on a lightweight RG-DenseNet. Methods An RG-DenseNet model was constructed by adding channel and spatial attention modules to DenseNet121 to minimize the interference of irrelevant features, and by replacing the Bottleneck module in DenseNet with a pre-activated RG beneck2 module to reduce model parameters while maintaining accuracy as far as possible. Model performance was verified with 3-category classification experiments on the COVIDx CT-2A dataset. Results RG-DenseNet achieved an accuracy, precision, recall, specificity, and F1-score of 98.93%, 98.70%, 98.97%, 99.48%, and 98.83%, respectively. Conclusion Compared with the original DenseNet121, RG-DenseNet reduces the number of parameters and the computational complexity by 92.7% with an accuracy reduction of only 0.01%, demonstrating a significant lightweight effect and high practical application value.
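The channel and spatial attention modules are not specified in detail in the abstract; the numpy sketch below (an assumption following the common SE/CBAM pattern, with hypothetical weight shapes) shows the general mechanism of gating features by learned weights in (0, 1):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, w2):
    """x: feature map (C, H, W); w1: (C//r, C) and w2: (C, C//r) learned weights."""
    s = x.mean(axis=(1, 2))                      # squeeze: global average pool -> (C,)
    a = sigmoid(w2 @ np.maximum(w1 @ s, 0.0))    # excitation: FC -> ReLU -> FC -> sigmoid
    return x * a[:, None, None]                  # reweight each channel

def spatial_attention(x):
    """Gate each spatial location by pooled channel statistics."""
    pooled = (x.mean(axis=0) + x.max(axis=0)) / 2.0   # (H, W)
    return x * sigmoid(pooled)[None, :, :]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
y = spatial_attention(channel_attention(x, rng.standard_normal((2, 8)),
                                        rng.standard_normal((8, 2))))
```

In CBAM proper, the spatial gate is a small convolution over concatenated mean/max maps; a plain average is used here only to keep the sketch dependency-free. Because both gates lie in (0, 1), irrelevant features are attenuated rather than removed.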
ABSTRACT
Objective: To explore the application value of a deep learning-based transrectal ultrasound image classification network model in distinguishing benign from malignant prostate tissue in transrectal ultrasound images. Methods: A total of 1 462 two-dimensional transrectal prostate biopsy images with clear pathologic results (658 images of malignant tumor, 804 images of benign tumor) from 203 patients with suspected prostate cancer (89 cases of malignant tumor, 114 cases of benign tumor) were collected from May 2018 to May 2021 in the First Affiliated Hospital of Jinan University. They were divided into a training set, a validation set, and a test set. The training and validation sets were used to train the intelligence-assisted diagnosis network model, and the test set was then used to evaluate the network model and two ultrasound physicians of different seniority. With pathologic diagnosis as the gold standard, their diagnostic performance was evaluated. Results: ① The sensitivity of the network model was 66.7%, the specificity was 91.9%, the accuracy was 80.5%, and the precision (positive predictive value) was 87.1%. The area under the ROC curve was 0.922. ② The accuracy of the junior and senior ultrasound physicians was 57.5% and 62.0%; the specificity was 62.0% and 66.3%; the sensitivity was 51.5% and 56.8%; the precision was 53.1% and 58.1%, respectively. ③ In the accuracy, sensitivity, specificity, and precision of classification, the network model outperformed the ultrasound physicians (P<0.05); the senior physician outperformed the junior physician, but the differences were not significant (P>0.05). Conclusions: The deep learning-based intelligence-assisted diagnosis network model can classify benign and malignant prostate tissue in transrectal ultrasound images and improve the accuracy of ultrasound physicians in diagnosing prostate cancer. It is of great significance for improving the efficiency of screening in patients with high clinical suspicion of prostate cancer.
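Every figure compared above derives from the 2x2 confusion counts against the pathologic gold standard; a small helper makes the relationships explicit (the counts below are illustrative, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard binary diagnostic metrics from confusion counts."""
    return {
        "sensitivity": tp / (tp + fn),           # true positive rate (recall)
        "specificity": tn / (tn + fp),           # true negative rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": tp / (tp + fp),             # positive predictive value
    }

# hypothetical counts for a test set of 134 images
m = diagnostic_metrics(tp=40, fp=6, tn=68, fn=20)
```

Note how a model can pair modest sensitivity with high specificity and precision, exactly the profile the network model shows (66.7% sensitivity vs 91.9% specificity): it rarely calls benign tissue malignant but misses some malignant cases.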
ABSTRACT
OBJECTIVES: To apply the convolutional neural network (CNN) Inception_v3 model to the automatic identification of acceleration and deceleration injury from brain CT images, and to explore the application prospects of deep learning technology in forensic inference of brain injury mechanism. METHODS: CT images from 190 cases with acceleration or deceleration brain injury were selected as the experimental group, and CT images from 130 normal brains were used as the control group. These 320 imaging datasets were divided into a training-validation dataset and a testing dataset by random sampling. Model classification performance was evaluated by accuracy, precision, recall, F1-score and AUC. RESULTS: In the training and validation processes, the accuracy of the model in classifying acceleration injury, deceleration injury and normal brain was 99.00% and 87.21%, respectively, which met the requirements. When the optimized model was applied to the testing dataset, its accuracy was 87.18%; the precision, recall, F1-score and AUC were 84.38%, 90.00%, 87.10% and 0.98 for acceleration injury; 86.67%, 72.22%, 78.79% and 0.92 for deceleration injury; and 88.57%, 89.86%, 89.21% and 0.93 for normal brain, respectively. CONCLUSIONS: The Inception_v3 model has potential application value in distinguishing acceleration from deceleration injury based on brain CT images, and is expected to become an auxiliary tool for inferring the mechanism of head injury.
Subject(s)
Humans , Brain/diagnostic imaging , Brain Injuries , Deep Learning , Neural Networks, Computer
ABSTRACT
Artificial intelligence has helped to deal with various problems related to massive data, and in turn with the treatment, diagnosis and detection of diseases such as the one that currently concerns us, COVID-19. The objective of this research was to analyze and develop the classification of images of pneumonia due to COVID-19 for an effective and optimal diagnosis. Transfer learning was used, applying ResNet, DenseNet, pooling and dense layers, to build the custom network models Covid-UPeU and Covid-UPeU-TL on the Kaggle and Google Colab platforms, where 4 experiments were carried out. The best image classification result was obtained in experiment 4, test No. 2, with the Covid-UPeU-TL model, with Acc.Train: 0.9664 and Acc.Test: 0.9851. The implemented models were developed with the purpose of obtaining a holistic view of the factors involved in optimizing the classification of COVID-19 pneumonia images.
Subject(s)
Humans , Male , Female , Pneumonia/epidemiology , Medical Informatics Applications , Artificial Intelligence/trends , Radiography/methods , COVID-19/complications
ABSTRACT
With the change of medical diagnosis and treatment modes, the quality of medical images directly affects physicians' diagnosis and treatment of disease. Therefore, computer-based intelligent image quality control would greatly assist radiographers' imaging work. This paper describes research methods and applications of image segmentation models and image classification models from the field of deep learning, together with traditional image processing algorithms, applied to medical image quality evaluation. The results demonstrate that deep learning algorithms are more accurate and efficient than traditional image processing algorithms when effectively trained on medical image big data, which illustrates the broad application prospects of deep learning in the medical field. We developed an intelligent quality control system for assisted imaging and successfully applied it in the Radiology Department of West China Hospital and other city and county hospitals, effectively verifying the feasibility and stability of the quality control system.
ABSTRACT
Coronavirus disease 2019 (COVID-19) has spread rapidly around the world. To diagnose COVID-19 more quickly, this paper proposes a depthwise separable DenseNet. A deep learning model was constructed with 2 905 chest X-ray images as the experimental dataset. To enhance contrast, the contrast limited adaptive histogram equalization (CLAHE) algorithm was used to preprocess the X-ray images before network training; the images were then fed into the network and its parameters tuned to the optimum. Leaky ReLU was selected as the activation function. VGG16, ResNet18, ResNet34, DenseNet121 and SDenseNet models were compared with the model proposed in this paper. Compared with ResNet34, the proposed pneumonia classification model improved accuracy, sensitivity and specificity by 2.0%, 2.3% and 1.5%, respectively. Compared with the SDenseNet network without depthwise separable convolution, the number of parameters of the proposed model was reduced by 43.9% with no decrease in classification performance. The proposed DWSDenseNet thus classifies the COVID-19 chest X-ray dataset well: while preserving accuracy as far as possible, depthwise separable convolution effectively reduces the number of model parameters.
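The parameter saving comes from factoring each k x k convolution into a depthwise k x k stage plus a 1 x 1 pointwise stage; the per-layer weight counts (ignoring biases, with illustrative channel sizes rather than the paper's exact layers) are easy to verify:

```python
def standard_conv_params(c_in, c_out, k):
    """Weights of a standard k x k convolution (biases ignored)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k filter per input channel, then 1 x 1 pointwise mixing."""
    return c_in * k * k + c_in * c_out

full = standard_conv_params(64, 128, 3)          # 73728 weights
dws = depthwise_separable_params(64, 128, 3)     # 8768 weights
ratio = dws / full                               # equals 1/c_out + 1/k**2
```

For a 3 x 3 kernel the reduction ratio is 1/c_out + 1/9, i.e. roughly a 9x saving per layer, which is why a network-wide reduction on the order of 43.9% is plausible even though many parameters sit in non-convolutional parts of the model.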
Subject(s)
Humans , Betacoronavirus , Coronavirus Infections , Diagnostic Imaging , Deep Learning , Pandemics , Pneumonia, Viral , Diagnostic Imaging , X-Rays
ABSTRACT
Objective To develop a convolution neural network (CNN) model to classify multi-sequence MR images of the prostate. Methods ResNet18 convolution neural network (CNN) model was developed to classify multi-sequence MR images of the prostate. A deep residual network was used to improve training accuracy and test accuracy. The dataset used in this experiment included 19 146 7-sequence prostate MR images (transverse T1WI, transverse T2WI, coronal T2WI, sagittal T2WI, transverse DWI, transverse ADC, transverse PWI), from which a total of 2 800 7-sequence MR images was selected as a training set. Three hundred and eighty-eight 7-sequence MR images were selected as test sets. Accuracy was used to evaluate the effectiveness of the ResNet18 CNN model. Results The classification accuracy of the model for transverse DWI, sagittal T2WI, transverse ADC, transverse T1WI, and transverse T2WI was as high as 100.0% (44/44, 52/52), and the accuracy for transverse PWI was also as high as 96.7% (116/120). The accuracy for coronal T2WI was 77.5% (31/40). 0.8% (1/120) of transverse PWI was incorrectly assigned to transverse T2WI, and 2.5% (3/120) incorrectly assigned to sagittal T2WI. 15.0% (6/40) of coronal T2WI was incorrectly assigned to transverse T2WI, and 7.5% (3/40) to sagittal T2WI. Conclusion The experimental results show the effectiveness of our deep learning method regarding accuracy in prostate multi-sequence MR image detection.
ABSTRACT
Objective To investigate a diabetic retinopathy (DR) detection algorithm based on transfer learning in a small sample dataset. Methods A total of 4465 fundus color photographs taken by Gaoyao People's Hospital was used as the full dataset. Model training strategies using fixed pre-trained parameters and fine-tuned pre-trained parameters were used as the transfer learning group, compared with a non-transfer-learning strategy that randomly initializes parameters. These three training strategies were applied to the training of three deep learning networks: ResNet50, Inception V3 and NASNet. In addition, a small dataset randomly extracted from the full dataset was used to study the impact of reduced training data on the different strategies. The accuracy and training time of the diagnostic model were used to analyze the performance of the different training strategies. Results The best results for each network architecture were chosen. The accuracy of the model obtained with the fine-tuning strategy was 90.9%, higher than with fixed pre-trained parameters (88.1%) and with randomly initialized parameters (88.4%). The training time for fixed pre-trained parameters was 10 minutes, less than for fine-tuning (16 hours) and for random initialization (24 hours). After the training data was reduced, the accuracy of the model obtained by random initialization decreased by 8.6% on average, while the accuracy of the transfer learning group decreased by 2.5% on average. Conclusions The proposed automated DR detection algorithm based on fine-tuning and the NASNet structure maintains high accuracy on a small sample dataset, is robust, and is effective for the preliminary diagnosis of DR.
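The three training strategies differ only in which parameters gradient descent updates. A toy numpy sketch (a hypothetical two-layer linear model, not the paper's networks) makes the distinction concrete: with `finetune=False` the "feature extractor" stays frozen and only the task head is trained.

```python
import numpy as np

def train(X, y, Wf, wh, lr=0.05, steps=200, finetune=False):
    """h = X @ Wf plays the 'pre-trained feature extractor'; wh is the task head.
    finetune=False corresponds to the fixed pre-trained parameters strategy."""
    Wf, wh = Wf.copy(), wh.copy()
    for _ in range(steps):
        h = X @ Wf
        err = h @ wh - y                       # prediction error
        wh = wh - lr * h.T @ err / len(y)      # the head is always trained
        if finetune:                           # fine-tuning also updates Wf
            Wf = Wf - lr * X.T @ np.outer(err, wh) / len(y)
    return Wf, wh

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))
y = X @ np.array([1.0, -1.0, 0.5])
Wf0 = 0.5 * rng.standard_normal((3, 3))        # stand-in for pre-trained weights
wh0 = np.zeros(3)
Wf_fix, wh_fix = train(X, y, Wf0, wh0, finetune=False)
Wf_ft, wh_ft = train(X, y, Wf0, wh0, finetune=True)
```

The fixed strategy is fast because only the small head is optimized; fine-tuning touches every weight and so costs far more time, matching the 10-minute vs 16-hour gap reported.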
ABSTRACT
Objective: Ultrasound imaging is well known to play an important role in the detection of thyroid disease, but the management of thyroid ultrasound remains inconsistent. Both standardized diagnostic criteria and new ultrasound technologies are essential for improving the accuracy of thyroid ultrasound. This study reviewed global guidelines on thyroid ultrasound and analyzed their common characteristics for basic clinical screening. Advances in the combined application of thyroid ultrasound and artificial intelligence (AI) are also presented. Data sources: An extensive search of the PubMed database was undertaken, focusing on research published after 2001 with keywords including thyroid ultrasound, guideline, AI, segmentation, image classification, and deep learning. Study selection: Several types of articles, including original studies and literature reviews, were identified and reviewed to summarize the importance of standardization and new technology in thyroid ultrasound diagnosis. Results: Ultrasound has become an important diagnostic technique for thyroid nodules. Both standardized diagnostic criteria and new ultrasound technologies are essential for improving the accuracy of thyroid ultrasound. Regarding standardization, since no global consensus exists, common characteristics such as multi-feature diagnosis, the appearance of lymph nodes, explicit indications for fine needle aspiration, and the diagnosis of special populations should be the focus. In addition, evidence suggests that AI techniques mitigate the unavoidable limitations of traditional ultrasound, and the combination of diagnostic criteria and AI may greatly advance thyroid diagnosis. Conclusion: Standardization and the development of novel techniques are key factors in improving thyroid ultrasound, and both should be considered in routine clinical use.
ABSTRACT
Objective To compare the therapeutic efficacy for giant nonfunctioning pituitary adenomas (GNPAs) of different imaging types, and to explore surgical treatment strategies for GNPAs. Methods The pre- and post-operative images, clinical data and follow-up results of 69 patients with GNPAs, admitted to our hospital from July 2011 to October 2016, were analyzed retrospectively. According to the morphology and growth patterns of the tumors on MR imaging, they were divided into GNPAs of vertical type, cystic type, deviation Ⅰ/Ⅱ type, lateral extension type, sinus type, laryngeal type, isolated type, and mixed type. The tumor resection results of the different types under different surgical treatment strategies were compared. Results Fifty-one patients, with a total resection rate of 31.37%, were treated by the transsphenoidal approach, and 18 patients, with a total resection rate of 44.44%, were treated by craniotomy. The overall total resection rate of GNPAs was 36.23% (n=25). The combined total and subtotal resection rate was 71.01% (n=49). The resection rates of the different GNPA types differed, with the mixed type showing the worst efficacy. Fifty-three patients were followed up for 1-66 months (average, 17 months); among patients with total resection, 18 (72%) had no recurrence and one (4%) had recurrence; X-knife treatment was performed in 14 patients. Postoperative residual reduction, control, and increase were noted in 4, 26 and 4 patients, respectively. Two patients died after surgery. Conclusions The total resection rate of GNPAs is low and the operation is difficult; however, a favorable prognosis can be achieved. Transsphenoidal surgery is the first choice for relieving the mass effect. According to the different types, appropriate procedures can be used to reduce tumor residue and improve the total or subtotal resection rates.
ABSTRACT
The robustness and speed of image classification remain challenging tasks in satellite image processing. This paper introduces a novel image classification technique that uses a particle filter framework (PFF)-based optimisation technique for satellite image classification. The framework uses a template-matching algorithm, comprising fast marching algorithm (FMA) and level set method (LSM)-based segmentation, which assists in creating the initial templates for comparison with other test images. The created templates are trained and used as inputs for the optimisation. The optimisation technique used in this work is multikernel sparse representation (MKSR). The combined execution of the FMA, LSM, PFF and MKSR approaches results in a substantial reduction in processing time for the various classes in a satellite image compared with the Support Vector Machine (SVM)- and Independent Component Discrimination Analysis (ICDA)-based image classifications obtained for comparison purposes. This study aims to improve the robustness of image classification as measured by overall accuracy (OA) and the kappa coefficient. The variation of OA between different classes of a satellite image is only 10% with this technique, whereas with the SVM and ICDA techniques it is more than 50%.
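Both accuracy measures used here (and in the land-cover studies below), overall accuracy and the kappa coefficient, come straight from the confusion matrix; a short numpy sketch with hypothetical counts:

```python
import numpy as np

def overall_accuracy(cm):
    """Fraction of samples on the confusion-matrix diagonal."""
    return np.trace(cm) / cm.sum()

def kappa(cm):
    """Cohen's kappa: agreement corrected for chance, from a confusion matrix."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    po = np.trace(cm) / total                           # observed agreement
    pe = (cm.sum(axis=0) @ cm.sum(axis=1)) / total**2   # expected chance agreement
    return (po - pe) / (1.0 - pe)

cm = np.array([[20, 5],
               [10, 15]])   # rows: reference class, columns: predicted class
k = kappa(cm)               # -> 0.4 (observed agreement 0.7, chance 0.5)
```

Kappa penalizes classifiers that look accurate only because one class dominates, which is why remote-sensing studies report it alongside OA.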
ABSTRACT
Land cover is spatial information of great relevance for a variety of models, being used to estimate sediment yield and to measure the carbon sequestration potential of the landscape. Classification of land cover by the supervised method requires training areas, which must be representative of each land cover class. For the decision tree (DT) classification algorithm, the complexity of the DT results in different accuracy values for the thematic maps. Thus, the objective of this study was to determine the minimum sample density in a DT model that would allow discrimination of the land cover classes, to evaluate the size of the generated DT in terms of its number of leaves, and to identify the land cover classes most difficult to map. Bands of the RESOURCESAT-1 satellite image as well as spectral indices were used in the study. The minimum sample density varied between 0.15 and 0.30% of the total area for each class; this sampling interval yielded kappa index values above 80%. The smallest grouping of observations in the same terminal leaf was 45 observations. The land cover classes most difficult to map were forest and rice crops, owing to the spectral similarity of shaded forests to irrigated rice crops.
ABSTRACT
A novel mono-hierarchical multi-axial classification coding scheme for medical image retrieval is proposed. The so-called MOAB coding scheme consists of four axes with three to four positions each, ranging from 0 to 9 and from A to Z. In particular, the modality code M describes the imaging modality and relevant technical detail, and the orientation code O models the examined body orientation. The anatomy code A refers to the body region examined, and the biology code B describes the biological system examined. The MOAB classification coding scheme enables a unique classification of medical images so that medical image retrieval can be efficient. The code is flexible and easy to extend.
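The abstract fixes the four axes but not the exact textual layout of a code; the sketch below assumes a hypothetical hyphen-separated rendering (e.g. `1121-120-921-700`) purely to illustrate how such a code decomposes into its M, O, A and B axes:

```python
import re

AXES = ("modality", "orientation", "anatomy", "biology")
# assumed layout: four hyphen-separated fields, each 3-4 positions from 0-9 / A-Z
CODE_RE = re.compile(r"^[0-9A-Z]{3,4}(?:-[0-9A-Z]{3,4}){3}$")

def parse_moab(code):
    """Split a MOAB-style code into its four axes (the layout is an assumption)."""
    if not CODE_RE.match(code):
        raise ValueError(f"not a valid MOAB-style code: {code!r}")
    return dict(zip(AXES, code.split("-")))

parse_moab("1121-120-921-700")
# {'modality': '1121', 'orientation': '120', 'anatomy': '921', 'biology': '700'}
```

Because each axis occupies a fixed field, equality or prefix comparisons on the fields suffice to group images by modality, orientation, anatomy or biology, which is what makes retrieval over such codes efficient.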
ABSTRACT
With the increasing volume of medical image data, it is imperative to set up an effective medical image retrieval system. Image classification is essential to medical image retrieval. As a distribution pattern of image gray scale, texture is an important characteristic. Wavelet multi-scale decomposition is essentially multi-channel filtering, and its multi-resolution analysis structure is analogous to the human visual system. The extraction of texture features at different resolutions after a multi-band wavelet transform therefore greatly benefits image recognition and image retrieval. Consequently, this paper designs an image classification method based on an eight-band wavelet. The method addresses a key technology in medical image retrieval and achieves a very high classification rate.
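A dependency-free illustration of the idea, with a single-level 2-D Haar decomposition standing in for the paper's eight-band wavelet (which is not specified here): the mean energy of each subband serves as a simple texture feature vector.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform -> (LL, LH, HL, HH) subbands."""
    img = np.asarray(img, dtype=float)
    lo = (img[0::2, :] + img[1::2, :]) / 2.0   # row-wise average
    hi = (img[0::2, :] - img[1::2, :]) / 2.0   # row-wise difference
    ll = (lo[:, 0::2] + lo[:, 1::2]) / 2.0     # smooth approximation
    lh = (lo[:, 0::2] - lo[:, 1::2]) / 2.0     # horizontal detail
    hl = (hi[:, 0::2] + hi[:, 1::2]) / 2.0     # vertical detail
    hh = (hi[:, 0::2] - hi[:, 1::2]) / 2.0     # diagonal detail
    return ll, lh, hl, hh

def texture_features(img):
    """Mean energy of each subband: a simple texture descriptor."""
    return [float(np.mean(b ** 2)) for b in haar2d(img)]

flat = np.ones((8, 8))          # textureless image: all detail energies are zero
print(texture_features(flat))   # [1.0, 0.0, 0.0, 0.0]
```

With more bands per level, as in an eight-band transform, the same energy-per-subband scheme simply yields a longer, finer-grained feature vector for the classifier.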