ABSTRACT
Objective: To predict COVID-19 severity by building a prediction model based on the clinical manifestations and thymic radiomic features of COVID-19 patients. Method: We retrospectively analyzed the clinical and radiological data of 217 confirmed COVID-19 cases (118 mild and 99 severe) admitted to Xiangyang No. 1 People's Hospital and Jiangsu Hospital of Chinese Medicine from December 2019 to April 2022. The data were split into training and test sets at a 7:3 ratio. In the training set, clinical data were compared between the two groups, and radiomic features were selected with a LASSO regression model. Several severity-prediction models were then established from the clinical and radiomic features of the COVID-19 patients, and their performances were compared using the DeLong test and decision curve analysis (DCA). Finally, the predictions were verified on the test set. Result: In the training set, univariate analysis showed that BMI, diarrhea, thymic steatosis, anorexia, headache, chest CT findings, platelets, LDH, AST and thymic radiomic features differed significantly between the two groups (P < 0.05). The combination model based on the clinical and radiomic features of COVID-19 patients had the highest predictive value for COVID-19 severity [AUC: 0.967 (OR 0.0115, 95%CI: 0.925-0.989)] vs. the clinical feature-based model [AUC: 0.772 (OR 0.0387, 95%CI: 0.697-0.836), P < 0.05], the laboratory-based model [AUC: 0.687 (OR 0.0423, 95%CI: 0.608-0.760), P < 0.05] and the CT radiomics-based model [AUC: 0.895 (OR 0.0261, 95%CI: 0.835-0.938), P < 0.05]. DCA also confirmed the high clinical net benefit of the combination model. A nomogram drawn from the combination model could help differentiate mild from severe COVID-19 at an early stage. The predictions of the different models were verified on the test set. Conclusion: Severe COVID-19 cases showed a higher degree of thymic involution. Differences in thymic radiomic features were related to disease progression. The combination model incorporating thymic radiomic features could better support early clinical intervention in COVID-19 and increase the cure rate. © 2023 American Institute of Mathematical Sciences. All rights reserved.
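The modelling workflow described in this abstract (LASSO selection of radiomic features, followed by a combined clinical-radiomic logistic model evaluated by AUC on a 7:3 split) can be sketched roughly as below. This is a minimal illustration only: the synthetic arrays stand in for the study's thymic radiomic and clinical variables, and the column counts, split and classifier settings are assumptions, not the authors' actual pipeline.

```python
# Hedged sketch: LASSO feature selection + combined clinical-radiomic logistic model.
# All data below are synthetic stand-ins; feature counts and settings are assumptions.
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, n_radiomic = 217, 100
X_radiomic = rng.normal(size=(n, n_radiomic))   # stand-in for thymus radiomic features
X_clinical = rng.normal(size=(n, 6))            # stand-in for BMI, LDH, AST, etc.
y = rng.integers(0, 2, size=n)                  # 0 = mild, 1 = severe

X_tr_r, X_te_r, X_tr_c, X_te_c, y_tr, y_te = train_test_split(
    X_radiomic, X_clinical, y, test_size=0.3, random_state=0)  # 7:3 split as in the study

# LASSO on the radiomic block; non-zero coefficients define the retained features
scaler = StandardScaler().fit(X_tr_r)
lasso = LassoCV(cv=5, random_state=0).fit(scaler.transform(X_tr_r), y_tr)
keep = np.flatnonzero(lasso.coef_)

# Combination model = selected radiomic features + clinical features
X_tr = np.hstack([scaler.transform(X_tr_r)[:, keep], X_tr_c])
X_te = np.hstack([scaler.transform(X_te_r)[:, keep], X_te_c])
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```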
ABSTRACT
As a result of the COVID-19 pandemic, medical examinations (RT-PCR, X-ray, CT scan, etc.) may be required to make a medical decision. The SARS-CoV-2 virus that causes COVID-19 infects and spreads in the lungs, which can be readily recognized on chest X-rays or CT scans. However, along with COVID-19 cases, cases of another respiratory ailment, pneumonia, began to rise. As a result, clinicians have difficulty distinguishing between COVID-19 and pneumonia, and additional tests are required to identify the condition. After a few days, the SARS-CoV-2 virus multiplies in the lungs, causing a pneumonia termed novel coronavirus-infected pneumonia. In this research, we employ machine learning and deep learning models to classify cases as COVID-19 positive, COVID-19 negative, or viral pneumonia. A dataset of 120 images was used for the machine learning models; accuracy was calculated by extracting eight statistical features from the image texture. AdaBoost, Decision Tree and Naive Bayes achieved overall accuracies of 88.46%, 86.4% and 80%, respectively. Comparing the algorithms, AdaBoost performed best, with overall accuracy of 88.46%, sensitivity of 84.62%, specificity of 92.31%, F1-score of 88% and Kappa of 0.8277. For the deep learning model, a CNN based on the VGG16 architecture was trained on 838 images, reaching an overall accuracy of 99.17%. © 2022 IEEE.
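A rough sketch of the machine-learning branch of this abstract (per-image texture statistics feeding an AdaBoost classifier) is shown below. The abstract does not specify which eight statistical features were used, so the choice of six GLCM properties plus image mean and standard deviation is an assumption, and the random images and labels are stand-ins for the 120-image dataset.

```python
# Hedged sketch: eight texture statistics per image + an AdaBoost classifier.
# The specific feature set and the toy data are assumptions, not the study's.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.ensemble import AdaBoostClassifier

def texture_features(img_u8):
    """img_u8: 2-D uint8 grayscale image -> 8 texture/intensity statistics."""
    glcm = graycomatrix(img_u8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation", "ASM"]
    feats = [graycoprops(glcm, p)[0, 0] for p in props]
    feats += [img_u8.mean(), img_u8.std()]          # simple intensity statistics
    return np.array(feats)

# Toy data: random "images" with labels 0 = COVID-19 positive, 1 = negative, 2 = viral pneumonia
rng = np.random.default_rng(0)
imgs = rng.integers(0, 256, size=(120, 64, 64), dtype=np.uint8)
labels = rng.integers(0, 3, size=120)

X = np.stack([texture_features(im) for im in imgs])
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```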
ABSTRACT
COVID-19 has affected many people across the globe. Though vaccines are now available, early detection of the disease plays a vital role in the better management of COVID-19 patients. An Artificial Neural Network (ANN) powered Computer Aided Diagnosis (CAD) system can automate the detection pipeline, enabling accurate diagnosis and overcoming the limitations of manual methods. This work proposes a CAD system for COVID-19 that detects and classifies abnormalities in lung CT images using an Artificial Bee Colony (ABC) optimised ANN (ABCNN). The proposed ABCNN approach works by segmenting the suspicious regions from the CT images of non-COVID and COVID patients using an ABC-optimised region growing process and extracting texture and intensity features from those suspicious regions. Further, an optimised ANN model, whose input features, initial weights and hidden nodes are optimised using ABC optimisation, classifies those abnormal regions into COVID and non-COVID classes. The proposed ABCNN approach is evaluated using lung CT images collected from public datasets. In comparison to other available techniques, the proposed ABCNN approach achieved a high classification accuracy of 92.37% when evaluated on a set of 470 lung CT images.
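For readers unfamiliar with ABC optimisation, the sketch below shows a generic, textbook-style Artificial Bee Colony loop minimising a black-box fitness function, such as the validation error of an ANN whose weights form the candidate solution. It is not the authors' exact variant; the colony size, limit, bounds and the sphere-function demo are all assumptions.

```python
# Hedged sketch: generic Artificial Bee Colony (ABC) minimisation of a black-box fitness.
import numpy as np

def abc_minimize(fitness, dim, bounds=(-1.0, 1.0), n_food=20, limit=30, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    foods = rng.uniform(lo, hi, size=(n_food, dim))     # food sources = candidate solutions
    fits = np.array([fitness(f) for f in foods])
    trials = np.zeros(n_food, dtype=int)

    def try_neighbor(i):
        k = rng.integers(n_food - 1)
        k = k + 1 if k >= i else k                       # random partner index != i
        j = rng.integers(dim)
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        cand = np.clip(cand, lo, hi)
        f = fitness(cand)
        if f < fits[i]:
            foods[i], fits[i], trials[i] = cand, f, 0    # greedy replacement
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_food):                          # employed bee phase
            try_neighbor(i)
        probs = fits.max() - fits + 1e-12                # better sources get higher probability
        probs = probs / probs.sum()
        for i in rng.choice(n_food, size=n_food, p=probs):   # onlooker bee phase
            try_neighbor(i)
        for i in np.where(trials > limit)[0]:            # scout bee phase: abandon stale sources
            foods[i] = rng.uniform(lo, hi, size=dim)
            fits[i] = fitness(foods[i])
            trials[i] = 0
    best = fits.argmin()
    return foods[best], fits[best]

# Toy usage: minimise a sphere function standing in for an ANN validation error.
sol, val = abc_minimize(lambda w: float(np.sum(w ** 2)), dim=10)
print("best fitness:", val)
```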
ABSTRACT
The coronavirus disease-19 (COVID-19) pandemic caused dietary changes. People reduced social activities to prevent the spread of COVID-19, which increased the demand for machines that help with cooking. This work studies the effect of different stirrer modes on the texture of celery, asparagus, green peppers, and spinach during cooking, and the loss of functional components in these vegetables, measured through changes in vitamin C, total polyphenols, and total flavonoids. The results showed that the colour changes and nutrient losses of each vegetable varied under different stirrer modes. Stirring was found to be the best mode for cooking all four vegetables. In addition, there was a positive correlation between the a* value and the functional components during cooking, which means that the colour difference and nutritional loss of vegetables can be modulated together. This study provides theoretical guidance for developing the stirring unit of a cooking machine.
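The reported relationship between the CIELAB a* value and a functional component can be quantified with a simple correlation, sketched below. The measurement values are purely illustrative placeholders, not the study's data.

```python
# Hedged sketch: correlating the a* colour value with vitamin C content over cooking time.
# All numbers are illustrative stand-ins for the study's measurements.
import numpy as np
from scipy.stats import pearsonr

time_min  = np.array([0, 5, 10, 15, 20])
a_star    = np.array([-6.9, -7.8, -9.1, -10.5, -12.0])   # illustrative a* values
vitamin_c = np.array([48.0, 41.2, 35.5, 30.1, 26.4])      # illustrative mg / 100 g

r, p = pearsonr(a_star, vitamin_c)
print(f"Pearson r = {r:.3f}, p = {p:.3g}")   # positive r mirrors the reported correlation
```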
ABSTRACT
Deep learning (DL) algorithms have demonstrated a high ability to perform speedy and accurate COVID-19 diagnosis from computed tomography (CT) and X-ray scans. In the majority of relevant studies, the spatial information in these images was used to train DL models. However, training these models with images generated by radiomics approaches could enhance diagnostic accuracy, and combining information from several radiomics approaches with time-frequency representations of the COVID-19 patterns can increase performance even further. This study introduces "RADIC", an automated tool that uses three DL models trained on radiomics-generated images to detect COVID-19. First, four radiomics approaches are used to analyze the original CT and X-ray images. Next, each of the three DL models is trained on a different set of radiomics, X-ray, and CT images. Then, for each DL model, deep features are obtained and their dimensions are reduced using the fast Walsh-Hadamard transform, yielding a time-frequency representation of the COVID-19 patterns. The tool then uses the discrete cosine transform to combine these deep features, and four classification models perform the final classification. To validate the performance of RADIC, two benchmark COVID-19 datasets (CT and X-ray) are employed. The final accuracy attained using RADIC is 99.4% and 99% for the first and second datasets, respectively. To demonstrate its competitiveness, the performance of RADIC is compared with related studies in the literature; the results show that RADIC achieves superior performance. The results of the proposed tool indicate that a DL model can be trained more effectively with images generated by radiomics techniques than with the original X-ray and CT images, and that incorporating deep features extracted from DL models trained with multiple radiomics approaches improves diagnostic accuracy.
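The reduction and fusion steps mentioned above can be sketched as follows: a fast Walsh-Hadamard transform compresses each deep feature vector, and a discrete cosine transform is applied over the concatenation as a simple fusion. The feature lengths, the number of retained coefficients and the exact fusion rule are assumptions for illustration only; RADIC's actual settings may differ.

```python
# Hedged sketch: FWHT-based reduction of deep features + DCT-based fusion.
# Vector sizes and truncation lengths are assumed for illustration.
import numpy as np
from scipy.fft import dct

def fwht(x):
    """Iterative fast Walsh-Hadamard transform; len(x) must be a power of two."""
    a = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

rng = np.random.default_rng(0)
deep_feats = [rng.normal(size=1024) for _ in range(3)]    # stand-ins for the three DL models

k = 128                                                    # assumed reduced length per model
reduced = [fwht(f)[:k] for f in deep_feats]                # keep leading Walsh-Hadamard coefficients
fused = dct(np.concatenate(reduced), norm="ortho")[:256]   # assumed DCT-based fusion
print(fused.shape)                                         # fused feature vector for the classifiers
```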
ABSTRACT
BACKGROUND: Numerous traditional filtering approaches and deep learning-based methods have been proposed to improve the quality of ultrasound (US) image data. However, their results tend to suffer from over-smoothing and loss of texture and fine detail. Moreover, they perform poorly on images with different degradation levels and focus mainly on speckle reduction, even though texture and fine-detail enhancement are of crucial importance in clinical diagnosis. METHODS: We propose an end-to-end framework termed US-Net for simultaneous speckle suppression and texture enhancement in US images. The architecture of US-Net is inspired by U-Net, with a feature refinement attention block (FRAB) introduced to enable effective learning of multi-level and multi-contextual representative features. Specifically, FRAB aims to emphasize high-frequency image information, which helps boost the restoration and preservation of fine-grained and textural details. Furthermore, the proposed US-Net is trained primarily on real US image data, with real US images embedded with simulated multi-level speckle noise used as an auxiliary training set. RESULTS: Extensive quantitative and qualitative experiments indicate that, although trained with only one US image data type, the proposed US-Net is capable of restoring images acquired from different body parts and scanning settings with different degradation levels, while exhibiting favorable performance against state-of-the-art image enhancement approaches. Furthermore, utilizing the proposed US-Net as a pre-processing stage for COVID-19 diagnosis results in a gain of 3.6% in diagnostic accuracy. CONCLUSIONS: The proposed framework can help improve the accuracy of ultrasound diagnosis.
Subject(s)
COVID-19 Testing, COVID-19, Humans, Ultrasonography/methods, Image Enhancement/methods, Image Processing, Computer-Assisted, Algorithms
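The auxiliary training data described in the US-Net abstract above (real US images embedded with simulated multi-level speckle noise) can be generated with a simple multiplicative noise model, sketched below. The noise model and the degradation levels are assumptions; the paper's actual speckle simulation may differ.

```python
# Hedged sketch: embedding simulated multi-level speckle noise into a real US image
# to build an auxiliary training set (simple multiplicative-noise assumption).
import numpy as np

def add_speckle(img, sigma, seed=None):
    """img: float array scaled to [0, 1]; sigma controls the degradation level."""
    rng = np.random.default_rng(seed)
    noisy = img * (1.0 + sigma * rng.standard_normal(img.shape))
    return np.clip(noisy, 0.0, 1.0)

clean = np.random.default_rng(0).random((256, 256))        # stand-in for a real US image
levels = [0.05, 0.15, 0.30]                                 # assumed degradation levels
auxiliary_set = [add_speckle(clean, s, seed=i) for i, s in enumerate(levels)]
print(len(auxiliary_set), auxiliary_set[0].shape)
```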
ABSTRACT
The emerging field of radiomics, which transforms standard-of-care images into quantifiable scalar statistics, endeavors to reveal the information hidden in these macroscopic images. The concept of texture is widely used and essential in many radiomic-based studies. Practice usually reduces spatial multidimensional texture matrices, e.g., gray-level co-occurrence matrices (GLCMs), to summary scalar features. These statistical features have been shown to be strongly correlated, tend to contribute redundant information, and do not account for the spatial information hidden in the multivariate texture matrices. This study proposes a novel pipeline to deal with spatial texture features in radiomic studies. A new set of textural features that preserve the spatial information inherent in GLCMs is proposed and used for classification purposes. The new features use the Wasserstein metric from optimal mass transport theory (OMT) to quantify the spatial similarity between samples within a given label class. In particular, based on a selected subset of texture GLCMs from the training cohort, we propose new representative spatial texture features, which we incorporate into a supervised image classification pipeline. The pipeline relies on the support vector machine (SVM) algorithm along with Bayesian optimization and the Wasserstein metric. The selection of the best GLCM references is considered for each classification label and is performed during the training phase of the SVM classifier using a Bayesian optimizer. We assume that sample fitness is defined based on closeness (in the sense of the Wasserstein metric) and high correlation (in Spearman's rank sense) with other samples in the same class. Moreover, the newly defined spatial texture features consist of the Wasserstein distance between the optimally selected references and the remaining samples. We assessed the performance of the proposed classification pipeline in diagnosing the coronavirus disease 2019 (COVID-19) from computed tomographic (CT) images. To evaluate the added value of the proposed spatial features, we compared the performance of the proposed classification pipeline with other SVM-based classifiers that account for different texture features, namely: statistical features only, optimized spatial features using the Euclidean metric, and non-optimized spatial features with the Wasserstein metric. The proposed technique, which accounts for the optimized spatial texture features with the Wasserstein metric, shows great potential in classifying new COVID CT images that the algorithm has not seen during training. The MATLAB code of the proposed classification pipeline is made available. It can be used to find the best reference samples in other data cohorts, which can then be employed to build different prediction models.
Subject(s)
COVID-19, Humans, Bayes Theorem, COVID-19/diagnostic imaging, Support Vector Machine, Algorithms, Tomography, X-Ray Computed/methods
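The core quantity in the Wasserstein-GLCM abstract above is the optimal-transport distance between two GLCMs, viewed as discrete distributions over gray-level pairs. The sketch below computes it with the POT library; the gray-level quantisation, ground metric, and random test images are assumptions, and the reference-selection and Bayesian-optimised SVM stages of the pipeline are omitted.

```python
# Hedged sketch: Wasserstein distance between two GLCMs treated as 2-D distributions.
import numpy as np
import ot                                        # Python Optimal Transport (POT)
from skimage.feature import graycomatrix

def glcm_distribution(img_u8, levels=16):
    q = (img_u8.astype(np.float64) / 256 * levels).astype(np.uint8)   # quantise gray levels
    g = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                     symmetric=True, normed=True)[:, :, 0, 0]
    return g / g.sum()

rng = np.random.default_rng(0)
img_a = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-ins for CT patches
img_b = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
pa, pb = glcm_distribution(img_a), glcm_distribution(img_b)

levels = pa.shape[0]
coords = np.array([(i, j) for i in range(levels) for j in range(levels)], dtype=float)
M = ot.dist(coords, coords, metric="euclidean")   # ground cost between GLCM bins
w = ot.emd2(pa.ravel(), pb.ravel(), M)            # optimal transport (Wasserstein) cost
print("Wasserstein distance between GLCMs:", w)
```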
ABSTRACT
As obesity is a serious problem in the human population, overloading of the thoracolumbar region often affects sport and school horses. Advances in the use of infrared thermography (IRT) to assess overload of the horse's back should soon bring IRT-based assessment of rider-horse fit into everyday equine practice. This study aimed to evaluate the applicability of entropy measures, to select the most informative measures and color components, and to assess the accuracy of rider-to-horse bodyweight ratio detection. Twelve horses were ridden by each of six riders assigned to the light, moderate, and heavy groups. Thermal images were taken pre- and post-exercise. For each thermal image, two-dimensional sample (SampEn), fuzzy (FuzzEn), permutation (PermEn), dispersion (DispEn), and distribution (DistEn) entropies were measured in the withers and thoracic spine areas. Among the 40 returned measures, 30 entropy measures were exercise-dependent, whereas 8 were bodyweight ratio-dependent. Moreover, three entropy measures demonstrated similarities to entropy-related gray level co-occurrence matrix (GLCM) texture features, confirming the higher irregularity and complexity of thermal image texture when horses worked under heavy riders. Applying DispEn to the red color component identifies the light and heavy rider groups with higher accuracy than the previously used entropy-related GLCM texture features.
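A simplified two-dimensional dispersion entropy, of the kind applied to the red color component above, can be sketched as follows. The class count, embedding size, and normal-CDF mapping follow the usual DispEn recipe but are assumptions here, and the random array stands in for a thermal image channel; the study's exact implementation may differ.

```python
# Hedged sketch: simplified two-dimensional dispersion entropy (DispEn2D) of an image channel.
import numpy as np
from scipy.stats import norm

def dispen2d(img, c=6, m=2):
    x = img.astype(float)
    y = norm.cdf(x, loc=x.mean(), scale=x.std())          # map pixel values into (0, 1)
    z = np.clip(np.round(c * y + 0.5).astype(int), 1, c)  # c discrete dispersion classes
    H, W = z.shape
    counts = {}
    for i in range(H - m + 1):                            # enumerate all m x m dispersion patterns
        for j in range(W - m + 1):
            pat = tuple(z[i:i + m, j:j + m].ravel())
            counts[pat] = counts.get(pat, 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    ent = -np.sum(p * np.log(p))                          # Shannon entropy of pattern frequencies
    return ent / np.log(c ** (m * m))                     # normalised to [0, 1]

rng = np.random.default_rng(0)
red_channel = rng.integers(0, 256, size=(120, 160))       # stand-in for the red colour component
print("DispEn2D:", dispen2d(red_channel))
```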