Results 1 - 8 of 8
1.
J Chromatogr A ; 1669: 462967, 2022 Apr 26.
Article in English | MEDLINE | ID: mdl-35305457

ABSTRACT

Peptide therapeutics play a prominent role in medical practice. Both peptides and proteins have been used in several disease conditions, such as diabetes, cancer, and bacterial infections. The optimization of a peptide library is a time-consuming and expensive task, and the tools of computational chemistry offer a way to optimize the properties of peptides. Quantitative Structure-Retention (Chromatographic) Relationships (QSRR) is a powerful approach that statistically derives relationships between chromatographic parameters and descriptors that characterize the molecular structure of analytes. In this paper, we show how Comparative Protein Modeling-Quantitative Structure Retention Relationship (acronym ComProM-QSRR) can be used to predict the retention times of peptide sequences. This formalism is founded on our earlier published QSAR methodology, HomoSAR. ComProM-QSRR can recognize and distinguish the contribution of amino acids at specific positions in the peptide sequence to the retention phenomenon through their related physicochemical properties. This study establishes that the approach can be used pragmatically to predict retention times for all classes of peptides, regardless of size or sequence.


Subject(s)
Proteins , Quantitative Structure-Activity Relationship , Amino Acid Sequence , Chromatography, High Pressure Liquid/methods , Peptides/chemistry
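For a feel of the QSRR idea, the following minimal sketch (not the paper's ComProM-QSRR, whose descriptors and model are not given in the abstract) encodes each peptide position-wise with a single physicochemical descriptor and fits a linear model to retention times. The descriptor scale is real (Kyte-Doolittle hydrophobicity), but the training peptides and retention times are invented placeholders.

    # Minimal QSRR-style sketch: position-wise hydrophobicity encoding plus a
    # ridge regression mapping the encoding to retention time. Training data
    # are illustrative placeholders, not measurements from the paper.
    import numpy as np
    from sklearn.linear_model import Ridge

    KD_HYDROPHOBICITY = {  # Kyte-Doolittle scale
        "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
        "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
        "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
        "Y": -1.3, "V": 4.2,
    }

    def encode(peptide: str, max_len: int) -> np.ndarray:
        """Position-wise descriptor vector, zero-padded to max_len."""
        vec = np.zeros(max_len)
        for i, aa in enumerate(peptide):
            vec[i] = KD_HYDROPHOBICITY[aa]
        return vec

    # Hypothetical training set: sequences with measured retention times (min).
    peptides = ["GASPV", "LLKWV", "DDEEK", "FIVLM"]
    rt_minutes = [5.2, 18.7, 3.1, 21.4]

    max_len = max(len(p) for p in peptides)
    X = np.array([encode(p, max_len) for p in peptides])
    model = Ridge(alpha=1.0).fit(X, rt_minutes)
    print(model.predict([encode("GAKWV", max_len)]))  # predicted retention time

In the actual ComProM-QSRR formalism each position would carry several physicochemical properties rather than one, which is what lets the model attribute retention contributions to specific residues.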
2.
ISA Trans ; 128(Pt B): 565-578, 2022 Sep.
Article in English | MEDLINE | ID: mdl-34953588

ABSTRACT

Many industrial control problems involving multi-objective optimization, such as controller parameter tuning, require operators to perform multi-step interactions, yet they typically neither track changes in decision-makers' affective states nor quantitatively describe decision-makers' preferences during the interaction. To address this problem, we developed a multilayer affective computing model (MACM), comprising three factors (human personality, emotional space, and affective states), to capture the iterative affective computing that occurs during these interactions. First, a concise model of affective computing-driven interactive decision-making was built, and its three submodules were described in detail. (1) An affective state recognition method based on facial expressions was presented, providing the basis for obtaining expert affective states during decision-making. (2) An identification method for affective parameters was given, providing a way to describe the personalized affective state-change rules of different individuals. (3) A definition of decision-makers' preferences in interactive decision-making was specified. In addition, a method for mining decision-makers' preferences was developed using the MACM and an iterative learning control (ILC) strategy. We thus propose an affective computing-driven interactive decision-making method that converts interactive decision problems based on decision-makers' preferences into decision problems based on an incremental decision vector, enabling computers to learn from human experts and perform decision-making automatically in a general sense. Two typical process control cases, PI controller tuning for a decoupling problem and manipulated-vector optimization for batch processes, were then used to demonstrate the correctness and effectiveness of the contributions. Compared with traditional optimization algorithms without affective state tracking and recognition (fuzzy control, ILC, reinforcement learning, and so on), experimental results indicated that the proposed method achieves good performance. Finally, the study discusses the efficiency and limitations of using this technique in a specific application.


Subject(s)
Algorithms , Industry , Decision Making , Humans
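The ILC strategy mentioned above can be illustrated with the generic first-order update u_{k+1}(t) = u_k(t) + L * e_k(t): the input for the next trial is corrected by the previous trial's tracking error. The sketch below applies this to a toy first-order plant; the plant dynamics, learning gain, and horizon are illustrative assumptions, not the paper's setup.

    # Generic first-order ILC sketch on a toy plant (not the paper's
    # MACM-driven variant). Gains and dynamics are assumed for illustration.
    import numpy as np

    def plant(u: np.ndarray, a: float = 0.9, b: float = 0.5) -> np.ndarray:
        """Toy discrete first-order plant: y[t+1] = a*y[t] + b*u[t]."""
        y = np.zeros(len(u) + 1)
        for t in range(len(u)):
            y[t + 1] = a * y[t] + b * u[t]
        return y[1:]

    T = 50
    reference = np.ones(T)   # desired trajectory
    u = np.zeros(T)          # initial control input
    L = 0.8                  # learning gain (assumed; converges since |1-L*b|<1)

    for trial in range(30):  # repeat the batch task, learning each trial
        y = plant(u)
        error = reference - y
        u = u + L * error    # ILC update from the previous trial's error

    print(f"final max tracking error: {np.max(np.abs(error)):.4f}")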
3.
Artif Intell Med ; 119: 102156, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34531015

ABSTRACT

COVID-19 (coronavirus) escalated rapidly into a pandemic. Conventional manual diagnosis of the infection can take several days, so computer science engineers can contribute to developing automatic diagnosis for fast detection of the disease. This study proposes a hybrid COVID-19 framework (named HMB-HCF) based on deep learning (DL), a genetic algorithm (GA), a weighted sum (WS), and majority voting principles, organized in nine phases. Its segmentation phase uses a lung segmentation algorithm for X-Ray images (named HMB-LSAXI) to extract the lungs. Its classification phase is built from a hybrid convolutional neural network (CNN) architecture combining an abstractly designed CNN (named HMB1-COVID19) and transfer learning (TL) pre-trained models (VGG16, VGG19, ResNet50, ResNet101, Xception, DenseNet121, DenseNet169, MobileNet, and MobileNetV2). The hybrid CNN architecture is used for learning, classification, and parameter optimization, while the GA is used to optimize the hyperparameters. This hybrid working mechanism is combined in an overall algorithm named HMB-DLGA. The experiments used the WS approach to evaluate model performance across the loss, accuracy, F1-score, precision, recall, and area under the curve (AUC) metrics with different pre-defined weight ratios. A collected, combined, and unified X-Ray dataset drawn from 8 different public datasets was used, alongside regularization, dropout, and data augmentation techniques to limit overfitting. The experiments reported state-of-the-art metrics. VGG16 achieved a 100% WS metric (i.e., 0.0097, 99.78%, 0.9984, 99.89%, 99.78%, and 0.9996 for the loss, accuracy, F1, precision, recall, and AUC, respectively) for the highest WS, and a 99.92% WS metric (i.e., 0.0099, 99.84%, 0.9984, 99.84%, 99.84%, and 0.9996 for the loss, accuracy, F1, precision, recall, and AUC, respectively) for the last reported WS result. HMB-HCF was validated on 13 different public datasets to verify its generalization, and the best-achieved metrics were compared with 13 related studies. The target of these extensive experiments was to verify applicability and generalization.


Subject(s)
COVID-19 , Deep Learning , Algorithms , Humans , Neural Networks, Computer , SARS-CoV-2
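The weighted-sum (WS) evaluation amounts to collapsing the six metrics into one score with pre-defined weights. The sketch below shows one plausible reading; the actual weights and the convention for folding in the loss (smaller is better) are assumptions, since the abstract does not specify them.

    # Sketch of the weighted-sum ranking idea: six metrics combined into one
    # score. Equal weights and the (1 - loss) convention are assumptions.
    def weighted_sum_score(loss, accuracy, f1, precision, recall, auc,
                           weights=(1 / 6,) * 6):
        # Loss improves as it shrinks, so fold it in as (1 - loss); the other
        # five metrics already lie in [0, 1] with larger being better.
        metrics = (1.0 - loss, accuracy, f1, precision, recall, auc)
        return sum(w * m for w, m in zip(weights, metrics))

    # VGG16's reported values from the abstract (percentages as fractions):
    print(weighted_sum_score(0.0097, 0.9978, 0.9984, 0.9989, 0.9978, 0.9996))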
4.
J Med Syst ; 43(3): 76, 2019 Feb 13.
Article in English | MEDLINE | ID: mdl-30756191

ABSTRACT

Recent studies in morphometric Magnetic Resonance Imaging (MRI) have used Voxel-Based Morphometry (VBM) to investigate brain-volume abnormalities associated with the diagnosis of Alzheimer's Disease (AD). VBM permits automated, whole-brain comparison of grey-matter volumes between subjects with AD or related conditions and healthy controls. The article also reviews VBM findings related to the various stages of AD and its prodrome, Mild Cognitive Impairment (MCI). In this work, the AdaBoost classifier is proposed as a good feature selector that lowers the upper bound on the classification error. Principal Component Analysis (PCA) is employed for dimensionality reduction and improved efficiency; PCA is a powerful and reliable tool in data analysis. Because fitness scores can be calculated independently, a Genetic Algorithm (GA) combined with a greedy search can easily be run on high-performance computing systems. The primary goal of this work is to identify better combinations of weak classifiers from which to build stronger ones. The experimental results show that the GA is an alternative technique for combining the weak classifiers identified by AdaBoost, and that it can produce better solutions than classical AdaBoost.


Subject(s)
Alzheimer Disease/diagnosis , Brain/pathology , Diagnosis, Computer-Assisted/methods , Magnetic Resonance Imaging/methods , Principal Component Analysis , Algorithms , Alzheimer Disease/pathology , Cognitive Dysfunction/pathology , Gray Matter/pathology , Humans , Severity of Illness Index
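The pipeline the abstract describes (PCA for dimensionality reduction, AdaBoost to produce weak classifiers, GA search for stronger combinations) can be sketched compactly as below. The synthetic dataset, GA settings, and majority-vote fitness are assumptions for illustration, not the study's choices.

    # Sketch: PCA -> AdaBoost weak learners -> tiny GA over binary masks that
    # select a subset of weak learners for a majority-vote ensemble.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.decomposition import PCA
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=400, n_features=50, random_state=0)
    X = PCA(n_components=10).fit_transform(X)       # dimensionality reduction
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    ada = AdaBoostClassifier(n_estimators=30, random_state=0).fit(X_tr, y_tr)
    stumps = ada.estimators_                        # the weak classifiers

    def mask_accuracy(mask):
        """Majority vote over the selected weak classifiers."""
        if not mask.any():
            return 0.0
        votes = np.mean([s.predict(X_te) for s, m in zip(stumps, mask) if m],
                        axis=0)
        return np.mean((votes > 0.5) == y_te)

    pop = rng.integers(0, 2, size=(20, len(stumps))).astype(bool)
    for _ in range(40):                             # simple generational GA
        fitness = np.array([mask_accuracy(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[-10:]]    # truncation selection
        cut = rng.integers(1, len(stumps), size=10)
        children = np.array([np.concatenate((parents[i][:c],
                                             parents[(i + 1) % 10][c:]))
                             for i, c in enumerate(cut)])  # one-point crossover
        children ^= rng.random(children.shape) < 0.02      # bit-flip mutation
        pop = np.vstack([parents, children])

    print("best GA-selected ensemble accuracy:",
          max(mask_accuracy(ind) for ind in pop))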
5.
Waste Manag ; 78: 31-42, 2018 Aug.
Article in English | MEDLINE | ID: mdl-32559916

ABSTRACT

The management of disaster waste is one of the most critical tasks in recovery after a disaster. Before a disaster waste clean-up system is planned in detail, it is important to have a general idea of the required capacity, cost, and target clean-up time while accounting for the uncertainties in the system. Reliability analysis is a method for judging the performance of a system under such uncertainties. The purpose of this paper is to evaluate the reliability of the system, which indicates the likelihood of completing the clean-up within the target time and cost, and to optimise the system to maximise that reliability, providing decision-makers with information on the capacity, cost, and time required to finish the clean-up. A mathematical model applying the First Order Reliability Method (FORM) is developed to address the problem. Additionally, a non-linear optimisation model is developed to improve the reliability of the disaster waste clean-up system, subject to total cost and clean-up time constraints, and is solved using a Genetic Algorithm. The proposed models are applied to a case study in Queensland, Australia. It is shown that the models can maximise reliability and minimise total clean-up costs by optimising the arrangement of vehicles during the clean-up process.
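As a flavour of the first-order reliability calculation, the sketch below uses a mean-value first-order approximation (a simplified relative of the FORM applied in the paper) on an invented limit state: the clean-up fails if waste volume divided by haul rate exceeds the target time. All distributions and numbers are illustrative assumptions.

    # Mean-value first-order reliability sketch (simplified relative of FORM):
    # linearize g(X) about the means, then beta = mu_g / sigma_g and the
    # failure probability is Phi(-beta). All inputs are invented.
    from math import sqrt
    from statistics import NormalDist

    # g(V, R) = T_target - V / R: fails (g < 0) if clean-up exceeds the target.
    mu_V, sd_V = 120_000.0, 15_000.0   # waste volume, m^3 (assumed, normal)
    mu_R, sd_R = 1_500.0, 200.0        # clean-up rate, m^3/day (assumed, normal)
    T_target = 100.0                   # target clean-up time, days

    g_mean = T_target - mu_V / mu_R
    dg_dV = -1.0 / mu_R                # partial derivatives at the mean point
    dg_dR = mu_V / mu_R**2
    g_sd = sqrt((dg_dV * sd_V) ** 2 + (dg_dR * sd_R) ** 2)

    beta = g_mean / g_sd               # reliability index
    p_fail = NormalDist().cdf(-beta)   # first-order failure probability
    print(f"beta = {beta:.2f}, P(miss target) = {p_fail:.3f}")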

6.
Braz. arch. biol. technol ; 57(6): 962-970, Nov-Dec/2014. tables, graphs
Article in English | LILACS | ID: lil-730391

ABSTRACT

Culture conditions, namely additional carbon and nitrogen content, inoculum size and age, temperature, and pH, of a mixed culture of Bifidobacterium bifidum and Lactobacillus acidophilus were optimized using response surface methodology (RSM) and an artificial neural network (ANN). Kinetic growth models were fitted to the cultivations using Fractional Factorial (FF) design experiments over the different variables. This novel concept of combining optimization and modeling yielded optimal conditions for the growth of the B. bifidum and L. acidophilus mixture that differed from those of a one-variable-at-a-time (OVAT) optimization study. Through these statistical tools, the product yield (cell mass) of the B. bifidum and L. acidophilus mixture was increased. Regression coefficients (R²) of the two statistical tools indicated that the ANN predicted better than RSM, and the regression equation was solved with the help of a genetic algorithm (GA). The normalized percentage mean squared errors obtained from the ANN and RSM models were 0.08% and 0.3%, respectively. The optimum conditions for maximum biomass yield were a temperature of 38 °C, pH 6.5, inoculum volume 1.60 mL, inoculum age 30 h, carbon content 42.31% (w/v), and nitrogen content 14.20% (w/v). The results demonstrated the higher prediction accuracy of the ANN compared to RSM.
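The RSM step in studies like this one fits a second-order polynomial response surface to designed-experiment data. The sketch below does this for two stand-in factors (temperature and pH) with invented data; the study itself varied six factors.

    # Sketch of the RSM step: fit a quadratic response surface to designed-
    # experiment data and read off R^2. All design points and yields are
    # invented placeholders, not the study's measurements.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures

    # Hypothetical design points (temperature °C, pH) and cell-mass yields (g/L).
    X = np.array([[34, 6.0], [34, 7.0], [38, 6.0], [38, 7.0],
                  [36, 6.5], [36, 6.5], [32, 6.5], [40, 6.5]])
    y = np.array([2.1, 2.4, 3.0, 2.8, 3.3, 3.2, 1.9, 2.5])

    rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    rsm.fit(X, y)
    print("R^2 on the design points:", rsm.score(X, y))
    print("predicted yield at 38 °C, pH 6.5:", rsm.predict([[38, 6.5]]))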

7.
Braz. arch. biol. technol ; 57(1): 15-22, Jan.-Feb. 2014. illustrations, graphs, tables
Article in English | LILACS | ID: lil-702564

ABSTRACT

The culture conditions, namely additional carbon and nitrogen content, inoculum size and age, temperature, and pH, of Lactobacillus acidophilus were optimized using response surface methodology (RSM) and an artificial neural network (ANN). Kinetic growth models were fitted to cultivations from Box-Behnken Design (BBD) experiments over the different variables. This concept of combining optimization and modeling yielded optimal conditions for L. acidophilus growth that differed from the original optimization study. Through these statistical tools, the product yield (cell mass) of L. acidophilus was increased. Regression coefficients (R²) of the two statistical tools indicated that the ANN predicted better than RSM, and the regression equation was solved with the help of a genetic algorithm (GA). The normalized percentage mean squared errors obtained from the ANN and RSM models were 0.06% and 0.2%, respectively. The results demonstrated the higher prediction accuracy of the ANN compared to RSM.
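The "normalized percentage mean squared error" used to compare ANN and RSM in this and the neighbouring entries is not defined in the abstracts; the sketch below assumes MSE normalized by the squared range of the observations and expressed as a percentage. Other normalizations (by the mean or the variance) are equally plausible.

    # One plausible reading of "normalized percentage mean squared error":
    # MSE divided by the squared observation range, as a percentage.
    import numpy as np

    def normalized_pct_mse(observed, predicted):
        observed, predicted = np.asarray(observed), np.asarray(predicted)
        mse = np.mean((observed - predicted) ** 2)
        return 100.0 * mse / (observed.max() - observed.min()) ** 2

    # Hypothetical observed yields vs. predictions from two models:
    obs = [2.1, 2.4, 3.0, 2.8, 3.3]
    ann_pred = [2.12, 2.38, 3.01, 2.82, 3.28]
    rsm_pred = [2.2, 2.3, 2.9, 2.9, 3.2]
    print("ANN:", normalized_pct_mse(obs, ann_pred))
    print("RSM:", normalized_pct_mse(obs, rsm_pred))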

8.
Braz. arch. biol. technol ; 54(6): 1357-1366, Nov.-Dec. 2011. illustrations, graphs, tables
Article in English | LILACS | ID: lil-608449

ABSTRACT

The aim of this work was to optimize biomass production by Bifidobacterium bifidum 255 using response surface methodology (RSM) and an artificial neural network (ANN), both coupled with a genetic algorithm (GA). To develop the empirical model for the yield of the probiotic bacteria, additional carbon and nitrogen content, inoculum size and age, temperature, and pH were selected as the parameters. Models were developed using a ¼ fractional factorial design (FFD) of experiments with the selected parameters. The normalized percentage mean squared errors obtained from the ANN and RSM models were 0.05% and 0.1%, respectively. The regression coefficient (R²) of the ANN model showed higher prediction accuracy compared with that of the RSM model. The empirical yield models (for both ANN and RSM) were used as the objective functions to be maximized with the help of the genetic algorithm. The optimal conditions for maximal biomass yield were 37.4 °C, pH 7.09, inoculum volume 1.97 mL, inoculum age 58.58 h, carbon content 41.74% (w/v), and nitrogen content 46.23% (w/v). This work presents a novel concept of combining statistical modeling and evolutionary optimization for an improved cell-mass yield of B. bifidum 255.
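The final GA step, maximizing the fitted yield model over the culture variables, can be sketched with a tiny real-coded GA. The objective below is an invented stand-in surface that happens to peak near the reported optimum (37.4 °C, pH 7.09); it is not the study's fitted ANN or RSM model, and only two of the six variables are shown.

    # Tiny real-coded GA maximizing a stand-in yield surface over box-bounded
    # variables (temperature, pH). Objective and GA settings are assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    bounds = np.array([[30.0, 42.0], [5.5, 7.5]])   # temperature (°C), pH

    def yield_model(x):
        """Stand-in unimodal yield surface peaking near 37.4 °C, pH 7.09."""
        t, ph = x[..., 0], x[..., 1]
        return -((t - 37.4) / 5) ** 2 - (ph - 7.09) ** 2

    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))
    for _ in range(100):
        fit = yield_model(pop)
        parents = pop[np.argsort(fit)[-20:]]                 # truncation selection
        mates = parents[rng.permutation(20)]
        alpha = rng.random((20, 1))
        children = alpha * parents + (1 - alpha) * mates     # blend crossover
        children += rng.normal(scale=0.1, size=children.shape)  # Gaussian mutation
        children = np.clip(children, bounds[:, 0], bounds[:, 1])
        pop = np.vstack([parents, children])

    best = pop[np.argmax(yield_model(pop))]
    print(f"GA optimum: {best[0]:.1f} °C, pH {best[1]:.2f}")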
