Results 1 - 20 of 133
1.
N Biotechnol ; 83: 26-35, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38936658

ABSTRACT

D-1,2,4-butanetriol (BT) is a widely used fine chemical that can be manufactured by engineered Escherichia coli expressing heterologous pathways and using xylose as a substrate. The current study developed a glucose-xylose dual metabolic channel system in an engineered E. coli and combinatorially optimized it using multiple strategies to promote BT production. Carbon catabolite repression was alleviated by deleting the gene ptsG, which encodes the major glucose transporter IICBGlc, and mutating the gene crp, which encodes the catabolite repressor protein, thereby allowing the carbon fluxes of glucose and xylose to enter their respective metabolic channels separately and simultaneously; this increased BT production by 33% compared with that of the original MJ133K-1 strain. The branch metabolic pathways of intermediates in the BT channel were then investigated: the transaminase HisC, the ketoreductases DlD, OLD, and IlvC, and the aldolases MhpE and YfaU were identified as enzymes for the branched metabolism of 2-keto-3-deoxy-xylonate, and deletion of the gene hisC increased the BT titer by 21.7%. Furthermore, the relationship between BT synthesis and the intracellular NADPH level was examined, and deletion of the gene pntAB, which encodes a transhydrogenase, resulted in an 18.1% increase in BT production. Combining the above approaches to optimize the metabolic network increased BT production by 47.5%, yielding 2.67 g/L BT in 24 deep-well plates. This study provides insights into the BT biosynthesis pathway and demonstrates effective strategies to increase BT production, which will promote the industrialization of BT biosynthesis.

2.
Heliyon ; 10(10): e31297, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38818174

ABSTRACT

The best-known performance guarantee for deterministic approximation algorithms for the extensively studied Traveling Salesman Problem (TSP) is 3/2, achieved by Christofides' algorithm 47 years ago. This paper investigates a new generalization of the TSP, termed the Minimum-Cost Bounded Degree Connected Subgraph (MBDCS) problem. In the MBDCS problem, the goal is to identify a minimum-cost connected subgraph containing n=|V| edges from an input graph G=(V,E) with degree upper bounds for particular vertices. We show that for certain special cases of MBDCS, the aim is equivalent to finding a minimum-cost Hamiltonian cycle for the input graph, the same as in the TSP. To solve MBDCS, we first present an integer programming formulation for the problem. Subsequently, we propose an algorithm that approximates the optimal solution by applying the iterative rounding technique to the solution of the integer programming relaxation. We demonstrate that the subgraph returned by our proposed algorithm achieves one of the best known guarantees for the MBDCS problem in polynomial time, assuming P≠NP. This study views the optimization of the TSP as finding a minimum-cost connected subgraph containing n edges with degree upper bounds for certain vertices, and it may provide new insights into optimizing the TSP in future research.
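
The 2-approximation obtained by shortcutting a doubled minimum spanning tree is the classical warm-up for guarantees of this kind (Christofides improves it to 3/2 by adding a matching). A minimal sketch on a toy metric instance; the distance matrix below is invented for illustration:

```python
import itertools

def two_approx_metric_tsp(dist):
    """Double-tree 2-approximation for metric TSP.

    dist: symmetric matrix satisfying the triangle inequality.
    Returns a tour (vertex order) whose cost is at most twice the optimum.
    """
    n = len(dist)
    # Prim's algorithm: grow a minimum spanning tree rooted at vertex 0.
    in_tree = {0}
    children = {v: [] for v in range(n)}
    while len(in_tree) < n:
        u, v = min(((u, v) for u in in_tree for v in range(n) if v not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        children[u].append(v)
        in_tree.add(v)
    # Preorder walk of the MST; shortcutting repeated vertices keeps the
    # tour cost at most twice the MST weight, which is at most 2 * OPT.
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    return tour

def tour_cost(dist, tour):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

# Toy metric instance: points on a line, dist(i, j) = |p_i - p_j|.
pts = [0, 2, 5, 9, 1]
dist = [[abs(a - b) for b in pts] for a in pts]
opt = min(tour_cost(dist, (0,) + p) for p in itertools.permutations(range(1, len(pts))))
approx = tour_cost(dist, two_approx_metric_tsp(dist))
assert opt <= approx <= 2 * opt
```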

3.
Entropy (Basel) ; 26(5)2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38785647

ABSTRACT

Protein-ligand docking plays a significant role in structure-based drug discovery. This methodology aims to estimate the binding mode and binding free energy between the drug-targeted protein and candidate chemical compounds, utilizing protein tertiary structure information. Reformulating this docking task as a quadratic unconstrained binary optimization (QUBO) problem to obtain solutions via quantum annealing has been attempted. However, previous studies did not consider the internal degrees of freedom of the compound, which are essential. In this study, we formulated fragment-based protein-ligand flexible docking, which considers the internal degrees of freedom of the compound by focusing on fragments (rigid chemical substructures of compounds), as a QUBO problem. We introduced four factors essential for fragment-based docking into the Hamiltonian: (1) the interaction energy between the target protein and each fragment, (2) clashes between fragments, (3) covalent bonds between fragments, and (4) the constraint that each fragment of the compound is selected for a single placement. We also implemented a proof-of-concept system and conducted redocking for the protein-compound complex structure of aldose reductase (a drug target protein) using SQBM+, a simulated quantum annealer. The predicted binding pose reconstructed from the best solution was near-native (RMSD = 1.26 Å) and could be further improved (RMSD = 0.27 Å) using conventional energy minimization. The results indicate the validity of our QUBO problem formulation.
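
The single-placement constraint and the clash term can be seen in a toy QUBO solved by exhaustion; the energies, placements, and penalty weight below are invented for illustration and are not taken from the paper:

```python
import itertools

# Binary variable x[i] = 1 means "candidate placement i is selected".
# Placements 0,1 belong to fragment A; placements 2,3 to fragment B.
interaction = [-3.0, -1.0, -2.0, -4.0]   # protein-fragment interaction energies
clash = {(0, 2): 5.0}                     # placements 0 and 2 collide sterically
fragments = [(0, 1), (2, 3)]
P = 10.0                                  # penalty weight for the constraints

def qubo_energy(x):
    e = sum(interaction[i] * x[i] for i in range(4))
    e += sum(w * x[i] * x[j] for (i, j), w in clash.items())
    # One-hot constraint: each fragment selects exactly one placement.
    for group in fragments:
        s = sum(x[i] for i in group)
        e += P * (s - 1) ** 2
    return e

best = min(itertools.product([0, 1], repeat=4), key=qubo_energy)
# The lowest-energy bitstring picks placement 0 for A and 3 for B:
# best interactions, no clash, both one-hot constraints satisfied.
assert best == (1, 0, 0, 1)
```

An annealer samples low-energy states of this same function rather than enumerating them.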

4.
Adv Sci (Weinh) ; 11(26): e2310096, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38696663

ABSTRACT

Combinatorial optimization (CO) has a broad range of applications in various fields, including operations research, computer science, and artificial intelligence. However, many of these problems are classified as nondeterministic polynomial-time (NP)-complete or NP-hard problems, which are known for their computational complexity and cannot be solved in polynomial time on traditional digital computers. To address this challenge, continuous-time Ising machine solvers have been developed, utilizing different physical principles to map CO problems to ground state finding. However, most Ising machine prototypes operate at speeds comparable to digital hardware and rely on binarizing node states, resulting in increased system complexity and further limiting operating speed. To tackle these issues, a novel device-algorithm co-design method is proposed for fast sub-optimal solution finding with low hardware complexity. On the device side, a piezoelectric lithium niobate (LiNbO3) microelectromechanical system (MEMS) oscillator network-based Ising machine without second-harmonic injection locking (SHIL) is devised to solve Max-cut and graph coloring problems. The LiNbO3 oscillator operates at speeds greater than 9 GHz, making it one of the fastest oscillatory Ising machines. System-wise, an innovative grouping method is used that achieves a performance guarantee of 0.878 for Max-cut and 0.658 for graph coloring problems, which is comparable to Ising machines that utilize binarization.
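
Mapping Max-cut to an Ising ground-state search is short enough to sketch; the brute-force minimisation below stands in for the oscillator dynamics, on a made-up five-edge graph:

```python
import itertools

# Max-cut maps to the Ising Hamiltonian H(s) = sum over edges (i, j) of
# s_i * s_j with spins s_i in {-1, +1}: each cut edge contributes -1 and
# each uncut edge +1, so minimising H maximises the cut.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

def ising_energy(s):
    return sum(s[i] * s[j] for i, j in edges)

ground = min(itertools.product([-1, 1], repeat=4), key=ising_energy)
cut_size = sum(1 for i, j in edges if ground[i] != ground[j])
# This 4-cycle plus one chord contains a triangle, so at most 4 of the
# 5 edges can cross the cut.
assert cut_size == 4
```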

5.
ACS Nano ; 18(16): 10758-10767, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38598699

ABSTRACT

Neural networks are increasingly used to solve optimization problems in various fields, including operations research, design automation, and gene sequencing. However, these networks face challenges due to the nondeterministic polynomial time (NP)-hard issue, which results in exponentially increasing computational complexity as the problem size grows. Conventional digital hardware struggles with the von Neumann bottleneck, the slowdown of Moore's law, and the complexity arising from heterogeneous system design. Two-dimensional (2D) memristors offer a potential solution to these hardware challenges, with their in-memory computing, decent scalability, and rich dynamic behaviors. In this study, we explore the use of nonvolatile 2D memristors to emulate synapses in a discrete-time Hopfield neural network, enabling the network to solve continuous optimization problems, like finding the minimum value of a quadratic polynomial, and tackle combinatorial optimization problems like Max-Cut. Additionally, we coupled volatile memristor-based oscillators with nonvolatile memristor synapses to create an oscillatory neural network-based Ising machine, a continuous-time analog dynamic system capable of solving combinatorial optimization problems including Max-Cut and map coloring through phase synchronization. Our findings demonstrate that 2D memristors have the potential to significantly enhance the efficiency, compactness, and homogeneity of integrated Ising machines, which is useful for future advances in neural networks for optimization problems.
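
A software sketch of the discrete-time Hopfield idea, assuming a simple -1 synaptic weight per edge (the hardware sets these via memristor conductances): asynchronous sign updates never increase the network energy, so the state settles in a locally maximal cut.

```python
# Max-Cut on a toy graph via a discrete-time Hopfield network.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
n = 4
w = [[0] * n for _ in range(n)]
for i, j in edges:
    w[i][j] = w[j][i] = -1   # antiferromagnetic coupling per edge

def hopfield_maxcut(s):
    changed = True
    while changed:
        changed = False
        for i in range(n):
            field = sum(w[i][j] * s[j] for j in range(n))
            # Flip only on a strict improvement; ties keep the old state,
            # which guarantees termination of the sweep.
            new = s[i] if field == 0 else (1 if field > 0 else -1)
            if new != s[i]:
                s[i], changed = new, True
    return s

s = hopfield_maxcut([1, 1, 1, 1])
cut = sum(1 for i, j in edges if s[i] != s[j])
# The triangle 0-1-2 allows at most 2 of its 3 edges in the cut,
# plus edge (2, 3): a locally (here also globally) maximal cut of 3.
assert cut == 3
```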

6.
Biometrika ; 111(1): 171-193, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38352626

ABSTRACT

Rooted and ranked phylogenetic trees are mathematical objects that are useful in modelling hierarchical data and evolutionary relationships with applications to many fields such as evolutionary biology and genetic epidemiology. Bayesian phylogenetic inference usually explores the posterior distribution of trees via Markov chain Monte Carlo methods. However, assessing uncertainty and summarizing distributions remains challenging for these types of structures. While labelled phylogenetic trees have been extensively studied, relatively less literature exists for unlabelled trees that are increasingly useful, for example when one seeks to summarize samples of trees obtained with different methods, or from different samples and environments, and wishes to assess the stability and generalizability of these summaries. In our paper, we exploit recently proposed distance metrics of unlabelled ranked binary trees and unlabelled ranked genealogies, or trees equipped with branch lengths, to define the Fréchet mean, variance and interquartile sets as summaries of these tree distributions. We provide an efficient combinatorial optimization algorithm for computing the Fréchet mean of a sample or of distributions on unlabelled ranked tree shapes and unlabelled ranked genealogies. We show the applicability of our summary statistics for studying popular tree distributions and for comparing the SARS-CoV-2 evolutionary trees across different locations during the COVID-19 epidemic in 2020. Our current implementations are publicly available at https://github.com/RSamyak/fmatrix.
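
The Fréchet mean over a finite candidate set reduces to an argmin of summed squared distances. The sketch below uses integers on a line as stand-ins for tree shapes; the paper's contribution is performing this search efficiently over actual unlabelled ranked tree space:

```python
# Fréchet mean and variance of a sample under a metric d: the mean is
# argmin over candidates c of sum_i d(c, x_i)^2, and the variance is the
# attained minimum divided by the sample size.
def frechet_mean(sample, candidates, d):
    def cost(c):
        return sum(d(c, x) ** 2 for x in sample)
    mean = min(candidates, key=cost)
    return mean, cost(mean) / len(sample)

sample = [1, 2, 2, 7]
mean, var = frechet_mean(sample, range(0, 10), d=lambda a, b: abs(a - b))
# On the line with d(a,b) = |a-b| this recovers the ordinary mean (3)
# and variance ((4 + 1 + 1 + 16) / 4 = 5.5).
assert (mean, var) == (3, 5.5)
```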

7.
Brief Bioinform ; 25(2)2024 Jan 22.
Article in English | MEDLINE | ID: mdl-38261343

ABSTRACT

Cryo-Electron Microscopy (cryo-EM) is a widely used and effective method for determining the three-dimensional (3D) structure of biological molecules. For ab initio cryo-EM 3D reconstruction using single particle analysis (SPA), estimating the projection direction of the projection image is a crucial step. However, the existing SPA methods based on common lines are sensitive to noise. Errors in common line detection lead to poor estimation of the projection directions and thus may greatly affect the final reconstruction results. To improve the reconstruction results, multiple candidate common lines are estimated for each pair of projection images. The key problem then becomes a combinatorial optimization problem of selecting consistent common lines from the multiple candidates. To solve this problem efficiently, a physics-inspired method based on a kinetic model is proposed in this work. More specifically, hypothetical attractive forces between each pair of candidate common lines are used to calculate a hypothetical torque exerted on each projection image in the 3D reconstruction space, and the rotation under the hypothetical torque is used to optimize the projection direction estimation of the projection image. This way, the consistent common lines along with the projection directions can be found directly without enumeration of all the combinations of the multiple candidate common lines. Compared with the traditional methods, the proposed method is shown to produce more accurate 3D reconstruction results from high-noise projection images. Besides its practical value, the proposed method also serves as a good reference for solving similar combinatorial optimization problems.


Subject(s)
Imaging, Three-Dimensional , Cryoelectron Microscopy , Kinetics
8.
EPJ Quantum Technol ; 11(1): 6, 2024.
Article in English | MEDLINE | ID: mdl-38261853

ABSTRACT

In recent years, variational quantum algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) have gained popularity as they provide the hope of using NISQ devices to tackle hard combinatorial optimization problems. It is, however, known that at low depth, certain locality constraints of QAOA limit its performance. To go beyond these limitations, a non-local variant of QAOA, namely recursive QAOA (RQAOA), was proposed to improve the quality of approximate solutions. The RQAOA has been studied comparatively less than QAOA, and it is less understood, for instance, for what family of instances it may fail to provide high-quality solutions. However, as we are tackling NP-hard problems (specifically, the Ising spin model), it is expected that RQAOA also fails on some instances, raising the question of designing even better quantum algorithms for combinatorial optimization. In this spirit, we identify and analyze cases where (depth-1) RQAOA fails and, based on this, propose a reinforcement learning enhanced RQAOA variant (RL-RQAOA) that improves upon RQAOA. We show that the performance of RL-RQAOA improves over RQAOA: RL-RQAOA is strictly better on the identified instances where RQAOA underperforms and performs similarly on instances where RQAOA is near-optimal. Our work exemplifies the potentially beneficial synergy between reinforcement learning and quantum (inspired) optimization in the design of new, even better heuristics for complex problems.

9.
Cell Syst ; 14(12): 1113-1121.e9, 2023 12 20.
Article in English | MEDLINE | ID: mdl-38128483

ABSTRACT

CRISPR-Cas9-based genome editing combined with single-cell sequencing enables the tracing of the history of cell divisions, or cellular lineage, in tissues and whole organisms. Although standard phylogenetic approaches may be applied to reconstruct cellular lineage trees from this data, the unique features of the CRISPR-Cas9 editing process motivate the development of specialized models that describe the evolution of CRISPR-Cas9-induced mutations. Here, we introduce the "star homoplasy" evolutionary model that constrains a phylogenetic character to mutate at most once along a lineage, capturing the "non-modifiability" property of CRISPR-Cas9 mutations. We derive a combinatorial characterization of star homoplasy phylogenies and use this characterization to develop an algorithm, "Startle", that computes a maximum parsimony star homoplasy phylogeny. We demonstrate that Startle infers more accurate phylogenies on simulated lineage tracing data compared with existing methods and finds parsimonious phylogenies with fewer metastatic migrations on lineage tracing data from mouse metastatic lung adenocarcinoma.


Subject(s)
CRISPR-Cas Systems , Gene Editing , Animals , Mice , CRISPR-Cas Systems/genetics , Phylogeny , Gene Editing/methods , Cell Lineage/genetics , Mutation
10.
Cell Syst ; 14(12): 1122-1130.e3, 2023 12 20.
Article in English | MEDLINE | ID: mdl-38128484

ABSTRACT

The efficacy of epitope vaccines depends on the included epitopes as well as the probability that the selected epitopes are presented by the major histocompatibility complex (MHC) proteins of a vaccinated individual. Designing vaccines that effectively immunize a high proportion of the population is challenging because of high MHC polymorphism, diverging MHC-peptide binding affinities, and physical constraints on epitope vaccine constructs. Here, we present HOGVAX, a combinatorial optimization approach for epitope vaccine design. To optimize population coverage within the constraint of limited vaccine construct space, HOGVAX employs a hierarchical overlap graph (HOG) to identify and exploit overlaps between selected peptides and explicitly models the structure of linkage disequilibrium in the MHC. In a SARS-CoV-2 case study, we demonstrate that HOGVAX-designed vaccines contain substantially more epitopes than vaccines built from concatenated peptides and predict vaccine efficacy in over 98% of the population with high numbers of presented peptides in vaccinated individuals.


Subject(s)
COVID-19 , Vaccines , Humans , SARS-CoV-2 , COVID-19/prevention & control , Epitopes, T-Lymphocyte , Peptides
11.
BMC Bioinformatics ; 24(1): 431, 2023 Nov 14.
Article in English | MEDLINE | ID: mdl-37964228

ABSTRACT

BACKGROUND: Liquid chromatography-mass spectrometry is widely used in untargeted metabolomics for composition profiling. In multi-run analysis scenarios, features of each run are aligned into consensus features by feature alignment algorithms to observe intensity variations across runs. However, most existing feature alignment methods focus on accurate retention time correction while underestimating the importance of feature matching. None of the existing methods can comprehensively consider feature correspondences among all runs and achieve optimal matching. RESULTS: To comprehensively analyze feature correspondences among runs, we propose G-Aligner, a graph-based feature alignment method for untargeted LC-MS data. In the feature matching stage, G-Aligner treats features and potential correspondences as nodes and edges in a multipartite graph, considers the multi-run feature matching problem an unbalanced multidimensional assignment problem, and provides three combinatorial optimization algorithms to find optimal matching solutions. In comparison with the feature alignment methods in OpenMS, MZmine2 and XCMS on three public metabolomics benchmark datasets, G-Aligner achieved the best feature alignment performance on all three datasets, with up to 9.8% and 26.6% increases in accurately aligned features and analytes, respectively, and helped all comparison software obtain more accurate results on their self-extracted features when G-Aligner was integrated into their analysis workflows. G-Aligner is open-source and freely available at https://github.com/CSi-Studio/G-Aligner under a permissive license. Benchmark datasets, manual annotation results, evaluation methods and results are available at https://doi.org/10.5281/zenodo.8313034. CONCLUSIONS: In this study, we proposed G-Aligner to improve feature matching accuracy for untargeted metabolomics LC-MS data. G-Aligner comprehensively considers potential feature correspondences between all runs, converting the feature matching problem into a multidimensional assignment problem (MAP). In evaluations on three public metabolomics benchmark datasets, G-Aligner achieved the highest alignment accuracy on manually annotated and popular-software-extracted features, demonstrating the effectiveness and robustness of the algorithm.


Subject(s)
Software , Tandem Mass Spectrometry , Chromatography, Liquid/methods , Tandem Mass Spectrometry/methods , Algorithms , Metabolomics/methods
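
The two-run special case of the matching stage is an ordinary assignment problem, solvable by brute force at toy scale; the retention times and the RT-gap cost below are invented for illustration:

```python
import itertools

# Features from two runs, represented by their retention times (minutes).
# The cost of matching feature i in run A to feature j in run B is the
# retention time gap; the best assignment minimises the total cost.
run_a = [10.0, 20.5, 33.1]
run_b = [10.4, 19.9, 34.0]

def best_assignment(a, b):
    n = len(a)
    return min(itertools.permutations(range(n)),
               key=lambda p: sum(abs(a[i] - b[p[i]]) for i in range(n)))

match = best_assignment(run_a, run_b)
assert match == (0, 1, 2)   # the identity matching minimises the total gap here
```

The paper's multidimensional version couples many runs at once, which is what makes the problem hard and motivates the three combinatorial algorithms.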
12.
Brief Bioinform ; 24(6)2023 09 22.
Article in English | MEDLINE | ID: mdl-37874950

ABSTRACT

Cluster analysis is a crucial stage in the analysis and interpretation of single-cell gene expression (scRNA-seq) data. It is an inherently ill-posed problem whose solutions depend heavily on hyper-parameter and algorithmic choice. The popular approach of K-means clustering, for example, depends heavily on the choice of K and the convergence of the expectation-maximization algorithm to local minima of the objective. Exhaustively searching the space for multiple good-quality solutions is known to be a complex problem. Here, we show that quantum computing offers a way to explore the cost function of clustering by quantum annealing, implemented on a quantum computing facility offered by D-Wave [1]. Our formulation extracts a minimum vertex cover of an affinity graph to sub-sample the cell population and uses quantum annealing to optimise the cost function. A distribution of low-energy solutions can thus be extracted, offering alternate hypotheses about how genes group together in their space of expressions.


Subject(s)
Computing Methodologies , Quantum Theory , RNA-Seq , Sequence Analysis, RNA , Algorithms , Cluster Analysis , Gene Expression Profiling
13.
J Comput Biol ; 30(11): 1198-1225, 2023 11.
Article in English | MEDLINE | ID: mdl-37906100

ABSTRACT

Signaling and metabolic pathways, which consist of chains of reactions that produce target molecules from source compounds, are cornerstones of cellular biology. Properly modeling the reaction networks that represent such pathways requires directed hypergraphs, where each molecule or compound maps to a vertex, and each reaction maps to a hyperedge directed from its set of input reactants to its set of output products. Inferring the most likely series of reactions that produces a given set of targets from a given set of sources, where for each reaction its reactants are produced by prior reactions in the series, corresponds to finding a shortest hyperpath in a directed hypergraph, which is NP-complete. We give the first exact algorithm for general shortest hyperpaths that can find provably optimal solutions for large, real-world, reaction networks. In particular, we derive a novel graph-theoretic characterization of hyperpaths, which we leverage in a new integer linear programming formulation of shortest hyperpaths that for the first time handles cycles, and develop a cutting-plane algorithm that can solve this integer linear program to optimality in practice. Through comprehensive experiments over all of the thousands of instances from the standard Reactome and NCI-PID reaction databases, we demonstrate that our cutting-plane algorithm quickly finds an optimal hyperpath-inferring the most likely pathway-with a median running time of under 10 seconds, and a maximum time of less than 30 minutes, even on instances with thousands of reactions. We also explore for the first time how well hyperpaths infer true pathways, and show that shortest hyperpaths accurately recover known pathways, typically with very high precision and recall. Source code implementing our cutting-plane algorithm for shortest hyperpaths is available free for research use in a new tool called Mmunin.


Subject(s)
Computational Biology , Software , Algorithms , Metabolic Networks and Pathways , Signal Transduction
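
A building block worth seeing in code is forward reachability in a directed hypergraph: a reaction hyperedge fires only once all of its input reactants are available. The reactions below are hypothetical; the paper's harder task is selecting a provably shortest series of such reactions via an integer linear program:

```python
# Each reaction maps its full set of inputs to its set of outputs.
reactions = [
    ({"A", "B"}, {"C"}),     # A + B -> C
    ({"C"}, {"D", "E"}),     # C -> D + E
    ({"E", "F"}, {"G"}),     # E + F -> G  (F is never produced, so G stays unreachable)
]

def closure(sources):
    """Iterate to a fixed point: fire every reaction whose inputs are present."""
    have, changed = set(sources), True
    while changed:
        changed = False
        for inputs, outputs in reactions:
            if inputs <= have and not outputs <= have:
                have |= outputs
                changed = True
    return have

reached = closure({"A", "B"})
assert "D" in reached and "G" not in reached
```

A shortest hyperpath must additionally pick a minimal ordered subset of these firings, which is where the NP-completeness enters.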
14.
Sensors (Basel) ; 23(18)2023 Sep 12.
Article in English | MEDLINE | ID: mdl-37765887

ABSTRACT

The minimum vertex cover (MVC) problem is a canonical NP-hard combinatorial optimization problem aiming to find the smallest set of vertices such that every edge has at least one endpoint in the set. This problem has extensive applications in cybersecurity, scheduling, and monitoring link failures in wireless sensor networks (WSNs). Numerous local search algorithms have been proposed to obtain "good" vertex coverage. However, due to the NP-hard nature, it is challenging to efficiently solve the MVC problem, especially on large graphs. In this paper, we propose an efficient local search algorithm for MVC called TIVC, which is based on two main ideas: a 3-improvements (TI) framework with a tiny perturbation and edge selection strategy. We conducted experiments on real-world large instances of a massive graph benchmark. Compared with three state-of-the-art MVC algorithms, TIVC shows superior performance in accuracy and possesses a remarkable ability to identify significantly smaller vertex covers on many graphs.
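
For contrast with local search, the textbook maximal-matching baseline guarantees a cover at most twice the optimum; a minimal sketch on an invented graph:

```python
# Greedy maximal matching for minimum vertex cover: take both endpoints
# of every edge not yet covered. The chosen edges form a matching, no
# cover can be smaller than a matching, and every edge touches the
# result, giving the classic factor-2 guarantee.
def vertex_cover_2approx(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover |= {u, v}
    return cover

edges = [(0, 1), (0, 2), (0, 3), (4, 5)]
cover = vertex_cover_2approx(edges)
assert all(u in cover or v in cover for u, v in edges)
assert len(cover) == 4   # the optimum here is size 2, e.g. {0, 4}
```

Local search methods such as the paper's TIVC aim to shrink exactly this kind of gap on large graphs.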

15.
Front Artif Intell ; 6: 1124553, 2023.
Article in English | MEDLINE | ID: mdl-37565044

ABSTRACT

This article provides a birds-eye view on the role of decision trees in machine learning and data science over roughly four decades. It sketches the evolution of decision tree research over the years, describes the broader context in which the research is situated, and summarizes strengths and weaknesses of decision trees in this context. The main goal of the article is to clarify the broad relevance to machine learning and artificial intelligence, both practical and theoretical, that decision trees still have today.

16.
PeerJ Comput Sci ; 9: e1192, 2023.
Article in English | MEDLINE | ID: mdl-37346673

ABSTRACT

We study two problems with time windows: the Traveling Repairman Problem (TRPTW) and the Traveling Salesman Problem (TSPTW). The TRPTW minimizes the sum of travel durations between a depot and customer locations, while the TSPTW aims to minimize the total time to visit all customers. In both problems, deliveries are made during a specific time window given by the customers. The difference between the TRPTW and the TSPTW is that the TRPTW takes a customer-oriented view, whereas the TSPTW is server-oriented. Algorithms have been developed for solving the two problems independently in the literature; however, no existing algorithm solves both simultaneously. The Multifactorial Evolutionary Algorithm (MFEA) is a variant of the Evolutionary Algorithm (EA) that aims to solve multiple tasks simultaneously. The main advantage of the approach is that it allows knowledge transfer between tasks, which can improve solution quality across tasks. This article presents an efficient algorithm that combines the MFEA framework with Randomized Variable Neighborhood Search (RVNS) to solve the two problems simultaneously. The proposed algorithm inherits knowledge transfer between tasks from the MFEA and the ability to exploit good solution spaces from RVNS. The proposed algorithm is compared directly to the state-of-the-art MFEA on numerous datasets. Experimental results show that the proposed algorithm outperforms the state-of-the-art MFEA in many cases. In addition, it finds several new best-known solutions.
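
The customer-oriented versus server-oriented distinction is easiest to see by evaluating both objectives on one route; the travel times below are illustrative, and time windows are omitted for brevity:

```python
# The TSP view minimises the total route time (server finishes early);
# the TRP view minimises the sum of customer arrival times (customers
# wait less on average). Both are computed from the same route here.
def objectives(travel, route):
    """travel[i][j]: time from location i to j; the route starts at depot 0."""
    t, arrivals, here = 0, [], 0
    for nxt in route:
        t += travel[here][nxt]
        arrivals.append(t)      # the moment this customer is served
        here = nxt
    return sum(arrivals), t     # (TRP objective, TSP objective)

travel = [[0, 1, 5], [1, 0, 2], [5, 2, 0]]
trp, tsp = objectives(travel, [1, 2])
assert (trp, tsp) == (4, 3)     # arrivals at t=1 and t=3: sum 4, makespan 3
```

Because the two objectives generally disagree on the best route, solving them jointly is a natural fit for the multitask MFEA setting.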

17.
Evol Comput ; : 1-35, 2023 Jun 08.
Article in English | MEDLINE | ID: mdl-37290030

ABSTRACT

We contribute to the efficient approximation of the Pareto set for the classical NP-hard multi-objective minimum spanning tree problem (moMST) by adopting evolutionary computation. More precisely, building upon preliminary work, we analyse the neighborhood structure of Pareto-optimal spanning trees and design several highly biased subgraph-based mutation operators founded on the gained insights. In a nutshell, these operators replace (un)connected sub-trees of candidate solutions with locally optimal sub-trees. The latter (biased) step is realized by applying Kruskal's single-objective MST algorithm to a weighted-sum scalarization of a subgraph. We prove runtime complexity results for the introduced operators and investigate the desirable Pareto-beneficial property. This property states that mutants cannot be dominated by their parents. Moreover, we perform an extensive experimental benchmark study to showcase the operators' practical suitability. Our results confirm that the subgraph-based operators beat baseline algorithms from the literature, even with a severely restricted computational budget in terms of function evaluations, on four different classes of complete graphs with different shapes of the Pareto front.
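
The weighted-sum scalarization step inside the biased mutation amounts to running Kruskal's algorithm on a single blended edge weight; a sketch with invented bi-objective edge costs:

```python
# Each edge carries two objective values (c1, c2). For a weight lam in
# [0, 1], Kruskal's algorithm is run on the scalar weight
# lam * c1 + (1 - lam) * c2; union-find provides the usual cycle check.
def kruskal_scalarized(n, edges, lam):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    tree = []
    for u, v, c1, c2 in sorted(edges, key=lambda e: lam * e[2] + (1 - lam) * e[3]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree

edges = [(0, 1, 1, 9), (1, 2, 2, 1), (0, 2, 3, 3)]
# Emphasising objective 1 (lam = 1) picks the edges cheap in c1.
assert sorted(kruskal_scalarized(3, edges, 1.0)) == [(0, 1), (1, 2)]
```

Sweeping lam produces different locally optimal sub-trees, which is what gives the mutation operators their bias toward the Pareto front.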

18.
J Comput Biol ; 30(6): 678-694, 2023 06.
Article in English | MEDLINE | ID: mdl-37327036

ABSTRACT

The problem of computing the Elementary Flux Modes (EFMs) and Minimal Cut Sets (MCSs) of a metabolic network is a fundamental one in metabolic network analysis. A key insight is that they can be understood as a dual pair of monotone Boolean functions (MBFs). Using this insight, this computation reduces to the question of generating from an oracle a dual pair of MBFs. If one of the two sets (functions) is known, the other can be computed through a process known as dualization. Fredman and Khachiyan provided two algorithms, which they called simply A and B, that can serve as an engine for oracle-based generation or dualization of MBFs. We look at efficiencies available in implementing their algorithm B, which we refer to as FK-B. Like their algorithm A, FK-B certifies whether two given MBFs, in the form of a Conjunctive Normal Form and a Disjunctive Normal Form, are dual or not; if they are not dual, it returns a conflicting assignment (CA), that is, an assignment that makes one of the given Boolean functions True and the other one False. The FK-B algorithm is a recursive algorithm that searches through the tree of assignments to find a CA. If it does not find any CA, the given Boolean functions are dual. In this article, we propose six techniques applicable to FK-B and hence to the dualization process. Although these techniques do not reduce the time complexity, they considerably reduce the running time in practice. We evaluate the proposed improvements by applying them to compute the MCSs from the EFMs in 19 small- and medium-sized models from the BioModels database, along with 4 models of biomass synthesis in Escherichia coli that were used in an earlier computational survey by Haus et al. (2008).


Subject(s)
Algorithms , Metabolic Networks and Pathways , Escherichia coli/metabolism , Models, Biological
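
At toy scale, the dual-pair check that FK-B performs by recursive search can be done by enumeration: either every assignment agrees, or a conflicting assignment is returned. The clause and term data below are illustrative:

```python
import itertools

# Given a monotone CNF (list of clauses of variable indices) and a
# monotone DNF (list of terms), either certify that they represent the
# same function, or return a conflicting assignment that makes one True
# and the other False. FK-B finds such an assignment by recursive search
# over partial assignments instead of full enumeration.
def check_dual(cnf, dnf, n):
    for x in itertools.product([False, True], repeat=n):
        cnf_val = all(any(x[i] for i in clause) for clause in cnf)
        dnf_val = any(all(x[i] for i in term) for term in dnf)
        if cnf_val != dnf_val:
            return x          # conflicting assignment (CA)
    return None               # the pair is verified

# (x0 or x1) and (x0 or x2)  ==  x0 or (x1 and x2): a correct pair.
assert check_dual([(0, 1), (0, 2)], [(0,), (1, 2)], 3) is None
# Dropping the (1, 2) term breaks the pair; (F, T, T) is a CA.
assert check_dual([(0, 1), (0, 2)], [(0,)], 3) == (False, True, True)
```

The enumeration is exponential in n, which is exactly why the quasi-polynomial Fredman-Khachiyan search matters in practice.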
19.
Adv Sci (Weinh) ; 10(19): e2300659, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37189211

ABSTRACT

Hardware neural networks with mechanical flexibility are promising next-generation computing systems for smart wearable electronics. Several studies have been conducted on flexible neural networks for practical applications; however, developing systems with complete synaptic plasticity for combinatorial optimization remains challenging. In this study, the metal-ion injection density is explored as a diffusive parameter of the conductive filament in organic memristors. Additionally, a flexible artificial synapse with bio-realistic synaptic plasticity is developed using organic memristors that have systematically engineered metal-ion injections, for the first time. In the proposed artificial synapse, short-term plasticity (STP), long-term plasticity, and homeostatic plasticity are independently achieved and are analogous to their biological counterparts. The time windows of the STP and homeostatic plasticity are controlled by the ion-injection density and electric-signal conditions, respectively. Moreover, stable capabilities for complex combinatorial optimization in the developed synapse arrays are demonstrated under spike-dependent operations. This effective concept for realizing flexible neuromorphic systems for complex combinatorial optimization is an essential building block for achieving a new paradigm of wearable smart electronics associated with artificial intelligent systems.

20.
Data Brief ; 48: 109189, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37206899

ABSTRACT

The data article describes a real operational dataset for the Concrete Delivery Problem (CDP). The dataset consists of 263 instances corresponding to daily orders of concrete from construction sites in Quebec, Canada. A concrete producer, i.e., a concrete-producing company that delivers concrete, provided the raw data. We cleaned the data by removing entries corresponding to non-complete orders. We processed these raw data to form instances useful for benchmarking optimization algorithms developed to solve the CDP. We also anonymized the published dataset by removing any client information and addresses corresponding to production or construction sites. The dataset is useful for researchers and practitioners studying the CDP. It can be processed to create artificial data for variations of the CDP. In its current form, the data contain information about intra-day orders. Thus, selected instances from the dataset are useful for CDP's dynamic aspect considering real-time orders.
