1.
Article in English | MEDLINE | ID: mdl-37363844

ABSTRACT

Deep reinforcement learning (DRL) has empowered a variety of artificial intelligence fields, including pattern recognition, robotics, recommendation systems, and gaming. Similarly, graph neural networks (GNNs) have also demonstrated their superior performance in supervised learning for graph-structured data. In recent times, the fusion of GNN with DRL for graph-structured environments has attracted a lot of attention. This article provides a comprehensive review of these hybrid works. These works can be classified into two categories: 1) algorithmic contributions, where DRL and GNN complement each other with an objective of addressing each other's shortcomings and 2) application-specific contributions that leverage a combined GNN-DRL formulation to address problems specific to different applications. This fusion effectively addresses various complex problems in engineering and life sciences. Based on the review, we further analyze the applicability and benefits of fusing these two domains, especially in terms of increasing generalizability and reducing computational complexity. Finally, the key challenges in integrating DRL and GNN, and potential future research directions are highlighted, which will be of interest to the broader machine learning community.

2.
Risk Anal ; 43(11): 2280-2297, 2023 Nov.
Article in English | MEDLINE | ID: mdl-36746175

ABSTRACT

Critical infrastructures such as cyber-physical energy systems (CPS-E) integrate information flow and physical operations that are vulnerable to natural and targeted failures. Safe, secure, and reliable operation and control of CPS-E is critical to ensure societal well-being and economic prosperity. Automated control is key for real-time operations and may be mathematically cast as a sequential decision-making problem under uncertainty. Emergence of data-driven techniques for decision making under uncertainty, such as reinforcement learning (RL), have led to promising advances for addressing sequential decision-making problems for risk-based robust CPS-E control. However, existing research challenges include understanding the applicability of RL methods across diverse CPS-E applications, addressing the effect of risk preferences across multiple RL methods, and development of open-source domain-aware simulation environments for RL experimentation within a CPS-E context. This article systematically analyzes the applicability of four types of RL methods (model-free, model-based, hybrid model-free and model-based, and hierarchical) for risk-based robust CPS-E control. Problem features and solution stability for the RL methods are also discussed. We demonstrate and compare the performance of multiple RL methods under different risk specifications (risk-averse, risk-neutral, and risk-seeking) through the development and application of an open-source simulation environment. Motivating numerical simulation examples include representative single-zone and multizone building control use cases. Finally, six key insights for future research and broader adoption of RL methods are identified, with specific emphasis on problem features, algorithmic explainability, and solution stability.
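The risk specifications compared in the abstract (risk-averse, risk-neutral, risk-seeking) can be illustrated with a standard utility transform on rewards. The sketch below is illustrative only and does not reproduce the article's methods or simulation environment; the exponential-utility form and the `risk_param` name are assumptions, chosen because a single sign change switches the agent between the three risk attitudes.

```python
import math

def exponential_utility(reward: float, risk_param: float) -> float:
    """Map a raw reward to a risk-adjusted utility.

    risk_param > 0  -> risk-averse (concave utility),
    risk_param == 0 -> risk-neutral (identity),
    risk_param < 0  -> risk-seeking (convex utility).
    """
    if risk_param == 0.0:
        return reward
    return (1.0 - math.exp(-risk_param * reward)) / risk_param

# A risk-averse agent discounts the upside of a high-variance reward,
# while a risk-seeking agent inflates it.
rewards = [0.0, 1.0, 2.0]
averse = [exponential_utility(r, 1.0) for r in rewards]
seeking = [exponential_utility(r, -1.0) for r in rewards]
```

An RL agent optimizing the transformed reward instead of the raw one then exhibits the corresponding risk preference, which is one common way such specifications are operationalized.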

3.
Philos Trans A Math Phys Eng Sci ; 378(2166): 20190056, 2020 Mar 06.
Article in English | MEDLINE | ID: mdl-31955678

ABSTRACT

As noted in Wikipedia, skin in the game refers to having 'incurred risk by being involved in achieving a goal', where 'skin is a synecdoche for the person involved, and game is the metaphor for actions on the field of play under discussion'. For exascale applications under development in the US Department of Energy Exascale Computing Project, nothing could be more apt, with the skin being exascale applications and the game being delivering comprehensive science-based computational applications that effectively exploit exascale high-performance computing technologies to provide breakthrough modelling and simulation and data science solutions. These solutions will yield high-confidence insights and answers to the most critical problems and challenges for the USA in scientific discovery, national security, energy assurance, economic competitiveness and advanced healthcare. This article is part of a discussion meeting issue 'Numerical algorithms for high-performance computational science'.

4.
Appl Netw Sci ; 3(1): 3, 2018.
Article in English | MEDLINE | ID: mdl-30839776

ABSTRACT

A community is a subset of a wider network where the members of that subset are more strongly connected to each other than they are to the rest of the network. In this paper, we consider the problem of identifying and tracking communities in graphs that change over time - dynamic community detection - and present a framework based on Riemannian geometry to aid in this task. Our framework currently supports several important operations such as interpolating between and averaging over graph snapshots. We compare these Riemannian methods with entry-wise linear interpolation and find that the Riemannian methods are generally better suited to dynamic community detection. Next steps with the Riemannian framework include producing a Riemannian least-squares regression method for working with noisy data and developing support methods, such as spectral sparsification, to improve the scalability of our current methods.
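The entry-wise linear interpolation that the abstract uses as its baseline can be sketched directly on adjacency matrices. This is only the baseline method, not the Riemannian approach the paper advocates; the example graph is invented for illustration.

```python
import numpy as np

def linear_interpolate(a0: np.ndarray, a1: np.ndarray, t: float) -> np.ndarray:
    """Entry-wise linear interpolation between two adjacency matrices.

    t = 0 returns the first snapshot, t = 1 the second; intermediate t
    yields a weighted graph 'between' the two snapshots.
    """
    return (1.0 - t) * a0 + t * a1

# Two snapshots of a three-node graph: an edge appears between nodes 1 and 2.
a0 = np.array([[0, 1, 0],
               [1, 0, 0],
               [0, 0, 0]], dtype=float)
a1 = np.array([[0, 1, 0],
               [1, 0, 1],
               [0, 1, 0]], dtype=float)
mid = linear_interpolate(a0, a1, 0.5)  # the new edge carries weight 0.5 halfway
```

Interpolating entry-wise treats each edge weight independently, which is exactly the limitation that motivates interpolating along a Riemannian manifold of graph representations instead.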

5.
J Parallel Distrib Comput ; 76: 132-144, 2015 Feb 01.
Article in English | MEDLINE | ID: mdl-25767331

ABSTRACT

A graph is chordal if every cycle of length greater than three contains a chord, that is, an edge joining two vertices of the cycle that are not consecutive in it. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
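The augmentation idea described above (start from a spanning chordal subgraph, then add edges while preserving chordality) can be sketched sequentially. This is a minimal illustration, not the paper's parallel algorithm; the chordality test uses the standard maximum cardinality search of Tarjan and Yannakakis, and the BFS spanning forest is one possible choice of initial chordal subgraph.

```python
from collections import deque

def is_chordal(adj):
    """Chordality test via maximum cardinality search (Tarjan & Yannakakis).

    adj maps each vertex to the set of its neighbours.  The graph is
    chordal iff the reversed MCS order is a perfect elimination ordering,
    which the clique check below verifies.
    """
    weight = {v: 0 for v in adj}
    numbered = set()
    order = []
    while len(order) < len(adj):
        v = max((u for u in adj if u not in numbered), key=lambda u: weight[u])
        order.append(v)
        numbered.add(v)
        for w in adj[v]:
            if w not in numbered:
                weight[w] += 1
    pos = {v: i for i, v in enumerate(order)}
    for v in adj:
        # Neighbours visited before v must form a clique; it suffices to
        # check the latest-visited one against the rest.
        earlier = [u for u in adj[v] if pos[u] < pos[v]]
        if earlier:
            u = max(earlier, key=lambda w: pos[w])
            if any(w != u and w not in adj[u] for w in earlier):
                return False
    return True

def maximal_chordal_subgraph(adj):
    """Grow a BFS spanning forest (forests are trivially chordal), then
    add each remaining edge only if the subgraph stays chordal."""
    sub = {v: set() for v in adj}
    seen = set()
    for root in adj:
        if root in seen:
            continue
        seen.add(root)
        queue = deque([root])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    sub[v].add(w)
                    sub[w].add(v)
                    queue.append(w)
    for v in adj:
        for w in adj[v]:
            if v < w and w not in sub[v]:
                sub[v].add(w)
                sub[w].add(v)
                if not is_chordal(sub):
                    sub[v].discard(w)
                    sub[w].discard(v)
    return sub

# A 4-cycle is the smallest non-chordal graph; the augmentation keeps
# three of its four edges, since restoring the fourth recreates the
# chordless cycle.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
```

Testing each candidate edge with a full chordality check makes this sketch far more expensive than the paper's approach, which is precisely why a purpose-built augmentation and its parallelization matter at scale.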
