Results 1 - 4 of 4
1.
Front Artif Intell ; 5: 905104, 2022.
Article in English | MEDLINE | ID: mdl-35783353

ABSTRACT

Graph-structured data is ubiquitous in daily life and in the sciences and has attracted increasing attention. Graph Neural Networks (GNNs) have proven effective for modeling graph-structured data, and many variants of GNN architectures have been proposed. However, considerable human effort is often needed to tune the architecture for each dataset. Researchers have therefore applied Automated Machine Learning (AutoML) to graph learning, aiming to reduce human effort and obtain generally top-performing GNNs, but existing methods focus mainly on architecture search. To understand how GNN practitioners automate their solutions, we organized the AutoGraph Challenge at KDD Cup 2020, which emphasized automated graph neural networks for node classification. We received top solutions, notably from industrial technology companies such as Meituan, Alibaba, and Twitter, which have been open-sourced on GitHub. After detailed comparisons with solutions from academia, we quantify the gaps between academia and industry in modeling scope, effectiveness, and efficiency, and show that (1) academic AutoML-for-graph solutions focus on GNN architecture search, while industrial solutions, especially the winning ones in the KDD Cup, tend to deliver an overall end-to-end solution; (2) with neural architecture search alone, academic solutions achieve on average 97.3% of the accuracy of industrial solutions; and (3) academic solutions are cheap to obtain, requiring several GPU hours, while industrial solutions take a few months of labor. Academic solutions also contain far fewer parameters.
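
As a rough illustration of the architecture-search side of the academic solutions described above, the sketch below performs random search over a small GNN design space. The search space, trial budget, and the `evaluate` callback are illustrative placeholders, not the challenge's actual setup.

```python
import random

# Illustrative GNN design space (not the actual AutoGraph search space).
SEARCH_SPACE = {
    "conv":       ["gcn", "gat", "graphsage"],
    "hidden_dim": [64, 128, 256],
    "num_layers": [1, 2, 3],
    "dropout":    [0.0, 0.3, 0.5],
    "lr":         [1e-2, 5e-3, 1e-3],
}

def random_search(evaluate, n_trials=30, seed=0):
    """Sample architectures at random, score each with the caller-supplied
    evaluate(config) -> validation accuracy, and keep the best configuration."""
    rng = random.Random(seed)
    best_cfg, best_acc = None, -1.0
    for _ in range(n_trials):
        cfg = {name: rng.choice(choices) for name, choices in SEARCH_SPACE.items()}
        acc = evaluate(cfg)          # train and validate a GNN with this config
        if acc > best_acc:
            best_cfg, best_acc = cfg, acc
    return best_cfg, best_acc
```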

2.
Adv Neural Inf Process Syst ; 32: 4869-4880, 2019 Dec.
Article in English | MEDLINE | ID: mdl-32256024

ABSTRACT

Graph convolutional neural networks (GCNs) embed nodes in a graph into Euclidean space, which has been shown to incur a large distortion when embedding real-world graphs with scale-free or hierarchical structure. Hyperbolic geometry offers an exciting alternative, as it enables embeddings with much smaller distortion. However, extending GCNs to hyperbolic geometry presents several unique challenges because it is not clear how to define neural network operations, such as feature transformation and aggregation, in hyperbolic space. Furthermore, since input features are often Euclidean, it is unclear how to transform the features into hyperbolic embeddings with the right amount of curvature. Here we propose the Hyperbolic Graph Convolutional Neural Network (HGCN), the first inductive hyperbolic GCN that leverages both the expressiveness of GCNs and hyperbolic geometry to learn inductive node representations for hierarchical and scale-free graphs. We derive GCN operations in the hyperboloid model of hyperbolic space and map Euclidean input features to embeddings in hyperbolic spaces with a different trainable curvature at each layer. Experiments demonstrate that HGCN learns embeddings that preserve hierarchical structure and achieves improved performance compared to Euclidean analogs, even with very low-dimensional embeddings: compared to state-of-the-art GCNs, HGCN achieves an error reduction of up to 63.1% in ROC AUC for link prediction and of up to 47.5% in F1 score for node classification, also improving the state of the art on the Pubmed dataset.
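
The general recipe described here can be sketched as follows: Euclidean features are mapped onto the hyperboloid with the exponential map at the origin, the linear transform and neighborhood aggregation are carried out in the tangent space at the origin, and the result is mapped back. This is a simplified tangent-space variant for illustration, not the exact HGCN operations; the curvature handling and function names are assumptions.

```python
import torch

def expmap0(v, c):
    """Exponential map at the hyperboloid origin (curvature -c): treat the
    Euclidean feature v as a tangent vector and lift it onto the hyperboloid."""
    sqrt_c = c ** 0.5
    norm = torch.clamp(v.norm(dim=-1, keepdim=True), min=1e-7)
    x0 = torch.cosh(sqrt_c * norm) / sqrt_c              # time-like coordinate
    xr = torch.sinh(sqrt_c * norm) * v / (sqrt_c * norm)  # space-like coordinates
    return torch.cat([x0, xr], dim=-1)

def logmap0(x, c):
    """Logarithmic map at the origin: hyperboloid point -> tangent-space vector."""
    sqrt_c = c ** 0.5
    x0, xr = x[..., :1], x[..., 1:]
    r = torch.clamp(xr.norm(dim=-1, keepdim=True), min=1e-7)
    d = torch.acosh(torch.clamp(sqrt_c * x0, min=1.0 + 1e-7)) / sqrt_c
    return d * xr / r

class TangentSpaceGraphConv(torch.nn.Module):
    """Simplified hyperbolic graph convolution: log-map to the tangent space,
    linearly transform and mean-aggregate there, then exp-map back."""
    def __init__(self, in_dim, out_dim, c=1.0):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, out_dim)
        self.c = c

    def forward(self, x_hyp, adj):
        h = self.lin(logmap0(x_hyp, self.c))          # transform in tangent space
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        h = adj @ h / deg                              # mean neighborhood aggregation
        return expmap0(h, self.c)                      # map back to the hyperboloid
```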

3.
Adv Neural Inf Process Syst ; 32: 9240-9251, 2019 Dec.
Article in English | MEDLINE | ID: mdl-32265580

ABSTRACT

Graph Neural Networks (GNNs) are a powerful tool for machine learning on graphs. GNNs combine node feature information with the graph structure by recursively passing neural messages along edges of the input graph. However, incorporating both graph structure and feature information leads to complex models, and explaining predictions made by GNNs remains unsolved. Here we propose GnnExplainer, the first general, model-agnostic approach for providing interpretable explanations for the predictions of any GNN-based model on any graph-based machine learning task. Given an instance, GnnExplainer identifies a compact subgraph structure and a small subset of node features that play a crucial role in the GNN's prediction. Further, GnnExplainer can generate consistent and concise explanations for an entire class of instances. We formulate GnnExplainer as an optimization task that maximizes the mutual information between a GNN's prediction and the distribution of possible subgraph structures. Experiments on synthetic and real-world graphs show that our approach can identify important graph structures as well as node features, and outperforms alternative baseline approaches by up to 43.0% in explanation accuracy. GnnExplainer provides a variety of benefits, from the ability to visualize semantically relevant structures, to interpretability, to insight into the errors of faulty GNNs.
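
A minimal sketch of this kind of optimization, assuming a PyTorch-style GNN that accepts an `edge_weight` argument: a sigmoid-activated mask over edges is trained so that the masked subgraph preserves the model's prediction (a tractable surrogate for the mutual-information objective), with penalties that keep the mask compact and near-binary. All hyperparameter values here are illustrative.

```python
import torch
import torch.nn.functional as F

def explain_node(gnn, x, edge_index, node_idx, target_class,
                 epochs=100, lr=0.01, size_coef=0.005, ent_coef=1.0):
    """Sketch of an explainer-style edge-mask optimization for one node's prediction.
    Returns per-edge importance scores in [0, 1]."""
    edge_mask = torch.nn.Parameter(torch.randn(edge_index.size(1)) * 0.1)
    opt = torch.optim.Adam([edge_mask], lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        m = torch.sigmoid(edge_mask)
        log_probs = F.log_softmax(gnn(x, edge_index, edge_weight=m), dim=-1)
        # negative log-likelihood of the model's own prediction under the masked graph
        pred_loss = -log_probs[node_idx, target_class]
        size_loss = size_coef * m.sum()                       # prefer compact subgraphs
        ent = -(m * torch.log(m + 1e-8) + (1 - m) * torch.log(1 - m + 1e-8))
        ent_loss = ent_coef * ent.mean()                      # push mask towards 0/1
        (pred_loss + size_loss + ent_loss).backward()
        opt.step()
    return torch.sigmoid(edge_mask).detach()                  # edge importance scores
```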

4.
J Biomech ; 49(16): 4113-4118, 2016 12 08.
Article in English | MEDLINE | ID: mdl-27789037

ABSTRACT

Analyses of muscular activity during rhythmic behaviors provide critical data for biomechanical studies. Electrical potentials measured from muscles using electromyography (EMG) require discrimination of noise regions as the first step in analysis. An experienced analyst can accurately identify the onset and offset of EMG, but this process takes hours for even a short (10-15 s) record of rhythmic EMG bursts. Existing computational techniques reduce this time but have limitations. These include a universal threshold for delimiting noise regions (i.e., a single signal value for identifying EMG signal onset and offset), pre-processing with wide time intervals that dampens sensitivity to EMG signal characteristics, poor performance when a low-frequency component (e.g., DC offset) is present, and high computational complexity leading to poor time efficiency. We present a new statistical method and MATLAB script (EMG-Extractor) with an adaptive algorithm that discriminates noise regions from EMG while avoiding these limitations and allowing multi-channel datasets to be processed. We evaluate the EMG-Extractor on EMG data from mammalian jaw-adductor muscles during mastication, a rhythmic behavior typified by low-amplitude onsets/offsets and complex signal patterns. The EMG-Extractor consistently and accurately distinguishes noise from EMG in a manner similar to that of an experienced analyst. It outputs the raw EMG signal regions in a form ready for further analysis.
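
For context, a generic adaptive onset/offset detector of the kind such tools build on might look like the sketch below (written in Python rather than the paper's MATLAB, and not the EMG-Extractor algorithm itself): the signal is detrended to remove DC offset, an RMS envelope is computed, and a per-channel threshold is derived from baseline noise statistics. The window length and threshold factor are illustrative.

```python
import numpy as np

def detect_bursts(emg, fs, win_ms=25.0, k=3.0):
    """Adaptive onset/offset detection on a single EMG channel.
    emg: 1-D signal array; fs: sampling rate in Hz.
    Returns onset and offset times in seconds."""
    x = emg - np.mean(emg)                         # remove DC offset
    win = max(1, int(fs * win_ms / 1000.0))
    # moving RMS envelope
    env = np.sqrt(np.convolve(x ** 2, np.ones(win) / win, mode="same"))
    # estimate baseline noise from the quietest decile of the envelope
    noise = env[env <= np.quantile(env, 0.10)]
    thr = noise.mean() + k * noise.std()           # channel-specific threshold
    active = env > thr
    edges = np.diff(active.astype(int))            # rising = onset, falling = offset
    onsets = np.flatnonzero(edges == 1) + 1
    offsets = np.flatnonzero(edges == -1) + 1
    return onsets / fs, offsets / fs
```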


Subject(s)
Electromyography/methods; Algorithms; Animals; Humans; Mastication/physiology; Masticatory Muscles/physiology; Periodicity; Signal Processing, Computer-Assisted; Signal-To-Noise Ratio