Results 1 - 12 of 12
1.
Article in English | MEDLINE | ID: mdl-38687671

ABSTRACT

The proliferation of Internet-of-Things (IoT) technologies in modern smart society enables massive data exchange for intelligent services. Securing these highly sensitive exchanges efficiently is essential, which creates demand for lightweight models and algorithms that fit the limited computational capability of individual IoT devices. In this study, a graph representation learning model named reconstructed graph with global-local distillation (RG-GLD), which seamlessly integrates graph neural network (GNN) and knowledge distillation (KD) techniques, is designed for lightweight anomaly detection across IoT communication networks. In particular, a new graph reconstruction strategy, which treats data communications as nodes in a directed graph and connects edges according to two specifically defined rules, is devised to facilitate graph representation learning for secure and efficient IoT communication. Structural and traffic features are then extracted from the graph data and flow data, respectively, using graph attention network (GAT) and multilayer perceptron (MLP) techniques; these benefit the GNN-based KD process through more effective feature fusion and representation at both the structural and data levels across dynamic IoT networks. Furthermore, a lightweight local subgraph preservation mechanism, improved by the graph attention mechanism and a downsampling scheme to better exploit topological information, and a global information alignment based on the self-attention mechanism to preserve global information, are incorporated into a refined graph-attention-based KD scheme. Compared with four baseline methods in experiments on two public datasets, the proposed model improves the efficiency of knowledge transfer, achieving higher classification accuracy at lower computational load, and can be deployed for lightweight anomaly detection in sustainable IoT computing environments.
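The distillation objective underlying approaches like RG-GLD builds on the standard soft-target KD loss of Hinton et al.; the numpy sketch below shows that generic loss only, not the authors' RG-GLD implementation, and the temperature and weighting values are illustrative assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T yields softer distributions.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Soft-target KD: alpha * KL(teacher || student) + (1 - alpha) * hard CE."""
    p_t = softmax(teacher_logits, T)          # soft teacher targets
    p_s = softmax(student_logits, T)          # soft student predictions
    # KL divergence, scaled by T^2 to keep gradient magnitudes comparable
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    p_hard = softmax(student_logits, 1.0)
    ce = -np.log(p_hard[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(alpha * (T ** 2) * kl + (1 - alpha) * ce))
```

A student whose logits match the teacher incurs only the hard cross-entropy term; diverging from the teacher adds the KL penalty.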

2.
Article in English | MEDLINE | ID: mdl-38498735

ABSTRACT

Identifying unseen faults is a crux of the digital transformation of process manufacturing. The ever-changing manufacturing process requires preset models to cope with unseen problems. However, most current works focus on recognizing objects seen during the training phase, and conventional zero-shot recognition methods perform poorly when applied directly to these tasks due to the different scenarios and limited generalizability. This article presents a tensor-based zero-shot fault diagnosis framework, termed MetaEvolver, dedicated to improving fault diagnosis accuracy and unseen-domain generalizability in practical process manufacturing scenarios. MetaEvolver learns to evolve dual prototype distributions for each uncertain meta-domain from seen faults and then adapts to unseen faults. We first propose the concept of the uncertain meta-domain and construct corresponding sample prototypes under the guidance of class-level attributes, which produces sample-attribute alignment at the prototype level. MetaEvolver further evolves the uncertain meta-domain dual prototypes collaboratively by injecting the prototype distribution information of the other modality, boosting sample-attribute alignment at the distribution level. Building on the uncertain meta-domain strategy, MetaEvolver achieves knowledge transfer and unseen-domain generalization through the optimization of several devised loss functions. Comprehensive experimental results on five process manufacturing data groups and five zero-shot benchmarks demonstrate that MetaEvolver has great superiority and potential for zero-shot fault diagnosis in smart process manufacturing.
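Prototype-based recognition of the kind MetaEvolver builds on can be illustrated with a plain nearest-prototype classifier; the numpy sketch below shows only the basic mechanism (mean embeddings per class, Euclidean assignment, both assumptions for illustration), not the paper's dual-prototype evolution.

```python
import numpy as np

def class_prototypes(embeddings, labels):
    # One prototype per class: the mean embedding of that class's samples.
    classes = np.unique(labels)
    protos = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def nearest_prototype(x, prototypes):
    # Assign each sample to the closest prototype (Euclidean distance).
    d = np.linalg.norm(x[:, None, :] - prototypes[None, :, :], axis=-1)
    return d.argmin(axis=1)
```

In a zero-shot setting, prototypes for unseen classes would be synthesized from class-level attributes rather than averaged from samples.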

3.
IEEE Trans Cybern ; 54(5): 2683-2695, 2024 May.
Article in English | MEDLINE | ID: mdl-38512748

ABSTRACT

Smart manufacturing has been transforming toward industrial digitalization integrated with various advanced technologies. The metaverse has been evolving as a next-generation paradigm of a digital space extended and augmented by reality, in which users are interconnected for various virtual activities. In view of the possibilities the metaverse may bring, it is envisioned that an industrial metaverse should be integrated into smart manufacturing to upgrade industry toward more visible, intelligent, and efficient production. Therefore, a conceptual model, named the IMverse Model, and novel characteristics of the industrial metaverse for smart manufacturing are proposed in this article. In addition, an industrial metaverse architecture, named the IMverse Architecture, is proposed, involving several key enabling technologies. Typical innovative applications of the industrial metaverse throughout the whole product life cycle for smart manufacturing are presented with insights. Nonetheless, the industrial metaverse still faces limitations and is far from implementation. Thus, challenges and open issues of the industrial metaverse for smart manufacturing are discussed, and an outlook is provided for further research and application.

4.
Article in English | MEDLINE | ID: mdl-38170656

ABSTRACT

Recently, deep learning-based models such as the transformer have achieved significant performance in industrial remaining useful life (RUL) prediction due to their strong representation ability. In many industrial practices, RUL prediction algorithms are deployed on edge devices for real-time response. However, the high computational cost of deep learning models makes it difficult to meet the requirements of edge intelligence. In this article, a lightweight group transformer with multihierarchy time-series reduction (GT-MRNet) is proposed to alleviate this problem. Unlike most existing RUL methods, which compute over the entire time series, GT-MRNet adaptively selects only the necessary time steps for RUL computation. First, a lightweight group transformer is constructed to extract features by employing group linear transformations with significantly fewer parameters. Then, a time-series reduction strategy is proposed to adaptively filter out unimportant time steps at each layer. Finally, a multihierarchy learning mechanism is developed to further stabilize the performance of time-series reduction. Extensive experimental results on datasets collected under real-world conditions demonstrate that the proposed method reduces parameters by up to 74.7% and computation cost by up to 91.8% without sacrificing accuracy.
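The core idea of time-series reduction, keeping only the high-importance time steps, can be sketched as follows. In GT-MRNet the importance scores would come from the model itself; here they are simply an input, and the keep ratio is an illustrative assumption.

```python
import numpy as np

def reduce_time_steps(x, scores, keep_ratio=0.5):
    """Keep the highest-scoring time steps, preserving temporal order.
    x: (T, d) sequence; scores: (T,) importance per step."""
    T = x.shape[0]
    k = max(1, int(T * keep_ratio))
    idx = np.sort(np.argsort(scores)[-k:])  # top-k steps, restored to time order
    return x[idx], idx
```

Applied at each layer, this shrinks the sequence length (and hence the attention cost) progressively through the network.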

5.
Article in English | MEDLINE | ID: mdl-37695949

ABSTRACT

Graph neural networks (GNNs) have shown great ability in modeling graphs; however, their performance degrades significantly when noisy edges connect nodes from different classes. To alleviate the negative effect of noisy edges on neighborhood aggregation, some recent GNNs propose to predict the label agreement between node pairs within a single network. However, predicting the label agreement of edges across different networks has not yet been investigated. Our work makes the pioneering attempt to study a novel problem of cross-network homophilous and heterophilous edge classification (CNHHEC) and proposes a novel domain-adaptive graph attention-supervised network (DGASN) to effectively tackle the CNHHEC problem. First, DGASN adopts a multihead graph attention network (GAT) as the GNN encoder, which jointly trains node embeddings and edge embeddings via the node classification and edge classification losses. As a result, label-discriminative embeddings can be obtained to distinguish homophilous edges from heterophilous edges. In addition, DGASN applies direct supervision to graph attention learning based on the observed edge labels from the source network, thus lowering the negative effects of heterophilous edges while enlarging the positive effects of homophilous edges during neighborhood aggregation. To facilitate knowledge transfer across networks, DGASN employs adversarial domain adaptation to mitigate domain divergence. Extensive experiments on real-world benchmark datasets demonstrate that the proposed DGASN achieves state-of-the-art performance in CNHHEC.
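As a toy illustration of homophilous-versus-heterophilous edge scoring: endpoint embeddings that agree suggest a homophilous (same-class) edge. The cosine-similarity scorer below is a deliberate simplification for illustration, not DGASN's supervised edge classifier.

```python
import numpy as np

def homophily_score(z, edges):
    """Cosine similarity of the two endpoint embeddings of each edge.
    z: (N, d) node embeddings; edges: (E, 2) index pairs.
    High score -> likely homophilous; low score -> likely heterophilous."""
    a = z[edges[:, 0]]
    b = z[edges[:, 1]]
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-12
    return num / den
```

DGASN instead learns this discrimination with an explicit edge-classification loss, which is what makes the embeddings label-discriminative.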

6.
IEEE Trans Neural Netw Learn Syst ; 34(10): 6861-6871, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37030753

ABSTRACT

A wealth of stream learning methods is emerging to support artificial intelligence in streaming-data scenarios. However, when each data stream is oriented to a different target space, approaches designed for a single shared task are no longer applicable. Because target spaces are inconsistent across tasks, previous approaches fail on new streaming tasks, or it is impracticable to train them from scratch with only a few labeled samples available at the beginning. To this end, we propose an adaptive learning scheme for few-shot streaming tasks that draws on tensor methods and meta-learning. This adaptive scheme helps mitigate domain shift when a new task has few labeled samples. We elaborate a novel tensor-empowered attention mechanism derived from nonlocal neural networks, which captures long-range dependencies and preserves high-dimensional structure to refine the global features of streaming tasks. Furthermore, we develop a fine-grained similarity computing approach that better characterizes the differences across few-shot streaming tasks. To show the superiority of our method, we carried out extensive experiments on three popular few-shot datasets to simulate streaming tasks and evaluate adaptation performance. The results show that our method achieves competitive performance on few-shot streaming tasks compared with the state of the art (SOTA).
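The nonlocal attention this work derives from computes, for every position, a similarity-weighted sum over all positions, which is how long-range dependencies are captured in a single step. The numpy sketch below omits the learned projections a full nonlocal block would include; treating the input as a flat set of feature vectors is an assumption for illustration.

```python
import numpy as np

def nonlocal_attention(x):
    """Bare nonlocal (self-attention) block over a set of features.
    x: (n, d). Each output row is the input plus a softmax-weighted
    sum of all rows, weighted by pairwise similarity."""
    sim = x @ x.T / np.sqrt(x.shape[1])         # pairwise affinities
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(sim)
    w /= w.sum(axis=1, keepdims=True)           # rows sum to 1
    return x + w @ x                            # residual connection
```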

7.
Article in English | MEDLINE | ID: mdl-37018339

ABSTRACT

Smart healthcare has emerged to provide healthcare services using data analysis techniques. In particular, clustering plays an indispensable role in analyzing healthcare records. However, large multi-modal healthcare data poses great challenges for clustering: traditional approaches struggle to obtain desirable results because they cannot handle multi-modal data. This paper presents a new high-order multi-modal learning approach (F-HoFCM) that combines multi-modal deep learning with the Tucker decomposition. Furthermore, we propose an edge-cloud-aided private scheme that improves clustering efficiency by embedding part of the computation in edge resources. Specifically, the computationally intensive tasks, such as parameter updating with the high-order back-propagation algorithm and clustering with high-order fuzzy c-means, are processed centrally in the cloud, while the other tasks, such as multi-modal data fusion and Tucker decomposition, are performed at the edge. Since feature fusion and Tucker decomposition are nonlinear operations, the cloud cannot recover the raw data, thus protecting privacy. Experimental results show that the presented approach produces significantly more accurate results than the existing high-order fuzzy c-means (HOFCM) on multi-modal healthcare datasets, and that clustering efficiency is significantly improved by the developed edge-cloud-aided private healthcare system.
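For reference, the classical fuzzy c-means that HOFCM and F-HoFCM extend can be written compactly in numpy; the fuzzifier m, iteration count, and random initialization below are illustrative assumptions.

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, iters=100, seed=0):
    """Classical fuzzy c-means. Returns (centers, membership), where
    membership[i, j] is the degree to which sample i belongs to cluster j."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)            # rows are fuzzy memberships
    for _ in range(iters):
        um = u ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]   # weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=-1) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))       # inverse-distance memberships
        u /= u.sum(axis=1, keepdims=True)
    return centers, u
```

The soft memberships, rather than hard assignments, are what the high-order variants generalize to tensor-represented multi-modal data.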

8.
IEEE Trans Neural Netw Learn Syst ; 34(10): 7286-7298, 2023 Oct.
Article in English | MEDLINE | ID: mdl-35230953

ABSTRACT

Cyber-physical-social systems (CPSS), an emerging cross-disciplinary research area, combine cyber-physical systems (CPS) with social networking to provide personalized services for humans. CPSS big data, recording various aspects of human lives, should be processed to mine valuable information for CPSS services. To deal with CPSS big data efficiently, artificial intelligence (AI), an increasingly important technology, is used for CPSS data processing and analysis. Meanwhile, the rapid development of edge devices with fast processors and large memories allows local edge computing to serve as a powerful real-time complement to global cloud computing. Therefore, to facilitate the processing and analysis of CPSS big data from a multi-attribute perspective, a cloud-edge-aided quantized tensor-train distributed long short-term memory (QTT-DLSTM) method is presented in this article. First, a tensor is used to represent the multi-attribute CPSS big data, which is decomposed into QTT form to facilitate distributed training and computing. Second, a distributed cloud-edge computing model is used to process the CPSS data systematically, with global large-scale data processed in the cloud and local small-scale data processed at the edge. Third, a distributed computing strategy improves training efficiency by partitioning the weight matrix and the large volume of input data in QTT form. Finally, the performance of the proposed QTT-DLSTM method is evaluated in experiments on a public discrete manufacturing process dataset, a Li-ion battery dataset, and a public social dataset.
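Tensor-train (TT) decomposition, the basis of the QTT representation, factors a d-way tensor into a chain of 3-way cores via successive truncated SVDs. The numpy sketch below shows plain TT-SVD without the additional quantization (reshaping into many small modes) that the QTT form applies; the fixed maximum rank is an illustrative assumption.

```python
import numpy as np

def tt_decompose(t, max_rank):
    """TT-SVD: decompose a d-way array into TT cores G_k of shape
    (r_{k-1}, n_k, r_k), so that contracting the cores recovers t."""
    shape = t.shape
    cores, r = [], 1
    mat = t.reshape(r * shape[0], -1)
    for k in range(len(shape) - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        rk = min(max_rank, len(s))                       # truncate the rank
        cores.append(u[:, :rk].reshape(r, shape[k], rk))
        mat = (np.diag(s[:rk]) @ vt[:rk]).reshape(rk * shape[k + 1], -1)
        r = rk
    cores.append(mat.reshape(r, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    # Contract the TT chain over the rank indices to rebuild the tensor.
    out = cores[0]
    for g in cores[1:]:
        out = np.tensordot(out, g, axes=([-1], [0]))
    return out.reshape([c.shape[1] for c in cores])
```

Storing the small cores instead of the full tensor is what makes distributed training over partitioned weights tractable.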

9.
Article in English | MEDLINE | ID: mdl-31056517

ABSTRACT

Smart Chinese medicine has emerged to contribute to the evolution of healthcare and medical services by applying machine learning, together with advanced computing techniques such as cloud computing, to computer-aided diagnosis and treatment in health engineering and informatics. Specifically, smart Chinese medicine is considered to have the potential to treat difficult and complicated diseases such as diabetes and cancer. Unfortunately, it has made very limited progress in the past few years. In this paper, we present a unified smart Chinese medicine framework based on an edge-cloud computing system. The objective of the framework is to achieve computer-aided syndrome differentiation and prescription recommendation, and thus to provide pervasive, personalized, and patient-centered services in healthcare and medicine. To accomplish this objective, we integrate deep learning and deep reinforcement learning into traditional Chinese medicine. Furthermore, we propose a multi-modal deep computation model for syndrome recognition, a crucial part of syndrome differentiation. Finally, we conduct experiments to validate the proposed model by comparing it with the stacked auto-encoder and a multi-modal deep learning model for syndrome recognition of hypertension and cold.


Subject(s)
Cloud Computing , Delivery of Health Care/methods , Medical Informatics/methods , Medicine, Chinese Traditional , Humans , Machine Learning
10.
IEEE Trans Neural Netw Learn Syst ; 31(9): 3721-3731, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32584772

ABSTRACT

Product quality prediction, an important issue in industrial intelligence, is a typical task of industrial process analysis, in which product quality is evaluated and the results fed back for industrial process adjustment. Data-driven methods, which use predictive models to analyze various industrial data, have received considerable attention in recent years. However, to obtain accurate predictions, it is essential to extract quality features from industrial data, which include variables generated by the supply chain and by the time-variant machining process. In this article, a data-driven method based on a wide-deep-sequence (WDS) model is proposed to provide reliable quality prediction for industrial processes with different types of industrial data. To handle the high redundancy of industrial data, data reduction is first conducted on different variables using different techniques. An improved wide-deep (WD) model is then proposed to extract quality features from key time-invariant variables, and a long short-term memory (LSTM)-based sequence model is presented to explore quality information in time-domain features. Under a joint training strategy, these models are combined and optimized with a designed penalty mechanism for unreliable predictions, particularly to reduce defective products. Finally, experiments on a real-world manufacturing process dataset demonstrate the effectiveness of the proposed method in product quality prediction.
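The wide-deep combination pairs a linear "wide" path over engineered features with an MLP "deep" path over raw features and sums their outputs. The numpy forward pass below is a minimal sketch under assumed shapes and a single-output regression head, not the paper's improved WD model.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def wide_deep_forward(x_wide, x_deep, w_wide, deep_weights, w_out):
    """Wide & Deep forward pass.
    x_wide: (B, p) engineered features -> linear logit.
    x_deep: (B, q) raw features -> MLP -> logit.
    The two logits are summed into one prediction per sample."""
    wide_logit = x_wide @ w_wide          # linear (memorization) path
    h = x_deep
    for w in deep_weights:                # MLP (generalization) path
        h = relu(h @ w)
    return wide_logit + h @ w_out
```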

11.
J Clin Neurophysiol ; 27(1): 17-24, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20087208

ABSTRACT

The authors have developed a new approach combining wavelet denoising and principal component analysis (PCA) to reduce the number of trials required for efficient extraction of event-related potentials (ERPs). ERPs were initially extracted using wavelet denoising to enhance the signal-to-noise ratio of the raw EEG measurements. Principal components accounting for 80% of the total variance were then selected as the ERP subspace, and the ERPs were reconstructed from these components. Computer simulations showed that the combined approach provided estimates with higher signal-to-noise ratio and lower root-mean-squared error than either method alone. The authors further tested the approach on single-trial ERP extraction during an emotional task and on the analysis of brain responses to emotional stimuli. The experimental results demonstrated the effectiveness of the combined approach in ERP extraction and further supported the view that emotional stimuli are processed more intensely.
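The PCA step described above, keeping the components that explain 80% of total variance and reconstructing from them, can be sketched in numpy. The wavelet-denoising stage is omitted, and the trials-by-samples data layout is an assumption for illustration.

```python
import numpy as np

def pca_denoise(trials, var_keep=0.80):
    """Project epoched data onto the top principal components explaining
    `var_keep` of total variance, then reconstruct.
    trials: (n_trials, n_samples). Returns (reconstruction, n_components)."""
    mean = trials.mean(axis=0)
    X = trials - mean                         # center before SVD
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    var = s ** 2                              # variance per component
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_keep)) + 1
    recon = (u[:, :k] * s[:k]) @ vt[:k] + mean
    return recon, k
```

Because trial-to-trial noise spreads across many low-variance components, discarding them suppresses noise while keeping the shared ERP waveform.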


Subject(s)
Brain/physiology , Electroencephalography/methods , Evoked Potentials , Signal Processing, Computer-Assisted , Algorithms , Artifacts , Computer Simulation , Emotions/physiology , Event-Related Potentials, P300 , Humans , Neuropsychological Tests , Photic Stimulation , Principal Component Analysis , Time Factors , Visual Perception/physiology