Results 1 - 4 of 4

1.
Article in English | MEDLINE | ID: mdl-37037244

ABSTRACT

Weakly supervised video anomaly detection (WS-VAD) aims to identify the snippets involving anomalous events in long untrimmed videos, using only video-level binary labels. A typical paradigm among existing WS-VAD methods is to employ multiple modalities as inputs, e.g., RGB, optical flow, and audio, since they provide sufficient discriminative cues that are robust to diverse, complicated real-world scenes. However, such a pipeline relies heavily on the availability of multiple modalities and is computationally and storage intensive when processing long sequences, which limits its use in some applications. To address this dilemma, we propose a privileged knowledge distillation (KD) framework dedicated to the WS-VAD task, which retains the benefits of exploiting additional modalities while avoiding the need for multimodal data in the inference phase. We argue that the performance of the privileged KD framework mainly depends on two factors: 1) the effectiveness of the multimodal teacher network and 2) the completeness of the useful information transfer. To obtain a reliable teacher network, we propose a cross-modal interactive learning strategy and an anomaly-normal discrimination loss, which learn task-specific cross-modal features and encourage the separability of anomalous and normal representations, respectively. Furthermore, we design both representation- and logits-level distillation loss functions, which force the unimodal student network to distill abundant privileged knowledge from the well-trained multimodal teacher network in a snippet-to-video fashion. Extensive experimental results on three public benchmarks demonstrate that the proposed privileged KD framework can train a lightweight yet effective detector for localizing anomalous events under the supervision of video-level annotations.
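
As an illustration of the two distillation terms this abstract names, the following is a minimal PyTorch sketch; the tensor shapes, temperature, loss forms, and snippet-to-video aggregation are assumptions for exposition, not the paper's exact formulation.

```python
# A hedged sketch of representation- and logits-level distillation losses
# for a unimodal student learning from a multimodal teacher (assumed shapes).
import torch
import torch.nn.functional as F

def representation_kd_loss(student_feat, teacher_feat):
    """Representation-level distillation: align per-snippet student features
    with the frozen multimodal teacher's features (MSE on normalized features)."""
    s = F.normalize(student_feat, dim=-1)   # (batch, snippets, dim)
    t = F.normalize(teacher_feat, dim=-1)
    return F.mse_loss(s, t.detach())

def logits_kd_loss(student_logits, teacher_logits, tau=2.0):
    """Logits-level distillation: soften snippet-level anomaly scores with a
    temperature (tau is an assumed hyperparameter) and match the student to
    the teacher with a binary cross-entropy term."""
    s = torch.sigmoid(student_logits / tau)
    t = torch.sigmoid(teacher_logits / tau).detach()
    return F.binary_cross_entropy(s, t)

# Usage with random tensors standing in for snippet features and anomaly scores.
student_feat = torch.randn(4, 32, 512)   # 4 videos, 32 snippets, 512-d features
teacher_feat = torch.randn(4, 32, 512)
student_logits = torch.randn(4, 32)
teacher_logits = torch.randn(4, 32)
loss = representation_kd_loss(student_feat, teacher_feat) \
     + logits_kd_loss(student_logits, teacher_logits)
```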

2.
Article in English | MEDLINE | ID: mdl-36094995

ABSTRACT

The popularity of wearable devices has increased the demand for research on first-person activity recognition. However, most current first-person activity datasets are built on the assumption that only the human-object interaction (HOI) activities performed by the camera-wearer are captured in the field of view. Since humans live in complicated scenarios, third-person activities performed by other people are likely to appear alongside the first-person activities. Analyzing and recognizing these two types of activities occurring simultaneously in a scene is important for the camera-wearer to understand the surrounding environment. To facilitate research on concurrent first- and third-person activity recognition (CFT-AR), we first created a new activity dataset, PolyU concurrent first- and third-person (CFT) Daily, which exhibits distinct properties and challenges compared with previous activity datasets. Since temporal asynchronism and an appearance gap usually exist between the first- and third-person activities, it is crucial to learn robust representations from all the activity-related spatio-temporal positions. Thus, we explore both holistic scene-level and local instance-level (person-level) features to provide comprehensive and discriminative patterns for recognizing both first- and third-person activities. On the one hand, the holistic scene-level features are extracted by a 3-D convolutional neural network, which is trained to mine shared and sample-unique semantics between video pairs via two well-designed attention-based modules and a self-knowledge distillation (SKD) strategy. On the other hand, we further leverage the extracted holistic features to guide the learning of instance-level features in a disentangled fashion, which aims to discover both spatially conspicuous patterns and temporally varied, yet critical, cues. Experimental results on the PolyU CFT Daily dataset validate that our method achieves state-of-the-art performance.
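
To make the scene-level/instance-level split concrete, here is a minimal PyTorch sketch of fusing a holistic scene feature with per-person instance features under scene-guided attention; the module names, dimensions, fusion scheme, and class counts are illustrative assumptions, not the paper's design.

```python
# A hedged sketch: scene-guided pooling of instance features, with separate
# heads for first-person and third-person activity logits (assumed layout).
import torch
import torch.nn as nn

class SceneInstanceFusion(nn.Module):
    def __init__(self, scene_dim=1024, inst_dim=512, num_fp=10, num_tp=10):
        super().__init__()
        self.inst_proj = nn.Linear(inst_dim, scene_dim)
        # The scene feature queries the instance features, loosely echoing the
        # "holistic features guide instance-level learning" idea above.
        self.attn = nn.MultiheadAttention(scene_dim, num_heads=4, batch_first=True)
        self.fp_head = nn.Linear(scene_dim * 2, num_fp)  # first-person classes
        self.tp_head = nn.Linear(scene_dim * 2, num_tp)  # third-person classes

    def forward(self, scene_feat, inst_feats):
        # scene_feat: (B, scene_dim) pooled 3-D CNN feature for the clip
        # inst_feats: (B, N, inst_dim) features for N detected persons
        inst = self.inst_proj(inst_feats)
        query = scene_feat.unsqueeze(1)                 # (B, 1, scene_dim)
        attended, _ = self.attn(query, inst, inst)      # scene-guided pooling
        fused = torch.cat([scene_feat, attended.squeeze(1)], dim=-1)
        return self.fp_head(fused), self.tp_head(fused)

model = SceneInstanceFusion()
fp_logits, tp_logits = model(torch.randn(2, 1024), torch.randn(2, 5, 512))
```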

3.
IEEE Trans Image Process ; 30: 6081-6095, 2021.
Article in English | MEDLINE | ID: mdl-34185645

ABSTRACT

Invertible image decolorization is a useful color compression technique for reducing cost in multimedia systems. It aims to synthesize faithful grayscale images from color images that can be fully restored to the original color version. In this paper, we propose a novel color compression method that produces invertible grayscale images using invertible neural networks (INNs). Our key idea is to separate the color information from color images and encode it into a set of Gaussian-distributed latent variables via INNs. In this way, we force the color information lost in grayscale generation to be independent of the input color image. Therefore, the original color version can be efficiently recovered by randomly re-sampling a new set of Gaussian-distributed variables, together with the synthetic grayscale, through the reverse mapping of the INNs. To effectively learn the invertible grayscale, we introduce the wavelet transformation into a UNet-like INN architecture and further present a quantization embedding to prevent information loss during format conversion, which improves the generalizability of the framework in real-world scenarios. Extensive experiments on three widely used benchmarks demonstrate that the proposed method achieves state-of-the-art qualitative and quantitative results, showing its superiority for multimedia communication and storage systems.
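
For readers unfamiliar with INNs, the sketch below shows an affine coupling layer, the standard building block that makes the forward/reverse mapping exactly invertible; it illustrates the mechanism only and omits the paper's wavelet transform, UNet-like arrangement, and quantization embedding, which would be assumptions on my part.

```python
# A hedged sketch of an affine coupling layer: channels split into (x1, x2);
# x1 passes through unchanged and parameterizes an invertible transform of x2.
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    def __init__(self, channels, hidden=64):
        super().__init__()
        half = channels // 2
        self.net = nn.Sequential(
            nn.Conv2d(half, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 2 * half, 3, padding=1),
        )

    def forward(self, x, reverse=False):
        x1, x2 = x.chunk(2, dim=1)
        log_s, t = self.net(x1).chunk(2, dim=1)
        s = torch.sigmoid(log_s + 2.0)          # keep the scale positive, stable
        x2 = (x2 - t) / s if reverse else s * x2 + t
        return torch.cat([x1, x2], dim=1)

layer = AffineCoupling(channels=4)
x = torch.randn(1, 4, 32, 32)
y = layer(x)                      # forward pass
x_rec = layer(y, reverse=True)    # reverse pass reconstructs the input exactly
assert torch.allclose(x, x_rec, atol=1e-5)
```

Because x1 is unchanged, the reverse pass can recompute the same scale and shift from x1 and undo the transform exactly; this is what lets the synthetic grayscale plus re-sampled latents recover the color image through the reverse mapping.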

4.
J Biomed Inform ; 116: 103737, 2021 04.
Article in English | MEDLINE | ID: mdl-33737207

ABSTRACT

Named entity recognition (NER) is a fundamental task in Chinese natural language processing (NLP). Recently, Chinese clinical NER has also attracted continuous research attention because it is an essential preparation for clinical data mining. The prevailing deep learning method for Chinese clinical NER is based on the long short-term memory (LSTM) network. However, the recurrent structure of LSTM makes it difficult to exploit GPU parallelism, which lowers model efficiency to some extent. Moreover, when a sentence is long, LSTM can hardly capture global context information. To address these issues, we propose a novel and efficient model based entirely on a convolutional neural network (CNN), which can fully exploit GPU parallelism to improve model efficiency. We construct a multi-level CNN to capture both short-term and long-term context information, and design a simple attention mechanism to obtain global context information, which is conducive to improving model performance in sequence labeling tasks. In addition, a data augmentation method is proposed to expand the data volume and explore more semantic information. Extensive experiments show that our model achieves competitive performance with higher efficiency compared with other remarkable clinical NER models.


Subject(s)
Electronic Health Records; Neural Networks, Computer; China; Data Mining; Natural Language Processing
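
The sketch below illustrates the kind of recurrence-free tagger this abstract describes: stacked dilated convolutions for multi-level context plus a simple attention layer that injects a global context vector into each token. Hyperparameters, the attention form, and the tag set are assumptions, not the authors' exact configuration.

```python
# A hedged sketch of a multi-level (dilated) CNN sequence tagger with a
# simple global-attention mechanism (assumed vocabulary and tag counts).
import torch
import torch.nn as nn

class CNNTagger(nn.Module):
    def __init__(self, vocab_size, num_tags, emb=128, hid=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        # Growing dilation widens the receptive field, capturing short- and
        # long-term context without recurrence, so the model parallelizes on GPU.
        self.convs = nn.ModuleList([
            nn.Conv1d(emb if i == 0 else hid, hid, 3, padding=d, dilation=d)
            for i, d in enumerate([1, 2, 4])
        ])
        self.attn = nn.Linear(hid, 1)     # simple attention over positions
        self.out = nn.Linear(2 * hid, num_tags)

    def forward(self, tokens):            # tokens: (B, T) character ids
        h = self.emb(tokens).transpose(1, 2)          # (B, emb, T)
        for conv in self.convs:
            h = torch.relu(conv(h))
        h = h.transpose(1, 2)                         # (B, T, hid)
        w = torch.softmax(self.attn(h), dim=1)        # (B, T, 1) weights
        g = (w * h).sum(dim=1, keepdim=True)          # global context vector
        h = torch.cat([h, g.expand(-1, h.size(1), -1)], dim=-1)
        return self.out(h)                            # per-token tag logits

model = CNNTagger(vocab_size=5000, num_tags=9)
logits = model(torch.randint(0, 5000, (2, 40)))       # shape (2, 40, 9)
```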