Results 1 - 20 of 23
1.
Article in English | MEDLINE | ID: mdl-38739517

ABSTRACT

In a point cloud, some regions typically contain nodes from multiple categories, i.e., these regions have both homophilic and heterophilic nodes. However, most existing methods ignore the heterophily of edges when aggregating neighborhood node features, which inevitably mixes in unnecessary information from heterophilic nodes and leads to blurred segmentation boundaries. To address this problem, we model the point cloud as a homophilic-heterophilic graph and propose a graph regulation network (GRN) to produce finer segmentation boundaries. The proposed method adaptively adjusts the propagation mechanism according to the degree of neighborhood homophily. Moreover, we build a prototype feature extraction module, which is utilised to mine the homophily features of nodes from the global prototype space. Theoretically, we prove that our convolution operation can constrain the similarity of representations between nodes based on their degree of homophily. Extensive experiments on fully and weakly supervised point cloud semantic segmentation tasks demonstrate that our method achieves satisfactory performance. Especially under weak supervision, that is, when each sample has only 1%-10% labeled points, the proposed method yields a significant improvement in segmentation performance.
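As a rough illustration of the idea of down-weighting heterophilic neighbors during aggregation (this is not the authors' GRN; the cosine-based gate and the 0.5 self-mixing weight below are assumptions), a minimal NumPy sketch:

```python
# Minimal sketch: gate neighbor aggregation by an estimated edge homophily
# score so dissimilar (heterophilic) neighbors contribute less.
import numpy as np

def cosine(a, b, eps=1e-8):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def homophily_gated_aggregation(X, neighbors):
    """X: (N, D) node features; neighbors: list of index lists per node."""
    out = np.copy(X)
    for i, nbrs in enumerate(neighbors):
        if not nbrs:
            continue
        # Hypothetical gate: map similarity in [-1, 1] to a weight in [0, 1].
        w = np.array([(cosine(X[i], X[j]) + 1.0) / 2.0 for j in nbrs])
        w = w / (w.sum() + 1e-8)
        agg = (w[:, None] * X[nbrs]).sum(axis=0)
        out[i] = 0.5 * X[i] + 0.5 * agg  # keep self-information
    return out

# toy usage
X = np.random.rand(5, 8)
neighbors = [[1, 2], [0], [0, 3], [2, 4], [3]]
print(homophily_gated_aggregation(X, neighbors).shape)  # (5, 8)
```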

2.
IEEE J Biomed Health Inform ; 28(5): 2830-2841, 2024 May.
Article in English | MEDLINE | ID: mdl-38376972

ABSTRACT

Deep learning-based methods have been widely used in medical image segmentation recently. However, existing works usually struggle to simultaneously capture global long-range information from images and topological correlations among feature maps. Furthermore, medical images often suffer from blurred target edges. Accordingly, this paper proposes a novel medical image segmentation framework, a label-decoupled network with spatial-channel graph convolution and a dual attention enhancement mechanism (LADENet for short). It constructs learnable adjacency matrices and utilizes graph convolutions to effectively capture global long-range information over spatial locations and topological dependencies between different channels of an image. A label-decoupled strategy based on the distance transform is then introduced to decouple the original segmentation label into a body label and an edge label, which supervise the body branch and edge branch respectively. In addition, a dual attention enhancement mechanism, with a body attention block in the body branch and an edge attention block in the edge branch, is built to promote the learning of spatial region and boundary features. Besides, a feature interactor is devised to fully exploit the information interaction between the body and edge branches to improve segmentation performance. Experiments on benchmark datasets reveal the superiority of LADENet over state-of-the-art approaches.
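A hedged sketch of what a distance-transform label split can look like (the edge width and thresholds are arbitrary choices here, not LADENet's exact strategy):

```python
# Illustrative label decoupling: split a binary mask into a body label
# (interior) and an edge label (thin rim) via the Euclidean distance transform.
import numpy as np
from scipy.ndimage import distance_transform_edt

def decouple_label(mask, edge_width=2):
    """mask: binary (H, W) segmentation label."""
    dist = distance_transform_edt(mask)                            # distance to background
    body = (dist > edge_width).astype(np.uint8)                    # interior region
    edge = ((dist > 0) & (dist <= edge_width)).astype(np.uint8)    # thin boundary rim
    return body, edge

mask = np.zeros((64, 64), np.uint8)
mask[16:48, 16:48] = 1
body, edge = decouple_label(mask)
print(body.sum(), edge.sum())
```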


Subject(s)
Deep Learning , Humans , Algorithms , Image Processing, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/methods , Databases, Factual
3.
Neural Netw ; 173: 106169, 2024 May.
Article in English | MEDLINE | ID: mdl-38359642

ABSTRACT

Graph neural networks have revealed powerful potential in ranking recommendation. Existing bipartite-graph-based methods for ranking recommendation mainly focus on homogeneous graphs and usually treat user and item nodes as the same kind of node; however, the user-item bipartite graph is inherently heterogeneous. Additionally, different types of nodes have different effects on recommendation, and a good node representation can be learned by successfully differentiating nodes of the same type. In this paper, we develop a node-personalized multi-graph convolutional network (NP-MGCN) for ranking recommendation. It consists of a node importance awareness block, a graph construction module, and a node information propagation and aggregation framework. Specifically, the node importance awareness block encodes nodes using node degree information to highlight the differences between nodes. Subsequently, a graph construction module that fuses Jaccard similarity with a co-occurrence matrix is devised to obtain user-user and item-item graphs, enriching the correlation information between users and between items. Finally, a composite-hop node information propagation and aggregation framework, including single-hop and double-hop branches, is designed. The single-hop branch uses high-order connectivity to aggregate heterogeneous information, while the double-hop branch uses multi-hop dependencies to aggregate homogeneous information. This makes the user and item node embeddings more discriminative and integrates the heterogeneity of different nodes into the model. Experiments on several datasets show that NP-MGCN achieves better recommendation performance than existing methods.
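A small sketch of building a user-user graph from implicit feedback via Jaccard similarity (the top-k sparsification and the co-occurrence fusion step are simplified assumptions, not the paper's exact construction):

```python
# Build a user-user graph from a binary interaction matrix using Jaccard
# similarity, keeping the top-k most similar neighbors per user.
import numpy as np

def jaccard_user_graph(R, k=2):
    """R: (num_users, num_items) binary interaction matrix."""
    inter = R @ R.T                                        # co-occurrence counts
    row_sums = R.sum(axis=1)
    union = row_sums[:, None] + row_sums[None, :] - inter
    sim = np.where(union > 0, inter / np.maximum(union, 1e-8), 0.0)
    np.fill_diagonal(sim, 0.0)
    adj = np.zeros_like(sim)
    for u in range(sim.shape[0]):
        top = np.argsort(-sim[u])[:k]                      # strongest neighbors
        adj[u, top] = sim[u, top]
    return adj

R = (np.random.rand(6, 10) > 0.6).astype(float)
print(jaccard_user_graph(R).shape)                         # (6, 6)
```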


Subject(s)
Eye Diseases, Hereditary , Genetic Diseases, X-Linked , Humans , Learning , Neural Networks, Computer
4.
Med Biol Eng Comput ; 62(2): 537-549, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37945794

ABSTRACT

Cortical surface parcellation aims to segment the surface into anatomically and functionally significant regions, which are crucial for diagnosing and treating numerous neurological diseases. However, existing methods generally ignore the difficulty in learning labeling patterns of boundaries, hindering the performance of parcellation. To this end, this paper proposes a joint parcellation and boundary network (JPBNet) to promote the effectiveness of cortical surface parcellation. Its core is developing a multi-rate-shared dilated graph attention (MDGA) module and incorporating boundary learning into the parcellation process. The former, in particular, constructs a dilated graph attention strategy, extending the dilated convolution from regular data to irregular graph data. We fuse it with different dilated rates to extract context information in various scales by devising a shared graph attention layer. After that, a boundary enhancement module and a parcellation enhancement module based on graph attention mechanisms are built in each layer, forcing MDGA to capture informative and valuable features for boundary detection and parcellation tasks. Integrating MDGA, the boundary enhancement module, and the parcellation enhancement module at each layer to supervise boundary and parcellation information, an effective JPBNet is formed by stacking multiple layers. Experiments on the public dataset reveal that the proposed method outperforms comparison methods and performs well on boundaries for cortical surface parcellation.
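One common way to realize "dilation" on irregular graph data is to subsample ranked neighbors; the sketch below shows that generic trick only (it is an assumed simplification, not the paper's multi-rate-shared dilated graph attention module):

```python
# Dilated neighbor selection on irregular data: among the k*d nearest
# neighbors of each node, keep every d-th one to enlarge the receptive
# field without increasing the neighbor count.
import numpy as np

def dilated_knn(points, k=4, d=2):
    """points: (N, 3); returns (N, k) neighbor indices with dilation d."""
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    order = np.argsort(dists, axis=1)[:, 1:]    # drop self (distance 0)
    return order[:, : k * d : d]                # every d-th ranked neighbor

pts = np.random.rand(16, 3)
print(dilated_knn(pts).shape)                   # (16, 4)
```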


Subject(s)
Cerebral Cortex , Learning , Image Processing, Computer-Assisted
5.
IEEE Trans Pattern Anal Mach Intell ; 45(12): 14975-14989, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37490384

ABSTRACT

Graph convolutional neural networks can effectively process geometric data and thus have been successfully used in point cloud data representation. However, existing graph-based methods usually adopt the K-nearest neighbor (KNN) algorithm to construct graphs, which may not be optimal for point cloud analysis tasks because the KNN solution is independent of network training. In this paper, we propose a novel graph structure learning convolutional neural network (GSLCN) for multiple point cloud analysis tasks. The fundamental concept is a general graph structure learning architecture (GSL) that builds long-range and short-range dependency graphs. To learn optimal graphs that best serve to extract local features and to investigate global contextual information, respectively, we integrate the GSL with the designed graph convolution operator under a unified framework. Furthermore, we design graph structure losses that incorporate prior knowledge to guide graph learning during network training. The main benefit is that the given labels and prior knowledge are taken into account in GSLCN, providing useful supervised information for building graphs and thus facilitating the graph convolution operation on the point cloud. Experimental results on challenging benchmarks demonstrate that the proposed framework achieves excellent performance for point cloud classification, part segmentation, and semantic segmentation.
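To make the contrast with fixed KNN graphs concrete, here is a toy sketch of a graph whose structure is produced by trainable parameters (the projection, scaled dot-product scores, and top-k sparsification below are assumptions for illustration, not the GSL module itself):

```python
# A soft, learnable adjacency: project features, score pairs, keep the
# k strongest edges per node, and normalize them with a softmax.
import torch

class SoftGraphLearner(torch.nn.Module):
    def __init__(self, dim, hidden=32, k=8):
        super().__init__()
        self.proj = torch.nn.Linear(dim, hidden)
        self.k = k

    def forward(self, x):                      # x: (N, dim) point features
        h = self.proj(x)                       # (N, hidden)
        scores = h @ h.t() / h.shape[1] ** 0.5
        topk = torch.topk(scores, self.k, dim=1)
        adj = torch.zeros_like(scores).scatter_(
            1, topk.indices, torch.softmax(topk.values, dim=1))
        return adj                             # row-normalized sparse-ish graph

x = torch.randn(64, 16)
print(SoftGraphLearner(16)(x).shape)           # torch.Size([64, 64])
```

Because the projection is trained with the rest of the network, the resulting edges can adapt to the task, unlike a KNN graph fixed before training.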

6.
IEEE Trans Pattern Anal Mach Intell ; 45(3): 2751-2768, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35704541

ABSTRACT

Graph Convolutional Networks (GCNs), as a prominent example of graph neural networks, are receiving extensive attention for their powerful capability in learning node representations on graphs. There are various extensions, in sampling and/or node feature aggregation, that further improve GCNs' performance, scalability, and applicability in various domains. Still, there is room for improvement in learning efficiency, because performing batch gradient descent over the full dataset at every training iteration, as is unavoidable when training (vanilla) GCNs, is not a viable option for large graphs. The potential of random features to speed up training in large-scale problems motivates us to consider carefully whether GCNs with random weights are feasible. To investigate this issue theoretically and empirically, we propose a novel model termed Graph Convolutional Networks with Random Weights (GCN-RW), which revises the convolutional layer with random filters and simultaneously adjusts the learning objective to a regularized least squares loss. Theoretical analyses of the model's approximation upper bound, structural complexity, stability, and generalization are provided with rigorous mathematical proofs. The effectiveness and efficiency of GCN-RW are verified on semi-supervised node classification tasks with several benchmark datasets. Experimental results demonstrate that, in comparison with some state-of-the-art approaches, GCN-RW achieves better or matched accuracy with less training time.
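The "random filters plus regularized least squares" idea can be illustrated in a few lines; the sketch below is an assumption-level toy (one propagation step, a tanh nonlinearity, a ridge-regression readout), not the paper's GCN-RW:

```python
# Propagate features over a normalized adjacency, apply a fixed random
# projection and nonlinearity, then fit only the readout in closed form.
import numpy as np

def normalize_adj(A):
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_random_weights(A, X, Y, hidden=64, lam=1e-2, seed=0):
    rng = np.random.default_rng(seed)
    H = normalize_adj(A) @ X                         # one propagation step
    W_rand = rng.normal(size=(X.shape[1], hidden))   # fixed random filter
    Z = np.tanh(H @ W_rand)
    # regularized least squares readout (the only "trained" part)
    W_out = np.linalg.solve(Z.T @ Z + lam * np.eye(hidden), Z.T @ Y)
    return Z @ W_out

A = (np.random.rand(20, 20) > 0.8).astype(float); A = np.maximum(A, A.T)
X = np.random.rand(20, 5); Y = np.eye(4)[np.random.randint(0, 4, 20)]
print(gcn_random_weights(A, X, Y).shape)             # (20, 4)
```

The closed-form readout is what removes the need for full-batch gradient descent in this toy setting.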

7.
Neural Netw ; 157: 444-459, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36427414

ABSTRACT

Graph neural networks (GNNs) have shown strong capabilities in processing graph-structured data. However, most of them are built on the message-passing mechanism and lack a systematic approach to guide their development. Meanwhile, it is hard to explain the design concepts of different GNN models from a unified point of view. This paper presents a unified optimization framework based on hybrid regularized graph signal reconstruction to establish the connection between the aggregation operations of different GNNs, showing that seeking the optimal solution of this objective is precisely the process of GNN information aggregation. We use this framework to mathematically explain several classic GNN models and summarize their commonalities and differences from a macro perspective. The proposed framework not only makes GNNs easier to understand but also offers guidance for the design of new GNNs. Moreover, we design a model-driven fixed-point iteration method and a data-driven dictionary learning network according to the corresponding optimization objective and sparse representation. The new model, a GNN based on model-driven and data-driven components (GNN-MD), is then established using alternating iteration methods, and we theoretically analyze its convergence. Numerous node classification experiments on multiple datasets illustrate that the proposed GNN-MD has excellent performance and outperforms all baselines on high-feature-dimension datasets.
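A worked example of the "aggregation as optimization" view: the graph-smoothing objective min_H ||H - X||_F^2 + lam * tr(H^T L H), with L = I - A_norm, has the fixed-point update H <- (X + lam * A_norm H) / (1 + lam), which looks exactly like repeated neighbor aggregation. The code is a generic illustration of that connection, not the paper's GNN-MD:

```python
# Fixed-point iteration for the graph-smoothing objective above.
import numpy as np

def fixed_point_aggregation(A_norm, X, lam=1.0, iters=20):
    H = X.copy()
    for _ in range(iters):
        H = (X + lam * A_norm @ H) / (1.0 + lam)   # aggregation-style update
    return H

A_norm = np.full((4, 4), 0.25)     # toy symmetric normalized adjacency
X = np.random.rand(4, 3)
print(np.round(fixed_point_aggregation(A_norm, X), 3))
```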


Subject(s)
Learning , Neural Networks, Computer
8.
Article in English | MEDLINE | ID: mdl-35286267

ABSTRACT

Although convolutional neural networks (CNNs) have shown good performance on grid data, they are limited in the semantic segmentation of irregular point clouds. This article proposes a novel and effective graph CNN framework, referred to as the local-global graph convolutional method (LGGCM), which can achieve short- and long-range dependencies on point clouds. The key to this framework is the design of local spatial attention convolution (LSA-Conv). The design includes two parts: generating a weighted adjacency matrix of the local graph composed of neighborhood points, and updating and aggregating the features of nodes to obtain the spatial geometric features of the local point cloud. In addition, a smooth module for central points is incorporated into the process of LSA-Conv to enhance the robustness of the convolution against noise interference by adjusting the position coordinates of the points adaptively. The learned robust LSA-Conv features are then fed into a global spatial attention module with the gated unit to extract long-range contextual information and dynamically adjust the weights of features from different stages. The proposed framework, consisting of both encoding and decoding branches, is an end-to-end trainable network for semantic segmentation of 3-D point clouds. The theoretical analysis of the approximation capabilities of LSA-Conv is discussed to determine whether the features of the point cloud can be accurately represented. Experimental results on challenging benchmarks of the 3-D point cloud demonstrate that the proposed framework achieves excellent performance.

9.
Neural Netw ; 144: 755-765, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34688017

ABSTRACT

Deep learning has shown great potential in image classification due to its powerful feature extraction ability, which depends heavily on the number of available training samples. However, it remains a major challenge to obtain an effective feature representation and then learn a promising classifier with deep networks in few-shot classification tasks. This paper proposes a multi-features adaptive aggregation meta-learning method with an information enhancer for few-shot classification, referred to as MFAML. It contains three main modules: a feature extraction module, an information enhancer, and a multi-features adaptive aggregation classifier (MFAAC). During the meta-training stage, the information enhancer, composed of several deconvolutional layers, is designed to promote the effective utilization of samples and thereby capture more valuable information during feature extraction. Simultaneously, the MFAAC module integrates features from several convolutional layers of the feature extraction module. The obtained features are then fed into the similarity module to adaptively adjust the predicted label. The information enhancer and MFAAC are connected by a hybrid loss, providing an excellent feature representation. During the meta-test stage, the information enhancer is removed and the remaining architecture is kept for fast adaptation to the final target task. The whole MFAML framework is optimized with the model-agnostic meta-learning (MAML) strategy and can effectively improve generalization performance. Experimental results on several benchmark datasets demonstrate the superiority of the proposed method over other representative few-shot classification methods.


Subject(s)
Neural Networks, Computer
10.
IEEE Trans Image Process ; 30: 4773-4787, 2021.
Article in English | MEDLINE | ID: mdl-33929959

ABSTRACT

Inspired by the perceptual saturation of the human visual system, this paper proposes a two-stream hybrid network that simulates binocular vision for salient object detection (SOD). Each stream in our system combines an unsupervised and a supervised method to form a two-branch module, so as to model the interaction between human intuition and memory. The two-branch module processes visual information in parallel with bottom-up and top-down SOD and outputs two initial saliency maps. A polyharmonic neural network with random weights (PNNRW) is then utilized to fuse the two branches' perceptions and refine the salient objects by learning online from multi-source cues. Based on visual perceptual saturation, we can select the optimal superpixel parameter for the unsupervised branch, locate sampling regions for the PNNRW, and construct a positive feedback loop that facilitates perceptual saturation after the perception fusion. By comparing the binary outputs of the two streams, the pixel annotations of predicted objects with a high saturation degree can be taken as new training samples. The presented method thus constitutes a semi-supervised learning framework: the supervised branches only need to be pre-trained initially, after which the system can collect training samples with a high confidence level and train new models by itself. Extensive experiments show that the new framework improves the performance of existing SOD methods and exceeds state-of-the-art methods on six popular benchmarks.

11.
Neural Netw ; 132: 394-404, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33010715

ABSTRACT

This study builds a fully deconvolutional neural network (FDNN) and addresses the problem of single image super-resolution (SISR) by using the FDNN. Although SISR using deep neural networks has been a major research focus, the problem of reconstructing a high resolution (HR) image with an FDNN has received little attention. A few recent approaches toward SISR are to embed deconvolution operations into multilayer feedforward neural networks. This paper constructs a deep FDNN for SISR that possesses two remarkable advantages compared to existing SISR approaches. The first improves the network performance without increasing the depth of the network or embedding complex structures. The second replaces all convolution operations with deconvolution operations to implement an effective reconstruction. That is, the proposed FDNN only contains deconvolution layers and learns an end-to-end mapping from low resolution (LR) to HR images. Furthermore, to avoid the oversmoothness of the mean squared error loss, the trained image is treated as a probability distribution, and the Kullback-Leibler divergence is introduced into the final loss function to achieve enhanced recovery. Although the proposed FDNN only has 10 layers, it is successfully evaluated through extensive experiments. Compared with other state-of-the-art methods and deep convolution neural networks with 20 or 30 layers, the proposed FDNN achieves better performance for SISR.
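A hedged sketch of the loss idea described above: alongside the mean squared error, treat the target and reconstructed images as probability distributions (after normalization) and add a KL-divergence term. The weighting `alpha` and the normalization scheme are illustrative assumptions, not the FDNN's exact loss:

```python
# MSE plus a KL-divergence term between intensity distributions.
import numpy as np

def kl_divergence(p_img, q_img, eps=1e-8):
    p = p_img.ravel() / (p_img.sum() + eps)
    q = q_img.ravel() / (q_img.sum() + eps)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def sr_loss(hr, pred, alpha=0.1):
    mse = float(np.mean((hr - pred) ** 2))
    return mse + alpha * kl_divergence(hr, pred)

hr = np.random.rand(32, 32)
pred = hr + 0.05 * np.random.rand(32, 32)
print(round(sr_loss(hr, pred), 6))
```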


Subject(s)
Image Processing, Computer-Assisted/methods , Neural Networks, Computer
12.
Neural Netw ; 132: 84-95, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32861917

ABSTRACT

In recent years, convolutional neural networks have been successfully applied to single image super-resolution (SISR) tasks, making breakthrough progress in both accuracy and speed. In this work, an improved dual-scale residual network (IDSRN), which achieves promising reconstruction performance without sacrificing too much computation, is proposed for SISR. The proposed network extracts features through two independent parallel branches: a dual-scale feature extraction branch and a texture attention branch. The improved dual-scale residual block (IDSRB), combined with an active weighted mapping strategy, constitutes the dual-scale feature extraction branch, which aims to capture dual-scale features of the image. As for the texture attention branch, an encoder-decoder network employing a symmetric fully convolutional-deconvolutional structure acts as a feature selector to enhance high-frequency details. The integration of the two branches achieves the goal of capturing dual-scale features with high-frequency information. Comparative experiments and extensive studies indicate that the proposed IDSRN is competitive with state-of-the-art approaches in terms of accuracy and efficiency.


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Pattern Recognition, Automated/methods , Humans
13.
Article in English | MEDLINE | ID: mdl-31170070

ABSTRACT

Matrix completion has been widely used in image processing, in which the popular approach is to formulate this issue as a general low-rank matrix approximation problem. This paper proposes a novel regularization method referred to as truncated Frobenius norm (TFN), and presents a hybrid truncated norm (HTN) model combining the truncated nuclear norm and truncated Frobenius norm for solving matrix completion problems. To address this model, a simple and effective two-step iteration algorithm is designed. Further, an adaptive way to change the penalty parameter is introduced to reduce the computational cost. Also, the convergence of the proposed method is discussed and proved mathematically. The proposed approach could not only effectively improve the recovery performance but also greatly promote the stability of the model. Meanwhile, the use of this new method could eliminate large variations that exist when estimating complex models, and achieve competitive successes in matrix completion. Experimental results on the synthetic data, real-world images as well as recommendation systems, particularly the use of the statistical analysis strategy, verify the effectiveness and superiority of the proposed method, i.e. the proposed method is more stable and effective than other state-of-the-art approaches.
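The two regularizers, as the abstract describes them, can be computed directly from a singular value decomposition; the sketch below assumes the standard definitions (sum, respectively root-sum-of-squares, of the singular values beyond the largest r) and is not the paper's HTN solver:

```python
# Truncated nuclear norm and a truncated Frobenius norm of a matrix.
import numpy as np

def truncated_norms(M, r):
    s = np.linalg.svd(M, compute_uv=False)    # singular values, descending
    tail = s[r:]                              # values beyond the top r
    tnn = tail.sum()                          # truncated nuclear norm
    tfn = np.sqrt((tail ** 2).sum())          # truncated Frobenius norm
    return tnn, tfn

M = np.random.rand(8, 6)
print(truncated_norms(M, r=2))
```

Minimizing such truncated norms penalizes only the small singular values, which is why they approximate the rank better than the full nuclear norm.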

14.
Neural Netw ; 101: 94-100, 2018 May.
Article in English | MEDLINE | ID: mdl-29494875

ABSTRACT

It is well known that the support vector machine (SVM) is an effective learning algorithm. The alternating direction method of multipliers (ADMM) has emerged as a powerful technique for solving distributed optimisation models. This paper proposes a distributed SVM algorithm in a master-slave mode (MS-DSVM), which integrates a distributed SVM with ADMM in a master-slave configuration where the master node and slave nodes are connected so that results can be broadcast. The distributed SVM is formulated as a regularised optimisation problem and modelled as a series of convex optimisation sub-problems that are solved by ADMM. Additionally, an over-relaxation technique is utilised to accelerate the convergence rate of the proposed MS-DSVM. Our theoretical analysis demonstrates that the proposed MS-DSVM achieves linear convergence, i.e., the fastest convergence rate among existing standard distributed ADMM algorithms. Numerical examples demonstrate that the convergence and accuracy of the proposed MS-DSVM are superior to those of existing methods under the ADMM framework.


Subject(s)
Support Vector Machine
15.
Neural Netw ; 94: 115-124, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28772239

ABSTRACT

There have been many methods for recognizing complete face images. In real applications, however, the images to be recognized are usually incomplete, which makes recognition considerably more difficult. In this paper, a novel convolutional neural network framework, named the low-rank-recovery network (LRRNet), is proposed to overcome this difficulty effectively, inspired by matrix completion and deep learning techniques. The proposed LRRNet first recovers the incomplete face images via matrix completion with the truncated nuclear norm regularization solution, and then extracts low-rank parts of the recovered images as filters. With these filters, important features are obtained by means of binarization and histogram algorithms. Finally, these features are classified with classical support vector machines (SVMs). The proposed LRRNet achieves a high face recognition rate for heavily corrupted images, especially in the case of large databases. Extensive experiments on several benchmark databases demonstrate that the proposed LRRNet performs better than other excellent robust face recognition methods.


Subject(s)
Biometric Identification/methods , Machine Learning , Neural Networks, Computer , Pattern Recognition, Automated/methods
16.
IEEE J Biomed Health Inform ; 21(6): 1644-1655, 2017 11.
Article in English | MEDLINE | ID: mdl-27834657

ABSTRACT

Segmentation of white blood cells (WBCs) image is meaningful but challenging due to the complex internal characteristics of the cells and external factors, such as illumination and different microscopic views. This paper addresses two problems of the segmentation: WBC location and subimage segmentation. To locate WBCs, a method that uses multiple windows obtained by scoring multiscale cues to extract a rectangular region is proposed. In this manner, the location window not only covers the whole WBC completely, but also achieves adaptive adjustment. In the subimage segmentation, the subimages preprocessed from the location window with a replace procedure are taken as initialization, and the GrabCut algorithm based on dilation is iteratively run to obtain more precise results. The proposed algorithm is extensively evaluated using a CellaVision dataset as well as a more challenging Jiashan dataset. Compared with the existing methods, the proposed algorithm is not only concise, but also can produce high-quality segmentations. The results demonstrate that the proposed algorithm consistently outperforms other location and segmentation methods, yielding higher recall and better precision rates.
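For reference, a generic sketch of seeding OpenCV's GrabCut with a located cell window (the rectangle initialization below is a plain illustration; the paper's dilation-based iterative variant and its preprocessing are not reproduced here):

```python
# Run GrabCut inside a WBC location window and return a binary cell mask.
import numpy as np
import cv2

def segment_in_window(image_bgr, rect, iters=5):
    """image_bgr: uint8 (H, W, 3); rect: (x, y, w, h) location window."""
    mask = np.zeros(image_bgr.shape[:2], np.uint8)
    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, rect, bgd_model, fgd_model,
                iters, cv2.GC_INIT_WITH_RECT)
    # certain or probable foreground pixels form the cell mask
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0)

img = np.random.randint(0, 255, (128, 128, 3), np.uint8)
print(segment_in_window(img, (32, 32, 64, 64)).sum())
```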


Subject(s)
Algorithms , Image Processing, Computer-Assisted/methods , Leukocytes/cytology , Databases, Factual , Humans , Leukocytes/classification , Microscopy
17.
Med Biol Eng Comput ; 55(8): 1287-1301, 2017 Aug.
Article in English | MEDLINE | ID: mdl-27822698

ABSTRACT

The detection and classification of white blood cells (WBCs, also known as leukocytes) is an active research topic because of its important applications in disease diagnosis. Nowadays, the morphological analysis of blood cells is performed manually by skilled operators, which results in drawbacks such as slow analysis, non-standard accuracy, and dependence on the operator's skill. Although many papers have studied the detection of WBCs or the classification of WBCs independently, few consider them together. This paper proposes an automatic detection and classification system for WBCs from peripheral blood images. It first proposes an algorithm to detect WBCs in microscope images based on a simple relation between the R and B color channels and morphological operations. Then a granularity feature (the pairwise rotation-invariant co-occurrence local binary pattern, PRICoLBP) and an SVM are applied to separate eosinophils and basophils from the other WBCs. Lastly, convolutional neural networks are used to automatically extract high-level features from WBCs, and a random forest is applied to these features to recognize the other three kinds of WBCs: neutrophils, monocytes, and lymphocytes. Detection experiments on the CellaVision and ALL-IDB databases show that the proposed detection method generally performs better than the iterative threshold method with less time cost, and classification experiments show that the proposed classification method generally achieves higher accuracy than several other methods.


Subject(s)
Image Interpretation, Computer-Assisted/methods , Leukocytes/pathology , Microscopy/methods , Neural Networks, Computer , Pattern Recognition, Automated/methods , Precursor Cell Lymphoblastic Leukemia-Lymphoma/pathology , Algorithms , Cells, Cultured , Humans , Machine Learning , Reproducibility of Results , Sensitivity and Specificity
18.
Neural Netw ; 85: 10-20, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27814461

ABSTRACT

Recovering the low-rank and sparse components of a given matrix is a challenging problem that arises in many real applications. Existing traditional approaches to this problem are usually recast as a general approximation problem for a low-rank matrix. These approaches are based on the nuclear norm of the matrix, so in practice the rank may not be well approximated. This paper presents a new approach to this problem based on a new matrix norm, the truncated nuclear norm (TNN). An efficient iterative scheme developed under the linearized alternating direction method of multipliers framework is proposed, in which two novel iterative algorithms are designed to recover the sparse and low-rank components of a matrix. More importantly, the convergence of the linearized alternating direction method of multipliers on our matrix recovery model is discussed and proved mathematically. To validate the effectiveness of the proposed methods, a series of comparative trials are performed on a variety of synthetic data sets. More specifically, the new methods are used to deal with problems associated with background subtraction (foreground object detection) and with removing shadows and specularities from face images. Our experimental results illustrate that the new frameworks are more effective and accurate than other methods.


Subject(s)
Algorithms , Neural Networks, Computer
19.
IEEE Trans Neural Netw Learn Syst ; 27(7): 1550-61, 2016 07.
Article in English | MEDLINE | ID: mdl-26766382

ABSTRACT

Previous studies have shown that image patches can be well represented as a sparse linear combination of elements from an appropriately selected over-complete dictionary. Recently, single-image super-resolution (SISR) via sparse representation using blurred and downsampled low-resolution images has attracted increasing interest, where the aim is to obtain the coefficients for sparse representation by solving an l0 or l1 norm optimization problem. The l0 optimization is a nonconvex and NP-hard problem, while the l1 optimization usually requires many more measurements and presents new challenges even for images of the usual size, so we propose a new approach for SISR recovery based on nonconvex regularized optimization. The proposed approach is potentially a powerful method for recovering SISR via sparse representations, and it can yield a sparser solution than the l1 regularization method. We also consider the best choice of lp regularization over all p in (0, 1), for which we propose a scheme that adaptively selects the norm value for each image patch. In addition, we provide a method for adaptively estimating the best value of the regularization parameter λ, and we discuss an alternating iteration method for selecting p and λ. Our experiments demonstrate that the proposed nonconvex regularized optimization method outperforms the convex optimization method and generates higher-quality images.
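One standard way to handle an lp (0 < p < 1) sparsity penalty is iteratively reweighted least squares; the sketch below is a generic illustration of that technique for min ||x||_p^p subject to Ax = b, under assumed smoothing and annealing choices, and is not the paper's patch-wise scheme for selecting p and λ:

```python
# IRLS for nonconvex lp sparse recovery (Chartrand-style smoothing/annealing).
import numpy as np

def irls_lp(A, b, p=0.5, iters=30, eps=1.0):
    x = np.linalg.lstsq(A, b, rcond=None)[0]          # least-squares start
    for _ in range(iters):
        w = (x ** 2 + eps) ** (p / 2.0 - 1.0)         # smoothed lp weights
        Q = np.diag(1.0 / w)
        x = Q @ A.T @ np.linalg.solve(A @ Q @ A.T, b) # weighted LS, Ax = b
        eps = max(eps / 10.0, 1e-8)                   # anneal the smoothing
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 50))
x_true = np.zeros(50); x_true[[3, 17, 40]] = [1.0, -2.0, 0.5]
b = A @ x_true
print(np.round(irls_lp(A, b)[[3, 17, 40]], 3))        # near the true entries
```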

20.
Neural Netw ; 50: 90-7, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24321614

ABSTRACT

In this article, we address the problem of compressed classification learning. A generalization bound for the support vector machine (SVM) compressed classification algorithm with uniformly ergodic Markov chain samples is established. This bound indicates that the accuracy of the SVM classifier in the compressed domain is close to that of the best classifier in the data domain. In a sense, this shows that compressed learning can avoid the curse of dimensionality in the learning process. In addition, we show that compressed classification learning reduces the learning time at the price of decreased classification accuracy, but the decrease can be controlled. Numerical experiments further verify the results claimed in this article.
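A toy sketch of compressed classification: project the features with a random Gaussian matrix and train a linear SVM in the compressed domain. The dimensions and synthetic data below are assumptions, and the Markov-chain sampling analyzed in the paper is not modeled:

```python
# Random Gaussian compression followed by a linear SVM.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n, d, m = 500, 200, 40                        # samples, ambient dim, compressed dim
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(int)

Phi = rng.normal(size=(d, m)) / np.sqrt(m)    # random measurement matrix
Z = X @ Phi                                   # compressed samples

clf = LinearSVC(C=1.0, max_iter=5000).fit(Z[:400], y[:400])
print("compressed-domain accuracy:", clf.score(Z[400:], y[400:]))
```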


Subject(s)
Generalization, Psychological/physiology , Learning/classification , Learning/physiology , Markov Chains , Algorithms , Humans , Pattern Recognition, Automated , Support Vector Machine