Results 1 - 6 of 6
1.
Article in English | MEDLINE | ID: mdl-37022811

ABSTRACT

Disentangled representation learning is typically achieved with a generative model, the variational autoencoder (VAE). Existing VAE-based methods try to disentangle all the attributes simultaneously in a single latent space, yet separating an attribute from irrelevant information varies in complexity from attribute to attribute and should therefore be conducted in different latent spaces. We thus propose to disentangle the disentanglement itself by assigning the disentanglement of each attribute to a different layer. To achieve this, we present the stair disentanglement net (STDNet), a stair-like network in which each step corresponds to the disentanglement of one attribute. An information-separation principle is employed at each step to peel off irrelevant information and form a compact representation of the targeted attribute. The compact representations thus obtained together form the final disentangled representation. To ensure that the final disentangled representation is both compressed and complete with respect to the input data, we propose a variant of the information bottleneck (IB) principle, the stair IB (SIB) principle, which optimizes a tradeoff between compression and expressiveness. In particular, for the assignment of attributes to network steps, we define an attribute complexity metric and apply a complexity-ascending rule (CAR) that sequences the attribute disentanglement in ascending order of complexity. Experimentally, STDNet achieves state-of-the-art results in representation learning and image generation on multiple benchmarks, including the Mixed National Institute of Standards and Technology database (MNIST), dSprites, and CelebA. Furthermore, thorough ablation experiments show how the strategies employed here, including the neuron blocks, CAR, the hierarchical structure, and the variational form of SIB, contribute to the performance.
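The compression-expressiveness tradeoff behind an IB-style objective can be sketched as a reconstruction term plus weighted per-step KL penalties, one per attribute step. This is a minimal illustration only; the names (`kl_gauss`, `stair_ib_loss`, the `betas` weights) are assumptions for this sketch, not the paper's formulation.

```python
# Minimal sketch of a stair-IB-style tradeoff: reconstruction error vs. one
# KL "compression" term per attribute step. Names are illustrative, not STDNet's.
import numpy as np

def kl_gauss(mu, logvar):
    """KL(N(mu, exp(logvar)) || N(0, I)), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def stair_ib_loss(recon_err, steps, betas):
    """Trade reconstruction error against per-step compression.

    steps -- list of (mu, logvar) pairs, one per attribute step
    betas -- one compression weight per step (e.g. varying with complexity)
    """
    compression = sum(b * kl_gauss(mu, lv) for b, (mu, lv) in zip(betas, steps))
    return recon_err + compression

# toy usage: two steps; the second latent deviates from the prior and pays a KL cost
steps = [(np.zeros(4), np.zeros(4)), (np.ones(4), np.zeros(4))]
loss = stair_ib_loss(recon_err=1.0, steps=steps, betas=[1.0, 0.5])
```

Raising a step's beta compresses that step's representation harder; lowering it preserves more attribute detail.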

2.
Neural Netw ; 162: 412-424, 2023 May.
Article in English | MEDLINE | ID: mdl-36963145

ABSTRACT

With the development of graph neural networks (GNNs), how to handle large-scale graph data has become an increasingly important topic. Currently, most GNN models that scale to large graphs rely on random sampling, but the sampling process in these models is detached from the forward propagation of the network. Moreover, quite a few works design sampling based on statistical estimation for graph convolutional networks (GCNs), whose message-passing weights are fixed, so these sampling methods do not extend to message-passing networks with variable weights, such as graph attention networks. Exploiting the end-to-end learning capability of neural networks, we propose a learnable sampling method. It overcomes the fact that random sampling operations cannot propagate gradients, and it samples nodes with an unfixed probability. In this way, the sampling process is dynamically combined with the forward propagation of the features, allowing better training of the networks, and the method generalizes to all message-passing models. In addition, we apply the learnable sampling method to GNNs and propose two models. Our method can be flexibly combined with different GNN models and achieves excellent accuracy on benchmark datasets with large graphs, while the loss converges to smaller values at a faster rate during training than with previous methods.
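The core obstacle the abstract names is that a hard random pick of neighbors has no gradient. One standard way to make sampling differentiable, shown below as a hedged stand-in for the paper's method, is to replace the hard subset with learnable soft weights over neighbors, so gradients flow through the "sampler" into the rest of the network. The names (`sample_logits`, `soft_aggregate`) are assumptions for this sketch.

```python
# Hedged sketch of differentiable neighbor sampling: learnable logits define
# an unfixed sampling distribution, and aggregation uses the expectation under
# it instead of a non-differentiable random subset. Not the paper's exact model.
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def soft_aggregate(neighbor_feats, sample_logits):
    """Weight neighbor features by a trainable distribution; since softmax is
    differentiable, gradients reach sample_logits during backpropagation."""
    p = softmax(sample_logits)   # differentiable "sampling" probabilities
    return p @ neighbor_feats    # expected aggregated message

feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
msg = soft_aggregate(feats, np.zeros(3))  # uniform logits -> plain mean
```

Training then updates `sample_logits` jointly with the message-passing weights, which is the sense in which sampling is "dynamically combined with the forward propagation."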


Subject(s)
Benchmarking , Learning , Neural Networks, Computer , Probability
3.
BMC Bioinformatics ; 21(Suppl 6): 202, 2020 Nov 18.
Article in English | MEDLINE | ID: mdl-33203394

ABSTRACT

BACKGROUND: Electron tomography (ET) is an important technique for the study of complex biological structures and their functions. It reconstructs the interior of a three-dimensional object from its projections at different orientations. However, due to instrument limitations, the angular tilt range of the projections is limited to within +70° to -70°. The missing angular range is known as the missing wedge and causes artifacts. RESULTS: In this paper, we propose a novel algorithm, compressed-sensing improved iterative reconstruction-reprojection (CSIIRR), which follows the schedule of improved iterative reconstruction-reprojection (IIRR) but further exploits the sparsity of the biological ultrastructural content of the specimen. The proposed algorithm keeps the merits of both IIRR and compressed sensing, yielding an electron-tomography estimate with faster execution and better reconstruction quality. A comprehensive experiment was carried out in which CSIIRR was challenged on both simulated and real-world datasets and compared with a number of classical methods. The experimental results demonstrate the effectiveness and efficiency of CSIIRR and show its advantages over the other methods. CONCLUSIONS: The proposed algorithm offers a clear advance in the suppression of missing-wedge effects and the restoration of missing information, providing structural biologists with an option for clear and accurate tomographic reconstruction.
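The iterate-reproject-with-sparsity idea can be illustrated with a deliberately tiny 1-D skeleton: keep measured values where data exist, fill the "missing wedge" from the current estimate's reprojection, and soft-threshold to enforce sparsity. The operators here are placeholders (an identity stand-in for project/back-project, a boolean mask for the tilt range), not real ET physics, and the function names are invented for this sketch.

```python
# Toy skeleton of an IRR-plus-sparsity loop (CSIIRR-flavored, heavily simplified):
# measured entries are re-imposed each pass, missing entries are filled from the
# current estimate, and soft-thresholding plays the compressed-sensing prior.
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def irr_sparse_toy(measured, mask, n_iter=50, thresh=0.05):
    """mask[i] == True where a measurement exists (the tilt range);
    False entries play the role of the missing wedge."""
    x = np.where(mask, measured, 0.0)
    for _ in range(n_iter):
        reproj = x                            # stand-in for project/back-project
        x = np.where(mask, measured, reproj)  # keep data, fill the gap
        x = soft_threshold(x, thresh)         # sparsity prior on the estimate
    return x

truth = np.array([0.0, 1.0, 0.0, 0.0, 0.8, 0.0])
mask = np.array([True, True, True, True, False, True])
rec = irr_sparse_toy(np.where(mask, truth, 0.0), mask)
```

In a real implementation the reprojection step would run the tomographic forward model at the missing tilt angles; only the alternation pattern is shown here.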


Subject(s)
Algorithms , Artifacts , Electron Microscope Tomography , Image Processing, Computer-Assisted , Tomography, X-Ray Computed
4.
IEEE Trans Image Process ; 23(2): 489-501, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24196860

ABSTRACT

A new fingerprint compression algorithm based on sparse representation is introduced. Obtaining an overcomplete dictionary from a set of fingerprint patches allows us to represent each patch as a sparse linear combination of dictionary atoms. In the algorithm, we first construct a dictionary for predefined fingerprint image patches. For a newly given fingerprint image, we represent its patches according to the dictionary by computing an l0-minimization, and then quantize and encode the representation. In this paper, we consider the effect of various factors on the compression results. Three groups of fingerprint images are tested. The experiments demonstrate that our algorithm is efficient compared with several competing compression techniques (JPEG, JPEG 2000, and WSQ), especially at high compression ratios. The experiments also illustrate that the proposed algorithm is robust for minutiae extraction.
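The pipeline described, sparse-code a patch against a dictionary, then quantize the few surviving coefficients, can be sketched with greedy matching pursuit standing in for the paper's l0-minimization. The dictionary below is a trivial identity matrix just to keep the example self-contained; all names are assumptions for this sketch.

```python
# Hedged sketch of the compression pipeline: k-sparse coding of a patch over a
# dictionary (greedy matching pursuit as a stand-in for l0-minimization),
# followed by coefficient quantization, which is what would actually be encoded.
import numpy as np

def matching_pursuit(D, y, k):
    """Greedy k-sparse coding; D has unit-norm atoms as columns."""
    r, coef = y.astype(float).copy(), np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ r)))  # best-correlated atom
        c = D[:, j] @ r
        coef[j] += c
        r -= c * D[:, j]                     # remove that atom's contribution
    return coef

def quantize(coef, step=0.1):
    return np.round(coef / step) * step

D = np.eye(4)                                # trivial orthonormal "dictionary"
y = np.array([0.0, 0.73, 0.0, 0.31])         # toy "patch"
code = quantize(matching_pursuit(D, y, k=2))
```

Compression comes from storing only the indices and quantized values of the nonzero coefficients rather than the full patch.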


Subject(s)
Biometric Identification/methods , Data Compression/methods , Dermatoglyphics , Fingers/anatomy & histology , Image Interpretation, Computer-Assisted/methods , Skin/anatomy & histology , Algorithms , Humans , Image Enhancement/methods , Pattern Recognition, Automated/methods , Photography/methods , Reproducibility of Results , Sensitivity and Specificity
5.
ScientificWorldJournal ; 2013: 246596, 2013.
Article in English | MEDLINE | ID: mdl-24223028

ABSTRACT

Recently, proximal gradient algorithms have been used to solve nonsmooth convex optimization problems. As a special nonsmooth convex problem, singly linearly constrained quadratic programs with box constraints appear in a wide range of applications. We therefore propose an accelerated proximal gradient algorithm for this class of problems. At each iteration, the subproblem, whose Hessian matrix is diagonal and positive definite, is an easy model that can be solved efficiently by finding a root of a piecewise linear function. It is proved that the new algorithm terminates at an ε-optimal solution within [Formula: see text] iterations. Moreover, no line search is needed, and global convergence can be proved under mild conditions. Numerical results are reported for quadratic programs arising from the training of support vector machines, showing that the new algorithm is efficient.
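The per-iteration subproblem the abstract mentions, a diagonal quadratic over one linear equality plus box constraints, reduces to projecting a point onto {x : aᵀx = b, lo ≤ x ≤ hi}. The optimal point is clip(y - λa, lo, hi) for some multiplier λ, and g(λ) = aᵀx(λ) - b is piecewise linear and monotone, so a root search finds λ. The bisection below is a generic illustration of that root-finding, not the paper's exact solver; the bracket width is an assumption.

```python
# Hedged sketch: project y onto {x : a.x = b, lo <= x <= hi} by bisecting on
# the multiplier lam, since g(lam) = a.x(lam) - b is piecewise linear and
# (for a >= 0) monotonically decreasing in lam.
import numpy as np

def project_eq_box(y, a, b, lo, hi):
    x = lambda lam: np.clip(y - lam * a, lo, hi)
    g = lambda lam: a @ x(lam) - b
    left, right = -1e6, 1e6          # bracket, assumed wide enough for the data
    for _ in range(200):             # bisection: halves the bracket each pass
        mid = 0.5 * (left + right)
        if g(mid) > 0:
            left = mid               # x(lam) still too large -> increase lam
        else:
            right = mid
    return x(0.5 * (left + right))

# usage: project (0.8, 0.9) onto {x1 + x2 = 1, 0 <= x <= 1}
p = project_eq_box(np.array([0.8, 0.9]), np.ones(2), 1.0, 0.0, 1.0)
```

Because the Hessian of the subproblem is diagonal, this projection (applied in a scaled metric) is the entire per-iteration cost, which is why no line search is needed.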


Subject(s)
Algorithms , Programming, Linear
6.
IEEE Trans Pattern Anal Mach Intell ; 30(6): 929-40, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18421101

ABSTRACT

An algorithm is proposed that combines the Zero-pole Model and the Hough Transform (HT) to detect singular points. The orientation of singular points is defined on the basis of the Zero-pole Model, which further demonstrates the model's practicability. In contrast to orientation-field generation, detection of singular points is reduced to determining the parameters of the Zero-pole Model. The HT uses rather global information from the fingerprint images to detect singular points, which makes our algorithm more robust to noise than methods that use only local information. Since the Zero-pole Model may deviate slightly from the actual fingerprint orientation field, the Poincaré index is used to adjust the positions in the neighborhood of the detected candidate singular points. Experimental results on the NIST-4 database show that our algorithm performs well and is fast enough for real-time application.
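The Poincaré-index adjustment step can be illustrated directly: sum the wrapped orientation differences around a closed loop about a candidate point; an index near +1/2 indicates a core, near -1/2 a delta, and near 0 a regular point. This is a generic textbook computation, hedged as a sketch, not the paper's exact implementation; the sampling of the loop is an assumption.

```python
# Hedged sketch of the Poincare index on a fingerprint orientation field:
# orientations are defined mod pi, so consecutive differences are wrapped
# into (-pi/2, pi/2] before summing around the loop.
import numpy as np

def poincare_index(thetas):
    """thetas: orientations (radians, mod pi) sampled counter-clockwise
    on a closed loop around the candidate singular point."""
    total = 0.0
    for t0, t1 in zip(thetas, np.roll(thetas, -1)):
        d = t1 - t0
        while d > np.pi / 2:
            d -= np.pi               # wrap into (-pi/2, pi/2]
        while d <= -np.pi / 2:
            d += np.pi
        total += d
    return total / (2 * np.pi)

# a loop whose orientation rotates by pi overall -> index 1/2 (core-like)
loop = [i * np.pi / 8 for i in range(8)]
idx = poincare_index(loop)
```

In the described algorithm, this index would be evaluated in a small neighborhood of each HT-detected candidate to fine-tune its position.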


Subject(s)
Algorithms , Artificial Intelligence , Biometry/methods , Dermatoglyphics/classification , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Humans , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity