Results 1 - 4 of 4

1.
IEEE Trans Neural Netw Learn Syst ; 26(6): 1260-74, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25095266

ABSTRACT

The paper presents a new framework for complex support vector regression (SVR) as well as support vector machines (SVMs) for quaternary classification. The method exploits the notion of widely linear estimation to model the input-output relation for complex-valued data and considers two cases: 1) the complex data are split into their real and imaginary parts and a typical real kernel is employed to map the complex data to a complexified feature space, and 2) a pure complex kernel is used to map the data directly to the induced complex feature space. The recently developed Wirtinger's calculus on complex reproducing kernel Hilbert spaces is employed to compute the Lagrangian and derive the dual optimization problem. As one of our major results, we prove that any complex SVM/SVR task is equivalent to solving two real SVM/SVR tasks that exploit a specific real kernel, generated by the chosen complex kernel. In particular, the case of pure complex kernels leads to the generation of new kernels that have not been considered before. In the classification case, the proposed framework inherently splits the complex space into four parts. This leads naturally to solving a four-class task (quaternary classification) instead of the typical two-class task of the real SVM. In turn, this rationale can be used in a multiclass problem as a split-class scenario based on four classes, as opposed to the one-versus-all method; this can lead to significant computational savings. Experiments demonstrate the effectiveness of the proposed framework for regression and classification tasks that involve complex data.
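A minimal sketch of the dual-channel view implied by the equivalence result above: the complex inputs are split into real and imaginary parts, a standard real RBF kernel is applied in the stacked real space, and two real SVR problems are solved for the real and imaginary parts of the target. The use of scikit-learn's SVR, the RBF kernel, and the hyperparameters are illustrative assumptions, not the paper's exact construction.

```python
# Complex SVR handled as two real SVR tasks on complexified (real/imaginary) features.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic complex-valued regression data: z -> w = f(z) + noise (assumed toy model).
z = rng.standard_normal(200) + 1j * rng.standard_normal(200)
w = (0.8 + 0.3j) * z + 0.1 * z**2 + 0.05 * (rng.standard_normal(200) + 1j * rng.standard_normal(200))

# Map complex inputs to stacked real features [Re(z), Im(z)].
X = np.column_stack([z.real, z.imag])

# Two real SVR tasks with the same real RBF kernel: one per output component.
svr_re = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, w.real)
svr_im = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, w.imag)

# Recombine the two real predictions into a complex estimate.
w_hat = svr_re.predict(X) + 1j * svr_im.predict(X)
print("mean |error|:", np.abs(w_hat - w).mean())
```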

2.
IEEE Trans Neural Netw Learn Syst ; 23(2): 260-76, 2012 Feb.
Article in English | MEDLINE | ID: mdl-24808505

ABSTRACT

This paper introduces a broad framework for online, i.e., time-adaptive, supervised multiregression tasks. The problem is formulated in a general infinite-dimensional reproducing kernel Hilbert space (RKHS). In this context, a fairly large number of nonlinear multiregression models, including the linear case, are obtained as special cases. Any convex, continuous, and not necessarily differentiable function can be used as a loss function to quantify the disagreement between the output of the system and the desired response. The only requirement is that the subgradient of the adopted loss function be available in analytic form. To this end, we demonstrate a way to calculate the subgradients of robust loss functions suitable for the multiregression task. As is by now well documented, when dealing with online schemes in an RKHS, the memory keeps increasing with each iteration step. To attack this problem, a simple sparsification strategy is utilized, which leads to an algorithmic scheme of linear complexity with respect to the number of unknown parameters. A convergence analysis of the technique, based on arguments of convex analysis, is also provided. To demonstrate the capacity of the proposed method, the multiregressor is applied to the multiaccess multiple-input multiple-output channel equalization task in a setting with limited resources and no available channel information. Numerical results verify the potential of the method when its performance is compared with that of state-of-the-art linear techniques, which, in contrast, use space-time coding, more antenna elements, and full channel information.
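The following is a minimal, single-output sketch of the kind of scheme described above: an online kernel expansion updated with a subgradient step on a robust (epsilon-insensitive) loss, combined with a simple coherence test that limits dictionary growth. The particular loss, step size, and sparsification rule are illustrative assumptions, not the algorithm analyzed in the paper.

```python
# Online kernel learning with a subgradient step and coherence-based sparsification.
import numpy as np

def rbf(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma**2))

def subgrad_eps_insensitive(err, eps=0.05):
    # Subgradient of max(0, |err| - eps) with respect to the estimate (not the error).
    if err > eps:
        return -1.0
    if err < -eps:
        return 1.0
    return 0.0

rng = np.random.default_rng(1)
dictionary, alphas = [], []
step, coherence_thr = 0.2, 0.95

for n in range(500):
    x = rng.standard_normal(2)                                      # new input sample
    d = np.sin(x[0]) + 0.5 * x[1] + 0.05 * rng.standard_normal()    # desired response

    # Current estimate: kernel expansion over the stored dictionary.
    y = sum(a * rbf(x, c) for a, c in zip(alphas, dictionary))
    g = subgrad_eps_insensitive(d - y)

    if g != 0.0:  # no update when the error lies inside the epsilon-insensitive tube
        coh = max((rbf(x, c) for c in dictionary), default=0.0)
        if coh < coherence_thr:
            # Novel enough: admit the sample as a new kernel center.
            dictionary.append(x)
            alphas.append(-step * g)
        else:
            # Too coherent: update the coefficient of the closest existing center instead.
            j = int(np.argmax([rbf(x, c) for c in dictionary]))
            alphas[j] -= step * g

print("dictionary size:", len(dictionary))
```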

3.
IEEE Trans Neural Netw Learn Syst ; 23(3): 425-38, 2012 Mar.
Article in English | MEDLINE | ID: mdl-24808549

ABSTRACT

This paper presents a broad framework for nonlinear online supervised learning tasks in the context of complex-valued signal processing. The (complex) input data are mapped into a complex reproducing kernel Hilbert space (RKHS), where the learning phase takes place. Both pure complex kernels and real kernels (via the complexification trick) can be employed. Moreover, any convex, continuous, and not necessarily differentiable function can be used to measure the loss between the output of the specific system and the desired response. The only requirement is that the subgradient of the adopted loss function be available in analytic form. In order to derive the subgradients analytically, the principles of the (recently developed) Wirtinger's calculus in complex RKHSs are exploited. Furthermore, both linear and widely linear (in the RKHS) estimation filters are considered. To cope with the problem of increasing memory requirements, which is present in almost all online schemes in RKHSs, a sparsification scheme based on projections onto closed balls has been adopted. We demonstrate the effectiveness of the proposed framework in a nonlinear channel identification task, a nonlinear channel equalization problem, and a quadrature phase shift keying equalization scheme, using both circular and noncircular synthetic signal sources.
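As a rough illustration of the complexification trick mentioned above, the sketch below treats each complex sample as a pair of reals, uses a real Gaussian kernel in the augmented space, and maintains two real kernel expansions for the real and imaginary parts of the output, updated LMS-style from the complex error. The final norm projection is only a crude stand-in for the paper's ball-projection sparsification and does not by itself bound the dictionary size; all parameter choices here are assumptions.

```python
# Complexification trick: complex samples as real pairs, real Gaussian kernel,
# two real expansions for the real and imaginary output channels.
import numpy as np

def rbf(u, v, sigma=1.0):
    return np.exp(-np.sum((u - v) ** 2) / (2 * sigma**2))

rng = np.random.default_rng(2)
centers, a_re, a_im = [], [], []
step, radius = 0.3, 5.0          # step size and coefficient-norm budget (assumed)

for n in range(300):
    z = rng.standard_normal() + 1j * rng.standard_normal()          # complex input
    d = (1 - 0.5j) * z + 0.2 * np.conj(z) + 0.05 * (rng.standard_normal() + 1j * rng.standard_normal())

    u = np.array([z.real, z.imag])                                   # complexification
    k = np.array([rbf(u, c) for c in centers]) if centers else np.array([])
    y = (k @ np.array(a_re) if centers else 0.0) + 1j * (k @ np.array(a_im) if centers else 0.0)
    e = d - y                                                        # complex error

    # LMS-style update: add the new sample as a kernel center for each channel.
    centers.append(u)
    a_re.append(step * e.real)
    a_im.append(step * e.imag)

    # Crude stand-in for sparsification: project each coefficient vector back onto
    # an l2-ball of the given radius (does not cap memory growth in this toy).
    for a in (a_re, a_im):
        nrm = np.linalg.norm(a)
        if nrm > radius:
            scale = radius / nrm
            for i in range(len(a)):
                a[i] *= scale

print("number of kernel centers:", len(centers))
```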

4.
IEEE Trans Image Process ; 19(6): 1465-79, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20236901

ABSTRACT

The main contribution of this paper is the development of a novel approach, based on the theory of reproducing kernel Hilbert spaces (RKHS), to the problem of noise removal in the spatial domain. The proposed methodology has the advantage that it is able to remove any kind of additive noise (impulse, Gaussian, uniform, etc.) from any digital image, in contrast to the most commonly used denoising techniques, which are noise dependent. The problem is cast as an optimization task in an RKHS by taking advantage of the celebrated representer theorem in its semi-parametric formulation. The semi-parametric formulation, although known in theory, has so far found, to our knowledge, limited application. In the image denoising problem, however, its use is dictated by the nature of the problem itself: the need for edge preservation naturally leads to such a modeling. Examples verify that, in the presence of Gaussian noise, the proposed methodology performs well compared to wavelet-based techniques and outperforms them significantly in the presence of impulse or mixed noise.
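To make the RKHS viewpoint concrete, the sketch below casts denoising of a 1-D piecewise-smooth signal as kernel ridge regression of sample value against sample coordinate, i.e., the plain (non-semi-parametric) representer theorem. The signal, kernel, and regularization are illustrative assumptions, and the paper's edge-preserving semi-parametric terms are omitted.

```python
# Denoising as RKHS regression: fit a smooth function of the coordinate to noisy samples.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(3)

# A 1-D piecewise-smooth "signal" corrupted by mixed Gaussian + impulse noise.
t = np.linspace(0, 1, 128)
clean = np.where(t < 0.5, np.sin(2 * np.pi * t), 0.3 + 0.2 * t)
noisy = clean + 0.05 * rng.standard_normal(t.size)
impulses = rng.random(t.size) < 0.05
noisy[impulses] = rng.choice([-1.0, 1.0], impulses.sum())

# Fit f(t) in an RBF-kernel RKHS with ridge regularization; f is the denoised signal.
model = KernelRidge(kernel="rbf", alpha=0.1, gamma=200.0).fit(t[:, None], noisy)
denoised = model.predict(t[:, None])

print("noisy MSE:   ", np.mean((noisy - clean) ** 2))
print("denoised MSE:", np.mean((denoised - clean) ** 2))
```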


Subject(s)
Algorithms, Artifacts, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Pattern Recognition, Automated/methods, Computer Simulation, Data Interpretation, Statistical, Models, Biological, Models, Statistical, Reproducibility of Results, Sensitivity and Specificity