1.
Neural Netw ; 175: 106295, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38614023

ABSTRACT

Multi-view unsupervised feature selection (MUFS) is an efficient approach to dimensionality reduction of heterogeneous data. However, existing MUFS approaches mostly assign all samples the same weight, so the diversity of the samples is not exploited effectively. Additionally, owing to the presence of various regularizations, the resulting MUFS problems are often non-convex, making it difficult to find optimal solutions. To address these issues, a novel MUFS method named Self-paced Regularized Adaptive Multi-view Unsupervised Feature Selection (SPAMUFS) is proposed. Specifically, the proposed approach first trains the MUFS model with simple samples and gradually learns complex samples by using a self-paced regularizer. l2,p-norm (0
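As a rough illustration of the self-paced idea described above (not the SPAMUFS model itself), the sketch below shows the hard-weighting form of self-paced learning: samples whose current loss falls below an "age" threshold are admitted, and the threshold is relaxed over iterations. The toy reconstruction loss, the projection matrix W, and all parameter values are assumptions made for the example.

```python
import numpy as np

def self_paced_weights(losses, age):
    """Hard self-paced weighting: samples with loss below the age
    threshold are treated as 'easy' and included; others are deferred."""
    return (losses < age).astype(float)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))          # toy single-view data
W = rng.normal(size=(10, 3))            # toy projection (stand-in for the learned model)

age = 0.5                               # self-paced "age" (threshold) parameter
for it in range(10):
    # per-sample reconstruction loss under the current model
    losses = np.sum((X - X @ W @ np.linalg.pinv(W)) ** 2, axis=1)
    v = self_paced_weights(losses, age)  # 0/1 inclusion weights
    # ... update the model on the weighted samples here ...
    age *= 1.3                           # gradually admit harder samples

print(int(v.sum()), "samples admitted at the final age")
```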

Subjects
Algorithms , Unsupervised Machine Learning , Humans , Neural Networks, Computer
2.
Article in English | MEDLINE | ID: mdl-37956013

ABSTRACT

This article investigates a class of systems of nonlinear equations (SNEs). Three distributed neurodynamic models (DNMs), namely a two-layer model (DNM-I) and two single-layer models (DNM-II and DNM-III), are proposed to search for such a system's exact solution or a solution in the least-squares sense. Combining a dynamic positive-definite matrix with the primal-dual method, DNM-I is designed and proved to be globally convergent. To obtain a more concise model, DNM-II is developed based on the dynamic positive-definite matrix, a time-varying gain, and an activation function, and it also enjoys global convergence. To inherit DNM-II's concise structure while improving convergence, DNM-III is proposed with the aid of the time-varying gain and activation function, and this model possesses global fixed-time consensus and convergence. For the smooth case, DNM-III's globally exponential convergence is demonstrated under the Polyak-Lojasiewicz (PL) condition. Moreover, for the nonsmooth case, DNM-III's globally finite-time convergence is proved under the Kurdyka-Lojasiewicz (KL) condition. Finally, the proposed DNMs are applied to quadratic programming (QP), and numerical examples are provided to illustrate the effectiveness and advantages of the proposed models.
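For orientation only, the following sketch integrates a plain centralized gradient flow x' = -gain * J(x)^T F(x), which drives ||F(x)||^2 toward zero for a toy system F(x) = 0. It is not the distributed two-layer or single-layer DNMs of the article, and the example system, gain, and step size are assumptions.

```python
import numpy as np

def F(x):
    # toy system of nonlinear equations F(x) = 0, solved by x = (1, 2)
    return np.array([x[0]**2 + x[1] - 3.0,
                     x[0] + x[1]**2 - 5.0])

def J(x):
    # Jacobian of F
    return np.array([[2 * x[0], 1.0],
                     [1.0, 2 * x[1]]])

x = np.array([0.5, 0.5])
h, gain = 1e-2, 2.0
for _ in range(5000):
    # Euler step of the gradient flow on (1/2)||F(x)||^2
    x = x + h * (-gain * J(x).T @ F(x))

print(x, F(x))   # approaches a least-squares (here exact) solution
```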

3.
Neural Netw ; 165: 971-981, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37454612

ABSTRACT

This paper proposes three novel accelerated inverse-free neurodynamic approaches for solving absolute value equations (AVEs). The first two are finite-time converging approaches, and the third is a fixed-time converging approach. It is shown that the first two neurodynamic approaches converge to the solution of the concerned AVEs in finite time, while, under some mild conditions, the third converges to the solution in fixed time. It is also shown that the settling time of the proposed fixed-time converging approach has a uniform upper bound for all initial conditions, whereas the settling times of the finite-time converging approaches depend on the initial conditions. The proposed neurodynamic approaches have the additional advantage of being robust against bounded vanishing perturbations. The theoretical results are validated by means of a numerical example and an application to boundary value problems.
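A minimal, non-accelerated point of reference for an AVE of the form Ax - |x| = b is an Euler-discretized gradient flow on the squared residual, shown below. It assumes a well-conditioned A (smallest singular value greater than 1, which guarantees a unique solution) and is not the inverse-free finite-time or fixed-time dynamics proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
A = 4.0 * np.eye(n) + 0.1 * rng.normal(size=(n, n))   # sigma_min(A) > 1 => unique AVE solution
x_true = rng.normal(size=n)
b = A @ x_true - np.abs(x_true)                        # AVE: A x - |x| = b

x = np.zeros(n)
h = 0.02
for _ in range(3000):
    r = A @ x - np.abs(x) - b                          # residual
    grad = (A - np.diag(np.sign(x))).T @ r             # (sub)gradient of 0.5 * ||r||^2
    x = x - h * grad                                   # Euler step of the gradient flow

print(np.linalg.norm(x - x_true))                      # close to 0
```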


Subjects
Neural Networks, Computer
4.
Article in English | MEDLINE | ID: mdl-37028079

ABSTRACT

In this work, we study a more realistic and challenging scenario in multiview clustering (MVC), referred to as incomplete MVC (IMVC), where some instances in certain views are missing. The key to IMVC is how to adequately exploit complementary and consistency information under data incompleteness. However, most existing methods address the incompleteness problem at the instance level, and they require sufficient information to perform data recovery. In this work, we develop a new approach to IMVC from the graph propagation perspective. Specifically, a partial graph is used to describe the similarity of samples for incomplete views, so that the issue of missing instances is translated into missing entries of the partial graph. A common graph can then be adaptively learned to self-guide the propagation process by exploiting the consistency information, and the propagated graph of each view is in turn used to refine the common self-guided graph in an iterative manner. Thus, the missing entries can be inferred through graph propagation by exploiting the consistency information across all views. On the other hand, existing approaches focus on the consistency structure only, and the complementary information has not been sufficiently exploited due to the data incompleteness issue. By contrast, under the proposed graph propagation framework, an exclusive regularization term can naturally be adopted to exploit the complementary information in our method. Extensive experiments demonstrate the effectiveness of the proposed method in comparison with state-of-the-art methods. The source code of our method is available at https://github.com/CLiu272/TNNLS-PGP.
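The following toy sketch conveys the propagation idea in a simplified form: missing entries of each view's partial similarity graph are repeatedly filled from a common graph obtained by averaging the views. The function name `propagate`, the averaging rule, and the toy data are assumptions for illustration; the paper's self-guided common graph and exclusive regularization are not reproduced.

```python
import numpy as np

def propagate(graphs, masks, n_iter=20):
    """Fill missing entries (mask == 0) of each view's partial graph from a
    common graph averaged over all views, iterating until stable."""
    graphs = [np.where(m > 0, g, 0.0) for g, m in zip(graphs, masks)]  # init missing entries to 0
    for _ in range(n_iter):
        common = np.mean(graphs, axis=0)              # common graph across views
        for g, m in zip(graphs, masks):
            g[m == 0] = common[m == 0]                # propagate into missing entries only
    return graphs, common

# toy example: 2 views, 6 samples; instance 2 missing in view 0, instance 5 in view 1
rng = np.random.default_rng(0)
n = 6
S = [np.abs(rng.normal(size=(n, n))) for _ in range(2)]
S = [(s + s.T) / 2 for s in S]                        # symmetric similarities
masks = [np.ones((n, n)), np.ones((n, n))]
masks[0][2, :] = masks[0][:, 2] = 0
masks[1][5, :] = masks[1][:, 5] = 0
completed, common = propagate(S, masks)
```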

5.
J Healthc Eng ; 2023: 4387134, 2023.
Article in English | MEDLINE | ID: mdl-36844948

ABSTRACT

In recent years, brain magnetic resonance imaging (MRI) image segmentation has drawn considerable attention. The segmentation result provides a basis for medical diagnosis and directly influences clinical treatment. Nevertheless, MRI images suffer from shortcomings such as noise and grayscale inhomogeneity, and the performance of traditional segmentation algorithms still needs improvement. In this paper, we propose a novel brain MRI image segmentation algorithm based on the fuzzy C-means (FCM) clustering algorithm to improve segmentation accuracy. First, we introduce a multitask learning strategy into FCM to extract information shared among different segmentation tasks, combining the advantages of both techniques: the algorithm can exploit both the shared information among different tasks and the individual information within each task. Then, we design an adaptive task-weight learning mechanism and propose a weighted multitask fuzzy C-means (WMT-FCM) clustering algorithm. Under the adaptive task-weight learning mechanism, each task obtains its optimal weight and achieves better clustering performance. Simulated MRI images from McConnell BrainWeb have been used to evaluate the proposed algorithm. Experimental results demonstrate that the proposed method provides more accurate and stable segmentation results than its competitors on MRI images with various levels of noise and intensity inhomogeneity.
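For context, a plain (single-task, unweighted) fuzzy C-means loop is sketched below; WMT-FCM extends this kind of update with multitask information sharing and adaptive task weights, which are not shown here. The toy two-cluster data and parameters are illustrative assumptions.

```python
import numpy as np

def fcm(X, c, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy C-means (no multitask weighting), for reference only."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)                  # fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]  # membership-weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))                  # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
centers, U = fcm(X, c=2)
print(centers)
```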


Subjects
Brain , Image Processing, Computer-Assisted , Humans , Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods , Fuzzy Logic , Magnetic Resonance Imaging/methods , Algorithms , Cluster Analysis
6.
Neural Netw ; 161: 638-658, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36827961

ABSTRACT

Multi-view clustering is widely used to improve clustering performance. Recently, subspace clustering tensor learning methods based on Markov chains have become an important branch of multi-view clustering. Tensor learning commonly applies tensor low-rank approximation to represent the relationships between data samples. However, most current tensor learning methods have the following shortcomings: local graph information is not taken into account, the relationships between different views are not modeled, and existing tensor low-rank representations use a biased tensor rank function for estimation. Therefore, a nonconvex low-rank tensor approximation with graph and consistent regularizations (NLRTGC) model is proposed for multi-view subspace learning. NLRTGC retains local manifold information through graph regularization and adopts a consistent regularization across views to preserve the block-diagonal structure of the representation matrices. Furthermore, a nonnegative nonconvex low-rank tensor kernel function is used to replace the existing classical tensor nuclear norm via the tensor singular value decomposition (t-SVD), so as to reduce the deviation from the rank. An alternating direction method of multipliers (ADMM), under which the objective function is monotonically non-increasing, is then proposed to solve NLRTGC. Finally, the effectiveness and superiority of NLRTGC are shown through extensive comparative experiments with various state-of-the-art algorithms on noisy and real-world datasets.
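One building block mentioned above, the tensor nuclear norm induced by the t-SVD, can be computed by taking the FFT along the third mode and summing the singular values of the frontal slices in the Fourier domain. The sketch below (with the common 1/n3 scaling and a toy low-tubal-rank tensor) shows only this building block, not the nonconvex kernel function or the ADMM solver of NLRTGC.

```python
import numpy as np

def tensor_nuclear_norm(T):
    """Tensor nuclear norm via t-SVD: FFT along the third mode, then the sum of
    singular values of every frontal slice in the Fourier domain (1/n3 scaling)."""
    n3 = T.shape[2]
    Tf = np.fft.fft(T, axis=2)
    s = 0.0
    for k in range(n3):
        s += np.linalg.svd(Tf[:, :, k], compute_uv=False).sum()
    return s.real / n3

rng = np.random.default_rng(0)
# toy low-tubal-rank tensor built as a t-product of two thin tensors
A = rng.normal(size=(20, 3, 5))
B = rng.normal(size=(3, 20, 5))
T = np.real(np.fft.ifft(np.einsum('ikn,kjn->ijn',
                                  np.fft.fft(A, axis=2),
                                  np.fft.fft(B, axis=2)), axis=2))
print(tensor_nuclear_norm(T))
```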


Subjects
Algorithms , Learning , Cluster Analysis , Markov Chains
7.
IEEE Trans Neural Netw Learn Syst ; 34(8): 4881-4891, 2023 Aug.
Article in English | MEDLINE | ID: mdl-34788223

ABSTRACT

In this article, sparse nonnegative matrix factorization (SNMF) is formulated as a mixed-integer bicriteria optimization problem that minimizes the matrix factorization error and maximizes the sparsity of the factorized matrices, based on an exact binary representation of the l0 matrix norm. The binary constraints of the problem are then equivalently replaced with bilinear constraints to convert the problem into a biconvex problem. The reformulated biconvex problem is finally solved using a two-timescale duplex neurodynamic approach consisting of two recurrent neural networks (RNNs) operating collaboratively at two timescales. A Gaussian score (GS) is defined to integrate the two criteria of factorization error and sparsity of the resulting matrices. The performance of the proposed neurodynamic approach is substantiated in terms of low factorization errors, high sparsity, and high GS on four benchmark datasets.
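As a simple point of comparison rather than the paper's l0-based mixed-integer formulation, the sketch below uses classical multiplicative-update NMF with an l1 penalty on H to illustrate the same error-versus-sparsity trade-off; the penalty weight and the toy data are assumptions.

```python
import numpy as np

def sparse_nmf(X, r, lam=0.1, n_iter=500, seed=0):
    """Multiplicative-update NMF with an l1 penalty on H: minimizes
    ||X - WH||^2 + lam * ||H||_1 over W, H >= 0 (a classical baseline)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W, H = rng.random((m, r)), rng.random((r, n))
    eps = 1e-12
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + lam + eps)     # sparsity-promoting update of H
        W *= (X @ H.T) / (W @ H @ H.T + eps)           # standard update of W
    return W, H

X = np.abs(np.random.default_rng(1).normal(size=(50, 40)))
W, H = sparse_nmf(X, r=5, lam=0.5)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)    # relative factorization error
sparsity = np.mean(H < 1e-4)                           # fraction of (near-)zero entries in H
print(err, sparsity)
```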

8.
IEEE Trans Neural Netw Learn Syst ; 34(10): 7500-7514, 2023 Oct.
Article in English | MEDLINE | ID: mdl-35143401

ABSTRACT

This article proposes a novel fixed-time converging proximal neurodynamic network (FXPNN) based on a proximal operator to deal with equilibrium problems (EPs). A distinctive feature of the proposed FXPNN is its better transient performance compared with most existing proximal neurodynamic networks. It is shown that the FXPNN converges to the solution of the corresponding EP in fixed time under some mild conditions. It is also shown that the settling time of the FXPNN is independent of the initial conditions and that the fixed-time interval can be prescribed, unlike existing results with asymptotic or exponential convergence. Moreover, the proposed FXPNN is applied to solve composition optimization problems (COPs), l1-regularized least-squares problems, mixed variational inequalities (MVIs), and variational inequalities (VIs). It is further shown, in the case of COPs, that fixed-time convergence can be established via the Polyak-Lojasiewicz condition, a relaxation of the more demanding convexity condition. Finally, numerical examples are presented to validate the effectiveness and advantages of the proposed neurodynamic network.
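For the l1-regularized least-squares instance mentioned above, a generic (not fixed-time) proximal neurodynamic flow has the form x' = -x + prox(x - alpha * A^T(Ax - b)); the Euler-discretized sketch below illustrates this structure. The problem data, step sizes, and regularization weight are assumptions, and no fixed-time guarantee is claimed for this simplified version.

```python
import numpy as np

def soft(z, t):
    """Soft-thresholding: proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

rng = np.random.default_rng(0)
m, n, k = 60, 120, 8
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
b = A @ x_true

lam = 0.01
alpha = 1.0 / np.linalg.norm(A, 2) ** 2                 # gradient step within 1/L
x, h = np.zeros(n), 0.5                                 # Euler step of the flow
for _ in range(3000):
    x = x + h * (-x + soft(x - alpha * A.T @ (A @ x - b), alpha * lam))

print(np.linalg.norm(x - x_true))                       # approximately recovers the sparse signal
```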

9.
Neural Netw ; 153: 399-410, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35797801

ABSTRACT

This paper addresses portfolio selection based on neurodynamic optimization. The portfolio selection problem is formulated as a biconvex optimization problem with a variable weight in the Markowitz risk-return framework. In addition, the cardinality-constrained portfolio selection problem is formulated as a mixed-integer optimization problem and reformulated as a biconvex optimization problem. A two-timescale duplex neurodynamic approach is customized and applied to solving the reformulated portfolio optimization problems. In this approach, two recurrent neural networks operating at two timescales are employed for local searches, and their neuronal states are reinitialized upon local convergence using a particle swarm optimization rule to escape from local optima toward global ones. Experimental results on four world stock-market datasets demonstrate the superior performance of the neurodynamic optimization approach over three baselines in terms of two major risk-adjusted performance criteria and portfolio returns.
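A minimal baseline for the unconstrained (no-cardinality) Markowitz step is projected gradient descent on the risk-return objective over the long-only simplex, sketched below; the biconvex variable-weight model and the duplex neurodynamic solver of the paper are not reproduced. The return data and the risk-aversion parameter are assumptions.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex {w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

rng = np.random.default_rng(0)
n = 8
R = rng.normal(0.0005, 0.01, size=(500, n))               # toy daily returns
mu, Sigma = R.mean(axis=0), np.cov(R.T)

risk_aversion = 10.0
step = 1.0 / (2 * risk_aversion * np.linalg.norm(Sigma, 2))  # 1 / Lipschitz constant
w = np.full(n, 1.0 / n)                                   # long-only, fully invested weights
for _ in range(500):
    grad = 2 * risk_aversion * Sigma @ w - mu             # gradient of lam * w'Sw - mu'w
    w = project_simplex(w - step * grad)

print(w.round(3))
```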


Subjects
Algorithms , Neural Networks, Computer , Computer Simulation , Neurons
10.
Neural Netw ; 154: 255-269, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35908375

ABSTRACT

In this paper, we formulate sparse signal reconstruction as a mixed-integer problem and reformulate it as a global optimization problem with a surrogate objective function subject to underdetermined linear equations. We propose a sparse signal reconstruction method based on collaborative neurodynamic optimization, with multiple recurrent neural networks performing scattered searches and a particle swarm optimization rule for repeated repositioning. Experimental results demonstrate that the proposed approach outperforms ten state-of-the-art algorithms for sparse signal reconstruction.
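As a classical baseline for the same reconstruction task (not the collaborative neurodynamic method), orthogonal matching pursuit greedily builds the support and refits on it, as sketched below with assumed problem sizes.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: a greedy baseline for reconstructing a
    k-sparse signal from y = A x."""
    r, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))       # most correlated atom
        As = A[:, support]
        coef, *_ = np.linalg.lstsq(As, y, rcond=None)          # refit on the current support
        r = y - As @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
m, n, k = 50, 200, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
x_hat = omp(A, A @ x_true, k)
print(np.linalg.norm(x_hat - x_true))
```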


Subjects
Algorithms , Neural Networks, Computer , Computer Simulation , Language
11.
Neural Netw ; 142: 180-191, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34020085

ABSTRACT

Feature selection is a crucial step in data processing and machine learning. While many greedy and sequential feature selection approaches are available, a holistic neurodynamic approach to supervised feature selection was recently developed via fractional programming, minimizing feature redundancy and maximizing relevance simultaneously. Because the gradient of the fractional objective function is itself fractional, alternative problem formulations are desirable to obviate this complexity. In this paper, the fractional programming formulation is equivalently reformulated as bilevel and bilinear programming problems without using any fractional function. Two two-timescale projection neural networks are adapted for solving the reformulated problems. Experimental results on six benchmark datasets demonstrate the global convergence and high classification performance of the proposed neurodynamic approaches in comparison with six mainstream feature selection approaches.
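The fractional objective in question is a ratio of relevance to redundancy over a candidate feature subset. The toy sketch below scores subsets with a correlation-based version of that ratio and runs a greedy forward selection; the function names, the correlation-based proxies, and the greedy strategy are illustrative assumptions, not the bilevel/bilinear neurodynamic reformulation.

```python
import numpy as np

def frac_score(C_fy, C_ff, S):
    """Fractional objective for a feature subset S: total relevance to the label
    divided by average pairwise redundancy (correlation-based toy version)."""
    S = list(S)
    relevance = np.abs(C_fy[S]).sum()
    redundancy = np.abs(C_ff[np.ix_(S, S)]).mean()
    return relevance / (redundancy + 1e-12)

rng = np.random.default_rng(0)
n, d = 300, 15
X = rng.normal(size=(n, d))
y = X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.normal(size=n)     # labels depend on features 0 and 3

C = np.corrcoef(np.column_stack([X, y]).T)
C_ff, C_fy = C[:d, :d], C[:d, d]

selected, pool = [], set(range(d))
for _ in range(5):                                          # greedy forward selection
    best = max(pool, key=lambda j: frac_score(C_fy, C_ff, selected + [j]))
    selected.append(best)
    pool.remove(best)
print(selected)                                             # features 0 and 3 are picked early
```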


Subjects
Algorithms , Research Design , Computer Simulation , Machine Learning , Neural Networks, Computer
12.
IEEE Trans Neural Netw Learn Syst ; 32(1): 36-48, 2021 Jan.
Article in English | MEDLINE | ID: mdl-32149698

ABSTRACT

This article presents a two-timescale duplex neurodynamic approach to mixed-integer optimization, based on a biconvex optimization problem reformulation with additional bilinear equality or inequality constraints. The proposed approach employs two recurrent neural networks operating concurrently at two timescales. In addition, particle swarm optimization is used to update the initial neuronal states iteratively to escape from local minima toward better initial states. In spite of its minimal system complexity, the approach is proven to be almost surely convergent to optimal solutions. Its superior performance is substantiated via solving five benchmark problems.
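The overall two-level scheme (local searches whose initial states are repeatedly repositioned by a particle swarm rule) can be imitated loosely as below, with plain gradient descent standing in for each neurodynamic model and the Rastrigin function as an assumed nonconvex test problem. This is a schematic analogue, not the almost-surely-convergent method of the article.

```python
import numpy as np

def local_search(f, grad, x0, step=0.002, n_iter=200):
    """Stand-in for one neurodynamic model: gradient descent to a local minimum."""
    x = x0.copy()
    for _ in range(n_iter):
        x -= step * grad(x)
    return x

# nonconvex test function (Rastrigin) and its gradient
f = lambda x: 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
grad = lambda x: 2 * x + 20 * np.pi * np.sin(2 * np.pi * x)

rng = np.random.default_rng(0)
n_particles, dim = 10, 2
pos = rng.uniform(-5, 5, size=(n_particles, dim))            # initial states of the searchers
vel = np.zeros_like(pos)
refined = np.array([local_search(f, grad, p) for p in pos])
pbest, pbest_val = refined.copy(), np.array([f(x) for x in refined])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(30):                                           # PSO rule repositions the initial states
    for i in range(n_particles):
        xi = local_search(f, grad, pos[i])                    # local convergence of each searcher
        if f(xi) < pbest_val[i]:
            pbest[i], pbest_val[i] = xi, f(xi)
    gbest = pbest[np.argmin(pbest_val)].copy()
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel

print(gbest, f(gbest))                                        # best solution found by the search
```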


Subjects
Neural Networks, Computer , Algorithms , Benchmarking , Computer Simulation , Linear Models , Problem Solving
13.
IEEE Trans Neural Netw Learn Syst ; 31(4): 1145-1154, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31226092

ABSTRACT

This paper addresses task assignment (TA) for multivehicle systems. Multivehicle TA is formulated first as a combinatorial optimization problem and then as a global optimization problem. To fulfill heterogeneous tasks, cooperation among heterogeneous vehicles is incorporated in the problem formulations. A collaborative neurodynamic optimization approach is developed for solving the TA problems. Experimental results on four types of TA problems are discussed to substantiate the efficacy of the approach.
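For the homogeneous one-vehicle-per-task special case, the assignment can be solved exactly with the Hungarian method, as in the short baseline below (using SciPy's linear_sum_assignment); the heterogeneous cooperative formulations and the collaborative neurodynamic solver of the paper go beyond this.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_vehicles, n_tasks = 5, 5
vehicles = rng.uniform(0, 10, size=(n_vehicles, 2))           # vehicle positions
tasks = rng.uniform(0, 10, size=(n_tasks, 2))                 # task locations

# travel-distance cost of assigning each vehicle to each task
cost = np.linalg.norm(vehicles[:, None, :] - tasks[None, :, :], axis=2)
rows, cols = linear_sum_assignment(cost)                      # optimal one-to-one assignment
print(list(zip(rows, cols)), cost[rows, cols].sum())
```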

14.
Neural Netw ; 114: 15-27, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30831379

ABSTRACT

In this paper, a collaborative neurodynamic optimization approach is proposed for global and combinatorial optimization. First, a combinatorial optimization problem is reformulated as a global optimization problem. Second, a neurodynamic optimization model based on an augmented Lagrangian function is proposed, and its states are proven to be asymptotically stable at a strict local minimum in the presence of nonconvexity in the objective function or constraints. In addition, multiple neurodynamic optimization models are employed to search for global optimal solutions collaboratively, and particle swarm optimization (PSO) is used to optimize their initial states. The proposed approach is shown to be globally convergent to global optimal solutions, as substantiated by solving benchmark problems.
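A single neurodynamic model of the kind described can be pictured as a primal-dual gradient flow on an augmented Lagrangian; the Euler-discretized sketch below applies it to an assumed toy problem with a nonconvex equality constraint. The penalty, dual gain, and step size are assumptions, and the collaborative/PSO layer is omitted.

```python
import numpy as np

f  = lambda x: x[0]**2 + x[1]**2                  # objective
gf = lambda x: 2 * x                              # its gradient
h  = lambda x: x[0] * x[1] - 1.0                  # nonconvex equality constraint h(x) = 0
gh = lambda x: np.array([x[1], x[0]])             # gradient of the constraint

rho, gamma, dt = 10.0, 5.0, 0.005                 # penalty, dual gain, Euler step
x, lam = np.array([2.0, 2.0]), 0.0
for _ in range(40000):
    # primal gradient flow on the augmented Lagrangian f + lam*h + (rho/2)*h^2
    x = x - dt * (gf(x) + (lam + rho * h(x)) * gh(x))
    # dual ascent flow on the multiplier
    lam = lam + dt * gamma * h(x)

print(x, h(x))                                     # approaches (1, 1) with h(x) near 0
```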


Subjects
Neural Networks, Computer , Crowding
15.
IEEE Trans Neural Netw Learn Syst ; 30(8): 2503-2514, 2019 Aug.
Article in English | MEDLINE | ID: mdl-30602424

ABSTRACT

This paper presents a two-timescale duplex neurodynamic system for constrained biconvex optimization. The system consists of two recurrent neural networks (RNNs) operating collaboratively at two timescales; operating on two timescales allows the RNNs to avoid instability. In addition, based on the convergent states of the two RNNs, particle swarm optimization is used to optimize the initial states of the RNNs to avoid local minima. It is proven that the proposed system is globally convergent to the global optimum with probability one. The performance of the two-timescale duplex neurodynamic system is substantiated on benchmark problems. Furthermore, the proposed system is applied to L1-constrained nonnegative matrix factorization.


Subjects
Computer Simulation , Neural Networks, Computer , Pattern Recognition, Automated/methods , Algorithms , Humans , Time Factors
16.
Neural Netw ; 103: 63-71, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29642020

ABSTRACT

This paper presents an algorithm for nonnegative matrix factorization based on a biconvex optimization formulation. First, a discrete-time projection neural network is introduced. An upper bound of its step size is derived to guarantee the stability of the neural network. Then, an algorithm is proposed based on the discrete-time projection neural network and a backtracking step-size adaptation. The proposed algorithm is proven to be able to reduce the objective function value iteratively until attaining a partial optimum of the formulated biconvex optimization problem. Experimental results based on various data sets are presented to substantiate the efficacy of the algorithm.
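In the same spirit as the abstract (though not necessarily with the paper's exact update rule or step-size bound), a projected-gradient NMF loop with a simple backtracking rule looks like the sketch below; the toy data and tolerances are assumptions.

```python
import numpy as np

def nmf_pgd(X, r, n_iter=200, seed=0):
    """Projected-gradient NMF with a simple backtracking step-size rule."""
    rng = np.random.default_rng(seed)
    W, H = rng.random((X.shape[0], r)), rng.random((r, X.shape[1]))
    f = lambda W, H: 0.5 * np.linalg.norm(X - W @ H) ** 2
    step = 1.0
    for _ in range(n_iter):
        gW, gH = (W @ H - X) @ H.T, W.T @ (W @ H - X)      # gradients of the factorization error
        while True:
            Wn = np.maximum(W - step * gW, 0.0)             # projection onto the nonnegative orthant
            Hn = np.maximum(H - step * gH, 0.0)
            if f(Wn, Hn) <= f(W, H) or step < 1e-12:
                break
            step *= 0.5                                     # backtracking step-size adaptation
        W, H = Wn, Hn
        step *= 1.2                                         # allow the step to grow again
    return W, H

X = np.abs(np.random.default_rng(1).normal(size=(40, 30)))
W, H = nmf_pgd(X, r=5)
print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```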


Subjects
Algorithms , Databases, Factual , Neural Networks, Computer , Databases, Factual/statistics & numerical data , Pattern Recognition, Automated/methods , Pattern Recognition, Automated/trends , Photic Stimulation/methods , Time Factors
17.
Neural Netw ; 80: 110-7, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27203554

ABSTRACT

In this paper, a recurrent neural network (RNN) is proposed for solving the adaptive beamforming problem. To minimize sidelobe interference, the problem is formulated as a convex optimization problem based on a linear array model. The RNN is designed to optimize the system's weight values within a feasible region derived from the array's state and the plane wave's information. The new algorithm is proven to be stable and to converge to the optimal solution in the sense of Lyapunov. To verify the new algorithm's performance, we apply it to beamforming under array mismatch. Compared with other optimization algorithms, simulations suggest that the RNN has a strong ability to find exact solutions under large-scale constraints.
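As a classical reference point (not the proposed RNN), minimum-variance distortionless-response beamforming solves the same kind of convex problem in closed form: minimize w^H R w subject to w^H a = 1, giving w = R^-1 a / (a^H R^-1 a). The array geometry, directions, and noise levels below are assumptions.

```python
import numpy as np

def steering(n_elems, theta_deg, d=0.5):
    """Uniform linear array steering vector (element spacing d in wavelengths)."""
    k = np.arange(n_elems)
    return np.exp(-2j * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))

n = 10
a_sig = steering(n, 0.0)                     # desired look direction
a_int = steering(n, 30.0)                    # interferer direction

# sample covariance of interference plus noise
rng = np.random.default_rng(0)
snaps = 500
s_int = rng.normal(size=snaps) + 1j * rng.normal(size=snaps)
noise = (rng.normal(size=(n, snaps)) + 1j * rng.normal(size=(n, snaps))) * 0.1
X = np.outer(a_int, s_int) + noise
R = X @ X.conj().T / snaps

# MVDR weights: minimize w^H R w subject to w^H a_sig = 1
w = np.linalg.solve(R, a_sig)
w /= a_sig.conj() @ w
print(20 * np.log10(np.abs(w.conj() @ a_int)))   # deep null toward the interferer (dB)
```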


Subjects
Neural Networks, Computer , Problem Solving , Algorithms