1.
Behav Res Methods; 55(4): 1734-1757, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35768745

ABSTRACT

Instance-based learning theory (IBLT) is a comprehensive account of how humans make decisions from experience during dynamic tasks. Since it was first proposed almost two decades ago, multiple computational models have been constructed based on IBLT (i.e., IBL models). These models have proven very successful in explaining and predicting human decisions in multiple decision-making contexts. However, as IBLT has evolved, the initial description of the theory has become less precise, and it is unclear how its demonstration can be expanded to more complex, dynamic, and multi-agent environments. This paper presents an updated version of the current theoretical components of IBLT in a comprehensive and precise form. It also provides an advanced implementation of the full set of theoretical mechanisms, SpeedyIBL, to unlock the capabilities of IBLT for a diverse taxonomy of individual and multi-agent decision-making problems. SpeedyIBL addresses a practical computational issue in past implementations of IBL models, the curse of exponential growth, which emerges from memory-based tabular computations: as observations accumulate over time, the memory of instances grows exponentially, leading directly to an exponential slowdown of computation time. SpeedyIBL therefore leverages parallel computation with vectorization to speed up the execution of IBL models. We evaluate the robustness of SpeedyIBL against an existing implementation of IBLT in decision games of increasing complexity. The results not only demonstrate the applicability of IBLT across a wide range of decision-making tasks, but also highlight the improvement of SpeedyIBL over its prior implementation as the complexity of decision features and the number of agents increase. The library is open-sourced for the use of the broad research community.
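To make the memory-based computation concrete, here is a minimal NumPy sketch of the blending mechanism underlying IBL models (base-level activation with power-law decay, Boltzmann retrieval probabilities, blended value). The function name, signature, and parameter defaults are illustrative assumptions, not the SpeedyIBL API; the point is that activations over stored instances vectorize naturally, which is the kind of speedup the abstract describes.

import numpy as np

def blended_value(outcomes, occurrence_times, t_now, decay=0.5, sigma=0.25, rng=None):
    """Illustrative IBL blending step (a sketch, not the SpeedyIBL API).

    outcomes:         (n,) array, the utility stored with each instance
    occurrence_times: length-n list of arrays, past timestamps per instance
    """
    rng = np.random.default_rng() if rng is None else rng
    outcomes = np.asarray(outcomes, dtype=float)
    # Base-level activation: log of summed power-law-decayed recencies,
    # vectorized over each instance's occurrences
    base = np.array([
        np.log(np.sum((t_now - np.asarray(times, dtype=float)) ** -decay))
        for times in occurrence_times
    ])
    # Logistic activation noise, as in ACT-R / IBLT
    u = rng.uniform(size=base.shape)
    activation = base + sigma * np.log((1.0 - u) / u)
    # Retrieval probabilities: Boltzmann softmax, temperature sqrt(2)*sigma
    tau = np.sqrt(2.0) * sigma
    w = np.exp((activation - activation.max()) / tau)  # stabilized softmax
    p = w / w.sum()
    # Blended value: probability-weighted average of stored outcomes
    return float(p @ outcomes)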

2.
Neural Netw; 132: 220-231, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32919312

ABSTRACT

We consider the problem of minimizing a large sum of DC (difference of convex) functions, which appears in several areas, especially stochastic optimization and machine learning. Two DCA (DC Algorithm) based algorithms are proposed: stochastic DCA and inexact stochastic DCA. We prove that both algorithms converge to a critical point with probability one. Furthermore, we develop our stochastic DCA for solving an important problem in multi-task learning, namely group variable selection in multiclass logistic regression. The corresponding stochastic DCA is very inexpensive: all computations are explicit. Numerical experiments on several benchmark and synthetic datasets illustrate the efficiency of our algorithms and their superiority over existing methods with respect to classification accuracy, sparsity of the solution, and running time.


Subjects
Algorithms, Machine Learning, Logistic Models, Stochastic Processes
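As a concrete illustration of the general scheme (a toy sketch under stated assumptions, not the paper's algorithm for logistic regression), the following NumPy code runs a SAG-style stochastic DCA on a synthetic large-sum DC objective whose convex subproblem happens to have a closed form. The objective and all names are invented for illustration.

import numpy as np

def stochastic_dca(a, b, batch=32, epochs=50, rng=None):
    """SAG-style stochastic DCA sketch on a toy large-sum DC objective.

    f(x) = (1/n) * sum_i [ 0.5*||x - a_i||^2  -  ||x - b_i||_1 ]
                          g_i (convex)           h_i (convex)
    Each epoch refreshes the h_i-subgradients of one mini-batch, keeps
    their running average y_bar, then solves the convex subproblem
        min_x (1/n) * sum_i g_i(x) - <y_bar, x>,
    which here reduces to the closed form x = a_bar + y_bar.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, d = a.shape
    x = np.zeros(d)
    a_bar = a.mean(axis=0)
    Y = np.sign(x - b)                 # per-sample subgradients of h_i at x
    y_bar = Y.mean(axis=0)
    for _ in range(epochs):
        idx = rng.choice(n, size=min(batch, n), replace=False)
        new = np.sign(x - b[idx])      # refresh subgradients on the batch
        y_bar += (new - Y[idx]).sum(axis=0) / n
        Y[idx] = new
        x = a_bar + y_bar              # closed-form convex subproblem
    return x

Maintaining the per-sample subgradient table, as in SAG, keeps the linearization of the full sum current at mini-batch cost per iteration; the paper's application would replace the closed-form step with a convex solver for the regularized logistic subproblem.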
3.
Neural Netw; 118: 220-234, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31319320

ABSTRACT

The need to select groups of variables arises in many statistical modeling problems and applications. In this paper, we consider the ℓp,0-norm regularization for enforcing group sparsity and investigate a DC (difference of convex functions) approximation approach for solving the ℓp,0-norm regularization problem. We show that, with suitable parameters, the original and approximate problems are equivalent. Considering two equivalent formulations of the approximate problem, we develop DC programming and DCA (DC Algorithm) schemes for solving them. As an application, we implement the proposed algorithms for group variable selection in the optimal scoring problem. Sparsity is obtained by using the ℓp,0-regularization, which selects the same features in all discriminant vectors. The resulting sparse discriminant vectors provide a more interpretable low-dimensional representation of the data. Experimental results on both simulated and real datasets demonstrate the efficiency of the proposed algorithms.


Subjects
Algorithms, Factual Databases/standards
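The abstract does not spell out which DC approximation is used; a standard choice in this literature (an assumption here, not a claim about this particular paper) is the capped-ℓ1 surrogate of the step function. Writing J_1, ..., J_k for the group index sets, the ℓp,0 penalty and its DC surrogate read, in LaTeX:

\[
\|x\|_{p,0} = \#\{\, j : \|x_{J_j}\|_p \neq 0 \,\} = \sum_{j=1}^{k} s\big(\|x_{J_j}\|_p\big),
\qquad
s(t) = \begin{cases} 1 & t \neq 0 \\ 0 & t = 0 \end{cases}
\]

\[
s(t) \approx \varphi_\theta(t) = \min(\theta t,\, 1)
= \underbrace{\theta t}_{\text{convex}} \;-\; \underbrace{\max(\theta t - 1,\, 0)}_{\text{convex}},
\qquad t \ge 0 .
\]

Each outer DCA step then linearizes the concave part and leaves a weighted group-lasso-type convex subproblem; for θ large enough the surrogate shares its solutions with the original problem, consistent with the "suitable parameters" equivalence stated in the abstract.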
4.
Neural Comput; 29(11): 3040-3077, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28957024

ABSTRACT

This letter proposes a novel approach using the ℓ0-norm regularization for the sparse covariance matrix estimation (SCME) problem. The objective function of the SCME problem is composed of a nonconvex part and the ℓ0 term, which is discontinuous and difficult to tackle. Appropriate DC (difference of convex functions) approximations of the ℓ0-norm are used, resulting in approximate SCME problems that are still nonconvex. DC programming and DCA (DC Algorithm), powerful tools in the nonconvex programming framework, are investigated. Two DC formulations are proposed and the corresponding DCA schemes are developed. Two applications of the SCME problem are considered: classification via sparse quadratic discriminant analysis and portfolio optimization. Careful empirical experiments on simulated and real datasets study the performance of the proposed algorithms. Numerical results show their efficiency and superiority over seven state-of-the-art methods.
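To illustrate the DCA mechanics only, here is a deliberately simplified sketch: it fits the sample covariance in Frobenius norm rather than using the Gaussian-likelihood objective the letter works with, and every name and parameter is an invented assumption. With the capped-ℓ1 surrogate of ℓ0, the convex subproblem is an entrywise soft-thresholding step with a closed-form solution.

import numpy as np

def dca_sparse_cov(S, lam=0.05, theta=10.0, iters=30):
    """Toy DCA sketch that sparsifies a sample covariance S.

    Simplified objective (Frobenius fit, not the letter's likelihood model):
        min_Sigma 0.5*||Sigma - S||_F^2 + lam * sum_{i != j} phi(|Sigma_ij|),
    with the capped-l1 surrogate
        phi(t) = min(theta*t, 1) = theta*t - max(theta*t - 1, 0)   (DC).
    DCA linearizes the concave part; the convex subproblem solves entrywise.
    """
    off = ~np.eye(S.shape[0], dtype=bool)   # penalize off-diagonal entries only
    Sigma = S.copy()
    for _ in range(iters):
        # Subgradient of the concave part at the current iterate
        y = lam * theta * np.sign(Sigma) * (np.abs(Sigma) > 1.0 / theta) * off
        # Convex subproblem per entry: min 0.5*(s - S_ij)^2 + lam*theta*|s| - y_ij*s
        c = lam * theta * off               # l1 weight, zero on the diagonal
        Sigma = np.sign(S + y) * np.maximum(np.abs(S + y) - c, 0.0)
    # Note: positive semidefiniteness is not enforced in this sketch.
    return Sigma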
