Results 1 - 5 of 5
1.
Front Artif Intell; 6: 1151391, 2023.
Article in English | MEDLINE | ID: mdl-37215064

ABSTRACT

The education sector has benefited enormously from integrating digital-technology-driven tools and platforms. In recent years, artificial-intelligence-based methods have come to be seen as the next generation of technology that can enhance the experience of education for students, teachers, and administrative staff alike. The concurrent boom in the necessary infrastructure, digitized data, and general social awareness has propelled these efforts further. In this review article, we investigate how artificial intelligence, machine learning, and deep learning methods are being used to support the education process. We do this through the lens of a novel categorization approach: we consider the involvement of AI-driven methods in the education process in its entirety, from student admissions, course scheduling, and content generation in the proactive planning phase to knowledge delivery, performance assessment, and outcome prediction in the reactive execution phase. We outline and analyze the major research directions under proactive and reactive engagement of AI in education using a representative group of 195 original research articles published in the past two decades (2003-2022). We discuss the paradigm shifts in the solution approaches proposed, particularly with respect to the choice of data and algorithms used over this time. We further discuss how the COVID-19 pandemic influenced this field of active development, as well as the existing infrastructural challenges and ethical concerns pertaining to the global adoption of artificial intelligence for education.

2.
Front Neurosci; 15: 715451, 2021.
Article in English | MEDLINE | ID: mdl-34393719

ABSTRACT

Growth-transform (GT) neurons and their population models allow independent control over the spiking statistics and the transient population dynamics while optimizing a physically plausible, distributed energy functional involving continuous-valued neural variables. In this paper, we describe a backpropagation-less learning approach for training a network of spiking GT neurons by enforcing sparsity constraints on the overall network spiking activity. The key features of the model and the proposed learning framework are: (a) spike responses are generated as a result of constraint violation and hence can be viewed as Lagrangian parameters; (b) the optimal parameters for a given task can be learned using neurally relevant local learning rules and in an online manner; (c) the network optimizes itself to encode the solution with as few spikes as possible (sparsity); (d) the network optimizes itself to operate at a solution with the maximum dynamic range and away from saturation; and (e) the framework is flexible enough to incorporate additional structural and connectivity constraints on the network. As a result, the proposed formulation is attractive for designing neuromorphic tinyML systems that are constrained in energy, resources, and network structure. In this paper, we show how the approach can be used for unsupervised and supervised learning such that minimizing a training error is equivalent to minimizing the overall spiking activity across the network. We then build on this framework to implement three different multi-layer spiking network architectures with progressively increasing flexibility in training and, consequently, sparsity. We demonstrate the applicability of the proposed algorithm for resource-efficient learning on a publicly available machine olfaction dataset with unique challenges such as sensor drift and a wide range of stimulus concentrations. In all of these case studies, we show that a GT network trained with the proposed learning approach minimizes the network-level spiking activity while producing classification accuracies comparable to those of standard approaches on the same dataset.
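
A minimal sketch of the central idea, assuming a rate-based readout and a delta-rule-style update; the rule and all names below are illustrative assumptions, not the paper's algorithm. It folds a spiking-activity penalty into a purely local, backpropagation-free weight update, so that reducing training error and reducing overall spiking become a single objective:

    import numpy as np

    # Toy local learning rule: the error term and the sparsity term both
    # depend only on quantities available at the synapse (pre- and
    # post-synaptic rates), so no backpropagated gradients are needed.
    def local_update(W, pre_rate, post_rate, target_rate,
                     lr=0.05, sparsity=0.01):
        err = target_rate - post_rate                   # local error signal
        dW = np.outer(err, pre_rate)                    # delta-rule term
        dW -= sparsity * np.outer(post_rate, pre_rate)  # penalize spiking
        return W + lr * dW

    rng = np.random.default_rng(0)
    W = 0.1 * rng.standard_normal((3, 5))
    pre = rng.random(5)                    # presynaptic firing rates
    target = np.array([0.2, 0.0, 0.6])     # desired output rates
    for _ in range(100):
        post = np.clip(W @ pre, 0.0, 1.0)  # rectified rate response
        W = local_update(W, pre, post, target)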

3.
Front Neurosci; 14: 425, 2020.
Article in English | MEDLINE | ID: mdl-32477051

ABSTRACT

In neuromorphic engineering, neural populations are generally modeled in a bottom-up manner, where individual neuron models are connected through synapses to form large-scale spiking networks. Alternatively, a top-down approach treats the process of spike generation and the neural representation of excitation in the context of minimizing some measure of network energy. However, such approaches usually define the energy functional in terms of some statistical measure of spiking activity (e.g., firing rates), which does not allow independent control and optimization of neurodynamical parameters. In this paper, we introduce a new spiking neuron and population model in which the dynamical and spiking responses of neurons can be derived directly from a network objective, or energy functional, of continuous-valued neural variables such as the membrane potential. The key advantage of the model is that it allows independent control over three neurodynamical properties: (a) control over the steady-state population dynamics that encode the minimum of an exact network energy functional; (b) control over the shape of the action potentials generated by individual neurons in the network without affecting the network minimum; and (c) control over spiking statistics and transient population dynamics without affecting the network minimum or the shape of action potentials. At the core of the proposed model are different variants of Growth Transform dynamical systems that produce stable and interpretable population dynamics, irrespective of the network size and the type of neuronal connectivity (inhibitory or excitatory). We present several examples in which the proposed model has been configured to produce different types of single-neuron and population dynamics. In one such example, the network adapts so that it encodes the steady-state solution using a reduced number of spikes upon convergence to the optimal solution. We use this network to construct a spiking associative memory that uses fewer spikes than conventional architectures, while maintaining high recall accuracy at high memory loads.
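
A minimal numerical sketch of a growth-transform update of this kind, under stated assumptions: membrane potentials v_i are bounded to [-1, 1], the network energy is taken to be the quadratic H(v) = 0.5 v'Qv - b'v, and lam is a constant assumed large enough to keep the map inside the bounds. This is an illustration, not the authors' code:

    import numpy as np

    # One growth-transform step for bounded variables: an elementwise
    # multiplicative fixed-point map that moves each v_i along the descent
    # direction of H while keeping |v_i| <= 1; interior fixed points
    # satisfy the stationarity condition Qv = b.
    def gt_step(v, Q, b, lam):
        g = b - Q @ v                         # -dH/dv
        return (g + lam * v) / (lam + g * v)  # bounded Mobius-type map

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4))
    Q = A @ A.T + np.eye(4)          # positive-definite coupling
    b = 0.5 * rng.standard_normal(4)
    v = np.zeros(4)
    for _ in range(500):
        v = gt_step(v, Q, b, lam=25.0)  # lam assumed to dominate |g|
    # v now approximates the minimizer of H subject to |v_i| <= 1

In the paper's model, spiking arises from how the potential is handled at a threshold; the sketch above only shows the bounded energy-descent core.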

4.
IEEE Trans Neural Netw Learn Syst; 29(5): 1961-1974, May 2018.
Article in English | MEDLINE | ID: mdl-28436898

ABSTRACT

Growth transformations constitute a class of fixed-point multiplicative update algorithms that were originally proposed for optimizing polynomial and rational functions over a domain of probability measures. In this paper, we extend this framework to the domain of bounded real variables, which can be applied to optimizing the dual cost function of a generic support vector machine (SVM). The approach can therefore be used not only to train traditional soft-margin binary SVMs, one-class SVMs, and probabilistic SVMs, but also to design novel variants of SVMs with different types of convex and quasi-convex loss functions. We propose an efficient training algorithm based on polynomial growth transforms, and compare and contrast the properties of different SVM variants using several synthetic and benchmark datasets. Preliminary experiments show that the proposed multiplicative update algorithm is more scalable and yields better convergence than standard quadratic and nonlinear programming solvers. While the formulation and the underlying algorithms have been validated here only for SVM-based learning, the proposed approach is general and can be applied to a wide variety of optimization problems and statistical learning models.
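
For concreteness, here is a sketch of the classical Baum-Eagon growth transform that this framework generalizes; the quadratic objective below is a toy stand-in chosen for illustration, not the SVM dual cost:

    import numpy as np

    # Baum-Eagon growth transform: a multiplicative fixed-point update
    # that monotonically increases a polynomial with nonnegative
    # coefficients, here F(p) = p^T Q p, over the probability simplex.
    def growth_transform_step(p, Q):
        w = p * (Q @ p)     # elementwise p_i * dF/dp_i (up to a factor)
        return w / w.sum()  # renormalize: the iterate stays on the simplex

    rng = np.random.default_rng(2)
    Q = rng.random((5, 5))
    Q = 0.5 * (Q + Q.T)                  # symmetric, nonnegative entries
    p = np.full(5, 0.2)                  # start at the simplex centre
    for _ in range(200):
        p = growth_transform_step(p, Q)
    # p converges to a maximizer of F on the simplex; F never decreases

The paper's extension replaces the simplex with box-bounded real variables, which is what makes SVM dual variables reachable by updates of this multiplicative form.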

5.
IEEE Trans Neural Netw Learn Syst; 29(6): 2379-2391, June 2018.
Article in English | MEDLINE | ID: mdl-28463206

ABSTRACT

This paper investigates the dynamical properties of a network of neurons, each of which implements an asynchronous mapping based on polynomial growth transforms. In the first part of the paper, we present a geometric approach for visualizing the dynamics of the network, in which each neuron traverses a trajectory in a dual optimization space while the network itself traverses a trajectory in an equivalent primal optimization space. We show that as the network learns to solve basic classification tasks, different choices of primal-dual mapping produce unique but interpretable neural dynamics, such as noise shaping, spiking, and bursting. While the proposed framework is general, in this paper we demonstrate its use for designing support vector machines (SVMs) that exhibit noise-shaping properties similar to those of sigma-delta modulators, and SVMs that learn to encode information using spikes and bursts. We demonstrate that the emergent switching, spiking, and burst dynamics produced by each neuron encode its respective margin of separation from a classification hyperplane whose parameters are encoded by the network population dynamics. We believe that the proposed growth transform neuron model and the underlying geometric framework could serve as an important tool for connecting well-established machine learning algorithms, such as SVMs, to neuromorphic principles such as spiking, bursting, population encoding, and noise shaping.
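
As a self-contained illustration of the noise-shaping behavior mentioned here, the sketch below implements a generic first-order sigma-delta encoder (not the paper's neuron model): an analog value, standing in for a neuron's margin, becomes a binary spike train whose time average tracks the value while quantization error is pushed to high frequencies:

    import numpy as np

    # First-order sigma-delta encoding of a value x in [0, 1]: integrate
    # the input, emit a spike when the accumulator crosses threshold, and
    # subtract each emitted spike so the running spike rate tracks x.
    def sigma_delta(x, n_steps=1000):
        acc, spikes = 0.0, []
        for _ in range(n_steps):
            acc += x
            s = 1.0 if acc >= 0.5 else 0.0
            acc -= s
            spikes.append(s)
        return np.array(spikes)

    train = sigma_delta(0.3)
    print(train[:20].astype(int), train.mean())  # mean rate close to 0.3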
