Results 1 - 16 of 16
1.
Neural Netw ; 168: 484-495, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37806141

ABSTRACT

Neurons are the fundamental units of neural networks. In this paper, we propose a method for explaining neural networks by visualizing the learning process of neurons. For a trained neural network, the proposed method obtains the features learned by each neuron and displays the features in a human-understandable form. The features learned by different neurons are combined to analyze the working mechanism of different neural network models. The method is applicable to neural networks without requiring any changes to the architectures of the models. In this study, we apply the proposed method to both Fully Connected Networks (FCNs) and Convolutional Neural Networks (CNNs) trained using the backpropagation learning algorithm. We conduct experiments on models for image classification tasks to demonstrate the effectiveness of the method. Through these experiments, we gain insights into the working mechanisms of various neural network architectures and evaluate neural network interpretability from diverse perspectives.
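The abstract does not spell out the visualization procedure; one common way to display what a single neuron has learned is activation maximization, i.e., gradient ascent on the input toward a higher activation. A minimal sketch for one linear neuron, with made-up weights (not the paper's method):

```python
# Activation maximization: adjust an input x so that one neuron's
# activation w.x grows, revealing the pattern the neuron responds to.
# The weights below are illustrative, not from the paper.

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def activation_maximization(w, steps=100, lr=0.1):
    x = [0.0] * len(w)              # start from a blank input
    for _ in range(steps):
        # d(w.x)/dx_i = w_i, so gradient ascent moves x toward w
        x = [xi + lr * wi for xi, wi in zip(x, w)]
    return x

w = [0.5, -1.0, 2.0]                # hypothetical trained weights
x_star = activation_maximization(w)
print(dot(w, x_star) > dot(w, [0.0, 0.0, 0.0]))  # prints True: activation increased
```

For a convolutional network the same idea runs per filter, with the optimized input rendered as an image.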


Subject(s)
Algorithms; Neural Networks, Computer; Humans
2.
Nanomaterials (Basel) ; 13(16)2023 Aug 18.
Article in English | MEDLINE | ID: mdl-37630946

ABSTRACT

Inspired by its highly efficient handling of big data, brain-like computation has attracted great attention for its potential to outperform the von Neumann computation paradigm. As the core of a neuromorphic computing chip, an artificial synapse based on the memristor, with high accuracy in processing images, is highly desired. We report, for the first time, that artificial synapse arrays with high accuracy in image recognition can be obtained by fabricating a SiNz:H memristor with a gradient Si/N ratio. The training accuracy of the SiNz:H synapse arrays for image learning reaches 93.65%. The temperature-dependent I-V characteristic reveals that the gradual Si dangling-bond pathway makes the main contribution to improving the linearity of the tunable conductance. The thinner diameter and fixed disconnection point of the gradual pathway help enhance the accuracy of visual identification. The artificial SiNz:H synapse arrays display stable and uniform biological functions, including short-term biosynaptic functions such as spike-duration-dependent plasticity, spike-number-dependent plasticity, and paired-pulse facilitation, as well as long-term ones such as long-term potentiation, long-term depression, and spike-time-dependent plasticity. The highly efficient visual learning capability of the artificial SiNz:H synapse with a gradual conductive pathway holds great application potential for neuromorphic systems in the age of artificial intelligence (AI).

3.
Neural Netw ; 151: 132-142, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35421708

ABSTRACT

The long-tailed distribution of many datasets is a major challenge for deep learning: convolutional neural networks perform poorly on classes with only a few samples. It has been shown that separating the feature learning stage from the classifier learning stage, known as decoupled learning, effectively improves model performance. We use soft labels to improve the decoupled learning framework by proposing a Dynamic Auxiliary Soft Labels (DaSL) method. Specifically, we design a dedicated auxiliary network to generate auxiliary soft labels for the two training stages. In the feature learning stage, it helps learn features with smaller intra-class variance; in the classifier learning stage, it helps alleviate the overconfidence of model predictions. We also introduce a feature-level distillation method for the feature learning and improve the learning of general features through multi-scale feature fusion. Extensive experiments on three long-tailed recognition benchmark datasets demonstrate the effectiveness of DaSL.
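As an illustration of training with soft labels, the sketch below blends a hard label with an auxiliary distribution before computing cross-entropy. The fixed blend weight `alpha` and the auxiliary distribution are stand-ins for DaSL's dynamic, network-generated labels:

```python
import math

def soften(one_hot, aux, alpha):
    """Blend a hard label with an auxiliary distribution (DaSL-style idea;
    the actual DaSL weighting is dynamic and produced by a network)."""
    return [(1 - alpha) * h + alpha * a for h, a in zip(one_hot, aux)]

def cross_entropy(soft_target, probs):
    return -sum(t * math.log(p) for t, p in zip(soft_target, probs) if t > 0)

hard = [0.0, 1.0, 0.0]
aux = [0.2, 0.6, 0.2]           # hypothetical auxiliary-network output
target = soften(hard, aux, alpha=0.1)
probs = [0.1, 0.8, 0.1]         # model prediction
loss = cross_entropy(target, probs)
print(round(loss, 4))            # ≈ 0.3063
```

Because the target mass is spread over several classes, the loss penalizes overconfident predictions more than a hard one-hot target would.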


Subject(s)
Benchmarking; Neural Networks, Computer
4.
Neural Netw ; 145: 177-188, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34763244

ABSTRACT

As a popular machine learning method, neural networks can solve many complex tasks. Their strong generalization ability comes from the representation ability of their basic neuron models. The most popular neuron model is the McCulloch-Pitts (MP) neuron, which applies a simple transformation to the input signal. A common trend is to use the MP neuron to design various neural networks, while the optimization of the neuron structure itself is rarely considered. Inspired by the elastic collision model in physics, we propose a new neuron model that can represent more complex distributions. We term it the Inter-layer Collision (IC) neuron; it divides the input space into multiple subspaces that represent different linear transformations. Through this operation, the IC neuron enhances the non-linear representation ability and emphasizes the input features useful for a given task. We build IC networks by integrating IC neurons into fully connected, convolutional, and recurrent structures; the IC networks outperform traditional neural networks on a wide range of tasks. We also combine the IC neuron with deep learning models and show its superiority. Our research shows that the IC neuron can be an effective basic unit for building network structures with better performance.
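The "divide the input space into multiple subspaces" idea can be illustrated with a neuron whose output is a linear term plus a ReLU-gated second linear term, so inputs on the two sides of a hyperplane receive different effective transformations. This is a hypothetical reading for illustration, not the paper's exact IC formula:

```python
# A neuron that applies different linear maps on different sides of a
# hyperplane (a hypothetical sketch of the "subspace" idea; not the
# paper's actual IC neuron definition).

def ic_like_neuron(x, w1, w2, b):
    pre = sum(wi * xi for wi, xi in zip(w1, x)) + b
    # max(0, .) gates an extra linear term, so inputs on the positive
    # side of the w2 hyperplane are transformed differently
    extra = max(0.0, sum(wi * xi for wi, xi in zip(w2, x)))
    return pre + extra

x_pos = ic_like_neuron([1.0, 2.0], [1.0, 0.0], [0.0, 1.0], 0.0)   # 1 + 2 = 3.0
x_neg = ic_like_neuron([1.0, -2.0], [1.0, 0.0], [0.0, 1.0], 0.0)  # 1 + 0 = 1.0
```

A single MP neuron is linear before its activation; the gated extra term makes the response piecewise linear, which is the kind of added expressiveness the abstract describes.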


Subject(s)
Neural Networks, Computer; Neurons; Generalization, Psychological
5.
IEEE Trans Neural Netw Learn Syst ; 33(1): 392-405, 2022 Jan.
Article in English | MEDLINE | ID: mdl-33112751

ABSTRACT

To quickly discover low-dimensional representations of high-dimensional noisy data in online environments, we transform the linear dimensionality reduction problem into the problem of learning the bases of linear feature subspaces. On this basis, we propose a fast and robust dimensionality reduction framework for incremental subspace learning named evolutionary orthogonal component analysis (EOCA). By setting adaptive thresholds to automatically determine the target dimensionality, the proposed method extracts the orthogonal subspace bases of the data incrementally, realizing dimensionality reduction while avoiding complex computations. In addition, EOCA can merge two learned subspaces, represented by their orthonormal bases, into a new one to eliminate outlier effects, and the new subspace is proved to be unique. Extensive experiments and analysis demonstrate that EOCA is fast and achieves competitive results, especially on noisy data.

6.
IEEE Trans Neural Netw Learn Syst ; 32(5): 2180-2194, 2021 05.
Article in English | MEDLINE | ID: mdl-32584773

ABSTRACT

Neurobiologists recently found that the brain can use suddenly emerging channels to process information. Based on this finding, we ask whether a computation model can be built that integrates a newly emerged type of perceptual channel into itself in an online way. Such a model would have a channel-free property and would deepen our understanding of the extensibility of the brain. In this article, a biologically inspired neural network named the artificial evolution (AE) network is proposed to handle this problem. When a new perceptual channel emerges, the neurons in the network grow new connections to it according to the Hebb rule. We design a sensory channel expansion experiment to test the AE network, and the experimental results demonstrate that it handles suddenly emerging perceptual channels effectively.
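The connection growth described above can be sketched in a few lines: a weight to the new channel starts at zero and strengthens whenever the channel's signal co-occurs with the neuron's activity. This is the textbook Hebb rule, not the AE network's full mechanism:

```python
# Hebbian growth of a connection to a newly appeared input channel:
# the weight starts at zero and strengthens when the new channel's
# signal co-occurs with postsynaptic activity (illustrative only).

def hebb_update(w_new, new_signal, post_activity, lr=0.1):
    # classic Hebb rule: delta_w = lr * pre * post
    return w_new + lr * new_signal * post_activity

w = 0.0                          # no connection to the new channel yet
for new_sig, post in [(1.0, 1.0), (1.0, 1.0), (0.0, 1.0)]:
    w = hebb_update(w, new_sig, post)
print(w)   # the connection grew only where signals co-occurred
```

In the AE setting this update would run for every existing neuron against the new channel, so the network wires the channel in without retraining from scratch.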


Subject(s)
Artificial Intelligence; Nervous System Physiological Phenomena; Neural Networks, Computer; Algorithms; Animals; Computer Simulation; Humans; Mental Processes; Models, Neurological; Online Systems; Rats; Unsupervised Machine Learning
7.
Article in English | MEDLINE | ID: mdl-32286977

ABSTRACT

Image clustering is more challenging than image classification: without supervised information, current deep learning methods are difficult to apply directly to image clustering. Image clustering must deal with three main problems: 1) the curse of dimensionality caused by high-dimensional image data; 2) extracting effective image features; and 3) combining feature extraction, dimensionality reduction, and clustering. In this paper, we propose a new clustering framework called Deep Embedded Dimensionality Reduction Clustering (DERC) with a probability-based triplet loss, which effectively addresses these issues. To the best of our knowledge, DERC is the first framework that effectively combines image embedding, dimensionality reduction, and clustering in the image clustering process. We also propose a novel probability-based triplet loss used to retrain the DERC network as a unified framework. By integrating the reconstruction loss and the probability-based triplet loss, we improve image clustering accuracy. Extensive experiments show that the proposed methods outperform state-of-the-art methods on many commonly used datasets.
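For reference, the standard triplet margin loss on which probability-based variants build looks like this; DERC's probabilistic reweighting is not reproduced here:

```python
# Standard triplet margin loss: pull the anchor toward the positive and
# push it beyond the negative by at least the margin. DERC's
# probability-based variant reweights triplets, which is not shown.

def sq_dist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def triplet_loss(anchor, positive, negative, margin=1.0):
    return max(0.0, sq_dist(anchor, positive) - sq_dist(anchor, negative) + margin)

anchor, pos, neg = [0.0, 0.0], [0.1, 0.0], [2.0, 0.0]
loss = triplet_loss(anchor, pos, neg)
print(loss)   # 0.0: the negative is already far enough beyond the margin
```

Minimizing this over embedded images makes same-cluster points compact and different-cluster points separated, which is why it pairs naturally with a clustering head.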

8.
Neural Netw ; 121: 161-168, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31563699

ABSTRACT

User response prediction contributes crucially to the rapid development of online advertising and recommendation systems. Many works have emphasized the importance of learning feature interactions, and many deep models have been proposed to learn high-order feature interactions automatically. Since most features in advertising and recommendation systems are high-dimensional sparse features, deep models usually learn a low-dimensional distributed representation for each feature in the bottom layer. Beyond traditional fully connected architectures, new operations, such as convolutional operations and product operations, have been proposed to learn feature interactions better. In these models, the representation is shared among different operations, yet the best representation for each operation may differ. In this paper, we propose a new neural model named Operation-aware Neural Networks (ONN), which learns different representations for different operations. Our experimental results on two large-scale real-world ad click/conversion datasets demonstrate that ONN consistently outperforms state-of-the-art models in both offline-training and online-training environments.
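The core idea, one embedding per (feature, operation) pair rather than one shared embedding per feature, can be sketched as a keyed lookup. The feature name, operation names, and vector values below are made up; in ONN these vectors are learned:

```python
# Operation-aware embeddings: instead of one shared vector per feature,
# keep a distinct vector per (feature, operation) pair. Values here are
# illustrative; ONN learns them end to end.

embeddings = {
    ("user_42", "inner_product"): [0.3, -0.1],
    ("user_42", "fully_connected"): [0.9, 0.4],
}

def lookup(feature, operation):
    return embeddings[(feature, operation)]

# The same feature feeds each operation with its own representation,
# so a product layer and an MLP no longer compete for one vector.
assert lookup("user_42", "inner_product") != lookup("user_42", "fully_connected")
```

The cost is a larger embedding table (one copy per operation), which the paper trades for better interaction modeling.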


Subject(s)
Deep Learning; Neural Networks, Computer; Deep Learning/trends; Forecasting; Machine Learning/trends
9.
Neural Netw ; 117: 274-284, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31207480

ABSTRACT

In this paper, we propose a locally linear classifier based on boundary anchor points encoding (LLBAP) that achieves the efficiency of linear SVM with the power of kernel SVM. LLBAP partitions linearly non-separable data into approximately linearly separable parts based on boundary point scanning and local coding, and each part is solved by a linear SVM. Experiments on large-scale benchmark datasets demonstrate that the proposed method is more efficient than kernel SVM in both the training and testing phases, and that it outperforms other locally linear classifiers in efficiency and classification accuracy on those datasets.
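The route-then-classify scheme can be sketched as follows: each point goes to its nearest anchor and is scored by that region's linear classifier. The anchors and weights are fixed toy values here; LLBAP learns them from boundary points and trains a linear SVM per part:

```python
# Locally linear classification sketch: route a point to its nearest
# anchor, then apply that region's linear decision rule. Anchors and
# weights are made up; LLBAP derives them from boundary point scanning.

def nearest(x, anchors):
    return min(range(len(anchors)),
               key=lambda i: sum((xi - ai) ** 2 for xi, ai in zip(x, anchors[i])))

anchors = [[0.0, 0.0], [4.0, 0.0]]
# one (w, b) linear classifier per region
classifiers = [([1.0, 1.0], -1.0), ([-1.0, 1.0], 3.0)]

def classify(x):
    w, b = classifiers[nearest(x, anchors)]
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else -1

print(classify([1.5, 0.5]))   # 1
```

Testing costs one nearest-anchor search plus one dot product, which is why such classifiers approach linear-SVM speed while tracing a piecewise-linear (hence non-linear) global boundary.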


Subject(s)
Artificial Intelligence
10.
Neural Netw ; 110: 141-158, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30557793

ABSTRACT

Data stream clustering is a branch of clustering in which patterns are processed as an ordered sequence. In this paper, we propose an unsupervised learning neural network named the Density-Based Self-Organizing Incremental Neural Network (DenSOINN) for data stream clustering tasks. DenSOINN is a self-organizing competitive network that grows incrementally, learning suitable nodes to fit the distribution of the data by combining online unsupervised learning and topology learning via the competitive Hebbian learning rule. By adopting a density-based clustering mechanism, DenSOINN discovers arbitrarily shaped clusters and diminishes the negative effect of noise. In addition, we adopt a self-adaptive distance framework to obtain good performance on unnormalized input data. Experiments show that DenSOINN achieves high performance compared with state-of-the-art methods.


Subject(s)
Databases, Factual; Neural Networks, Computer; Unsupervised Machine Learning; Algorithms; Cluster Analysis; Databases, Factual/trends; Unsupervised Machine Learning/trends
11.
IEEE Trans Neural Netw Learn Syst ; 30(4): 1104-1118, 2019 04.
Article in English | MEDLINE | ID: mdl-30137016

ABSTRACT

To simulate the concept acquisition and binding of different senses in the brain, a biologically inspired neural network model named the perception coordination network (PCN) is proposed. It is a hierarchical structure, functionally divided into a primary sensory area (PSA), a primary sensory association area (SAA), and a higher order association area (HAA). The PSA contains feature neurons that respond to elementary features, e.g., colors, shapes, syllables, and basic flavors. The SAA contains primary concept neurons that combine the elementary features of the PSA to represent unimodal concepts of objects, e.g., the image of an apple, the Chinese word "[píng guǒ]" which names the apple, and the taste of the apple. The HAA contains associated neurons that connect primary concept neurons across several SAAs, e.g., linking the image, the taste, and the name of an apple; the associated neurons thus have a multimodal response mode, and this area executes multisensory integration. PCN is an online incremental learning system, able to continuously acquire and bind multimodal concepts in an online way. The experimental results suggest that PCN handles multimodal concept acquisition and binding effectively.

12.
Neural Netw ; 85: 33-50, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27814465

ABSTRACT

In this paper, we introduce a fast linear dimensionality reduction method named incremental orthogonal component analysis (IOCA). IOCA is designed to automatically extract the desired orthogonal components (OCs) in an online environment. The OCs and the low-dimensional representations of the original data are obtained with only one pass through the dataset. Without solving a matrix eigenproblem or inverting matrices, IOCA learns incrementally from a continuous data stream at low computational cost. By proposing an adaptive threshold policy, IOCA automatically determines the dimension of the feature subspace while guaranteeing the quality of the learned OCs. The analysis and experiments demonstrate that IOCA is simple, yet efficient and effective.
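The one-pass extraction of orthogonal components can be sketched with a Gram-Schmidt-style update: each sample's residual against the current components becomes a new component when it is large enough. A fixed threshold stands in here for IOCA's adaptive policy:

```python
# One-pass extraction of orthogonal components: a sample's residual
# against the current components becomes a new component when it is
# large enough (a fixed threshold stands in for IOCA's adaptive one).

def update_components(components, x, threshold=0.5):
    r = x[:]
    for u in components:                         # remove the explained part
        c = sum(ri * ui for ri, ui in zip(r, u))
        r = [ri - c * ui for ri, ui in zip(r, u)]
    norm = sum(ri * ri for ri in r) ** 0.5
    if norm > threshold:                         # sample adds a new direction
        components.append([ri / norm for ri in r])
    return components

comps = []
stream = [[2.0, 0.0, 0.0], [2.1, 0.1, 0.0], [0.0, 3.0, 0.0]]
for x in stream:
    comps = update_components(comps, x)
print(len(comps))   # 2: the subspace dimension was found automatically
```

No eigendecomposition or matrix inversion appears anywhere, which is the source of the low per-sample cost the abstract claims.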


Subject(s)
Neural Networks, Computer; Algorithms; Machine Learning
13.
Neural Netw ; 84: 143-160, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27718392

ABSTRACT

In this paper, we propose an unsupervised incremental learning neural network based on local distribution learning, called the Local Distribution Self-Organizing Incremental Neural Network (LD-SOINN). The LD-SOINN combines the advantages of incremental learning and matrix learning. It can automatically discover suitable nodes to fit the learning data in an incremental way without a priori knowledge such as the structure of the network, and its nodes store rich local information about the learning data. An adaptive vigilance parameter guarantees that LD-SOINN adds new nodes for new knowledge automatically while keeping the number of nodes from growing without limit. As learning continues, nodes that are close to each other and have similar principal components are merged to obtain a concise local representation, which we call a relaxation data representation. A density-based denoising process is designed to reduce the influence of noise. Experiments show that LD-SOINN performs well on both artificial and real-world data.


Subject(s)
Machine Learning; Neural Networks, Computer; Algorithms; Humans; Knowledge; Learning
14.
IEEE Trans Neural Netw Learn Syst ; 27(3): 607-20, 2016 Mar.
Article in English | MEDLINE | ID: mdl-25935048

ABSTRACT

The proposed perception evolution network (PEN) is a biologically inspired neural network model for unsupervised learning and online incremental learning. It automatically learns suitable prototypes from the data in an incremental way, requiring neither a predefined number of prototypes nor a predefined similarity threshold. Moreover, going beyond existing unsupervised neural network models, PEN permits the emergence of a new dimension of perception in the perception field of the network. When a new dimension of perception is introduced, PEN integrates the new dimensional sensory inputs with the learned prototypes, i.e., the prototypes are mapped to a higher-dimensional space consisting of both the original and the new dimensions of the sensory inputs. In the experiments, artificial data and real-world data are used to test PEN, and the results show that it works effectively.
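The prototype-mapping step can be sketched simply: when a new sensory dimension appears, every learned prototype is extended with a placeholder coordinate in the higher-dimensional space. The 0.0 default is a simplification of PEN's integration, which would then refine the new coordinate from incoming data:

```python
# Mapping learned prototypes into a higher-dimensional space when a new
# perceptual dimension appears; the placeholder value is illustrative.

prototypes = [[0.2, 0.8], [0.9, 0.1]]      # learned in the old 2-D space

def add_dimension(protos, default=0.0):
    # every prototype gains a coordinate for the new sensory dimension
    return [p + [default] for p in protos]

prototypes = add_dimension(prototypes)
# new sensory inputs now carry 3 values, and subsequent learning can
# fill in the third coordinate of each prototype
print(len(prototypes[0]))   # 3
```

The point of the construction is that nothing learned in the old space is discarded: the old coordinates survive intact inside the new representation.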


Subject(s)
Adaptation, Biological/physiology; Cognition/physiology; Learning/physiology; Neural Networks, Computer; Perception/physiology; Sensory Receptor Cells/physiology; Animals; Humans; Models, Theoretical
15.
Bing Du Xue Bao ; 31(5): 488-93, 2015 Sep.
Article in Chinese | MEDLINE | ID: mdl-26738285

ABSTRACT

Bovine enterovirus (BEV) is a pathogen found in the digestive tracts of cattle. Recently, BEV was discovered in cattle in a province in China, so a rapid and effective detection method for BEV is essential. An assay was carried out using two specific primers designed to amplify a highly conserved sequence of the 3D gene, and a recombinant plasmid containing the target 3D gene was constructed as a standard control. The limit of detection of the reaction was 7.13 × 10^1 plasmid copies/µL of initial template, tenfold more sensitive than conventional reverse-transcription polymerase chain reaction (RT-PCR). Moreover, the assay was highly specific: all negative controls and other viruses of clinical relevance gave negative results. Assay performance was evaluated on 44 field samples (41 diarrhea and 3 aerosol) and compared with the conventional RT-PCR assay. Sixteen diarrhea samples were positive (16/41, 39.02%) and 3 aerosol samples were positive (3/3, 100%). Preliminary results for clinical detection showed that the SYBR Green I real-time PCR assay was highly sensitive, specific, and reproducible. The robustness and high-throughput performance of the developed assay make it a powerful tool for diagnostic applications in epidemics and for BEV research.


Subject(s)
Cattle Diseases/virology; Enterovirus Infections/veterinary; Enterovirus, Bovine/isolation & purification; Real-Time Polymerase Chain Reaction/methods; Animals; Benzothiazoles; Cattle; Cattle Diseases/diagnosis; DNA Primers/chemistry; DNA Primers/genetics; Diamines; Enterovirus Infections/diagnosis; Enterovirus Infections/virology; Enterovirus, Bovine/genetics; Organic Chemicals/chemistry; Quinolines; Sensitivity and Specificity
16.
Neural Netw ; 21(10): 1537-47, 2008 Dec.
Article in English | MEDLINE | ID: mdl-18678468

ABSTRACT

A fast prototype-based nearest neighbor classifier is introduced. The proposed Adjusted SOINN Classifier (ASC) is based on SOINN (the self-organizing incremental neural network); it automatically learns the number of prototypes needed to determine the decision boundary and learns new information without destroying what was learned before. It is robust to noisy training data and realizes very fast classification. In the experiments, we use artificial and real-world datasets to illustrate ASC, and we compare it with other prototype-based classifiers with respect to classification error, compression ratio, and speed-up ratio. The results show that ASC has the best performance and is a very efficient classifier.
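A condensed nearest-neighbor rule in the spirit of ASC (though not its actual SOINN-based algorithm) keeps a sample as a prototype only when the current prototypes misclassify it, so the prototype count adapts to the decision boundary:

```python
# Prototype-based nearest-neighbor sketch: a sample becomes a new
# prototype only when the existing prototypes misclassify it (a
# condensed nearest-neighbor rule; ASC's SOINN machinery is not shown).

def nearest_label(x, prototypes):
    best = min(prototypes, key=lambda p: sum((xi - pi) ** 2
                                             for xi, pi in zip(x, p[0])))
    return best[1]

def fit(samples):
    protos = [samples[0]]                     # (vector, label) pairs
    for vec, label in samples[1:]:
        if nearest_label(vec, protos) != label:
            protos.append((vec, label))       # keep only needed prototypes
    return protos

data = [([0.0, 0.0], 0), ([0.2, 0.1], 0), ([5.0, 5.0], 1), ([5.1, 4.9], 1)]
protos = fit(data)
print(len(protos))   # 2: one prototype per class suffices here
```

Classification then costs one pass over the compact prototype set instead of the full training set, which is where the speed-up and compression ratios the abstract measures come from.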


Subject(s)
Artificial Intelligence; Neural Networks, Computer; Algorithms; Normal Distribution; Pattern Recognition, Automated/methods