Results 1 - 3 of 3

1.
Network ; 11(4): 321-32, 2000 Nov.
Article in English | MEDLINE | ID: mdl-11128170

ABSTRACT

A set of sigma-pi units randomly connected to two input vectors forms a type of hetero-associator related to convolution- and matrix-based associative memories. Associations are represented as patterns of activity rather than connection strengths. Decoding the associations requires another network of sigma-pi units, with connectivity dependent on the encoding network. Learning the connectivity of the decoding network involves setting n³ parameters (where n is the size of the vectors), and can be accomplished in approximately 3e n log n presentations of random patterns. This type of network encodes information in activation values rather than in weight values, which makes the information about relationships accessible to further processing. This accessibility is essential for higher-level cognitive tasks such as analogy processing. The fact that random networks can perform useful operations makes it more plausible that these types of associative network could have arisen in the nervous systems of natural organisms during the course of evolution.


Subject(s)
Cognition/physiology , Models, Neurological , Nerve Net/physiology , Neural Pathways/physiology , Neurons/physiology , Animals , Humans , Learning/physiology , Memory/physiology , Nonlinear Dynamics
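
The random sigma-pi encoder/decoder described in the abstract above can be sketched directly. The snippet below is a minimal, illustrative Python sketch, not the paper's implementation: the vector distribution, the connectivity scheme (each of the n encoding units summing n random products a[i]·b[j]), and all function names are assumptions, and the learning of the decoder's n³ parameters is not shown; the decoder's connectivity is simply copied from the encoder.

```python
# Hypothetical sketch of a randomly connected sigma-pi hetero-associator.
# Assumptions: elements ~ N(0, 1/n); each of the n encoding units sums n
# random products; the decoder mirrors the encoder instead of being learned.
import numpy as np

rng = np.random.default_rng(0)
n = 256  # size of the input vectors

def rand_vec(n):
    return rng.normal(0.0, 1.0 / np.sqrt(n), n)

# Encoding network: sigma-pi unit k outputs the sum of a[i] * b[j]
# over n randomly chosen index pairs (i, j).
pairs = [(rng.integers(0, n, n), rng.integers(0, n, n)) for _ in range(n)]

def encode(a, b):
    return np.array([np.sum(a[i_idx] * b[j_idx]) for i_idx, j_idx in pairs])

# Decoding network: another sigma-pi layer whose connectivity depends on the
# encoder's: to reconstruct b[j], accumulate a[i] * z[k] over every
# (k, i, j) triple the encoder used.
def decode_b(z, a):
    b_hat = np.zeros(n)
    for k, (i_idx, j_idx) in enumerate(pairs):
        np.add.at(b_hat, j_idx, a[i_idx] * z[k])
    return b_hat

a, b = rand_vec(n), rand_vec(n)
z = encode(a, b)          # the association lives in activations, not weights
b_hat = decode_b(z, a)
cos = np.dot(b, b_hat) / (np.linalg.norm(b) * np.linalg.norm(b_hat))
print(f"cosine(b, decoded b) = {cos:.2f}")  # noisy but well above chance (~1/sqrt(n))
```

As in the convolution-based memories the abstract relates this to, the reconstruction is noisy and would normally be passed through a clean-up memory before further use.
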
2.
Neural Comput ; 12(6): 1337-53, 2000 Jun.
Article in English | MEDLINE | ID: mdl-10935716

ABSTRACT

A method for visualizing the function computed by a feedforward neural network is presented. It is most suitable for models with continuous inputs and a small number of outputs, where the output function is reasonably smooth, as in regression and probabilistic classification tasks. The visualization makes readily apparent the effects of each input and the way in which the functions deviate from a linear function. The visualization can also assist in identifying interactions in the fitted model. The method uses only the input-output relationship and thus can be applied to any predictive statistical model, including bagged and committee models, which are otherwise difficult to interpret. The visualization method is demonstrated on a neural network model of how the risk of lung cancer is affected by smoking and drinking.


Subject(s)
Feedback , Neural Networks, Computer , Alcohol Drinking , Carcinoma, Squamous Cell/epidemiology , Computer Graphics , Humans , Linear Models , Lung Neoplasms/epidemiology , Regression Analysis , Risk Assessment , Smoking
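
As a rough illustration of the black-box, input-output-only character of such a visualization, the sketch below sweeps each input over its observed range while holding the others at their medians and records the model's output. This one-input-at-a-time scheme and the toy smoking/drinking risk model are assumptions made for illustration, not the paper's exact method or data.

```python
# Illustrative input-effect curves for a black-box predictor. Only the
# input-output relationship is used, so any fitted model can be plugged in.
import numpy as np

def effect_curves(predict, X, n_grid=50):
    """For each input, sweep it over its observed range while the other
    inputs are held at their medians, and record the model output."""
    medians = np.median(X, axis=0)
    curves = {}
    for j in range(X.shape[1]):
        grid = np.linspace(X[:, j].min(), X[:, j].max(), n_grid)
        probe = np.tile(medians, (n_grid, 1))
        probe[:, j] = grid
        curves[j] = (grid, predict(probe))
    return curves

# Toy "fitted model": risk rising with smoking and drinking, with an
# interaction term (purely illustrative numbers, not from the paper).
def risk_model(X):
    smoking, drinking = X[:, 0], X[:, 1]
    logit = -3.0 + 0.08 * smoking + 0.05 * drinking + 0.002 * smoking * drinking
    return 1.0 / (1.0 + np.exp(-logit))

rng = np.random.default_rng(1)
X = np.column_stack([rng.uniform(0, 40, 200),    # e.g. cigarettes per day
                     rng.uniform(0, 60, 200)])   # e.g. drinks per week
for j, (grid, y) in effect_curves(risk_model, X).items():
    print(f"input {j}: output ranges from {y.min():.3f} to {y.max():.3f}")
```

In practice one would plot these curves, and their deviation from a straight line, rather than print range summaries; crossing or non-parallel curves hint at the interactions the abstract mentions.
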
3.
IEEE Trans Neural Netw ; 6(3): 623-41, 1995.
Article in English | MEDLINE | ID: mdl-18263348

ABSTRACT

Associative memories are conventionally used to represent data with very simple structure: sets of pairs of vectors. This paper describes a method for representing more complex compositional structure in distributed representations. The method uses circular convolution to associate items, which are represented by vectors. Arbitrary variable bindings, short sequences of various lengths, simple frame-like structures, and reduced representations can be represented in a fixed-width vector. These representations are items in their own right and can be used in constructing compositional structures. The noisy reconstructions extracted from convolution memories can be cleaned up by using a separate associative memory that has good reconstructive properties.
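
The circular-convolution binding described above is straightforward to sketch. The snippet below is a minimal illustration, assuming random vectors with elements drawn from N(0, 1/n): binding uses circular convolution (computed via the FFT), decoding uses circular correlation as an approximate inverse, and a separate item memory supplies the clean-up step mentioned in the abstract. The function names and the toy frame are illustrative, not taken from the paper.

```python
# Minimal sketch of circular-convolution binding, decoding, and clean-up.
import numpy as np

rng = np.random.default_rng(0)
n = 512  # vector dimensionality

def rand_vec(n):
    # Elements drawn i.i.d. from N(0, 1/n), so vectors have roughly unit length.
    return rng.normal(0.0, 1.0 / np.sqrt(n), n)

def bind(a, b):
    # Circular convolution via FFT: O(n log n).
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(trace, cue):
    # Circular correlation with the cue approximately inverts binding.
    return np.real(np.fft.ifft(np.fft.fft(trace) * np.conj(np.fft.fft(cue))))

def cleanup(noisy, item_memory):
    # Separate associative memory: return the stored item most similar
    # to the noisy reconstruction.
    names, vecs = zip(*item_memory.items())
    sims = [np.dot(noisy, v) / (np.linalg.norm(noisy) * np.linalg.norm(v))
            for v in vecs]
    return names[int(np.argmax(sims))]

items = {name: rand_vec(n) for name in ["agent", "object", "eat", "mark", "fish"]}
# A simple frame-like structure held in a single fixed-width vector.
trace = bind(items["agent"], items["mark"]) + bind(items["object"], items["fish"])

print(cleanup(unbind(trace, items["agent"]), items))   # expected: 'mark'
print(cleanup(unbind(trace, items["object"]), items))  # expected: 'fish'
```

Because the trace is itself a fixed-width vector, it can be bound again or added into larger traces, which is what allows compositional structures to be built up.
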
