Results 1 - 11 of 11
1.
IEEE Trans Neural Netw ; 13(6): 1465-71, 2002.
Article in English | MEDLINE | ID: mdl-18244541

ABSTRACT

We extend, in two major ways, earlier work in which sigmoidal neural nonlinearities were implemented using stochastic counters: 1) we define the signal-to-noise limitations of unipolar and bipolar stochastic arithmetic and signal processing, and 2) we generalize the use of stochastic counters to include the neural transfer functions employed in Gaussian mixture models. The hardware advantages of (nonlinear) stochastic signal processing (SSP) may be offset by increased processing time; we quantify these issues. The ability to realize accurate Gaussian activation functions for neurons in pulsed digital networks using simple hardware with stochastic signals is also analyzed quantitatively.
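The unipolar stochastic arithmetic analyzed in this abstract can be illustrated with a minimal sketch (the function names and parameter values below are ours, for illustration only): a value in [0, 1] is coded as the probability of a 1 in a Bernoulli bit stream, and multiplication of two independent streams reduces to a bitwise AND, since P(a AND b) = pq.

```python
import random

def bernoulli_stream(p, n, rng):
    """Generate n Bernoulli(p) bits: the unipolar stochastic coding of p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stochastic_multiply(p, q, n=100_000, seed=0):
    """Multiply two unipolar stochastic values by AND-ing independent streams.

    The output stream has bit probability p*q; averaging it recovers the
    product, with binomial noise that shrinks as 1/sqrt(n).
    """
    rng = random.Random(seed)
    a = bernoulli_stream(p, n, rng)
    b = bernoulli_stream(q, n, rng)
    out = [x & y for x, y in zip(a, b)]
    return sum(out) / n

estimate = stochastic_multiply(0.6, 0.5)  # should be close to 0.3
```

The residual error of the averaged estimate is the binomial standard error, sqrt(pq(1 - pq)/n), which is one way to see the signal-to-noise limitation of the unipolar representation that the abstract refers to.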

2.
Int J Neural Syst ; 11(4): 389-98, 2001 Aug.
Article in English | MEDLINE | ID: mdl-11706414

ABSTRACT

Robust signal processing for embedded systems requires the effective identification and representation of features within raw sensory data. This task is inherently difficult due to unavoidable long-term changes in the sensory systems and/or the sensed environment. In this paper we explore four variations of competitive learning and examine their suitability as an unsupervised technique for the automated identification of data clusters within a given input space. The relative performance of the four techniques is evaluated through their ability to effectively represent the structure underlying artificial and real-world data distributions. As a result of this study it was found that frequency-sensitive competitive learning provides both reliable and efficient solutions for complex data distributions. In addition, frequency-sensitive and soft competitive learning are shown to exhibit properties that may permit the evolution of an appropriate network structure through the use of growing or pruning procedures.


Subject(s)
Cluster Analysis; Learning; Neural Networks, Computer; Learning/physiology
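A common formulation of the frequency-sensitive competitive learning the abstract favours can be sketched as follows (the details here are a standard textbook variant, not necessarily the paper's exact rule): each unit's distance to the input is scaled by its win count, so frequently winning units are handicapped and every unit ends up representing part of the data.

```python
import random

def fscl(data, k, epochs=20, lr=0.1, seed=0):
    """Frequency-sensitive competitive learning: the winner for each input
    minimizes count_i * ||x - w_i||^2 (the "conscience" term), then moves
    toward the input and has its win count incremented."""
    rng = random.Random(seed)
    weights = [list(rng.choice(data)) for _ in range(k)]
    counts = [1] * k
    for _ in range(epochs):
        for x in data:
            def score(i):
                d = sum((a - b) ** 2 for a, b in zip(x, weights[i]))
                return counts[i] * d  # distortion scaled by win frequency
            win = min(range(k), key=score)
            counts[win] += 1
            weights[win] = [w + lr * (a - w) for w, a in zip(weights[win], x)]
    return weights

# Two well-separated 1-D clusters; each unit should settle on one of them,
# even if both units happen to be initialized inside the same cluster.
data = [[0.0 + 0.01 * i] for i in range(10)] + [[1.0 + 0.01 * i] for i in range(10)]
centers = sorted(w[0] for w in fscl(data, k=2))
```

The count scaling is what distinguishes this from plain competitive learning, where a unit initialized far from the data can lose every competition and never train ("dead unit"), which is one reason the study found the frequency-sensitive variant more reliable.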
3.
IEEE Trans Neural Netw ; 12(5): 1147-62, 2001.
Article in English | MEDLINE | ID: mdl-18249941

ABSTRACT

A modified adaptive resonance theory (ART2) learning algorithm, which we employ in this paper, belongs to the family of neural-network (NN) algorithms whose main goal is the discovery of input data clusters without considering their actual size. This feature makes the modified ART2 algorithm very convenient for image compression tasks, particularly when dealing with images with large background areas containing few details. Moreover, due to its ability to produce hierarchical quantization (clustering), the modified ART2 algorithm is shown to significantly reduce the computation time required for coding, and therefore to enhance the overall compression process. Examples of the results obtained are presented, suggesting the benefits of using this algorithm for vector quantization (VQ), i.e., image compression, over other NN learning algorithms.
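The ART-family property the abstract relies on — discovering clusters without fixing their number or size in advance — can be sketched with a much-simplified vigilance rule (this is a sketch in the spirit of ART, not the paper's modified ART2 algorithm; names and values are ours):

```python
def art_cluster(data, vigilance):
    """Simplified ART-style clustering of scalars: an input joins the nearest
    prototype only if it lies within the vigilance radius; otherwise it seeds
    a new cluster. The cluster count is therefore discovered, not preset."""
    prototypes = []  # running mean of each cluster
    counts = []
    for x in data:
        if prototypes:
            d, i = min((abs(x - p), i) for i, p in enumerate(prototypes))
            if d <= vigilance:
                counts[i] += 1
                prototypes[i] += (x - prototypes[i]) / counts[i]  # update mean
                continue
        prototypes.append(float(x))
        counts.append(1)
    return prototypes

# A large "background" mass near 0 and a small "detail" cluster near 5:
# the background collapses to one codeword regardless of how many samples
# it contains, which is the behaviour the abstract exploits for images
# with large uniform background areas.
samples = [0.0, 0.1, 0.05, 0.12, 5.0, 5.1, 0.08, 4.95]
protos = art_cluster(samples, vigilance=1.0)
```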

4.
IEEE Trans Neural Netw ; 12(6): 1505-12, 2001.
Article in English | MEDLINE | ID: mdl-18249980

ABSTRACT

This paper explores some of the properties of stochastic digital signal processing in which the input signals are represented as sequences of Bernoulli events. The event statistics of the resulting stochastic process may be governed by compound binomial processes, depending upon how the individual input variables to a neural network are stochastically multiplexed. Similar doubly stochastic statistics can also result from datasets which are Bernoulli mixtures, depending upon the temporal persistence of the mixture components at the input terminals to the network. The principal contribution of these results is in determining the required integration period of the stochastic signals for a given precision in pulsed digital neural networks.
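The simplest version of the integration-period question — before the compound-binomial refinements the abstract describes — is the textbook binomial bound: averaging N Bernoulli(p) bits estimates p with standard error sqrt(p(1-p)/N), so a target precision ε requires N ≈ p(1-p)/ε² bits. A small sketch (function names ours) computes this and checks it empirically:

```python
import math
import random

def required_length(p, epsilon):
    """Stream length N at which one standard error of the Bernoulli mean
    estimate equals epsilon: stderr = sqrt(p(1-p)/N)  =>  N = p(1-p)/eps^2.
    (The textbook bound; doubly stochastic inputs need a longer period.)"""
    return math.ceil(p * (1 - p) / epsilon ** 2)

def empirical_stderr(p, n, trials=500, seed=0):
    """Standard deviation of the stream-average estimator over many trials."""
    rng = random.Random(seed)
    means = [sum(rng.random() < p for _ in range(n)) / n for _ in range(trials)]
    mu = sum(means) / trials
    return math.sqrt(sum((m - mu) ** 2 for m in means) / trials)

n = required_length(0.5, 0.01)  # ~2500 bits for ~1% precision at p = 0.5
```

The quadratic growth of N with 1/ε is the processing-time cost of stochastic arithmetic; the paper's contribution is determining how mixture inputs inflate this period further.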

5.
Int J Neural Syst ; 11(2): 203-10, 2001 Apr.
Article in English | MEDLINE | ID: mdl-14632172

ABSTRACT

Stochastic signal processing can implement Gaussian activation functions for radial basis function networks, using stochastic counters. The statistics of the neural inputs which control the increment and decrement operations of the counter are governed by Bernoulli distributions. The transfer functions relating the input and output pulse probabilities can closely approximate Gaussian activation functions, and the approximation improves with the number of states in the counter. The means and variances of these Gaussian approximations can be controlled by varying the output combinational logic function of the binary counter variables.


Subject(s)
Stochastic Processes; Binomial Distribution; Neural Networks, Computer; Neurons/physiology
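The mechanism the abstract describes can be made concrete with an illustrative construction (ours, not the paper's exact logic functions): a saturating up/down counter incremented with probability p and decremented otherwise is a birth-death chain with stationary distribution π_k ∝ (p/(1-p))^k, and an output logic that fires on a symmetric middle band of states yields an output probability that traces a Gaussian-like bump, centred at p = 0.5, as a function of the input probability.

```python
def counter_transfer(p, n_states=16, band=range(6, 10)):
    """Stationary output probability of a saturating up/down counter driven
    by Bernoulli(p) bits, with output logic '1 iff state is in band'.
    Requires 0 < p < 1. A wider counter (more states) sharpens the bump,
    matching the abstract's claim that accuracy improves with state count;
    shifting or widening the band moves the mean and variance of the bump,
    i.e., the role played by the output combinational logic."""
    r = p / (1 - p)  # up/down odds ratio of the birth-death chain
    weights = [r ** k for k in range(n_states)]  # unnormalized pi_k
    return sum(weights[k] for k in band) / sum(weights)

# Sweep the input probability: the transfer curve peaks at p = 0.5.
curve = [counter_transfer(p / 20) for p in range(1, 20)]
```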
6.
Int J Neural Syst ; 10(4): 311-20, 2000 Aug.
Article in English | MEDLINE | ID: mdl-11052417

ABSTRACT

A genetic algorithm (GA) is used to search for a set of local feature detectors or hidden units. These are in turn employed as a representation of the input data for neural learning in the upper layer of a multilayer perceptron (MLP) which performs an image classification task. Three different methods of encoding hidden unit weights in the chromosome of the GA are presented, including one which coevolves all the feature detectors in a single chromosome, and two which promote the cooperation of feature detectors by encoding them in their own individual chromosomes. The fitness function measures the MLP classification accuracy together with the confidence of the networks.


Subject(s)
Algorithms; Neural Networks, Computer; Models, Genetic; Models, Neurological
7.
Int J Neural Syst ; 10(4): 321-30, 2000 Aug.
Article in English | MEDLINE | ID: mdl-11052418

ABSTRACT

Simulations indicate that the deterministic Boltzmann machine, unlike the stochastic Boltzmann machine from which it is derived, exhibits unstable behavior during contrastive Hebbian learning of nonlinear problems, including oscillation in the learning algorithm and extreme sensitivity to small weight perturbations. Although careful choice of the initial weight magnitudes, the learning rate, and the annealing schedule will produce convergence in most cases, the stability of the resulting solution depends on these parameters in a complex and generally indiscernible way. We show that this unstable behavior is the result of overparameterization (excessive freedom in the weights), which leads to continuous rather than isolated optimal weight solution sets. The weights can therefore drift, uncorrected by the learning algorithm, until the free-energy landscape changes in such a way that the settling procedure finds a different minimum of the free-energy function than it did previously, and a gross output error occurs. Because all the weight sets in a continuous optimal solution set produce exactly the same network outputs, we define reliability, a measure of the robustness of the network, as a new performance criterion.


Subject(s)
Neural Networks, Computer; Stochastic Processes; Algorithms; Computer Simulation
8.
IEEE Trans Neural Netw ; 9(1): 229-31, 1998.
Article in English | MEDLINE | ID: mdl-18252446

ABSTRACT

This paper investigates neuron activation statistics in artificial neural networks employing stochastic arithmetic. It is shown that a doubly stochastic Poisson process is an appropriate model for the signals in these circuits.
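The doubly stochastic Poisson (Cox) model the abstract identifies has a simple diagnostic signature that a short sketch can demonstrate (the rate mixture below is an illustrative choice of ours): when the rate itself is random, the count variance exceeds the count mean, whereas an ordinary Poisson process has variance equal to its mean.

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's multiplicative method for drawing a Poisson(lam) variate."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def cox_counts(trials=20_000, seed=0):
    """Doubly stochastic Poisson sketch: each trial first draws a rate
    (uniformly from {2, 10} here), then draws a Poisson count at that rate.
    Mixing rates overdisperses the counts: Var[N] = E[lam] + Var[lam]."""
    rng = random.Random(seed)
    return [poisson_sample(rng.choice([2.0, 10.0]), rng) for _ in range(trials)]

counts = cox_counts()
mean = sum(counts) / len(counts)                      # E[lam] = 6
var = sum((c - mean) ** 2 for c in counts) / len(counts)  # 6 + 16 = 22
```

This mean-variance gap is what makes the doubly stochastic model detectable in activation statistics in the first place.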

9.
IEEE Trans Neural Netw ; 6(5): 1045-52, 1995.
Article in English | MEDLINE | ID: mdl-18263395

ABSTRACT

In this paper we present results of simulations performed assuming both forward and backward computation are done on-chip using analog components. Aspects of analog hardware studied are component variability, limited voltage ranges, components (multipliers) that only approximate the computations in the backpropagation algorithm, and capacitive weight decay. It is shown that backpropagation networks can learn to compensate for all these shortcomings of analog circuits except for zero offsets, and the latter are correctable with minor circuit complications. Variability in multiplier gains is not a problem, and learning is still possible despite limited voltage ranges and function approximations. Fixed component variation from fabrication is shown to be less detrimental to learning than component variation due to noise. Weight decay is tolerable provided it is sufficiently small, which implies frequent refreshing by rehearsal on the training data or modest cooling of the circuits. The former approach allows for learning nonstationary problem sets.

10.
Appl Opt ; 19(8): 1309-15, 1980 Apr 15.
Article in English | MEDLINE | ID: mdl-20221033

ABSTRACT

A theoretical and experimental study of the zero-bias quantum efficiency η0 for metal (Au, Cu, Ag)-Ge Schottky-barrier photodetectors in the near-IR range (1.1 μm < λ < 1.8 μm) has been performed. By an interactive computer programming technique, the optical parameters of the metal thin-film electrodes (index of refraction n and extinction coefficient k) are determined as a function of wavelength and of film thickness. Starting with a two-layer calculation of the reflectance R, transmittance T, and absorptance A of the metal electrode, it is found that η0 in this near-IR range is dominated by the band-to-band excitation of electrons in the Ge substrate. Using the minority-carrier diffusion length Lp as an adjustable parameter, good agreement between theoretical and experimental results was found for Lp ≈ 150 μm; this value was obtained independently of the choice of metal or metal thickness, justifying the above procedure.

11.
Opt Lett ; 4(5): 146-8, 1979 May 01.
Article in English | MEDLINE | ID: mdl-19687829

ABSTRACT

Exact photocounting distributions are obtained for a pulse of light whose intensity is exponentially decaying in time, when the underlying photon statistics are Poisson. It is assumed that the starting time for the sampling interval (which is of arbitrary duration) is uniformly distributed. The probability of registering n counts in the fixed time T is given in terms of the incomplete gamma function for n ≥ 1 and in terms of the exponential integral for n = 0. Simple closed-form expressions are obtained for the count mean and variance. The results are expected to be of interest in certain studies involving spontaneous emission, radiation damage in solids, and nuclear counting. They will also be useful in neurobiology and psychophysics, since habituation and sensitization processes may sometimes be characterized by the same stochastic model.
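The count mean in this model can be checked by Monte Carlo under explicit illustrative assumptions (parameter values and the choice of support for the start time are ours): with intensity I(t) = I0·e^(−t/τ) and a window of length T starting at t0, the conditional count is Poisson with mean Λ(t0) = I0·τ·e^(−t0/τ)·(1 − e^(−T/τ)); taking t0 uniform on one decay time [0, τ] gives E[n] = I0·τ·(1 − e^(−1))·(1 − e^(−T/τ)).

```python
import math
import random

def poisson_sample(lam, rng):
    """Knuth's multiplicative method for drawing a Poisson(lam) variate."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def decaying_pulse_mean_count(i0, tau, T, trials=50_000, seed=0):
    """Monte Carlo mean count for an exponentially decaying pulse with a
    uniformly distributed window start t0 ~ U[0, tau] (illustrative choice)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        t0 = rng.uniform(0.0, tau)
        lam = i0 * tau * math.exp(-t0 / tau) * (1.0 - math.exp(-T / tau))
        total += poisson_sample(lam, rng)
    return total / trials

mean_est = decaying_pulse_mean_count(5.0, 1.0, 1.0)
expected = 5.0 * (1.0 - math.exp(-1.0)) ** 2  # closed-form mean, ~1.998
```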
