Results 1 - 10 of 10
1.
Article in English | MEDLINE | ID: mdl-11088572

ABSTRACT

Naive scale invariance is not a true property of natural images. Natural monochrome images possess a much richer geometrical structure, which is particularly well described in terms of multiscaling relations. This means that the pixels of a given image can be decomposed into sets, the fractal components of the image, with well-defined scaling exponents [Turiel and Parga, Neural Comput. 12, 763 (2000)]. Here it is shown that hyperspectral representations of natural scenes exhibit the same kind of multiscaling behavior. A precise measure of the informational relevance of the fractal components is also given, and it is shown that there are important differences between the intrinsically redundant red-green-blue system and the decorrelated one defined in Ruderman, Cronin, and Chiao [J. Opt. Soc. Am. A 15, 2036 (1998)].
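The multiscaling analysis referred to above can be sketched numerically. The following is a minimal illustration, not the authors' method: it estimates scaling exponents tau(p) from moments of horizontal contrast increments of a synthetic monochrome image (all sizes and parameters here are illustrative choices).

```python
import numpy as np

rng = np.random.default_rng(0)

def structure_exponents(image, orders=(1, 2, 3), scales=(1, 2, 4, 8)):
    """Estimate scaling exponents tau(p) from moments of horizontal
    contrast increments |I(x+r) - I(x)| at several scales r."""
    taus = {}
    logs = np.log(np.asarray(scales, dtype=float))
    for p in orders:
        moments = []
        for r in scales:
            incr = np.abs(image[:, r:] - image[:, :-r])
            moments.append(np.mean(incr ** p))
        # slope of log-moment versus log-scale gives tau(p)
        taus[p] = np.polyfit(logs, np.log(moments), 1)[0]
    return taus

# toy "image": a fractional-Brownian-like surface built from filtered noise
noise = rng.standard_normal((256, 256))
f = np.fft.fftfreq(256)
k = np.sqrt(f[:, None] ** 2 + f[None, :] ** 2)
k[0, 0] = 1.0                       # avoid division by zero at DC
image = np.real(np.fft.ifft2(np.fft.fft2(noise) / k ** 1.5))

tau = structure_exponents(image)
# a monofractal surface gives tau(p) approximately linear in p;
# natural images deviate from linearity (the multiscaling signature)
print(tau)
```

For this self-affine toy surface the exponents grow roughly linearly with p; on an ensemble of natural images the nonlinearity of tau(p) is what reveals the multifractal structure.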

2.
Phys Rev Lett ; 85(15): 3325-8, 2000 Oct 09.
Article in English | MEDLINE | ID: mdl-11019332

ABSTRACT

Natural images are characterized by the multiscaling properties of their contrast gradient, in addition to their power spectrum. In this Letter we show that those properties uniquely define an intrinsic wavelet and present a suitable technique to obtain it from an ensemble of images. Once this wavelet is known, images can be represented as expansions in the associated wavelet basis. The resulting code has the remarkable properties that it separates independent features at different resolution levels, reducing the redundancy, and remains essentially unchanged under changes in the power spectrum. The possible generalization of this representation to other systems is discussed.
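As a generic stand-in for the intrinsic wavelet of the Letter (which must be learned from an image ensemble and is not reproduced here), a one-level separable Haar analysis/synthesis pair illustrates the kind of expansion involved: the image is split into sub-bands at one resolution level and reconstructed exactly from them.

```python
import numpy as np

def haar2d_level(img):
    """One level of a separable 2D Haar transform: returns the
    approximation and three detail sub-bands (LL, LH, HL, HH)."""
    a = (img[:, 0::2] + img[:, 1::2]) / 2.0   # rows: average
    d = (img[:, 0::2] - img[:, 1::2]) / 2.0   # rows: difference
    ll = (a[0::2] + a[1::2]) / 2.0
    lh = (a[0::2] - a[1::2]) / 2.0
    hl = (d[0::2] + d[1::2]) / 2.0
    hh = (d[0::2] - d[1::2]) / 2.0
    return ll, lh, hl, hh

def haar2d_inverse(ll, lh, hl, hh):
    """Invert one Haar level, recovering the original image exactly."""
    a = np.empty((2 * ll.shape[0], ll.shape[1]))
    d = np.empty_like(a)
    a[0::2], a[1::2] = ll + lh, ll - lh
    d[0::2], d[1::2] = hl + hh, hl - hh
    img = np.empty((a.shape[0], 2 * a.shape[1]))
    img[:, 0::2], img[:, 1::2] = a + d, a - d
    return img

rng = np.random.default_rng(1)
img = rng.standard_normal((64, 64))
bands = haar2d_level(img)
recon = haar2d_inverse(*bands)      # exact reconstruction
```

In the Letter the fixed Haar filters would be replaced by the intrinsic wavelet derived from the images' own multiscaling statistics; the expansion/reconstruction machinery is the same.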


Subject(s)
Fractals , Models, Theoretical , Image Processing, Computer-Assisted , Vision, Ocular
3.
Neural Netw ; 13(2): 225-37, 2000 Mar.
Article in English | MEDLINE | ID: mdl-10935762

ABSTRACT

This paper describes an investigation of a recurrent artificial neural network that uses association to build transform-invariant representations. The simulation implements the analytic model of Parga and Rolls [(1998). Transform-invariant recognition by association in a recurrent network. Neural Computation 10(6), 1507-1525.] which defines multiple (e.g. "view") patterns to be within the basin of attraction of a shared (e.g. "object") representation. First, it was shown that the network could store and correctly retrieve an "object" representation from any one of the views which define that object, with capacity as predicted analytically. Second, new results extended the analysis by showing that correct object retrieval could occur where retrieval cues were distorted; where there was some association between the views of different objects; and where connectivity was diluted, even when this dilution was asymmetric. The simulations also extended the analysis by showing that the system could work well with sparse patterns, and by showing how pattern sparseness interacts with the number of views of each object (as a result of the statistical properties of the pattern coding) to give predictable object retrieval performance. The results thus usefully extend a recurrent model of invariant pattern recognition.
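A toy simulation in the spirit of this model (illustrative sizes and couplings, not the paper's parameters): views of each object are stored Hebbianly with inter-view associations, the connectivity is asymmetrically diluted, and the network is cued with a distorted view. Retrieval ends in a mixed "object" state correlated with all views of the cued object.

```python
import numpy as np

rng = np.random.default_rng(2)
N, objects, views, a = 1000, 3, 3, 0.8   # a: inter-view association strength

# binary view patterns grouped into objects; views of the same object
# are associated with strength a in the Hebbian couplings
xi = rng.choice([-1.0, 1.0], size=(objects, views, N))
J = np.zeros((N, N))
for mu in range(objects):
    for v in range(views):
        for w in range(views):
            J += (1.0 if v == w else a) * np.outer(xi[mu, v], xi[mu, w])
J /= N
np.fill_diagonal(J, 0.0)

# asymmetric dilution: each directed connection is cut independently
keep = 0.7
mask = rng.random((N, N)) < keep
J = J * mask / keep

# cue: a distorted version of one view (15% of the bits flipped)
s = xi[0, 0].copy()
flip = rng.random(N) < 0.15
s[flip] *= -1
for _ in range(30):
    s = np.sign(J @ s)
    s[s == 0] = 1.0

overlaps = xi[0] @ s / N   # overlap with every view of the cued object
print(overlaps)
```

With the association strength above threshold, the final state overlaps all views of object 0 roughly equally rather than the cued view alone, which is the "object representation" behavior the paper simulates.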


Subject(s)
Artificial Intelligence , Models, Neurological , Neural Networks, Computer
4.
Network ; 11(2): 131-52, 2000 May.
Article in English | MEDLINE | ID: mdl-10880003

ABSTRACT

We report results on the scaling properties of changes in contrast of natural images in different visual environments. This study confirms the existence, in a vast class of images, of a multiplicative process relating the variations in contrast seen at two different scales, as was found in Turiel et al (Turiel A, Mato G, Parga N and Nadal J-P 1998 Self-similarity properties of natural images Proc. NIPS'97 (Cambridge, MA: MIT Press); Turiel A, Mato G, Parga N and Nadal J-P 1998 Phys. Rev. Lett. 80 1098-101). But it also shows that the scaling exponents are not universal: even if most images follow the same type of statistics, they do so with different values of the distribution parameters. Motivated by these results, we also present the analysis of a generative model of images that reproduces those properties and that has the correct power spectrum. Possible implications for visual processing are also discussed.
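A minimal sketch of the multiplicative process in question (a generic 1D dyadic cascade, not the paper's generative model): each refinement step multiplies the coarser-scale "local contrast" by an independent log-normal factor, so the scale-to-scale ratio is statistically independent of the coarser value.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, levels = 0.5, 12

# dyadic multiplicative cascade: each refinement multiplies the parent
# "local contrast" by an independent log-normal factor eta
w = np.ones(1)
for _ in range(levels):
    parent = np.repeat(w, 2)
    eta = rng.lognormal(-sigma ** 2 / 2, sigma, size=parent.size)
    w = parent * eta

# the last scale-to-scale factor, recovered from the two finest levels
log_eta = np.log(w) - np.log(parent)

# defining multiplicative property: the factor is independent of the
# coarser-scale contrast, so their log-correlation vanishes
r = np.corrcoef(np.log(parent), log_eta)[0, 1]
print(r, log_eta.std())
```

In the empirical analysis the analogous check is done on contrast variations measured at two scales of real images; here the independence holds by construction and the recovered factor has the prescribed log-normal width sigma.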


Subject(s)
Environment , Models, Neurological , Vision, Ocular/physiology , Visual Pathways/physiology , Animals
5.
Neural Comput ; 12(4): 763-93, 2000 Apr.
Article in English | MEDLINE | ID: mdl-10770831

ABSTRACT

We present a formalism that leads naturally to a hierarchical description of the different contrast structures in images, providing precise definitions of sharp edges and other texture components. Within this formalism, we achieve a decomposition of the pixels of the image into sets, the fractal components of the image, such that each set contains only points characterized by a fixed strength of the singularity of the contrast gradient in its neighborhood. A crucial role in this description of images is played by the behavior of contrast differences under changes in scale. Contrary to naive scaling ideas where the image is thought to have uniform transformation properties (Field, 1987), each of these fractal components has its own transformation law and scaling exponents. A conjecture on their biological relevance is also given.
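The pixel-by-pixel decomposition can be sketched numerically. The following is a simplified box-counting estimate, not the paper's wavelet-projection method: each pixel gets a local exponent from the log-log slope of the gradient measure integrated over growing boxes, and pixels are then binned by exponent into "fractal components".

```python
import numpy as np

rng = np.random.default_rng(4)

def local_exponents(image, radii=(1, 2, 4, 8)):
    """Per-pixel singularity exponent: slope of log(mu_r) vs log(r),
    where mu_r integrates the contrast-gradient magnitude over a
    box of half-width r around the pixel (a box-counting estimate)."""
    gy, gx = np.gradient(image)
    grad = np.hypot(gx, gy)
    # an integral image makes every box sum O(1) per pixel
    c = np.pad(grad, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    n = image.shape[0]
    logs, measures = np.log(np.asarray(radii, dtype=float)), []
    for r in radii:
        lo = np.clip(np.arange(n) - r, 0, n)
        hi = np.clip(np.arange(n) + r + 1, 0, n)
        box = c[hi][:, hi] - c[hi][:, lo] - c[lo][:, hi] + c[lo][:, lo]
        measures.append(np.log(box + 1e-12))
    m = np.stack(measures)
    # least-squares slope across scales, pixel by pixel
    x = logs - logs.mean()
    return (x[:, None, None] * (m - m.mean(0))).sum(0) / (x ** 2).sum()

image = rng.standard_normal((128, 128)).cumsum(axis=1)  # rough toy image
h = local_exponents(image)
# pixels binned by exponent play the role of the fractal components
edges = np.quantile(h, [0.0, 0.5, 1.0])
components = np.digitize(h, edges[1:-1])
print(np.median(h))
```

On this featureless toy surface the exponents cluster near the value set by the 2D box integral; on a natural image the sharpest-edge component is the set of pixels with the most singular (smallest) exponents.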

6.
Network ; 10(3): 237-55, 1999 Aug.
Article in English | MEDLINE | ID: mdl-10496475

ABSTRACT

The existence of recurrent collateral connections between pyramidal cells within a cortical area and, in addition, reciprocal connections between connected cortical areas, is well established. In this work we analyse the properties of a tri-modular architecture of this type in which two input modules have convergent connections to a third module (which in the brain might be the next module in cortical processing or a bi-modal area receiving connections from two different processing pathways). Memory retrieval is analysed in this system which has Hebb-like synaptic modifiability in the connections and attractor states. Local activity features are stored in the intra-modular connections while the associations between corresponding features in different modules present during training are stored in the inter-modular connections. The response of the network when tested with corresponding and contradictory stimuli to the two input pathways is studied in detail. The model is solved quantitatively using techniques of statistical physics. In one type of test, a sequence of stimuli is applied, with a delay between them. It is found that if the coupling between the modules is low a regime exists in which they retain the capability to retrieve any of their stored features independently of the features being retrieved by the other modules. Although independent in this sense, the modules still influence each other in this regime through persistent modulatory currents which are strong enough to initiate recall in the whole network when only a single module is stimulated, and to raise the mean firing rates of the neurons in the attractors if the features in the different modules are corresponding. Some of these mechanisms might be useful for the description of many phenomena observed in single neuron activity recorded during short term memory tasks such as delayed match-to-sample. 
It is also shown that with contradictory stimulation of the two input modules the model accounts for many of the phenomena observed in the McGurk effect, in which contradictory auditory and visual inputs can lead to misperception.
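A toy version of the tri-modular architecture (illustrative sizes and gains, not the quantitative statistical-physics solution): two input modules project Hebbianly onto a convergent module, local features are stored intra-modularly, and corresponding features are associated inter-modularly. Corresponding cues drive clean retrieval in the convergent module; contradictory cues (a McGurk-like probe) leave it torn between the two cued features.

```python
import numpy as np

rng = np.random.default_rng(5)
N, P, g = 300, 5, 0.4   # neurons per module, stored features, inter-modular gain

xi = {m: rng.choice([-1.0, 1.0], size=(P, N)) for m in "ABC"}

def hebb(pats_to, pats_from):
    """Hebbian couplings associating corresponding patterns."""
    return pats_to.T @ pats_from / pats_from.shape[1]

J_intra = {m: hebb(xi[m], xi[m]) for m in "ABC"}
for m in "ABC":
    np.fill_diagonal(J_intra[m], 0.0)
# corresponding features are associated across modules (A<->C, B<->C)
J_AC, J_BC = hebb(xi["C"], xi["A"]), hebb(xi["C"], xi["B"])

def run(cue_a, cue_b, steps=15):
    """Clamp-like cueing of the input modules; C starts at random."""
    sa, sb = xi["A"][cue_a].copy(), xi["B"][cue_b].copy()
    sc = np.sign(rng.standard_normal(N))
    for _ in range(steps):
        sc = np.sign(J_intra["C"] @ sc + g * (J_AC @ sa + J_BC @ sb))
        sa = np.sign(J_intra["A"] @ sa + g * (J_AC.T @ sc))
        sb = np.sign(J_intra["B"] @ sb + g * (J_BC.T @ sc))
    return xi["C"] @ sc / N

m_match = run(0, 0)      # corresponding stimuli: C retrieves feature 0
m_conflict = run(0, 1)   # contradictory stimuli to the two input pathways
print(m_match[0], m_conflict[:2])
```

In the corresponding condition the convergent module's overlap with the cued feature is near one; in the contradictory condition its activity remains correlated mainly with the two conflicting features rather than any third stored feature.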


Subject(s)
Cerebral Cortex/physiology , Memory/physiology , Neural Networks, Computer , Action Potentials/physiology , Cerebral Cortex/cytology , Humans , Pyramidal Cells/physiology
7.
Neural Comput ; 11(6): 1349-88, 1999 Aug 15.
Article in English | MEDLINE | ID: mdl-10423499

ABSTRACT

Cortical areas are characterized by forward and backward connections between adjacent cortical areas in a processing stream. Within each area there are recurrent collateral connections between the pyramidal cells. We analyze the properties of this architecture for memory storage and processing. Hebb-like synaptic modifiability in the connections and attractor states are incorporated. We show the following: (1) The number of memories that can be stored in the connected modules is of the same order of magnitude as the number that can be stored in any one module using the recurrent collateral connections, and is proportional to the number of effective connections per neuron. (2) Cooperation between modules leads to a small increase in memory capacity. (3) Cooperation can also help retrieval in a module that is cued with a noisy or incomplete pattern. (4) If the connection strength between modules is strong, then global memory states that reflect the pairs of patterns on which the modules were trained together are found. (5) If the intermodule connection strengths are weaker, then separate, local memory states can exist in each module. (6) The boundaries between the global and local retrieval states, and the nonretrieval state, are delimited. All of these properties are analyzed quantitatively with the techniques of statistical physics.
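Points (4) and (5), the global versus local retrieval regimes, can be illustrated with a two-module toy (a sketch with invented sizes, not the paper's quantitative delimitation of the phase boundaries; note also that which feature wins in the global regime depends on the update order).

```python
import numpy as np

rng = np.random.default_rng(6)
N, P = 300, 5

# paired patterns: feature mu in module A was trained with feature mu in B
A = rng.choice([-1.0, 1.0], size=(P, N))
B = rng.choice([-1.0, 1.0], size=(P, N))
JA = (A.T @ A) / N; np.fill_diagonal(JA, 0.0)
JB = (B.T @ B) / N; np.fill_diagonal(JB, 0.0)
JAB = (A.T @ B) / N          # inter-modular Hebbian associations

def settle(g, steps=20):
    """Start A in feature 0 and B in the non-corresponding feature 1,
    with inter-modular coupling strength g."""
    sa, sb = A[0].copy(), B[1].copy()
    for _ in range(steps):
        sa = np.sign(JA @ sa + g * (JAB @ sb))
        sb = np.sign(JB @ sb + g * (JAB.T @ sa))
    return A @ sa / N, B @ sb / N

oa_weak, ob_weak = settle(g=0.1)      # weak coupling: local memory states
oa_strong, ob_strong = settle(g=1.5)  # strong coupling: one global state
print(oa_weak[:2], ob_weak[:2], oa_strong[:2], ob_strong[:2])
```

With weak coupling each module holds its own feature independently (a local state); with strong coupling the network is dragged into a single trained pair (a global state), here the one set by the first module update.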


Subject(s)
Cerebral Cortex/physiology , Memory/physiology , Models, Neurological , Synaptic Transmission/physiology , Efferent Pathways/physiology
8.
Network ; 9(2): 207-17, 1998 May.
Article in English | MEDLINE | ID: mdl-9861986

ABSTRACT

We prove that maximization of mutual information between the output and the input of a feedforward neural network leads to full redundancy reduction under the following sufficient conditions: (i) the input signal is a (possibly nonlinear) invertible mixture of independent components; (ii) there is no input noise; (iii) the activity of each output neuron is a (possibly) stochastic variable with a probability distribution depending on the stimulus through a deterministic function of the inputs (where both the probability distributions and the functions can be different from neuron to neuron); (iv) optimization of the mutual information is performed over all these deterministic functions. This result extends that obtained by Nadal and Parga (1994) who considered the case of deterministic outputs.
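A numerical illustration of the noiseless linear special case (a sketch, not the paper's proof): for an invertible linear mixture of independent sources, the deterministic map that infomax would converge to, up to invertible reparametrizations of each output, inverts the mixture; applying that inverse directly yields outputs with no measurable redundancy.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 20000

# two independent, non-Gaussian sources and an invertible linear mixture
s = np.stack([rng.laplace(size=n), rng.uniform(-1, 1, size=n)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])
x = A @ s

# stand-in for the learned infomax solution: apply the known inverse,
# then equalize each output marginal via its empirical CDF (rank transform)
y = np.linalg.inv(A) @ x
ranks = np.argsort(np.argsort(y, axis=1), axis=1)
u = (ranks + 0.5) / n                       # uniform marginals in (0, 1)

# independence check: correlations of the uniformized outputs and of
# their squares both vanish once the redundancy has been removed
c1 = np.corrcoef(u)[0, 1]
c2 = np.corrcoef(u ** 2)[0, 1]
print(c1, c2)
```

The mixed channels x are strongly correlated; after inversion and marginal equalization both the linear and the simple higher-order dependence measures are at the sampling-noise floor, i.e. full redundancy reduction in this special case.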


Subject(s)
Information Theory , Neural Networks, Computer , Nonlinear Dynamics , Stochastic Processes , Feedback/physiology , Neurons/physiology
9.
Neural Comput ; 10(6): 1507-25, 1998 Aug 15.
Article in English | MEDLINE | ID: mdl-9698355

ABSTRACT

Objects can be recognized independently of the view they present, of their position on the retina, or their scale. It has been suggested that one basic mechanism that makes this possible is a memory effect, or a trace, that allows associations to be made between consecutive views of one object. In this work, we explore the possibility that this memory trace is provided by the sustained activity of neurons in layers of the visual pathway produced by an extensive recurrent connectivity. We describe a model that contains this high recurrent connectivity and synaptic efficacies built with contributions from associations between pairs of views that is simple enough to be treated analytically. The main result is that there is a change of behavior as the strength of the association between views of the same object, relative to the association within each view of an object, increases. When its value is small, sustained activity in the network is produced by the views themselves. As it increases above a threshold value, the network always reaches a particular state (which represents the object) independent of the particular view that was seen as a stimulus. In this regime, the network can still store an extensive number of objects, each defined by a finite (although it can be large) number of views.
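The change of behavior at a threshold association strength can be sketched with a single stored object (a toy with invented sizes, not the analytic treatment): below threshold the stimulated view persists as its own attractor; above threshold the network falls into a mixed state correlated with all views, representing the object.

```python
import numpy as np

rng = np.random.default_rng(8)
N, views = 600, 3

# one object stored as `views` random binary patterns; couplings mix a
# within-view Hebbian term with inter-view associations of strength a
xi = rng.choice([-1.0, 1.0], size=(views, N))

def final_overlaps(a, steps=30):
    J = np.zeros((N, N))
    for v in range(views):
        for w in range(views):
            J += (1.0 if v == w else a) * np.outer(xi[v], xi[w])
    J /= N
    np.fill_diagonal(J, 0.0)
    s = xi[0].copy()                 # stimulate with a single view
    for _ in range(steps):
        s = np.sign(J @ s)
        s[s == 0] = 1.0
    return xi @ s / N                # overlap with every stored view

weak = final_overlaps(a=0.1)    # below threshold: the view itself persists
strong = final_overlaps(a=0.9)  # above threshold: object-like mixed state
print(weak, strong)
```

Below threshold the final state overlaps only the stimulated view; above it the overlaps spread roughly evenly over all views of the object, independent of which view served as the cue.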


Subject(s)
Form Perception/physiology , Fourier Analysis , Neural Networks, Computer , Pattern Recognition, Visual , Retina/physiology , Synapses/physiology , Visual Cortex/physiology
10.
Phys Rev Lett ; 54(5): 369-372, 1985 Feb 04.
Article in English | MEDLINE | ID: mdl-10031497