Results 1 - 3 of 3
1.
Neural Comput; 10(6): 1547-66, 1998 Jul 28.
Article in English | MEDLINE | ID: mdl-9698357

ABSTRACT

Image segmentation in spin-lattice models relies on the fast and reliable assignment of correct labels to those groups of spins that represent the same object. Commonly used local spin-update algorithms are slow because each iteration flips only a single spin, and a careful annealing schedule has to be designed to avoid local minima and to label larger areas correctly. Updating complete spin clusters is more efficient, but clusters that should represent different objects are often conjoined. In this study, we propose a cluster update algorithm that, like most local update algorithms, calculates an energy function and determines the probability of flipping a whole cluster of spins from the energy gain computed over the neighborhood of the cluster under consideration. The novel algorithm, called energy-based cluster update (ECU algorithm), is compared to its predecessors. A convergence proof is derived, and it is shown that the algorithm outperforms local update algorithms by far in speed and reliability. At the same time, it is more robust and noise tolerant than other cluster update algorithms, making annealing completely unnecessary. The resulting reduction in computational effort allows us to segment real images in about 1-5 sec on a regular workstation. The ECU algorithm can recover fine details of the images, and it is largely robust to luminance gradients across objects. In a final step, we introduce luminance-dependent visual latencies (Opara and Worgotter, 1996; Worgotter, Opara, Funke, and Eysel, 1996) into the spin-lattice model. This step guarantees that only spins representing pixels with similar luminance become activated at the same time. The energy function is then computed only for the interaction of the cluster under consideration with the currently active spins. This latency mechanism improves the quality of the image segmentation by another 40%. The results shown are based on the evaluation of gray-level differences. It is important to realize that all algorithmic components can be transferred easily to arbitrary image features, such as disparity, texture, and motion.
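
As a rough illustration of the kind of update this abstract describes, the sketch below implements a Potts-style energy-based cluster update in Python: a cluster of equal-label spins is grown with gray-level-dependent bonds, and the whole cluster is relabeled with a Metropolis-style acceptance rule at a fixed temperature (no annealing). The bond and coupling formulas, parameter values, and function names are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def neighbors(y, x, shape):
    """4-connected lattice neighbors."""
    h, w = shape
    if y > 0:
        yield y - 1, x
    if y < h - 1:
        yield y + 1, x
    if x > 0:
        yield y, x - 1
    if x < w - 1:
        yield y, x + 1

def grow_cluster(seed, labels, image, bond_scale, rng):
    """Grow a cluster of equal-label spins around `seed`; a bond to a
    neighbor is added with a probability that decays with the gray-level
    difference (Swendsen-Wang-like bond rule, assumed here)."""
    stack, cluster = [seed], {seed}
    while stack:
        y, x = stack.pop()
        for ny, nx in neighbors(y, x, labels.shape):
            if (ny, nx) in cluster or labels[ny, nx] != labels[y, x]:
                continue
            p_bond = np.exp(-abs(float(image[ny, nx]) - float(image[y, x])) / bond_scale)
            if rng.random() < p_bond:
                cluster.add((ny, nx))
                stack.append((ny, nx))
    return cluster

def boundary_energy(cluster, label, labels, image, coupling_scale):
    """Interaction energy of the cluster (carrying `label`) with its
    neighborhood: equal labels on similar gray levels lower the energy."""
    e = 0.0
    for y, x in cluster:
        for ny, nx in neighbors(y, x, labels.shape):
            if (ny, nx) in cluster:
                continue
            if labels[ny, nx] == label:
                e -= np.exp(-abs(float(image[ny, nx]) - float(image[y, x])) / coupling_scale)
    return e

def ecu_step(labels, image, n_labels, rng, bond_scale=10.0, coupling_scale=10.0):
    """One energy-based cluster update: grow a cluster, propose a new label
    for the whole cluster, accept via a Metropolis rule at fixed temperature."""
    seed = (int(rng.integers(labels.shape[0])), int(rng.integers(labels.shape[1])))
    cluster = grow_cluster(seed, labels, image, bond_scale, rng)
    delta_e = (boundary_energy(cluster, int(rng.integers(n_labels)), labels, image, coupling_scale)
               - boundary_energy(cluster, labels[seed], labels, image, coupling_scale))
    new_label = int(rng.integers(n_labels))
    delta_e = (boundary_energy(cluster, new_label, labels, image, coupling_scale)
               - boundary_energy(cluster, labels[seed], labels, image, coupling_scale))
    if delta_e <= 0 or rng.random() < np.exp(-delta_e):
        for y, x in cluster:
            labels[y, x] = new_label
    return labels
```

Starting from a random labeling (e.g. `labels = rng.integers(n_labels, size=image.shape)`), repeated calls to `ecu_step` coarsen the labeling toward regions of similar gray level; because whole clusters flip at once, no annealing schedule is needed in this sketch either.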

2.
Neural Comput; 8(7): 1493-520, 1996 Oct 01.
Article in English | MEDLINE | ID: mdl-8823944

ABSTRACT

An artificial neural network model is proposed that combines several aspects taken from physiological observations (oscillations, synchronizations) with a visual latency mechanism in order to achieve an improved analysis of visual scenes. The network consists of two parts. In the lower layers, which contain no lateral connections, the propagation velocity of the units' activity depends on the contrast of the individual objects in the scene. In the upper layers, lateral connections are used to achieve synchronization between corresponding image parts. This architecture ensures that the activity arising in response to a scene containing objects with different contrast is spread out over several layers in the network. Thereby, adjacent objects with different contrast are separated, and synchronization occurs in the upper layers without mutual disturbance between different objects. A comparison with a one-layer network shows that synchronization in the latency-dependent multilayer net is indeed achieved much faster as soon as more than five objects have to be recognized. In addition, it is shown that the network is highly robust against noise in the stimuli and against variations in the propagation delays (latencies). For a consistent analysis of a visual scene, the different features of an individual object have to be recognized as belonging together and separated from other objects. This study shows that temporal differences, naturally introduced by stimulus latencies in every biological sensory system, can strongly improve performance and allow for the analysis of more complex scenes.
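
Read as pseudocode, the two-part architecture could be sketched as follows: a contrast-to-latency mapping stands in for the laterally unconnected lower layers, and phase oscillators with onset gating stand in for the laterally connected upper layers. The linear contrast-to-speed rule, the Kuramoto-style oscillator model, and the periodic boundary handling are assumptions made for brevity, not the paper's exact equations.

```python
import numpy as np

def contrast_latency(contrast, v0=1.0, layers=10):
    """Map stimulus contrast to an arrival time in the top layer:
    activity from high-contrast pixels propagates faster through the
    lower, laterally unconnected layers (illustrative linear rule)."""
    speed = v0 * np.clip(contrast, 0.05, 1.0)   # layers traversed per time unit
    return layers / speed

def simulate(contrast_image, steps=400, dt=0.05, k=2.0, omega=2 * np.pi, rng=None):
    """Upper-layer sketch: each unit starts oscillating only after its
    latency has elapsed, and lateral coupling acts only between units
    that are both active (periodic boundaries via np.roll)."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = contrast_image.shape
    onset = contrast_latency(contrast_image)
    phase = rng.uniform(0, 2 * np.pi, size=(h, w))
    for step in range(steps):
        t = step * dt
        active = t >= onset
        coupling = np.zeros((h, w))
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            neigh_phase = np.roll(phase, (dy, dx), axis=(0, 1))
            neigh_active = np.roll(active, (dy, dx), axis=(0, 1))
            coupling += np.where(active & neigh_active,
                                 np.sin(neigh_phase - phase), 0.0)
        phase = phase + dt * (omega + k * coupling) * active
    return phase, onset
```

In this sketch, units belonging to the same object (adjacent pixels of similar contrast) become phase-locked, while objects of different contrast start oscillating at different times and therefore do not disturb each other's synchronization.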


Subject(s)
Models, Neurological , Vision, Ocular/physiology , Artifacts , Humans , Neural Networks, Computer , Reaction Time
3.
Neuroreport; 7(3): 741-4, 1996 Feb 29.
Article in English | MEDLINE | ID: mdl-8733735

ABSTRACT

A consistent analysis of a visual scene requires the recognition of different objects. In vertebrate brains this could be achieved by synchronization of the activity of disjunct nerve cell assemblies. During such a process, cross-talk between spatially adjacent image parts occurs, preventing efficient synchronization. Temporal differences, naturally introduced by stimulus latencies in every sensory system, were utilized in this study to counteract this effect and strongly improve network performance. To this end, in our model the image is 'spread out' in time as a function of contrast-dependent visual latencies, so that synchronization of cell assemblies occurs without mutual disturbance. The network model requires a direct link between visual latencies and the onset of synchronous oscillations in cortical cells; this link was confirmed experimentally.
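
The claimed link between latency and oscillation onset can be illustrated with a small grouping rule: only units whose contrast-dependent latencies fall within the same temporal window can oscillate together at any given moment. The latency rule and window size below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def active_groups(contrasts, window=3.0, v0=1.0, layers=10):
    """Group units whose contrast-dependent latencies fall within the same
    temporal window; only such a group can engage in synchronous
    oscillation at any one time (latency rule and window are assumed)."""
    c = np.clip(np.asarray(contrasts, dtype=float), 0.05, 1.0)
    latencies = layers / (v0 * c)          # high contrast -> short latency
    order = np.argsort(latencies)
    groups, current = [], [int(order[0])]
    for i in order[1:]:
        if latencies[i] - latencies[current[0]] <= window:
            current.append(int(i))
        else:
            groups.append(current)
            current = [int(i)]
    groups.append(current)
    return groups
```

With the parameters chosen here, `active_groups([0.9, 0.85, 0.3, 0.28])` separates the two high-contrast units from the two low-contrast ones, mirroring the idea that cells driven by stimuli of similar contrast become synchronously active while others remain silent.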


Subject(s)
Cognition/physiology , Nerve Net/physiology , Neural Networks, Computer , Evoked Potentials, Visual/physiology , Feedback/physiology , Geniculate Bodies/physiology , Models, Neurological , Photic Stimulation , Retinaldehyde/physiology , Vision, Ocular/physiology , Visual Cortex/physiology