1.
Front Comput Neurosci ; 16: 789253, 2022.
Article in English | MEDLINE | ID: mdl-35386856

ABSTRACT

We develop biologically plausible training mechanisms for self-supervised learning (SSL) in deep networks. Specifically, by biologically plausible training we mean that (i) all weight updates are based on the current activities of pre-synaptic units and the current activity, or activity retrieved from short-term memory, of post-synaptic units, including at the top-most error-computing layer; (ii) complex computations such as normalization, inner products, and division are avoided; (iii) connections between units are asymmetric; and (iv) most learning is carried out in an unsupervised manner. SSL with a contrastive loss satisfies the fourth condition, as it does not require labeled data, and it introduces robustness to observed perturbations of objects, which occur naturally as objects or observers move in 3D and under variable lighting over time. We propose a contrastive hinge-based loss whose error involves only simple local computations, satisfying (ii), as opposed to the standard contrastive losses employed in the literature, which do not lend themselves easily to implementation in a network architecture because they require complex computations involving ratios and inner products. Furthermore, we show that learning can be performed with one of two more plausible alternatives to backpropagation that satisfy conditions (i) and (ii). The first is difference target propagation (DTP), which trains network parameters using target-based local losses and employs a Hebbian learning rule, thus overcoming the biologically implausible symmetric-weight problem of backpropagation. The second is layer-wise learning, where each layer is directly connected to a layer computing the loss error. The layers are updated either sequentially in a greedy fashion (GLL) or in random order (RLL), and each training stage involves a single hidden-layer network.
Backpropagation through the single layer needed for each such network can be replaced either by fixed random feedback weights (RF) or by updated random feedback weights (URF), as in Amit (2019). Both methods avoid the symmetric-weight issue of backpropagation. By training convolutional neural networks (CNNs) with SSL and DTP, GLL, or RLL, we find that our proposed framework achieves performance comparable to standard BP learning in downstream linear classifier evaluation of the learned embeddings.
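A contrastive hinge-style loss of the kind described above can be sketched with only local subtractions and a threshold (the margin value and the L1 distance are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def contrastive_hinge_loss(z_anchor, z_pos, z_neg, margin=1.0):
    # Pull an embedding toward a perturbed view of the same image (z_pos)
    # and push it away from another image (z_neg), using only differences
    # and a threshold -- no ratios, normalization, or inner products.
    d_pos = np.sum(np.abs(z_anchor - z_pos))  # distance to positive pair
    d_neg = np.sum(np.abs(z_anchor - z_neg))  # distance to negative pair
    return max(0.0, margin + d_pos - d_neg)
```

The error is zero once the negative is farther than the positive by the margin, so the update signal involves only simple local comparisons.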

2.
Front Comput Neurosci ; 13: 18, 2019.
Article in English | MEDLINE | ID: mdl-31019458

ABSTRACT

We show that deep networks can be trained using Hebbian updates yielding similar performance to ordinary back-propagation on challenging image datasets. To overcome the unrealistic symmetry in connections between layers, implicit in back-propagation, the feedback weights are separate from the feedforward weights. The feedback weights are also updated with a local rule, the same one used for the feedforward weights: a weight is updated based solely on the product of the activities of the units it connects. With fixed feedback weights, as proposed in Lillicrap et al. (2016), performance degrades quickly as the depth of the network increases. If the feedforward and feedback weights are initialized with the same values, as proposed in Zipser and Rumelhart (1990), they remain the same throughout training, thus precisely implementing back-propagation. We show that even when the weights are initialized differently and at random, so that the algorithm is no longer performing back-propagation, performance is comparable on challenging datasets. We also propose a cost function whose derivative can be represented as a local Hebbian update on the last layer. Convolutional layers are updated with tied weights across space, which is not biologically plausible. We show that similar performance is achieved with untied layers, also known as locally connected layers, which have the connectivity implied by the convolutional layers but with weights untied and updated separately. In the linear case we show theoretically that the convergence of the error to zero is accelerated by the update of the feedback weights.
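The separate-feedback-weight scheme can be sketched for a single hidden layer (a toy sketch with made-up dimensions and learning rate, showing the updated-feedback variant):

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(0, 0.5, (n_hid, n_in))   # feedforward, input -> hidden
W2 = rng.normal(0, 0.5, (n_out, n_hid))  # feedforward, hidden -> output
B  = rng.normal(0, 0.5, (n_hid, n_out))  # feedback, NOT tied to W2.T

def train_step(x, y, lr=0.05):
    global W1, W2, B
    h = np.tanh(W1 @ x)
    e = y - W2 @ h                    # error computed at the top layer
    W2 += lr * np.outer(e, h)         # product of the activities it connects
    B  += lr * np.outer(h, e)         # feedback updated with the SAME local rule
    dh = (B @ e) * (1.0 - h ** 2)     # error routed through feedback weights
    W1 += lr * np.outer(dh, x)
    return float(np.sum(e ** 2))
```

Because W2 and B receive transposed versions of the same Hebbian update, they align over training even when initialized differently, which is the regime the abstract studies.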

3.
Nat Neurosci ; 18(12): 1804-10, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26523643

ABSTRACT

Information about external stimuli is thought to be stored in cortical circuits through experience-dependent modifications of synaptic connectivity. These modifications of network connectivity should lead to changes in neuronal activity as a particular stimulus is repeatedly encountered. Here we ask what plasticity rules are consistent with the differences in the statistics of the visual response to novel and familiar stimuli in inferior temporal cortex, an area underlying visual object recognition. We introduce a method that allows one to infer the dependence of the presumptive learning rule on postsynaptic firing rate, and we show that the inferred learning rule exhibits depression for low postsynaptic rates and potentiation for high rates. The threshold separating depression from potentiation is strongly correlated with both mean and s.d. of the firing rate distribution. Finally, we show that network models implementing a rule extracted from data show stable learning dynamics and lead to sparser representations of stimuli.
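The shape of such a rate-dependent rule can be illustrated with a simple quadratic form (the functional form and threshold below are illustrative only, not the rule actually inferred from the data):

```python
def weight_change(r_pre, r_post, theta):
    # Depression (negative change) when the post-synaptic rate is below the
    # threshold theta, potentiation (positive change) above it -- a BCM-like
    # shape chosen purely for illustration.
    return r_pre * r_post * (r_post - theta)
```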


Subject(s)
Action Potentials/physiology , Learning/physiology , Neurons/physiology , Temporal Lobe/physiology , Animals , Macaca mulatta , Male , Temporal Lobe/cytology
4.
PLoS Comput Biol ; 11(10): e1004517, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26439258

ABSTRACT

This paper presents a method for automated detection of complex (non-self-avoiding) postures of the nematode Caenorhabditis elegans and its application to analyses of locomotion defects. Our approach is based on progressively detailed statistical models that enable detection of the head and the body even in cases of severe coilers, where data from traditional trackers is limited. We restrict the input available to the algorithm to a single digitized frame, such that manual initialization is not required and the detection problem becomes embarrassingly parallel. Consequently, the proposed algorithm does not propagate detection errors and naturally integrates in a "big data" workflow used for large-scale analyses. Using this framework, we analyzed the dynamics of postures and locomotion of wild-type animals and mutants that exhibit severe coiling phenotypes. Our approach can readily be extended to additional automated tracking tasks such as tracking pairs of animals (e.g., for mating assays) or different species.


Subject(s)
Algorithms , Caenorhabditis elegans/physiology , Image Interpretation, Computer-Assisted/methods , Locomotion/physiology , Posture/physiology , Whole Body Imaging/methods , Animals , Caenorhabditis elegans/anatomy & histology , Computer Simulation , Data Interpretation, Statistical , Models, Statistical , Pattern Recognition, Automated/methods , Reproducibility of Results , Sensitivity and Specificity
5.
PLoS Comput Biol ; 10(8): e1003727, 2014 Aug.
Article in English | MEDLINE | ID: mdl-25101662

ABSTRACT

In standard attractor neural network models, specific patterns of activity are stored in the synaptic matrix, so that they become fixed point attractors of the network dynamics. The storage capacity of such networks has been quantified in two ways: the maximal number of patterns that can be stored, and the stored information measured in bits per synapse. In this paper, we compute both quantities in fully connected networks of N binary neurons with binary synapses, storing patterns with coding level f, in the large N and sparse coding limits (N → ∞, f → 0). We also derive finite-size corrections that accurately reproduce the results of simulations in networks of tens of thousands of neurons. These methods are applied to three different scenarios: (1) the classic Willshaw model, (2) networks with stochastic learning in which patterns are shown only once (one-shot learning), (3) networks with stochastic learning in which patterns are shown multiple times. The storage capacities are optimized over network parameters, which allows us to compare the performance of the different models. We show that finite-size effects strongly reduce the capacity, even for networks of realistic sizes. We discuss the implications of these results for memory storage in the hippocampus and cerebral cortex.
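The classic Willshaw model of scenario (1) stores sparse binary patterns with clipped Hebbian learning; a minimal sketch (the pattern sizes and retrieval threshold here are illustrative):

```python
import numpy as np

def willshaw_store(patterns):
    # Binary synapses: J[i, j] = 1 if units i and j are coactive in ANY
    # stored pattern (clipped Hebbian learning).
    n = patterns.shape[1]
    J = np.zeros((n, n), dtype=int)
    for p in patterns:
        J = np.maximum(J, np.outer(p, p))
    np.fill_diagonal(J, 0)
    return J

def willshaw_retrieve(J, cue, theta):
    # A unit fires if its input field from the cue reaches the threshold.
    return (J @ cue >= theta).astype(int)
```

With sparse, weakly overlapping patterns, cueing with a stored pattern reactivates exactly its own units; capacity analyses of the kind in the paper quantify when cross-talk between patterns breaks this.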


Subject(s)
Memory/physiology , Models, Neurological , Nerve Net/physiology , Synapses/physiology , Animals , Cerebral Cortex/physiology , Computational Biology , Hippocampus/physiology , Neurons/physiology , Rats
6.
Front Hum Neurosci ; 7: 765, 2013.
Article in English | MEDLINE | ID: mdl-24294199

ABSTRACT

The Delay-Match-to-Sample (DMS) task has been used in countless studies of memory, undergoing numerous modifications that make the task more and more challenging for participants. The physiological correlate of memory is modified neural activity during the cue-to-match delay period, reflecting reverberating attractor activity in multiple interconnected cells. DMS tasks may use a fixed set of well-practiced stimulus images, allowing for the creation of attractors, or unlimited novel images, for which no attractor exists. Using well-learned stimuli requires that participants determine whether a remembered image was seen in the same trial or a preceding one, responding only to the former. Thus, trial-to-trial transitions must include a "reset" mechanism to mark old images as such. We test two groups of monkeys on a delay-match-to-multiple-images task, one with well-trained and one with novel images. Only the first developed a reset mechanism. We then switched tasks between the groups. We find that introducing fixed images initiates development of reset, and once established, switching to novel images does not disable its use. Without reset, memory decays slowly, leaving ~40% of images recognizable after a minute. Here, the presence of reward further enhances memory of previously seen images.

7.
Front Hum Neurosci ; 7: 408, 2013.
Article in English | MEDLINE | ID: mdl-23908619

ABSTRACT

Delay match to sample (DMS) experiments provide an important link between the theory of recurrent network models and behavior and neural recordings. We define a simple recurrent network of binary neurons with stochastic neural dynamics and Hebbian synaptic learning. Most DMS experiments involve heavily learned images, and in this setting we propose a readout mechanism for match occurrence based on a smaller increment in overall network activity when the matched pattern is already in working memory, and a reset mechanism to clear memory from stimuli of previous trials using random network activity. Simulations show that this model accounts for a wide range of variations on the original DMS tasks, including ABBA tasks with distractors, and more general repetition detection tasks with both learned and novel images. The differences in network settings required for different tasks derive from easily defined changes in the levels of noise and inhibition. The same models can also explain experiments involving repetition detection with novel images, although in this case the readout mechanism for match is based on higher overall network activity. The models give rise to interesting predictions that may be tested in neural recordings.

8.
Article in English | MEDLINE | ID: mdl-23576955

ABSTRACT

During a reach, neural activity recorded from motor cortex is typically thought to linearly encode the observed movement. However, it has also been reported that during a double-step reaching paradigm, neural coding of the original movement is replaced by that of the corrective movement. Here, we use neural data recorded from multi-electrode arrays implanted in the motor and premotor cortices of rhesus macaques to directly compare these two hypotheses. We show that while a majority of neurons display linear encoding of movement during a double-step, a minority display a dramatic drop in firing rate that is predicted by the replacement hypothesis. Neural activity in the subpopulation showing replacement is more likely to lag the observed movement, and may therefore be involved in the monitoring of the sensory consequences of a motor command.


Subject(s)
Motor Cortex/physiology , Movement/physiology , Nerve Net/physiology , Animals , Conditioning, Operant/physiology , Macaca mulatta , Psychomotor Performance/physiology , Reaction Time/physiology
9.
Article in English | MEDLINE | ID: mdl-22737121

ABSTRACT

We describe an attractor network of binary perceptrons receiving inputs from a retinotopic visual feature layer. Each class is represented by a random subpopulation of the attractor layer, which is turned on in a supervised manner during learning of the feedforward connections. These are discrete three-state synapses updated according to a simple field-dependent Hebbian rule. For testing, the attractor layer is initialized by the feedforward inputs and then undergoes asynchronous random updating until convergence to a stable state. Classification is indicated by the subpopulation that is persistently activated. The contribution of this paper is two-fold. First, this is the first example of competitive classification rates on real data being achieved through recurrent dynamics in the attractor layer, which is stable only if recurrent inhibition is introduced. Second, we demonstrate that employing three-state synapses with feedforward inhibition is essential for achieving competitive classification rates, owing to the ability to effectively exploit both positively and negatively informative features.
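A field-dependent update for a discrete three-state synapse of this kind can be sketched as follows (the stop-learning gating on the field is an illustrative guess at the rule's structure, not the paper's exact rule):

```python
def update_synapse(s, pre, post_target, field, theta=0.0):
    # s is a discrete synaptic state in {-1, 0, +1}. Learning is gated by the
    # post-synaptic field: potentiate only when the target unit should fire
    # but its field is still at or below threshold; depress only when it
    # should stay silent but its field is above threshold.
    if pre == 1 and post_target == 1 and field <= theta:
        return min(s + 1, 1)
    if pre == 1 and post_target == 0 and field > theta:
        return max(s - 1, -1)
    return s
```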

10.
J Physiol Paris ; 106(3-4): 112-9, 2012.
Article in English | MEDLINE | ID: mdl-21939762

ABSTRACT

We have previously shown that the responses of primary motor cortical neurons are more accurately predicted if one assumes that individual neurons encode temporally extensive movement fragments or preferred trajectories instead of static movement parameters (Hatsopoulos et al., 2007). Building on these findings, we examine here how these preferred trajectories can be combined to generate a rich variety of preferred movement trajectories when neurons fire simultaneously. Specifically, we used a generalized linear model to fit each neuron's spike rate to an exponential function of the inner product between the actual movement trajectory and the preferred trajectory; then, assuming conditional independence, when two neurons fire simultaneously their spiking probabilities multiply, implying that their preferred trajectories add. We used a similar exponential model to fit the probability of simultaneous firing and found that the majority of neuron pairs did combine their preferred trajectories using a simple additive rule. Moreover, a minority of neuron pairs that engaged in significant synchronization combined their preferred trajectories through a small scaling adjustment to the additive rule in the exponent, while preserving the shape of the predicted trajectory representation from the additive rule. These results suggest that complex movement representations can be synthesized in simultaneously firing neuronal ensembles by adding the trajectory representations of the constituents in the ensemble.
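The additive rule follows directly from the exponential model: under conditional independence the firing probabilities multiply, so the exponents, and hence the preferred trajectories, add. A small numerical check (the vectors below are arbitrary):

```python
import numpy as np

def rate(traj, pref):
    # Exponential GLM: firing rate is an exponential function of the inner
    # product between the actual and the preferred trajectory.
    return np.exp(np.dot(traj, pref))

traj = np.array([0.1, -0.2, 0.3])
pref1 = np.array([1.0, 0.0, 0.5])
pref2 = np.array([0.2, 0.4, -0.1])

joint = rate(traj, pref1) * rate(traj, pref2)   # conditional independence
combined = rate(traj, pref1 + pref2)            # one neuron with summed pref
```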


Subject(s)
Motor Cortex/physiology , Movement/physiology , Animals , Linear Models , Macaca mulatta , Models, Neurological , Neurons/physiology
11.
J Comput Neurosci ; 30(3): 699-720, 2011 Jun.
Article in English | MEDLINE | ID: mdl-20978831

ABSTRACT

We define the memory capacity of networks of binary neurons with finite-state synapses in terms of retrieval probabilities of learned patterns under standard asynchronous dynamics with a predetermined threshold. The threshold is set to control the proportion of non-selective neurons that fire. An optimal inhibition level is chosen to stabilize network behavior. For any local learning rule we provide a computationally efficient and highly accurate approximation to the retrieval probability of a pattern as a function of its age. The method is applied to the sequential models (Fusi and Abbott, Nat Neurosci 10:485-493, 2007) and meta-plasticity models (Fusi et al., Neuron 45(4):599-611, 2005; Leibold and Kempter, Cereb Cortex 18:67-77, 2008). We show that as the number of synaptic states increases, the capacity, as defined here, either plateaus or decreases. In the few cases where multi-state models exceed the capacity of binary synapse models the improvement is small.


Subject(s)
Models, Statistical , Nerve Net/physiology , Neural Networks, Computer , Neurons/physiology , Synaptic Transmission/physiology , Action Potentials/physiology , Animals , Brain/physiology , Humans
12.
J Neurosci ; 30(50): 17079-90, 2010 Dec 15.
Article in English | MEDLINE | ID: mdl-21159978

ABSTRACT

Few studies have investigated how the cortex encodes the preshaping of the hand as an object is grasped, an ethological movement referred to as prehension. We developed an encoding model of hand kinematics to test whether primary motor cortex (MI) neurons encode temporally extensive combinations of joint motions that characterize a prehensile movement. Two female rhesus macaque monkeys were trained to grasp 4 different objects presented by a robot while their arm was held in place by a thermoplastic brace. We used multielectrode arrays to record MI neurons and an infrared camera motion tracking system to record the 3-D positions of 14 markers placed on the monkeys' wrist and digits. A generalized linear model framework was used to predict the firing rate of each neuron in a 4 ms time interval, based on its own spiking history and the spatiotemporal kinematics of the joint angles of the hand. Our results show that the variability of the firing rate of MI neurons is better described by temporally extensive combinations of finger and wrist joint angle kinematics rather than any individual joint motion or any combination of static kinematic parameters at their optimal lag. Moreover, a higher percentage of neurons encoded joint angular velocities than joint angular positions. These results suggest that neurons encode the covarying trajectories of the hand's joints during a prehensile movement.


Subject(s)
Hand Strength/physiology , Motor Cortex/physiology , Movement/physiology , Neurons/physiology , Action Potentials/physiology , Animals , Arm/physiology , Biomechanical Phenomena , Female , Hand/physiology , Joints/physiology , Linear Models , Macaca mulatta
13.
Neural Comput ; 22(3): 660-88, 2010 Mar.
Article in English | MEDLINE | ID: mdl-19842984

ABSTRACT

We compute retrieval probabilities as a function of pattern age for networks with binary neurons and synapses updated with the simple Hebbian learning model studied in Amit and Fusi (1994). The analysis depends on choosing a neural threshold that enables patterns to stabilize in the neural dynamics. In contrast to most earlier work, where selective neurons for each pattern are drawn independently with fixed probability f, here we analyze the situation where f is drawn from some distribution on a range of coding levels. In order to set a workable threshold in this setting, it is necessary to introduce a simple inhibition in the neural dynamics whose magnitude depends on the total activity of the network. Proper choice of the threshold depends on the value of the covariances between the synapses, for which we provide an explicit formula. Retrieval probabilities depend on the distribution of the fields induced by a learned pattern. We show that the field induced by the first learned pattern evolves as a Markov chain during subsequent learning epochs, leading to a recursive formula for the distribution. Alternatively, the distribution can be computed using a normal approximation, which involves the value of the synaptic covariances. Capacity is computed as the sum of the retrieval probabilities over all ages. We show through simulation that the chosen threshold enables retrieval with asynchronous dynamics even in the presence of significant noise in the initial state of the pattern. The computed probabilities with both methods are shown to be very close to probabilities estimated from simulation. The analysis is extended to randomly connected networks.


Subject(s)
Neural Networks, Computer , Aging , Algorithms , Computer Simulation , Humans , Learning/physiology , Markov Chains , Memory/physiology , Neural Inhibition , Neurons/physiology , Probability , Synapses/physiology
14.
J Neurophysiol ; 102(2): 1331-9, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19535480

ABSTRACT

The use of chronic intracortical multielectrode arrays has become increasingly prevalent in neurophysiological experiments. However, it is not obvious whether neuronal signals obtained over multiple recording sessions come from the same or different neurons. Here, we develop a criterion to assess single-unit stability by measuring the similarity of 1) average spike waveforms and 2) interspike interval histograms (ISIHs). Neuronal activity was recorded from four Utah arrays implanted in primary motor and premotor cortices in three rhesus macaque monkeys during 10 recording sessions over a 15- to 17-day period. A unit was defined as stable through a given day if the stability criterion was satisfied on all recordings leading up to that day. We found that 57% of the original units were stable through 7 days, 43% were stable through 10 days, and 39% were stable through 15 days. Moreover, stable units were more likely to remain stable in subsequent recording sessions (i.e., 89% of the neurons that were stable through four sessions remained stable on the fifth). Using both waveform and ISIH data instead of just waveforms improved performance by reducing the number of false positives. We also demonstrate that this method can be used to track neurons across days, even during adaptation to a visuomotor rotation. Identifying a stable subset of neurons should allow the study of long-term learning effects across days and has practical implications for pooling of behavioral data across days and for increasing the effectiveness of brain-machine interfaces.
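The two-part stability criterion can be sketched as a waveform correlation combined with an interspike-interval histogram (ISIH) overlap (the thresholds below are placeholders, not the values fitted in the study):

```python
import numpy as np

def is_stable(wf_a, wf_b, isih_a, isih_b, wf_thresh=0.95, isih_thresh=0.8):
    # A unit is called stable across two sessions if (1) its mean spike
    # waveforms are highly correlated AND (2) its normalized ISIHs overlap.
    r = np.corrcoef(wf_a, wf_b)[0, 1]
    pa = isih_a / isih_a.sum()
    pb = isih_b / isih_b.sum()
    overlap = np.minimum(pa, pb).sum()   # histogram intersection in [0, 1]
    return bool(r >= wf_thresh and overlap >= isih_thresh)
```

Requiring both signals reduces false positives relative to waveform shape alone, which is the improvement the abstract reports.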


Subject(s)
Electrodes, Implanted , Electrophysiology/instrumentation , Electrophysiology/methods , Neurons/physiology , Action Potentials , Adaptation, Psychological/physiology , Algorithms , Animals , Frontal Lobe/physiology , Macaca mulatta , Motor Activity , Motor Cortex/physiology , Time Factors
15.
IEEE Trans Pattern Anal Mach Intell ; 30(11): 1998-2010, 2008 Nov.
Article in English | MEDLINE | ID: mdl-18787247

ABSTRACT

We describe an algorithm for the efficient annotation of events of interest in video microscopy. The specific application involves the detection and tracking of multiple possibly overlapping vesicles in total internal reflection fluorescent microscopy images. A statistical model for the dynamic image data of vesicle configurations allows us to properly weight various hypotheses online. The goal is to find the most likely trajectories given a sequence of images. The computational challenge is addressed by defining a sequence of coarse-to-fine tests, derived from the statistical model, to quickly eliminate most candidate positions at each time frame. The computational load of the tests is initially very low and gradually increases as the false positives become more difficult to eliminate. Only at the last step are state variables estimated from a complete time-dependent model. Processing time thus depends mainly on the number of vesicles in the image and not on image size.


Subject(s)
Database Management Systems , Databases, Factual , Documentation/methods , Image Interpretation, Computer-Assisted/methods , Information Storage and Retrieval/methods , Microscopy, Video/methods , Transport Vesicles/ultrastructure , Artificial Intelligence , Image Enhancement/methods , Pattern Recognition, Automated/methods
16.
Neural Comput ; 20(8): 1928-50, 2008 Aug.
Article in English | MEDLINE | ID: mdl-18386988

ABSTRACT

A network of excitatory synapses trained with a conservative version of Hebbian learning is used as a model for recognizing the familiarity of thousands of once-seen stimuli from those never seen before. Such networks were initially proposed for modeling memory retrieval (selective delay activity). We show that the same framework allows the incorporation of both familiarity recognition and memory retrieval, and estimate the network's capacity. In the case of binary neurons, we extend the analysis of Amit and Fusi (1994) to obtain capacity limits based on computations of signal-to-noise ratio of the field difference between selective and non-selective neurons of learned signals. We show that with fast learning (potentiation probability approximately 1), the most recently learned patterns can be retrieved in working memory (selective delay activity). A much higher number of once-seen learned patterns elicit a realistic familiarity signal in the presence of an external field. With potentiation probability much less than 1 (slow learning), memory retrieval disappears, whereas familiarity recognition capacity is maintained at a similarly high level. This analysis is corroborated in simulations. For analog neurons, where such analysis is more difficult, we simplify the capacity analysis by studying the excess number of potentiated synapses above the steady-state distribution. In this framework, we derive the optimal constraint between potentiation and depression probabilities that maximizes the capacity.


Subject(s)
Brain/physiology , Learning/physiology , Neural Networks, Computer , Neurons/physiology , Synapses/physiology , Algorithms , Computer Simulation , Models, Statistical , Pattern Recognition, Automated/methods , Synaptic Transmission/physiology
17.
J Neurosci ; 27(19): 5105-14, 2007 May 09.
Article in English | MEDLINE | ID: mdl-17494696

ABSTRACT

Previous studies have suggested that complex movements can be elicited by electrical stimulation of the motor cortex. Most recording studies in the motor cortex, however, have investigated the encoding of time-independent features of movement such as direction, velocity, position, or force. Here, we show that single motor cortical neurons encode temporally evolving movement trajectories and not simply instantaneous movement parameters. We explicitly characterize the preferred trajectories of individual neurons using a simple exponential encoding model and demonstrate that temporally extended trajectories not only capture the tuning of motor cortical neurons more accurately, but can be used to decode the instantaneous movement direction with less error. These findings suggest that single motor cortical neurons encode whole movement fragments, which are temporally extensive and can be quite complex.


Subject(s)
Action Potentials/physiology , Motor Cortex/physiology , Movement/physiology , Neural Pathways/physiology , Neurons/physiology , Animals , Conditioning, Operant , Macaca mulatta , Models, Neurological , Nerve Net/physiology , Orientation/physiology , Proprioception/physiology , Reaction Time/physiology , Signal Processing, Computer-Assisted , Space Perception/physiology , Synaptic Transmission/physiology , Time Factors
18.
J Acoust Soc Am ; 118(4): 2634-48, 2005 Oct.
Article in English | MEDLINE | ID: mdl-16266183

ABSTRACT

We consider a novel approach to the problem of detecting phonological objects like phonemes, syllables, or words, directly from the speech signal. We begin by defining local features in the time-frequency plane with built in robustness to intensity variations and time warping. Global templates of phonological objects correspond to the coincidence in time and frequency of patterns of the local features. These global templates are constructed by using the statistics of the local features in a principled way. The templates have clear phonetic interpretability, are easily adaptable, have built in invariances, and display considerable robustness in the face of additive noise and clutter from competing speakers. We provide a detailed evaluation of the performance of some diphone detectors and a word detector based on this approach. We also perform some phonetic classification experiments based on the edge-based features suggested here.


Subject(s)
Algorithms , Phonetics , Speech Acoustics , Speech Perception/physiology , Acoustic Stimulation , Databases, Factual , Humans , Models, Biological , Noise , ROC Curve , Sound Spectrography , Speech Production Measurement , Time Factors
19.
IEEE Trans Pattern Anal Mach Intell ; 26(12): 1606-21, 2004 Dec.
Article in English | MEDLINE | ID: mdl-15573821

ABSTRACT

Multiclass shape detection, in the sense of recognizing and localizing instances from multiple shape classes, is formulated as a two-step process in which local indexing primes global interpretation. During indexing a list of instantiations (shape identities and poses) is compiled, constrained only by no missed detections at the expense of false positives. Global information, such as expected relationships among poses, is incorporated afterward to remove ambiguities. This division is motivated by computational efficiency. In addition, indexing itself is organized as a coarse-to-fine search simultaneously in class and pose. This search can be interpreted as successive approximations to likelihood ratio tests arising from a simple ("naive Bayes") statistical model for the edge maps extracted from the original images. The key to constructing efficient "hypothesis tests" for multiple classes and poses is local ORing; in particular, spread edges provide imprecise but common and locally invariant features. Natural tradeoffs then emerge between discrimination and the pattern of spreading. These are analyzed mathematically within the model-based framework and the whole procedure is illustrated by experiments in reading license plates.
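Spread edges (local ORing) can be sketched directly: an edge feature is declared present anywhere in a small neighborhood of its detected location, trading precision for local invariance (the neighborhood size here is arbitrary):

```python
import numpy as np

def spread_edges(edge_map, s):
    # Local ORing: each detected edge activates the whole (2s+1) x (2s+1)
    # neighborhood around it, giving imprecise but locally invariant
    # features for the coarse indexing stage.
    out = np.zeros_like(edge_map)
    n, m = edge_map.shape
    for i in range(n):
        for j in range(m):
            if edge_map[i, j]:
                out[max(0, i - s):i + s + 1, max(0, j - s):j + s + 1] = 1
    return out
```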

20.
Vision Res ; 43(19): 2073-88, 2003 Sep.
Article in English | MEDLINE | ID: mdl-12842160

ABSTRACT

We describe an architecture for invariant visual detection and recognition. Learning is performed in a single central module. The architecture makes use of a replica module consisting of copies of retinotopic layers of local features, with a particular design of inputs and outputs, that allows them to be primed either to attend to a particular location, or to attend to a particular object representation. In the former case the data at a selected location can be classified in the central module. In the latter case all instances of the selected object are detected in the field of view. The architecture is used to explain a number of psychophysical and physiological observations: object based attention, the different response time slopes of target detection among distractors, and observed attentional modulation of neuronal responses. We hypothesize that the organization of visual cortex in columns of neurons responding to the same feature at the same location may provide the copying architecture needed for translation invariance.


Subject(s)
Attention , Form Perception/physiology , Humans , Image Processing, Computer-Assisted , Pattern Recognition, Visual/physiology , Photic Stimulation/methods , Visual Cortex/physiology