Results 1 - 19 of 19
1.
Cell Rep ; 41(6): 111608, 2022 11 08.
Article in English | MEDLINE | ID: mdl-36351381

ABSTRACT

A major issue in modern neuroscience is understanding how cell populations represent multiple spatial and motor features during goal-directed movements. The direction and distance (depth) of arm movements often appear to be controlled independently during behavior, but it is unknown whether they share neural resources. Using information theory, singular value decomposition, and dimensionality reduction methods, we compare direction and depth effects and their convergence across three parietal areas during an arm movement task. All methods show a stronger direction effect during early movement preparation, whereas depth signals prevail during movement execution. Going from anterior to posterior sectors, we report an increasing number of cells processing both signals and stronger depth effects. These findings suggest serial processing of direction and depth, consistent with behavioral evidence, and reveal a gradient of joint versus independent control of these features in parietal cortex that supports its role in sensorimotor transformations.
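As an illustration of two of the analyses named in this abstract, the sketch below (simulated spike counts; variable names such as `rates` are illustrative, not the authors' code) estimates the mutual information between a toy neuron's firing and each movement feature, then uses the singular values of the direction-by-depth tuning matrix to ask how separable the two effects are.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200
direction = rng.integers(0, 3, n_trials)             # 3 movement directions
depth = rng.integers(0, 3, n_trials)                 # 3 movement depths (distances)
rates = rng.poisson(5 + 3 * direction + 1 * depth)   # toy neuron, direction-dominated

def mutual_information(x, y, bins=8):
    """Plug-in estimate (in bits) of MI from a joint histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

print("MI(rate; direction):", round(mutual_information(rates, direction), 3))
print("MI(rate; depth):    ", round(mutual_information(rates, depth), 3))

# SVD of the direction-by-depth mean-rate matrix: a dominant first singular
# value indicates (multiplicatively) separable direction and depth tuning.
tuning = np.array([[rates[(direction == d) & (depth == z)].mean()
                    for z in range(3)] for d in range(3)])
s = np.linalg.svd(tuning, compute_uv=False)
print("variance in 1st singular value:", round(s[0]**2 / np.sum(s**2), 3))
```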


Subject(s)
Macaca , Psychomotor Performance , Animals , Parietal Lobe , Movement , Forelimb
2.
Neuroimage ; 233: 117896, 2021 06.
Article in English | MEDLINE | ID: mdl-33667671

ABSTRACT

Humans are fast and accurate when they recognize familiar faces. Previous neurophysiological studies have shown enhanced representations for the dichotomy of familiar vs. unfamiliar faces. As familiarity is a spectrum, however, any neural correlate should reflect graded representations for more vs. less familiar faces along that spectrum. By systematically varying familiarity across stimuli, we demonstrate a neural familiarity spectrum using electroencephalography. We then evaluated the spatiotemporal dynamics of familiar face recognition across the brain. Specifically, we developed a novel informational connectivity method to test whether peri-frontal brain areas contribute to familiar face recognition. Results showed that feed-forward flow dominated for the most familiar faces, and top-down flow became dominant only when sensory evidence was insufficient to support face recognition. These results demonstrate that perceptual difficulty and the level of familiarity influence the neural representation of familiar faces and the degree to which peri-frontal neural networks contribute to familiar face recognition.
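The informational connectivity method is described only at a high level here; one common formulation correlates trial-wise decoding evidence between two regions at different time lags, with a lead/lag asymmetry read as feed-forward vs. top-down flow. The sketch below illustrates that idea on synthetic data (`occipital` and `frontal` are hypothetical stand-ins, not the authors' pipeline).

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_times, true_lag = 100, 60, 5
# Toy per-trial, per-timepoint decoding evidence for two regions; the "frontal"
# signal is a delayed, noisier copy of the "occipital" one.
occipital = rng.normal(size=(n_trials, n_times))
frontal = np.roll(occipital, true_lag, axis=1) + 0.5 * rng.normal(size=(n_trials, n_times))

def lagged_informational_connectivity(a, b, max_lag=10):
    """Trial-wise correlation of decoding evidence between regions at each lag."""
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        times = range(max(0, lag), min(a.shape[1], a.shape[1] + lag))
        out[lag] = float(np.mean([np.corrcoef(a[:, t - lag], b[:, t])[0, 1]
                                  for t in times]))
    return out

ic = lagged_informational_connectivity(occipital, frontal)
peak = max(ic, key=ic.get)
print(f"peak correlation at lag {peak} (positive: occipital leads frontal)")
```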


Subject(s)
Brain/physiology , Facial Recognition/physiology , Nerve Net/physiology , Pattern Recognition, Visual/physiology , Photic Stimulation/methods , Recognition, Psychology/physiology , Adult , Brain/diagnostic imaging , Electroencephalography/methods , Female , Humans , Magnetic Resonance Imaging/methods , Male , Multivariate Analysis , Nerve Net/diagnostic imaging , Young Adult
3.
J Comp Neurol ; 528(17): 3108-3122, 2020 12 01.
Article in English | MEDLINE | ID: mdl-32080849

ABSTRACT

Goal-directed movements involve a series of neural computations that compare the sensory representations of goal location and effector position and transform these into motor commands. Neurons in posterior parietal cortex (PPC) control several effectors (e.g., eye, hand, foot) and encode goal location in a variety of spatial coordinate systems, including those anchored to gaze direction and to the positions of the head, shoulder, or hand. However, there is little evidence on whether reference frames also depend on the effector and/or the type of motor response. We addressed this issue in macaque PPC area V6A, where previous reports using a fixate-to-reach in depth task, from different starting arm positions, indicated that most units use mixed body/hand-centered coordinates. Here, we applied singular value decomposition and gradient analyses to characterize the reference frames in V6A while the animals, instead of reaching, performed a nonspatial motor response (hand lift). We found that most neurons used mixed body/hand coordinates rather than "pure" body- or hand-centered coordinates. As the task progressed, the effect of hand position on activity became stronger than that of target location. Activity consistent with body-centered coding was present only in a subset of neurons active early in the task. Applying the same analyses to a population of V6A neurons recorded during the fixate-to-reach task yielded similar results. These findings suggest that V6A neurons use consistent reference frames across spatial and nonspatial motor responses, a functional property that may allow the integration of spatial awareness and movement control.
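A minimal sketch of one standard form of the gradient analysis mentioned above (simulated firing rates in arbitrary units, not the study's data): fit the response as a plane over target and hand positions; the angle of the fitted gradient classifies the cell's reference frame.

```python
import numpy as np

rng = np.random.default_rng(2)
targets, hands = np.meshgrid(np.arange(-2, 3), np.arange(-2, 3))   # 5x5 grid (a.u.)
rate = 10 + 2.0 * targets + 1.5 * hands + rng.normal(0, 0.5, targets.shape)

# Least-squares plane fit: rate ~ b0 + b_t * target_position + b_h * hand_position
X = np.column_stack([np.ones(targets.size), targets.ravel(), hands.ravel()])
b0, b_t, b_h = np.linalg.lstsq(X, rate.ravel(), rcond=None)[0]

# Gradient angle: ~0 deg = target-centered, ~90 deg = hand-centered, ~45 deg = mixed.
angle = np.degrees(np.arctan2(b_h, b_t))
print(f"gradient angle: {angle:.1f} deg")
```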


Subject(s)
Movement/physiology , Neurons/physiology , Parietal Lobe/physiology , Psychomotor Performance/physiology , Reaction Time/physiology , Space Perception/physiology , Animals , Macaca fascicularis , Male , Parietal Lobe/cytology , Photic Stimulation/methods , Random Allocation
4.
J Vis ; 19(9): 1, 2019 08 01.
Article in English | MEDLINE | ID: mdl-31369042

ABSTRACT

Behavioral studies in humans indicate that peripheral vision can support object recognition to some extent. Moreover, recent studies have shown that some information from brain regions retinotopic to the visual periphery is fed back to regions retinotopic to the fovea, and that disrupting this feedback impairs object recognition in humans. However, it is unclear to what extent information in the visual periphery contributes to human object categorization. Here, we designed two series of rapid object categorization tasks to first investigate the performance of human peripheral vision in categorizing natural object images at different eccentricities and abstraction levels (superordinate, basic, and subordinate). Then, using a delayed foveal noise mask, we studied how modulating the foveal representation affects peripheral object categorization at each abstraction level. We found that peripheral vision can quickly and accurately accomplish superordinate categorization, while its performance at finer categorization levels drops dramatically as the object is presented farther in the periphery. We also found that a foveal noise mask delayed by 300 ms significantly disturbs categorization performance at the basic and subordinate levels, while it has no effect at the superordinate level. Our results suggest that human peripheral vision can easily process objects at high abstraction levels, and that this information is fed back to foveal vision to prime the foveal cortex for finer categorization when a saccade is made toward the target object.


Subject(s)
Form Perception/physiology , Fovea Centralis/physiology , Pattern Recognition, Visual/physiology , Visual Fields/physiology , Adult , Discrimination, Psychological , Female , Humans , Male
5.
Nat Commun ; 10(1): 941, 2019 02 26.
Article in English | MEDLINE | ID: mdl-30808863

ABSTRACT

Sensory systems face a barrage of stimulation that continually changes along multiple dimensions. These simultaneous changes create a formidable problem for the nervous system, as neurons must dynamically encode each stimulus dimension, despite changes in other dimensions. Here, we measured how neurons in visual cortex encode orientation following changes in luminance and contrast, which are critical for visual processing, but nuisance variables in the context of orientation coding. Using information theoretic analysis and population decoding approaches, we find that orientation discriminability is luminance and contrast dependent, changing over time due to firing rate adaptation. We also show that orientation discrimination in human observers changes during adaptation, in a manner consistent with the neuronal data. Our results suggest that adaptation does not maintain information rates per se, but instead acts to keep sensory systems operating within the limited dynamic range afforded by spiking activity, despite a wide range of possible inputs.
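As a toy illustration of the population decoding approach named here (simulated Poisson neurons, not the study's pipeline), the sketch below shows how orientation discriminability from a population can depend on a nuisance variable such as contrast, modeled as a simple gain change.

```python
import numpy as np

rng = np.random.default_rng(3)
oris = np.arange(0, 180, 30)                   # 6 stimulus orientations (deg)
prefs = np.linspace(0, 150, 24)                # preferred orientations of 24 neurons

def pop_response(ori, gain):
    tuning = np.exp(np.cos(np.deg2rad(2 * (ori - prefs))) / 0.3)
    return rng.poisson(gain * tuning / tuning.max())

def decode_accuracy(gain, n_test=200):
    # Nearest-centroid decoder trained on 20 trials per orientation.
    train = np.array([[pop_response(o, gain) for _ in range(20)] for o in oris])
    centroids = train.mean(axis=1)
    hits = 0
    for _ in range(n_test):
        k = rng.integers(len(oris))
        r = pop_response(oris[k], gain)
        hits += np.argmin(np.sum((centroids - r) ** 2, axis=1)) == k
    return hits / n_test

print("high contrast (gain 20):", decode_accuracy(20.0))
print("low contrast  (gain 4): ", decode_accuracy(4.0))
```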


Subject(s)
Orientation, Spatial/physiology , Visual Cortex/physiology , Action Potentials/physiology , Adaptation, Physiological , Adult , Animals , Callithrix/physiology , Contrast Sensitivity/physiology , Female , Humans , Male , Neurons/physiology , Photic Stimulation , Psychophysics , Visual Perception/physiology , Young Adult
6.
Neural Netw ; 111: 47-63, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30682710

ABSTRACT

In recent years, deep learning has revolutionized the field of machine learning, for computer vision in particular. In this approach, a deep (multilayer) artificial neural network (ANN) is trained, most often in a supervised manner using backpropagation. Vast numbers of labeled training examples are required, but the resulting classification accuracy is truly impressive, sometimes outperforming humans. Neurons in an ANN are characterized by a single, static, continuous-valued activation. Yet biological neurons use discrete spikes to compute and transmit information, and the spike times, in addition to the spike rates, matter. Spiking neural networks (SNNs) are thus more biologically realistic than ANNs, and are arguably the only viable option if one wants to understand how the brain computes at the neuronal description level. The spikes of biological neurons are sparse in time and space, and event-driven. Combined with bio-plausible local learning rules, this makes it easier to build low-power, neuromorphic hardware for SNNs. However, training deep SNNs remains a challenge: spiking neurons' transfer function is usually non-differentiable, which prevents the use of backpropagation. Here we review recent supervised and unsupervised methods to train deep SNNs and compare them in terms of accuracy and computational cost. The emerging picture is that SNNs still lag behind ANNs in terms of accuracy, but the gap is decreasing and can even vanish on some tasks, while SNNs typically require many fewer operations and are better candidates for processing spatio-temporal data.
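One family of supervised methods for this problem replaces the non-differentiable spike function with a smooth "surrogate" derivative during the backward pass. The sketch below is a deliberately simplified, single-neuron NumPy illustration of that idea (an eligibility-trace approximation, not a full deep-SNN training loop, and not any specific method from the review): a leaky integrate-and-fire neuron learns an input weight so its spike count matches a target.

```python
import numpy as np

rng = np.random.default_rng(4)
T, tau, theta = 100, 20.0, 1.0
inputs = rng.random(T)                 # fixed input current trace
target_spikes = 5                      # desired output spike count
w = 0.05                               # input weight to be learned

def surrogate_deriv(v):
    """Fast-sigmoid pseudo-derivative standing in for d(spike)/d(membrane)."""
    return 1.0 / (1.0 + 10.0 * abs(v - theta)) ** 2

decay = 1.0 - 1.0 / tau
for step in range(200):
    v, dv_dw, spikes, grad = 0.0, 0.0, 0, 0.0
    for t in range(T):
        v = decay * v + w * inputs[t]          # leaky integration (forward pass)
        dv_dw = decay * dv_dw + inputs[t]      # eligibility trace for dv/dw
        grad += surrogate_deriv(v) * dv_dw     # backward pass uses the surrogate
        if v >= theta:                         # hard threshold stays in the forward pass
            spikes += 1
            v, dv_dw = 0.0, 0.0                # reset; gradient detached at reset
    w -= 0.001 * (spikes - target_spikes) * grad
print("learned weight:", round(w, 4), "| final spike count:", spikes)
```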


Subject(s)
Action Potentials , Deep Learning , Models, Neurological , Neural Networks, Computer , Action Potentials/physiology , Algorithms , Brain/physiology , Deep Learning/trends , Humans , Machine Learning/trends , Neurons/physiology
7.
Prog Neurobiol ; 156: 214-255, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28634086

ABSTRACT

The lateral geniculate nucleus (LGN) has often been treated in the past as a linear filter that adds little to retinal processing of visual inputs. Here we review anatomical, neurophysiological, brain imaging, and modeling studies that have in recent years built up a much more complex view of LGN. These include effects related to nonlinear dendritic processing, cortical feedback, synchrony and oscillations across LGN populations, as well as involvement of LGN in higher level cognitive processing. Although recent studies have provided valuable insights into early visual processing including the role of LGN, a unified model of LGN responses to real-world objects has not yet been developed. In the light of recent data, we suggest that the role of LGN deserves more careful consideration in developing models of high-level visual processing.


Subject(s)
Cognition/physiology , Geniculate Bodies/physiology , Vision, Ocular/physiology , Visual Pathways/physiology , Animals , Humans
8.
J Dent (Tehran) ; 14(5): 292-298, 2017 Sep.
Article in English | MEDLINE | ID: mdl-29296115

ABSTRACT

OBJECTIVES: It has been reported that water, solvents, or primers incorporated into adhesive resins decrease polymerization, compromise mechanical properties, reduce bond strength, and lead to poor bonding performance of self-etch adhesives. This study evaluated the effect of the air-drying and light-curing durations of self-etch adhesives on the micro-shear bond strength between composite resin and dentin. MATERIALS AND METHODS: A total of 120 extracted sound human third molars were randomly divided into twelve groups (n=10). The occlusal dentin of each tooth was exposed. Clearfil SE Bond (CSEB) and Clearfil S3 Bond (CS3B) were applied according to the manufacturer's instructions, followed by air-drying for 3 or 10 seconds in different groups. The adhesives were light-cured for 10, 20, or 40 seconds in different subgroups. Next, composite resin (Clearfil AP-X) was placed on the dentin surface and polymerized for 40 seconds. Micro-shear bond strength values were determined using a universal testing machine, and the results were statistically analyzed by three-way ANOVA and Tukey's post-hoc test (α=0.05). RESULTS: CSEB exhibited significantly higher dentin bond strength than CS3B. Increasing the curing time of CSEB increased its bond strength, whereas increasing the air-drying time did not affect the bond strength of either adhesive. CONCLUSIONS: Within the limitations of this study, increasing the curing time improved the bond strength of CSEB, whereas the air-drying time did not affect the bond strength of the evaluated adhesives.
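A hedged sketch of the statistical analysis described (simulated strengths in MPa with toy effect sizes, not the study's data): a three-way ANOVA over adhesive, air-drying time, and curing time, followed by Tukey's post-hoc test, using pandas and statsmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(5)
rows = []
for adhesive in ["CSEB", "CS3B"]:
    for dry_s in [3, 10]:
        for cure_s in [10, 20, 40]:
            base = 30.0 if adhesive == "CSEB" else 22.0       # toy group means (MPa)
            mean = base + (0.1 * cure_s if adhesive == "CSEB" else 0.0)
            for _ in range(10):                               # n=10 per group
                rows.append((adhesive, dry_s, cure_s, rng.normal(mean, 4.0)))
df = pd.DataFrame(rows, columns=["adhesive", "dry", "cure", "strength"])

model = smf.ols("strength ~ C(adhesive) * C(dry) * C(cure)", data=df).fit()
print(anova_lm(model, typ=2))                                 # three-way ANOVA table
print(pairwise_tukeyhsd(df["strength"], df["adhesive"] + "_" + df["cure"].astype(str)))
```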

9.
Front Hum Neurosci ; 10: 630, 2016.
Article in English | MEDLINE | ID: mdl-28018197

ABSTRACT

Humans are fast and accurate in categorizing complex natural images. It is, however, unclear what features of visual information are exploited by the brain to perceive images with such speed and accuracy. It has been shown that low-level contrast statistics of natural scenes can explain the variance of the amplitude of event-related potentials (ERP) in response to rapidly presented images. In this study, we investigated the effect of these statistics on the frequency content of ERPs. We recorded ERPs from human subjects while they viewed natural images, each presented for 70 ms. Our results showed that Weibull contrast statistics, as a biologically plausible model, best explained the variance of ERPs among the image statistics we assessed. Our time-frequency analysis revealed a significant correlation between these statistics and ERP power within the theta frequency band (~3-7 Hz). This is interesting, as the theta band is believed to be involved in context updating and semantic encoding. This correlation became significant at ~110 ms after stimulus onset and peaked at 138 ms. Our results show that not only the amplitude but also the frequency content of neural responses can be modulated by low-level contrast statistics of natural images, and they highlight the potential role of these statistics in scene perception.
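Weibull contrast statistics are typically obtained by fitting a Weibull distribution to an image's edge-magnitude histogram. The sketch below follows that general recipe on a synthetic stand-in image (scipy-based; not necessarily the authors' exact preprocessing).

```python
import numpy as np
from scipy import ndimage, stats

rng = np.random.default_rng(6)
image = ndimage.gaussian_filter(rng.random((128, 128)), sigma=3)   # stand-in image

gx = ndimage.sobel(image, axis=0)
gy = ndimage.sobel(image, axis=1)
edges = np.hypot(gx, gy).ravel()
edges = edges[edges > 0]

# weibull_min's shape parameter plays the role of "gamma" and its scale the
# role of "beta" in the Weibull image-statistics literature.
gamma_shape, _, beta_scale = stats.weibull_min.fit(edges, floc=0)
print(f"Weibull gamma (shape) = {gamma_shape:.3f}, beta (scale) = {beta_scale:.4f}")
```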

10.
Sci Rep ; 6: 32672, 2016 09 07.
Article in English | MEDLINE | ID: mdl-27601096

ABSTRACT

Deep convolutional neural networks (DCNNs) have attracted much attention recently, and have been shown to recognize thousands of object categories in natural image databases. Their architecture is somewhat similar to that of the human visual system: both use restricted receptive fields and a hierarchy of layers that progressively extract more and more abstracted features. Yet it is unknown whether DCNNs match human performance at the task of view-invariant object recognition, whether they make similar errors and use similar representations for this task, and whether the answers depend on the magnitude of the viewpoint variations. To investigate these issues, we benchmarked eight state-of-the-art DCNNs, the HMAX model, and a baseline shallow model, and compared their results to those of humans with backward masking. Unlike all previous DCNN studies, we carefully controlled the magnitude of the viewpoint variations and demonstrate that shallow nets can outperform deep nets and humans when variations are weak. When facing larger variations, however, more layers were needed to match human performance and error distributions, and to obtain representations consistent with human behavior. A very deep net with 18 layers even outperformed humans at the highest variation level, using the most human-like representations.


Subject(s)
Vision, Ocular , Visual Perception , Humans , Nerve Net
11.
Front Comput Neurosci ; 10: 92, 2016.
Article in English | MEDLINE | ID: mdl-27642281

ABSTRACT

View-invariant object recognition is a challenging problem that has attracted much attention among the psychology, neuroscience, and computer vision communities. Humans are notoriously good at it, even if some variations are presumably more difficult to handle than others (e.g., 3D rotations). Humans are thought to solve the problem through hierarchical processing along the ventral stream, which progressively extracts more and more invariant visual features. This feed-forward architecture has inspired a new generation of bio-inspired computer vision systems called deep convolutional neural networks (DCNNs), which are currently the best models for object recognition in natural images. Here, for the first time, we systematically compared human feed-forward vision and DCNNs on a view-invariant object recognition task using the same set of images, controlling the kinds of transformation (position, scale, rotation in plane, and rotation in depth) as well as their magnitude, which we call the "variation level." We used four object categories: car, ship, motorcycle, and animal. In total, 89 human subjects participated in 10 experiments in which they had to discriminate between two or four categories after rapid presentation with backward masking. We also tested two recent DCNNs (proposed by Hinton's group and Zisserman's group, respectively) on the same tasks. We found that humans and DCNNs largely agreed on the relative difficulties of each kind of variation: rotation in depth is by far the hardest transformation to handle, followed by scale, then rotation in plane, and finally position (much easier). This suggests that DCNNs may be reasonable models of human feed-forward vision. In addition, our results show that the variation levels in rotation in depth and scale strongly modulate both humans' and DCNNs' recognition performance. We thus argue that these variations should be controlled in the image datasets used in vision research.

12.
Eur J Neurosci ; 44(10): 2759-2773, 2016 11.
Article in English | MEDLINE | ID: mdl-27563930

ABSTRACT

In natural vision, rapid and sustained variations in luminance and contrast change the reliability of information available about a visual scene, and markedly affect both neuronal and behavioural responses. The hallmark property of neurons in primary visual cortex (V1), orientation selectivity, is unaffected by changes in stimulus contrast, but it remains unclear how sustained differences in mean luminance and contrast affect the time-course of orientation selectivity and the amount of information that neurons carry about orientation. We used reverse correlation to characterize the temporal dynamics of orientation selectivity in rat V1 neurons under four luminance-contrast conditions. We show that orientation selectivity and the mutual information between neuronal responses and stimulus orientation are invariant to contrast and mean luminance. Critically, the time-course of the emergence of orientation selectivity was affected by both factors; response latencies were longer for low- than high-luminance gratings, and, surprisingly, response latencies were also longer for high- than low-contrast gratings. Modelling suggests that luminance-modulated changes in feedforward gain, in combination with hyperpolarization caused by high contrasts, can account for our physiological data. The hyperpolarization at high contrasts may increase signal-to-noise ratios, whereas a more depolarized membrane may lead to greater sensitivity to weak stimuli.
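A toy sketch of reverse correlation for orientation dynamics (fully simulated, not the study's data): a rapid random orientation sequence drives spikes with a fixed latency, and the spike-triggered orientation distribution at each lag reveals when selectivity emerges.

```python
import numpy as np

rng = np.random.default_rng(7)
n_frames, frame_ms, pref = 20000, 10, 90
seq = rng.choice(np.arange(0, 180, 20), n_frames)      # random orientation per frame

true_lag = 8                                           # true response latency: 80 ms
drive = np.exp(2 * np.cos(np.deg2rad(2 * (seq - pref))))
p_spike = np.zeros(n_frames)
p_spike[true_lag:] = 0.05 * drive[:-true_lag] / drive.max()
spikes = rng.random(n_frames) < p_spike                # Bernoulli spikes per frame

def selectivity(lag):
    """Vector strength of spike-triggered orientations at a given lag."""
    trig = seq[:-lag][spikes[lag:]]
    return np.abs(np.mean(np.exp(2j * np.deg2rad(trig))))

for lag in (2, 5, 8, 11):
    print(f"lag {lag * frame_ms:3d} ms: selectivity = {selectivity(lag):.3f}")
```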


Subject(s)
Contrast Sensitivity , Orientation, Spatial , Visual Cortex/physiology , Animals , Male , Membrane Potentials , Models, Neurological , Neurons/physiology , Rats , Rats, Long-Evans , Reaction Time
13.
Sci Rep ; 6: 25025, 2016 04 26.
Article in English | MEDLINE | ID: mdl-27113635

ABSTRACT

Converging reports indicate that face images are processed through specialized neural networks in the brain, i.e., face patches in monkeys and the fusiform face area (FFA) in humans. These studies were designed to find out how faces are processed in the visual system compared to other objects. Yet the underlying mechanism of face processing has not been fully revealed. Here, we show that a hierarchical computational model, inspired by electrophysiological evidence on face processing in primates, is able to generate representational properties similar to those observed in monkey face patches (posterior, middle, and anterior patches). Since the most important goal of sensory neuroscience is linking neural responses with behavioral outputs, we test whether the proposed model, which is designed to account for neural responses in monkey face patches, can also predict well-documented behavioral face phenomena observed in humans. We show that the proposed model reproduces several cognitive face effects, such as the composite face effect and canonical face views. Our model provides insights into the underlying computations that transfer visual information from posterior to anterior face patches.


Subject(s)
Facial Recognition/physiology , Models, Theoretical , Animals , Cerebral Cortex/physiology , Haplorhini , Humans , Photic Stimulation
14.
Front Psychol ; 6: 303, 2015.
Article in English | MEDLINE | ID: mdl-25852617

ABSTRACT

Psychophysical and physiological studies of vision have traditionally used cathode ray tube (CRT) monitors to present stimuli. These monitors are no longer easily available, and liquid crystal display (LCD) technology is continually improving; therefore, we characterized a number of LCD monitors to determine if newer models are suitable replacements for CRTs in the laboratory. We compared the spatial and temporal characteristics of a CRT with five LCDs, including monitors designed with vision science in mind (ViewPixx and Display++), "prosumer" gaming monitors, and a consumer-grade LCD. All monitors had sufficient contrast, luminance range, and reliability to support basic vision experiments with static images. However, the luminance of all LCDs depended strongly on viewing angle, which, in combination with the poor spatial uniformity of all monitors except the VPixx, caused up to 80% drops in effective luminance in the periphery during central fixation. Further, all monitors showed significant spatial dependence, as the luminance of one area was modulated by the luminance of other areas. These spatial imperfections are most pronounced for experiments that use large or peripheral visual stimuli. In the temporal domain, the gaming LCDs were unable to generate reliable luminance patterns: one was unable to reach the requested luminance within a single frame, whereas in the other the luminance of one frame affected the luminance of the next. The VPixx and Display++ were less affected by these problems and had good temporal properties provided stimuli were presented for 2 or more frames. Of the consumer-grade and gaming displays tested, and if problems with spatial uniformity are taken into account, the Eizo FG2421 is the most suitable alternative to CRTs. The specialized ViewPixx performed best among all the tested LCDs, followed closely by the Display++; both are good replacements for a CRT, provided their spatial imperfections are considered.

15.
Article in English | MEDLINE | ID: mdl-25202259

ABSTRACT

It is debated whether the representation of objects in inferior temporal (IT) cortex is distributed over the activities of many neurons or whether there are restricted islands of neurons responsive to specific sets of objects. Several lines of evidence demonstrate that the fusiform face area (FFA, in humans) processes information related to specialized object recognition (what we here call within-category object recognition, such as face identification). Physiological studies have also discovered several patches in the monkey ventral temporal lobe that are responsible for facial processing. Neuronal recordings from these patches show that neurons are highly selective for face images, whereas such selectivity is not seen for other objects in IT. However, it is also well supported that objects are encoded through distributed patterns of neural activity that are distinctive for each object category. It appears that visual cortex uses different mechanisms for between-category object recognition (e.g., face vs. non-face objects) and within-category object recognition (e.g., two different faces). In this study, we address this question with computational simulations. We use two biologically inspired object recognition models and define two experiments that address these issues. The models have a hierarchical structure of several processing layers that simply simulate visual processing from V1 to aIT. We show, through computational modeling, that the difference between these two mechanisms of recognition can lie in the visual feature extraction mechanism. It is argued that, to perform both generic and specialized object recognition, visual cortex must separate the mechanisms involved in within-category object recognition from those involved in between-category object recognition. High performance in within-category object recognition can be guaranteed when class-specific features of intermediate size and complexity are extracted. Generic object recognition, by contrast, requires a distributed universal dictionary of visual features in which feature size does not differ significantly.

16.
Article in English | MEDLINE | ID: mdl-25100986

ABSTRACT

Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanism has constantly been under intense investigation. Computational modeling is a valuable tool for understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performance on challenging image databases, they fail to perform well in image categorization under more complex image variations. Studies have shown that building sparse representations of objects by extracting more informative visual features through a feedforward sweep can lead to higher recognition performance. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied along different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with that of human observers shows that the models perform similarly to humans in categorization tasks only at low levels of image variation. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e., briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modeling. We show that this approach is of little help in solving the computational crux of object recognition (i.e., invariant object recognition) when the identity-preserving image variations become more complex.

17.
Vision Res ; 81: 36-44, 2013 Apr 05.
Article in English | MEDLINE | ID: mdl-23419619

ABSTRACT

The human visual system develops by viewing natural scenes. In controlled experiments, natural stimuli therefore provide a realistic framework with which to study the underlying information processing steps involved in human vision. Studying the properties of natural images and their effects on visual processing can help us understand the underlying mechanisms of the visual system. In this study, we used a rapid animal vs. non-animal categorization task to assess the relationship between the reaction times of human subjects and the statistical properties of images. We demonstrated that statistical measures, such as the beta and gamma parameters of a Weibull distribution fitted to the edge histogram of an image, and the image entropy, are effective predictors of subject reaction times. Using these three parameters, we proposed a computational model capable of predicting the reaction times of human subjects.
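A minimal sketch of this kind of reaction-time predictor (synthetic images and RTs; the three features follow the abstract, everything else is illustrative): compute image entropy plus the two Weibull edge-histogram parameters per image, then fit a linear model to reaction times.

```python
import numpy as np
from scipy import ndimage, stats

rng = np.random.default_rng(8)

def image_features(img):
    counts, _ = np.histogram(img, bins=64)
    p = counts[counts > 0] / counts.sum()
    entropy = -np.sum(p * np.log2(p))                   # Shannon entropy (bits)
    edges = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1)).ravel()
    gamma_shape, _, beta_scale = stats.weibull_min.fit(edges[edges > 0], floc=0)
    return entropy, gamma_shape, beta_scale

# Synthetic "experiment": 50 images whose RTs are loosely tied to entropy.
feats, rts = [], []
for _ in range(50):
    img = ndimage.gaussian_filter(rng.random((64, 64)), sigma=rng.uniform(1, 5))
    f = image_features(img)
    feats.append(f)
    rts.append(400 + 200 * f[0] + rng.normal(0, 20))    # toy ground truth (ms)

X = np.column_stack([np.ones(len(feats)), np.array(feats)])
coef, *_ = np.linalg.lstsq(X, np.array(rts), rcond=None)
print("fitted coefficients [intercept, entropy, gamma, beta]:", np.round(coef, 2))
```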


Subject(s)
Form Perception/physiology , Pattern Recognition, Visual/physiology , Reaction Time/physiology , Adult , Female , Humans , Male , Models, Neurological , Models, Statistical , Photic Stimulation/methods , Young Adult
18.
PLoS One ; 7(6): e38478, 2012.
Article in English | MEDLINE | ID: mdl-22719892

ABSTRACT

The brain's mechanism for extracting visual features to recognize various objects has long been a controversial issue in computational models of object recognition. To extract visual features, we introduce a new, biologically motivated model for facial categorization, which is an extension of the Hubel and Wiesel simple-to-complex cell hierarchy. To address the synaptic stability versus plasticity dilemma, we apply Adaptive Resonance Theory (ART) to extract informative intermediate-level visual features during the learning process, which also makes the model stable against the destruction of previously learned information while learning new information. Such a mechanism has been suggested to be embedded within known laminar microcircuits of the cerebral cortex. To demonstrate the strength of the proposed visual feature learning mechanism, we show that when it is used in the training process of a well-known biologically motivated object recognition model (the HMAX model), it performs better than the original HMAX model in face/non-face classification tasks. Furthermore, we demonstrate that our proposed mechanism is capable of following trends in performance similar to those of humans in a psychophysical experiment using a face versus non-face rapid categorization task.
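ART comes in several variants; the sketch below is a minimal ART1-style illustration of the stability-plasticity mechanism for binary inputs only (the model in the paper is richer): an input either resonates with an existing template, which is then refined, or recruits a new one, so earlier templates are never overwritten by unrelated inputs.

```python
import numpy as np

def art1_learn(patterns, vigilance=0.7):
    templates = []
    for x in patterns:
        # Rank existing templates by bottom-up match (choice function).
        order = sorted(range(len(templates)),
                       key=lambda j: -np.sum(templates[j] & x) / (0.5 + templates[j].sum()))
        for j in order:
            if np.sum(templates[j] & x) / x.sum() >= vigilance:  # vigilance test
                templates[j] = templates[j] & x                  # refine by intersection
                break
        else:
            templates.append(x.copy())                           # recruit a new category
    return templates

rng = np.random.default_rng(9)
prototypes = rng.integers(0, 2, (3, 16)).astype(bool)            # 3 underlying classes
noisy = [p ^ (rng.random(16) < 0.05) for p in prototypes for _ in range(5)]
print("templates learned:", len(art1_learn(noisy)))
```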


Subject(s)
Face , Learning , Visual Perception , Humans , Models, Theoretical
19.
PLoS One ; 7(2): e32357, 2012.
Article in English | MEDLINE | ID: mdl-22384229

ABSTRACT

Humans can effectively and swiftly recognize objects in complex natural scenes. This outstanding ability has motivated many computational object recognition models, most of which try to emulate the behavior of this remarkable system. The human visual system recognizes objects hierarchically over several processing stages, along which features of increasing complexity are extracted by different parts of the visual system: elementary features such as bars and edges are processed at earlier levels of the visual pathway, and more complex features are detected farther along it. An important question in the field of visual processing is which features of an object are selected and represented by the visual cortex. To address this issue, we extended a biologically motivated hierarchical model to different object recognition tasks. In this model, a set of object parts, named patches, is extracted at the intermediate stages. These object parts are used in the model's training procedure and play an important role in object recognition. Because these patches are selected indiscriminately from different positions in an image, non-discriminative patches can be extracted, which may eventually reduce performance. In the proposed model, we instead use an evolutionary algorithm to select a set of informative patches. Our results indicate that these patches are more informative than the usual random patches. We demonstrate the strength of the proposed model on a range of object recognition tasks, in which it outperforms the original model. The experiments show that the selected features are generally particular parts of the target images. Our results suggest that selected features that are parts of the target objects provide an efficient set for robust object recognition.
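A hedged sketch of the evolutionary patch-selection idea (a toy fitness function stands in for recognition performance, and all names are illustrative): each genome is a binary mask over a pool of candidate patches, and a simple genetic algorithm evolves masks toward more informative subsets.

```python
import numpy as np

rng = np.random.default_rng(10)
n_pool, n_keep, pop_size = 60, 10, 40
informativeness = rng.random(n_pool)          # toy per-patch information scores

def fitness(mask):
    return informativeness[mask].sum()        # stand-in for recognition accuracy

def random_mask():
    m = np.zeros(n_pool, dtype=bool)
    m[rng.choice(n_pool, n_keep, replace=False)] = True
    return m

def repair(mask):
    """Keep exactly n_keep patches selected."""
    on, off = np.flatnonzero(mask), np.flatnonzero(~mask)
    if len(on) > n_keep:
        mask[rng.choice(on, len(on) - n_keep, replace=False)] = False
    elif len(on) < n_keep:
        mask[rng.choice(off, n_keep - len(on), replace=False)] = True
    return mask

pop = [random_mask() for _ in range(pop_size)]
for gen in range(100):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[: pop_size // 2]
    children = []
    for _ in range(pop_size - len(survivors)):
        a, b = rng.choice(len(survivors), 2, replace=False)
        child = np.where(rng.random(n_pool) < 0.5, survivors[a], survivors[b])
        flips = rng.choice(n_pool, 2, replace=False)     # mutation: flip two bits
        child[flips] = ~child[flips]
        children.append(repair(child))
    pop = survivors + children
print("best evolved score:", round(fitness(max(pop, key=fitness)), 3),
      "| random baseline:", round(fitness(random_mask()), 3))
```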


Subject(s)
Computational Biology/methods , Pattern Recognition, Visual , Vision, Ocular , Algorithms , Brain Mapping/methods , Computer Simulation , Humans , Models, Statistical , Models, Theoretical , Normal Distribution , Photic Stimulation/methods , Recognition, Psychology , Visual Pathways , Visual Perception