Results 1 - 3 of 3
1.
PLoS One ; 14(12): e0226000, 2019.
Article in English | MEDLINE | ID: mdl-31810079

ABSTRACT

Learned Categorical Perception (CP) occurs when the members of different categories come to look more dissimilar ("between-category separation") and/or members of the same category come to look more similar ("within-category compression") after a new category has been learned. To measure learned CP and its physiological correlates we compared dissimilarity judgments and Event-Related Potentials (ERPs) before and after learning to sort multi-featured visual textures into two categories by trial and error with corrective feedback. With the same number of training trials and feedback, about half the subjects succeeded in learning the categories ("Learners": criterion 80% accuracy) and the rest did not ("Non-Learners"). At both lower and higher levels of difficulty, successful Learners showed significant between-category separation (and, to a lesser extent, within-category compression) in pairwise dissimilarity judgments after learning, compared to before; their late parietal ERP positivity (LPC, usually interpreted as decisional) also increased and their occipital N1 amplitude (usually interpreted as perceptual) decreased. LPC amplitude increased with response accuracy and N1 amplitude decreased with between-category separation for the Learners. Non-Learners showed no significant changes in dissimilarity judgments, LPC or N1, within or between categories. This is behavioral and physiological evidence that category learning can alter perception. We sketch a neural net model predictive of this effect.
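
To make the separation and compression measures concrete, here is a minimal sketch (not the study's analysis code; the data layout and index definitions are assumptions for illustration) of how between-category separation and within-category compression could be computed from pairwise dissimilarity judgments collected before and after category training:

```python
# Minimal sketch (assumed data layout, not the study's analysis code):
# quantify learned categorical perception from pairwise dissimilarity
# judgments collected before and after category training.
import numpy as np

def cp_effects(dissim_pre, dissim_post, labels):
    """dissim_*: N x N symmetric dissimilarity matrices; labels: length-N
    array of category assignments (e.g. 0 or 1) for the N stimuli."""
    dissim_pre = np.asarray(dissim_pre, dtype=float)
    dissim_post = np.asarray(dissim_post, dtype=float)
    labels = np.asarray(labels)

    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(labels), dtype=bool)
    within = same & off_diag          # pairs drawn from the same category
    between = ~same                   # pairs drawn from different categories

    # Between-category separation: cross-category dissimilarity increases.
    separation = dissim_post[between].mean() - dissim_pre[between].mean()
    # Within-category compression: same-category dissimilarity decreases.
    compression = dissim_pre[within].mean() - dissim_post[within].mean()
    return separation, compression

# Toy example with 6 stimuli (3 per hypothetical category):
labels = [0, 0, 0, 1, 1, 1]
rng = np.random.default_rng(0)
pre = rng.random((6, 6)); pre = (pre + pre.T) / 2
post = pre.copy()                      # identical judgments -> both effects ~0
print(cp_effects(pre, post, labels))
```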


Subjects
Learning/physiology , Visual Perception , Adolescent , Adult , Brain/physiology , Brain Mapping , Electroencephalography , Evoked Potentials , Female , Humans , Male , Photic Stimulation , Young Adult
2.
Opt Lett ; 41(5): 863-6, 2016 Mar 01.
Article in English | MEDLINE | ID: mdl-26974065

ABSTRACT

We present a novel 3D tracking approach capable of locating single particles with nanometric precision over wide axial ranges. Our method uses a fast acousto-optic liquid lens implemented in a bright-field microscope to multiplex light by color into different, selectable focal planes. By separating the red, green, and blue channels of an image captured with a color camera, we retrieve information from up to three focal planes. Multiplane information from the particle diffraction rings enables precise localization and tracking of individual objects over an axial range about 5 times larger than that of conventional single-plane approaches. We apply our method to the 3D visualization of the well-known coffee-stain phenomenon in evaporating water droplets.
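
As a rough illustration of the color-multiplexed readout described above, the sketch below (hypothetical helper names and a placeholder radius-to-depth step; not the authors' code) splits a color frame into its red, green, and blue focal-plane images, estimates a particle's lateral position in each, and measures its diffraction-ring radius, from which depth would follow via a per-channel calibration:

```python
# Minimal sketch (not the authors' code): color-multiplexed readout assuming
# the red, green, and blue channels of a color camera image correspond to
# three different focal planes. Helper names and the radius-to-depth
# calibration are hypothetical placeholders.
import numpy as np

def channel_planes(rgb_frame: np.ndarray) -> dict:
    """Split an H x W x 3 color frame into the three focal-plane images."""
    return {"red": rgb_frame[..., 0].astype(float),
            "green": rgb_frame[..., 1].astype(float),
            "blue": rgb_frame[..., 2].astype(float)}

def lateral_centroid(plane: np.ndarray) -> tuple:
    """Intensity-weighted centroid as a simple x, y estimate of the particle."""
    ys, xs = np.indices(plane.shape)
    w = plane - plane.min()
    total = w.sum() or 1.0
    return (xs * w).sum() / total, (ys * w).sum() / total

def ring_radius(plane: np.ndarray, cx: float, cy: float) -> float:
    """Radius of the brightest diffraction ring from a radial intensity profile."""
    ys, xs = np.indices(plane.shape)
    r = np.hypot(xs - cx, ys - cy).astype(int)
    counts = np.bincount(r.ravel())
    profile = np.bincount(r.ravel(), weights=plane.ravel()) / np.maximum(counts, 1)
    return float(np.argmax(profile[1:]) + 1)   # skip r = 0 (the center pixel)

# Example use: depth would come from a per-channel radius-vs-z calibration,
# which is assumed here rather than taken from the paper.
frame = np.random.rand(64, 64, 3)              # stand-in for a camera frame
for name, plane in channel_planes(frame).items():
    cx, cy = lateral_centroid(plane)
    print(name, ring_radius(plane, cx, cy))
```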


Subjects
Imaging, Three-Dimensional/methods , Microscopy/methods , Color , Signal-To-Noise Ratio , Volatilization , Water/chemistry
3.
IEEE Trans Image Process ; 22(2): 764-77, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23060335

ABSTRACT

This paper presents an extension of the HMAX model, a neural network model for image classification. The HMAX model can be described as a four-level architecture, with the first level consisting of multiscale and multiorientation local filters. We introduce two main contributions to this model. First, we improve the way the local filters at the first level are integrated into more complex filters at the last level, providing a flexible description of object regions and combining local information of multiple scales and orientations. These new filters are discriminative and yet invariant, two key aspects of visual classification. We evaluate their discriminative power and their level of invariance to geometrical transformations on a synthetic image set. Second, we introduce a multiresolution spatial pooling. This pooling encodes both local and global spatial information to produce discriminative image signatures. Classification results are reported on three image data sets: Caltech101, Caltech256, and Fifteen Scenes. We show significant improvements over previous architectures using a similar framework.
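
As an illustration of the multiresolution spatial pooling step, the sketch below (grid sizes and the choice of max pooling are assumptions, not details from the paper) pools a feature map over successively finer grids and concatenates the results into a single image signature, in the spirit of spatial-pyramid pooling:

```python
# Minimal sketch (assumed details, not the paper's implementation) of
# multiresolution spatial pooling: pool a feature map over successively finer
# grids and concatenate the pooled values into a single image signature.
import numpy as np

def multiresolution_pool(features: np.ndarray, levels=(1, 2, 4)) -> np.ndarray:
    """features: H x W x C filter-response map; returns a 1-D signature.

    Coarse grids capture global layout, fine grids capture local layout.
    """
    h, w, _ = features.shape
    signature = []
    for g in levels:                          # g x g grid at this resolution
        ys = np.linspace(0, h, g + 1, dtype=int)
        xs = np.linspace(0, w, g + 1, dtype=int)
        for i in range(g):
            for j in range(g):
                cell = features[ys[i]:ys[i + 1], xs[j]:xs[j + 1], :]
                signature.append(cell.max(axis=(0, 1)))   # per-channel max pooling
    return np.concatenate(signature)

# Example: a 64 x 64 map with 32 filter channels yields a
# (1 + 4 + 16) * 32 = 672-dimensional signature.
sig = multiresolution_pool(np.random.rand(64, 64, 32))
print(sig.shape)   # (672,)
```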
