Results 1 - 14 of 14
1.
iScience ; 27(6): 110099, 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38947503

ABSTRACT

Retinal ganglion cells (RGCs) summate inputs and forward a spike train code to the brain in the form of either maintained spiking (sustained) or a quickly decaying brief spike burst (transient). We report diverse response transience values across the RGC population and, contrary to the conventional transient/sustained scheme, responses with intermediate characteristics are the most abundant. Pharmacological tests showed that besides GABAergic inhibition, gap junction (GJ)-mediated excitation also plays a pivotal role in shaping response transience and thus visual coding. More precisely, GJs connecting RGCs to nearby amacrine cells and other RGCs play a defining role in the process. These GJs equalize kinetic features, including the response transience of transient OFF alpha (tOFFα) RGCs, across a coupled array. We propose that GJs in other coupled neuron ensembles in the brain are likewise critical in harmonizing response kinetics to enhance the population code and suit the corresponding task.

2.
Cells ; 11(5), 2022 Feb 25.
Article in English | MEDLINE | ID: mdl-35269432

ABSTRACT

Retinal ganglion cells (RGCs) encode stimulus features of the visual scene in action potentials and convey them toward higher visual centers in the brain. Although there are many visual features to encode, our current understanding is that the ~46 different functional subtypes of RGCs in the retina share this task. In this scheme, each RGC subtype establishes a separate, parallel signaling route for a specific visual feature (e.g., contrast, direction of motion, luminosity), through which information is conveyed. The efficiency of encoding depends on several factors, including signal strength, adaptational level, and the actual efficacy of the underlying retinal microcircuits. Upon collecting inputs across their respective receptive fields, RGCs perform further analysis (e.g., summation, subtraction, weighting) before they generate the final output spike train, which itself is characterized by multiple features, such as the number of spikes, the inter-spike intervals, the response delay, and the rundown time (transience) of the response. These kinetic features are essential for target postsynaptic neurons in the brain to effectively decode and interpret the signals, thereby forming visual perception. We review recent knowledge regarding circuit elements of the mammalian retina that participate in shaping RGC response transience for optimal visual signaling. The spike-train features named here can be made concrete with the short sketch below.
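A minimal sketch of extracting the listed kinetic features (spike count, inter-spike intervals, response delay, transience) from a list of spike times. The transience index used here, one minus the late/early spike-count ratio, and the one-second window are illustrative assumptions; the review does not commit to a single metric.

```python
# Sketch: spike-train features from spike times (seconds, relative to stimulus onset).
# The transience definition below is an assumption made for illustration.
import numpy as np

def spike_train_features(spike_times, stim_onset=0.0, window=1.0):
    t = np.asarray(spike_times, dtype=float)
    t = t[(t >= stim_onset) & (t <= stim_onset + window)] - stim_onset

    n_spikes = t.size
    isis = np.diff(t) if n_spikes > 1 else np.array([])
    response_delay = t[0] if n_spikes else np.nan

    # Transience: compare spiking in the first vs. second half of the window.
    early = np.sum(t < window / 2)
    late = n_spikes - early
    transience = 1.0 - late / early if early else np.nan  # 1 = fully transient

    return {
        "n_spikes": n_spikes,
        "mean_isi": isis.mean() if isis.size else np.nan,
        "response_delay": response_delay,
        "transience": transience,
    }

# Example: a burst confined to the first 200 ms looks fully transient.
print(spike_train_features([0.05, 0.08, 0.11, 0.15, 0.19]))
```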


Subject(s)
Retina , Retinal Ganglion Cells , Action Potentials , Animals , Brain , Mammals , Visual Perception
3.
Sci Rep ; 10(1): 10915, 2020 Jul 02.
Article in English | MEDLINE | ID: mdl-32616787

ABSTRACT

We propose a regression algorithm that utilizes a learned dictionary optimized for sparse inference on a D-Wave quantum annealer. In this regression algorithm, we concatenate the independent and dependent variables into a combined vector and encode the high-order correlations between them in a dictionary optimized for sparse reconstruction. On a test dataset, the dependent variable is initialized to its average value and then a sparse reconstruction of the combined vector is obtained in which the dependent variable is typically shifted closer to its true value, as in a standard inpainting or denoising task. Here, a quantum annealer, which can presumably exploit a fully entangled initial state to better explore the complex energy landscape, is used to solve the highly non-convex sparse coding optimization problem. The regression algorithm is demonstrated on lattice quantum chromodynamics simulation data using a D-Wave 2000Q quantum annealer, and good prediction performance is achieved. The regression test is performed using six different values of the number of fully connected logical qubits, between 20 and 64. The scaling results indicate that a larger number of qubits yields better prediction accuracy.
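A purely classical sketch of the inpainting-style regression described above, using scikit-learn dictionary learning and LASSO sparse coding in place of the D-Wave annealer. The toy data, dictionary size, sparsity penalty, and the iterative refinement loop are assumptions for illustration, not details taken from the paper.

```python
# Classical stand-in for the sparse-coding regression: learn a dictionary over
# concatenated [x, y] vectors, then at test time initialize y to its training
# mean and "inpaint" it via sparse reconstruction.
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 8))                      # independent variables
y_train = X_train @ rng.normal(size=8) + 0.1 * rng.normal(size=500)

Z_train = np.hstack([X_train, y_train[:, None]])         # concatenated vectors
dico = DictionaryLearning(n_components=32, alpha=0.5, max_iter=200,
                          random_state=0).fit(Z_train)

def predict(X_test, n_refine=3):
    y_hat = np.full(len(X_test), y_train.mean())         # initialize to the mean
    for _ in range(n_refine):                            # iterative refinement (assumed)
        Z = np.hstack([X_test, y_hat[:, None]])
        codes = sparse_encode(Z, dico.components_, algorithm="lasso_lars",
                              alpha=0.5)
        y_hat = (codes @ dico.components_)[:, -1]         # read off the inpainted y
    return y_hat

X_test = rng.normal(size=(50, 8))
print(predict(X_test)[:5])
```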

5.
BMC Bioinformatics ; 19(Suppl 18): 489, 2018 Dec 21.
Article in English | MEDLINE | ID: mdl-30577746

ABSTRACT

BACKGROUND: Histopathology images of tumor biopsies present unique challenges for applying machine learning to the diagnosis and treatment of cancer. The pathology slides are high resolution, often exceeding 1 GB, have non-uniform dimensions, and often contain multiple tissue slices of varying sizes surrounded by large empty regions. The locations of abnormal or cancerous cells, which may constitute only a small portion of any given tissue sample, are not annotated. Cancer image datasets are also extremely imbalanced, with most slides being associated with relatively common cancers. Since deep representations trained on natural photographs are unlikely to be optimal for classifying pathology slide images, which have different spectral ranges and spatial structure, we here describe an approach for learning features and inferring representations of cancer pathology slides based on sparse coding. RESULTS: We show that conventional transfer learning using a state-of-the-art deep learning architecture pre-trained on ImageNet (RESNET) and fine-tuned for a binary tumor/no-tumor classification task achieved between 85% and 86% accuracy. However, when all layers up to the last convolutional layer in RESNET are replaced with a single feature map inferred via sparse coding, using a dictionary optimized for sparse reconstruction of unlabeled pathology slides, classification performance improves to over 93%, corresponding to a 54% error reduction. CONCLUSIONS: We conclude that a feature dictionary optimized for biomedical imagery may in general support better classification performance than conventional transfer learning using a dictionary pre-trained on natural images.
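As a rough sketch of the sparse-coding alternative (not the authors' full RESNET-hybrid pipeline), one can learn a patch dictionary on unlabeled image data and train a linear classifier on the inferred sparse codes. Patch size, dictionary size, sparsity level, and the random stand-in data are all assumptions.

```python
# Minimal sketch: replace learned convolutional features with sparse codes
# inferred from a dictionary trained on unlabeled image patches, then classify
# with a simple linear read-out. Real slides would be tiled and pooled first.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
unlabeled_patches = rng.random((2000, 16 * 16))           # flattened 16x16 patches
dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0,
                                   batch_size=64, random_state=0)
dico.fit(unlabeled_patches)

def sparse_features(patches):
    # One sparse code per patch; a slide-level feature would pool these codes.
    return sparse_encode(patches, dico.components_, algorithm="omp",
                         n_nonzero_coefs=10)

labeled_patches = rng.random((400, 16 * 16))
labels = rng.integers(0, 2, size=400)                     # tumor / no-tumor stand-in
clf = LogisticRegression(max_iter=1000).fit(sparse_features(labeled_patches), labels)
# Training accuracy, printed only to exercise the pipeline end to end.
print(clf.score(sparse_features(labeled_patches), labels))
```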


Subject(s)
Deep Learning/trends , Neoplasms/pathology , Neural Networks, Computer , Humans
6.
PLoS Comput Biol ; 7(10): e1002162, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21998562

ABSTRACT

Can lateral connectivity in the primary visual cortex account for the time dependence and intrinsic task difficulty of human contour detection? To answer this question, we created a synthetic image set that prevents sole reliance on either low-level visual features or high-level context for the detection of target objects. Rendered images consist of smoothly varying, globally aligned contour fragments (amoebas) distributed among groups of randomly rotated fragments (clutter). The time course and accuracy of amoeba detection by humans was measured using a two-alternative forced choice protocol with self-reported confidence and variable image presentation time (20-200 ms), followed by an image mask optimized so as to interrupt visual processing. Measured psychometric functions were well fit by sigmoidal functions with exponential time constants of 30-91 ms, depending on amoeba complexity. Key aspects of the psychophysical experiments were accounted for by a computational network model, in which simulated responses across retinotopic arrays of orientation-selective elements were modulated by cortical association fields, represented as multiplicative kernels computed from the differences in pairwise edge statistics between target and distractor images. Comparing the experimental and the computational results suggests that each iteration of the lateral interactions takes at least [Formula: see text] ms of cortical processing time. Our results provide evidence that cortical association fields between orientation selective elements in early visual areas can account for important temporal and task-dependent aspects of the psychometric curves characterizing human contour perception, with the remaining discrepancies postulated to arise from the influence of higher cortical areas.
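The psychometric fit can be sketched as follows, assuming a saturating-exponential form for 2AFC accuracy versus presentation time that rises from chance at stimulus onset. The functional form and the data points below are illustrative assumptions; the paper's exact parameterization and measurements are not reproduced.

```python
# Fit a saturating psychometric function with an exponential time constant to
# toy 2AFC accuracy data (chance level 0.5).
import numpy as np
from scipy.optimize import curve_fit

def psychometric(t_ms, tau, asymptote):
    # Accuracy rises from chance toward an asymptote with time constant tau.
    return 0.5 + (asymptote - 0.5) * (1.0 - np.exp(-t_ms / tau))

t_ms = np.array([20, 40, 60, 80, 120, 160, 200], dtype=float)
accuracy = np.array([0.52, 0.61, 0.72, 0.80, 0.88, 0.91, 0.92])

params, _ = curve_fit(psychometric, t_ms, accuracy, p0=[50.0, 0.9])
tau, asymptote = params
print(f"time constant ~ {tau:.0f} ms, asymptote ~ {asymptote:.2f}")
```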


Subject(s)
Form Perception/physiology , Models, Neurological , Visual Cortex/physiology , Computational Biology , Humans , Photic Stimulation , Psychophysics , Reaction Time , Time Factors
7.
J Vis ; 10(3): 21.1-27, 2010 Mar 31.
Article in English | MEDLINE | ID: mdl-20377298

ABSTRACT

Over the brief time intervals available for processing retinal output, the number of spikes generated by individual ganglion cells can be quite variable. Here, two examples of extreme synergy are used to illustrate how realistic long-range spatiotemporal correlations can greatly improve the quality of retinal images reconstructed from computer-generated spike trains that are 25-400 ms in duration, approximately the time between saccadic eye movements. Firing probabilities were specified both explicitly, using time-varying waveforms consistent with stimulus-evoked oscillations measured experimentally, and implicitly, by superimposing realistic fixational eye movements on a biophysical model of the primate outer retina. Synergistic encoding was investigated across arrays of model neurons up to 32 x 32 in extent, containing over 1 million pairwise correlations. The difficulty of estimating pairwise spatiotemporal correlations on single trials from only a few events was overcome by using oscillatory, local multiunit activity to weight contributions from all spike pairs. Stimuli were reconstructed using either an independent rate code or the first principal component of the single-trial, pairwise correlation matrix. Spatiotemporal correlations mediated dramatic improvements in signal-to-noise ratio without eliminating fine spatial detail, demonstrating how extreme synergy can support rapid image reconstruction using far fewer spikes than required by an independent rate code.
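A minimal sketch of the correlation-based step described above: bin single-trial spike trains from an array of model cells, compute the pairwise correlation matrix, and take its first principal component. The bin size, array size, and toy spike statistics are assumptions, and the paper's multiunit-activity weighting of spike pairs is omitted.

```python
# Pairwise correlation matrix from binned single-trial spike trains, plus its
# first principal component as a population weighting. Toy data only.
import numpy as np

rng = np.random.default_rng(0)
n_cells, n_bins = 64, 100                        # e.g. an 8x8 patch, 100 time bins
common = rng.random(n_bins)                      # shared oscillatory drive (toy)
binned = rng.poisson(0.5 + 2.0 * common, size=(n_cells, n_bins)).astype(float)

corr = np.corrcoef(binned)                       # n_cells x n_cells correlations
eigvals, eigvecs = np.linalg.eigh(corr)          # symmetric, so eigh is appropriate
first_pc = eigvecs[:, -1]                        # eigenvector of the largest eigenvalue

# An image estimate could weight each cell's location by its first-PC loading,
# versus an independent rate code that would just use binned.sum(axis=1).
print(first_pc[:8])
```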


Subject(s)
Action Potentials/physiology , Computer Simulation , Models, Neurological , Retinal Ganglion Cells/physiology , Retinal Photoreceptor Cell Outer Segment/physiology , Animals , Artifacts , Fixation, Ocular/physiology , Light Signal Transduction/physiology , Nystagmus, Physiologic/physiology , Poisson Distribution , Primates , Reaction Time/physiology , Saccades/physiology
8.
Neural Comput ; 19(7): 1766-97, 2007 Jul.
Article in English | MEDLINE | ID: mdl-17521279

ABSTRACT

Cortical neurons selective for numerosity may underlie an innate number sense in both animals and humans. We hypothesize that the number-selective responses of cortical neurons may in part be extracted from coherent, object-specific oscillations. Here, indirect evidence for this hypothesis is obtained by analyzing the numerosity information encoded by coherent oscillations in artificially generated spike trains. Several experiments report that gamma-band oscillations evoked by the same object remain coherent, whereas oscillations evoked by separate objects are uncorrelated. Because the oscillations arising from separate objects would add in random phase to the total power summed across all stimulated neurons, we postulated that the total gamma activity, normalized by the number of spikes, should fall roughly as the square root of the number of objects in the scene, thereby implicitly encoding numerosity. To test the hypothesis, we examined the normalized gamma activity in multiunit spike trains, 50 to 1000 ms in duration, produced by a model feedback circuit previously shown to generate realistic coherent oscillations. In response to images containing different numbers of objects, regardless of their shape, size, or shading, the normalized gamma activity followed a square-root-of-n rule as long as the separation between objects was sufficiently large and their relative size and contrast differences were not too great. Arrays of winner-take-all numerosity detectors, each responding to normalized gamma activity within a particular band, exhibited tuning curves consistent with behavioral data. We conclude that coherent oscillations could in principle contribute to the number-selective responses of cortical neurons, although many critical issues await experimental resolution.
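The random-phase argument behind the square-root-of-n rule can be checked numerically: summing n equal-amplitude oscillations with independent phases gives a pooled amplitude that grows as the square root of n, so amplitude per contributing unit falls as one over the square root of n. The frequency, amplitudes, and trial counts below are arbitrary choices.

```python
# Numeric check: oscillations evoked by separate objects add with random
# relative phases, so pooled gamma amplitude grows only as sqrt(n_objects).
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.5, 5000)                  # 500 ms at toy resolution
freq = 60.0                                      # gamma-band frequency in Hz

for n_objects in (1, 2, 4, 8, 16):
    trials = []
    for _ in range(200):
        phases = rng.uniform(0, 2 * np.pi, n_objects)
        summed = np.sum(np.sin(2 * np.pi * freq * t[None, :] + phases[:, None]),
                        axis=0)
        trials.append(summed.std())              # RMS amplitude of the pooled signal
    rms = np.mean(trials)
    print(f"{n_objects:2d} objects: pooled RMS ~ {rms:.2f}, "
          f"per-object ~ {rms / n_objects:.3f}")
```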


Subject(s)
Computer Simulation , Models, Neurological , Neurons/physiology , Periodicity , Action Potentials/physiology , Animals , Cerebral Cortex/cytology , Cerebral Cortex/physiology , Humans , Retina/cytology , Retina/physiology , Visual Pathways/cytology , Visual Pathways/physiology
9.
Biol Cybern ; 95(4): 327-48, 2006 Oct.
Article in English | MEDLINE | ID: mdl-16897092

ABSTRACT

We show that coherent oscillations among neighboring ganglion cells in a retinal model encode global topological properties, such as size, that cannot be deduced unambiguously from their local, time-averaged firing rates. Whereas ganglion cells may fire similar numbers of spikes in response to both small and large spots, only large spots evoke coherent high-frequency oscillations, potentially allowing downstream neurons to infer global stimulus properties from their local afferents. To determine whether such information might be extracted over physiologically realistic spatial and temporal scales, we analyzed artificial spike trains whose oscillatory correlations were similar to those measured experimentally. Oscillatory power in the upper gamma band, extracted on single trials from multi-unit spike trains, supported good to excellent size discrimination between small and large spots, with performance improving as the number of cells and/or the duration of the analysis window was increased. By using Poisson-distributed spikes to normalize the firing rate across stimulus conditions, we further found that coincidence detection, or synchrony, yielded substantially poorer performance on identical size discrimination tasks. To determine whether size encoding depended on contiguity independent of object shape, we examined the total oscillatory activity across the entire model retina in response to random binary images. As the ON-pixel probability crossed the percolation threshold, which marks the sudden emergence of large connected clusters, the total gamma-band activity exhibited a sharp transition, a phenomenon that may be experimentally observable. Finally, a reanalysis of previously published oscillatory responses from cat ganglion cells revealed size encoding consistent with that predicted by the retinal model.
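The percolation observation can be illustrated directly: labeling connected ON-pixel clusters in random binary images shows the largest cluster jumping in size as the ON-pixel probability crosses the site-percolation threshold (about 0.593 for 4-connectivity on a square lattice). Grid size and probability steps are arbitrary choices for illustration.

```python
# As the ON-pixel probability crosses ~0.593, a large connected cluster
# suddenly appears in random binary images on a square lattice.
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(0)
size = 256

for p in (0.40, 0.50, 0.55, 0.59, 0.63, 0.70):
    image = rng.random((size, size)) < p         # random binary image
    labels, n_clusters = label(image)            # 4-connected ON-pixel clusters
    largest = np.bincount(labels.ravel())[1:].max() if n_clusters else 0
    print(f"p = {p:.2f}: largest cluster covers "
          f"{largest / image.size:.1%} of the image")
```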


Subject(s)
Action Potentials/physiology , Models, Neurological , Retina/cytology , Retinal Ganglion Cells/physiology , Visual Perception/physiology , Animals , Cats , Feedback , Humans , Nerve Net/physiology , Photic Stimulation/methods , Spectrum Analysis
10.
IEEE Trans Pattern Anal Mach Intell ; 27(8): 1279-91, 2005 Aug.
Article in English | MEDLINE | ID: mdl-16119266

ABSTRACT

A population-coded algorithm, built on established models of motion processing in the primate visual system, computes the time-to-collision of a mobile robot with real-world environmental objects from video imagery. A set of four transformations starts with motion energy, a spatiotemporal-frequency-based computation of motion features. The subsequent processing stages extract image velocity features similar to, but distinct from, optic flow; "translation" features, which account for velocity errors, including those resulting from the aperture problem; and, finally, the time-to-collision estimate. Biologically motivated population coding distinguishes this approach from previous methods based on optic flow. A comparison of the population-coded approach with the popular optic flow algorithm of Lucas and Kanade on three types of approaching objects shows that the proposed method produces more robust time-to-collision information from real-world input stimuli in the presence of the aperture problem and other noise sources. The improved performance comes with increased computational cost, which would ideally be mitigated by special-purpose hardware architectures.
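The population-coded pipeline itself is not reproduced here, but the geometric relation it ultimately estimates can be sketched: for a radially expanding flow field v = r / TTC, the two-dimensional divergence equals 2 / TTC, so time-to-collision follows from the measured expansion rate. The synthetic flow field below stands in for the output of either motion algorithm.

```python
# Recover time-to-collision from the divergence of a synthetic expanding flow
# field with the focus of expansion at the image center.
import numpy as np

true_ttc = 2.5                                   # seconds to collision (ground truth)
y, x = np.mgrid[-1:1:200j, -1:1:200j]            # image coordinates
u, v = x / true_ttc, y / true_ttc                # radially expanding flow field

du_dx = np.gradient(u, axis=1) / np.gradient(x, axis=1)
dv_dy = np.gradient(v, axis=0) / np.gradient(y, axis=0)
divergence = du_dx + dv_dy                       # equals 2 / TTC for this field

estimated_ttc = 2.0 / divergence.mean()
print(f"estimated time-to-collision: {estimated_ttc:.2f} s (true {true_ttc} s)")
```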


Subject(s)
Artificial Intelligence , Biomimetics/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Motion Perception/physiology , Pattern Recognition, Automated/methods , Robotics/methods , Vision, Ocular/physiology , Algorithms , Animals , Cluster Analysis , Image Enhancement/methods , Information Storage and Retrieval/methods , Primates , Video Recording/methods
11.
Neural Comput ; 16(11): 2261-91, 2004 Nov.
Article in English | MEDLINE | ID: mdl-15476601

ABSTRACT

Synchronous firing limits the amount of information that can be extracted by averaging the firing rates of similarly tuned neurons. Here, we show that the loss of such rate-coded information due to synchronous oscillations between retinal ganglion cells can be overcome by exploiting the information encoded by the correlations themselves. Two very different models, one based on axon-mediated inhibitory feedback and the other on oscillatory common input, were used to generate artificial spike trains whose synchronous oscillations were similar to those measured experimentally. Pooled spike trains were summed into a threshold detector whose output was classified using Bayesian discrimination. For a threshold detector with short summation times, realistic oscillatory input yielded superior discrimination of stimulus intensity compared to rate-matched Poisson controls. Even for summation times too long to resolve synchronous inputs, gamma band oscillations still contributed to improved discrimination by reducing the total spike count variability, or Fano factor. In separate experiments in which neurons were synchronized in a stimulus-dependent manner without attendant oscillations, the Fano factor increased markedly with stimulus intensity, implying that stimulus-dependent oscillations can offset the increased variability due to synchrony alone.
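A sketch of the analysis machinery only (not of the retinal models themselves): pool spike counts from a population over a short summation window, compute the Fano factor, and discriminate two stimulus intensities with a simple Bayesian rule on the pooled count. Rates, pool size, and window length are illustrative, and the Poisson surrogate spikes here carry no oscillatory structure.

```python
# Pooled-count Fano factor and Bayesian intensity discrimination on toy
# Poisson spike counts (no oscillatory correlations included).
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
n_cells, window_ms, n_trials = 50, 25, 2000
rates_hz = {"dim": 20.0, "bright": 30.0}

def pooled_counts(rate_hz):
    lam = rate_hz * (window_ms / 1000.0)         # expected spikes per cell per window
    return rng.poisson(lam, size=(n_trials, n_cells)).sum(axis=1)

counts = {k: pooled_counts(r) for k, r in rates_hz.items()}
for k, c in counts.items():
    print(f"{k}: Fano factor = {c.var() / c.mean():.2f}")

# Bayesian discrimination with Poisson likelihoods and equal priors,
# using the empirical means as the model parameters.
means = {k: c.mean() for k, c in counts.items()}
test = np.concatenate([counts["dim"][:500], counts["bright"][:500]])
truth = np.array([0] * 500 + [1] * 500)
decide_bright = poisson.logpmf(test, means["bright"]) > poisson.logpmf(test, means["dim"])
print(f"discrimination accuracy: {(decide_bright == truth).mean():.2%}")
```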


Subject(s)
Retina/physiology , Retinal Ganglion Cells/physiology , Algorithms , Bayes Theorem , Biofeedback, Psychology/physiology , Electrophysiology , Models, Neurological , Poisson Distribution , Retina/cytology , Synapses/physiology
12.
IEEE Trans Neural Netw ; 15(5): 1083-91, 2004 Sep.
Article in English | MEDLINE | ID: mdl-15484885

ABSTRACT

High-frequency oscillatory potentials (HFOPs) in the vertebrate retina are stimulus specific. The phases of HFOPs recorded at any given retinal location drift randomly over time, but regions activated by the same stimulus tend to remain phase locked with approximately zero lag, whereas regions activated by spatially separate stimuli are typically uncorrelated. Based on retinal anatomy, we previously postulated that HFOPs are mediated by feedback from a class of axon-bearing amacrine cells that receive excitation, via gap junctions, from neighboring ganglion cells and make inhibitory synapses back onto the surrounding ganglion cells. Using a computer model, we show here that such circuitry can account for the stimulus specificity of HFOPs in response to both high- and low-contrast features. Phase locking between pairs of model ganglion cells did not depend critically on their separation distance, but on whether the applied stimulus created a continuous path between them. The degree of phase locking between spatially separate stimuli was reduced by lateral inhibition, which created a buffer zone around strongly activated regions. Stimulating the inhibited region between spatially separate stimuli increased their degree of phase locking proportionately. Our results suggest several experimental strategies for testing the hypothesis that stimulus-specific HFOPs arise from axon-mediated feedback in the inner retina.
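The feedback model is not reproduced here; the sketch below only shows one standard way to quantify the zero-lag phase locking discussed above, the phase-locking value computed from Hilbert-transform phases, applied to two synthetic "sites" that either share or do not share a slowly drifting phase. All signal parameters are assumptions.

```python
# Phase-locking value (PLV) between two synthetic gamma-band signals whose
# phases either share a common random drift or drift independently.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)
fs, dur, freq = 1000.0, 2.0, 80.0                # sampling rate, duration, gamma freq
t = np.arange(0, dur, 1 / fs)

def site(shared_drift=None):
    # Independent random-walk phase drift unless a shared drift is supplied.
    drift = (shared_drift if shared_drift is not None
             else np.cumsum(rng.normal(0, 0.05, t.size)))
    return np.sin(2 * np.pi * freq * t + drift) + 0.3 * rng.normal(size=t.size)

def plv(a, b):
    dphi = np.angle(hilbert(a)) - np.angle(hilbert(b))
    return np.abs(np.mean(np.exp(1j * dphi)))    # 1 = perfect zero-lag locking

shared = np.cumsum(rng.normal(0, 0.05, t.size))
print(f"same stimulus (shared drift):   PLV = {plv(site(shared), site(shared)):.2f}")
print(f"separate stimuli (independent): PLV = {plv(site(), site()):.2f}")
```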


Subject(s)
Action Potentials/physiology , Biological Clocks/physiology , Models, Neurological , Neural Pathways/physiology , Retina/physiology , Synaptic Transmission/physiology , Amacrine Cells/physiology , Animals , Computer Simulation , Humans , Neural Inhibition/physiology , Neural Networks, Computer , Presynaptic Terminals/physiology , Retinal Ganglion Cells/physiology , Vision, Ocular/physiology
13.
Neural Netw ; 17(5-6): 773-86, 2004.
Article in English | MEDLINE | ID: mdl-15288897

ABSTRACT

A model color-opponent neuron was used to investigate the subjective colors evoked by the Benham Top (BT). Color-opponent inputs from cone-selective parvocellular (P) pathway neurons with center-surround receptive fields were subtracted with a short relative delay, yielding a small transient input in response to a white spot. This transient input was amplified by BT-like stimuli, modeled as a thin dark bar followed by full-field illumination. The narrow bar produced maximal activation of the P-pathway surrounds but only partial activation of the P-pathway centers. Due to saturation, subsequent removal of the bar had little effect on the P-pathway surrounds, whereas the transient input from the P-pathway centers was amplified via disinhibition. Responses to BT-like stimuli became weaker as surround sensitivity recovered, producing an effect analogous to the progression of perceived BT colors. Our results suggest that the BT illusion arises because cone-selective neurons convey information about both color and luminance contrast, allowing the two signals to become confounded.


Subject(s)
Color Perception/physiology , Models, Neurological , Visual Cortex/physiology , Visual Fields/physiology , Visual Pathways/physiology , Animals , Evoked Potentials, Visual/physiology , Humans , Motion Perception/physiology , Neurons/physiology , Photic Stimulation/methods , Reaction Time , Time Factors , Visual Cortex/cytology , Visual Pathways/cytology
14.
Vis Neurosci ; 20(5): 465-80, 2003.
Article in English | MEDLINE | ID: mdl-14977326

ABSTRACT

High-frequency oscillatory potentials (HFOPs) have been recorded from ganglion cells in cat, rabbit, frog, and mudpuppy retina and in electroretinograms (ERGs) from humans and other primates. However, the origin of HFOPs is unknown. Based on patterns of tracer coupling, we hypothesized that HFOPs could be generated, in part, by negative feedback from axon-bearing amacrine cells excited via electrical synapses with neighboring ganglion cells. Computer simulations were used to determine whether such axon-mediated feedback was consistent with the experimentally observed properties of HFOPs. (1) Periodic signals are typically absent from ganglion cell peristimulus time histograms (PSTHs), in part because the phases of retinal HFOPs vary randomly over time and are only weakly stimulus locked. In the retinal model, this phase variability resulted from the nonlinear properties of axon-mediated feedback in combination with synaptic noise. (2) HFOPs increase as a function of stimulus size up to several times the receptive-field center diameter. In the model, axon-mediated feedback pooled signals over a large retinal area, producing HFOPs that were similarly size dependent. (3) HFOPs are stimulus specific. In the model, gap junctions between neighboring neurons caused contiguous regions to become phase locked, but did not synchronize separate regions. Model-generated HFOPs were consistent with the receptive-field center dynamics and spatial organization of cat alpha cells. HFOPs did not depend qualitatively on the exact value of any model parameter or on the numerical precision of the integration method. We conclude that HFOPs could be mediated, in part, by circuitry consistent with known retinal anatomy.


Subject(s)
Action Potentials/physiology , Models, Neurological , Retina/cytology , Retinal Ganglion Cells/physiology , Amacrine Cells/physiology , Animals , Electrophysiology/methods , Electroretinography/methods , Gap Junctions/physiology , Humans , Interneurons/physiology , Neural Conduction/physiology , Photic Stimulation , Synapses/physiology , Visual Fields/physiology , Visual Perception/physiology