1.
PLoS Comput Biol ; 19(6): e1011104, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37289753

ABSTRACT

To interpret the sensory environment, the brain combines ambiguous sensory measurements with knowledge that reflects context-specific prior experience. But environmental contexts can change abruptly and unpredictably, resulting in uncertainty about the current context. Here we address two questions: how should context-specific prior knowledge optimally guide the interpretation of sensory stimuli in changing environments, and do human decision-making strategies resemble this optimum? We probe these questions with a task in which subjects report the orientation of ambiguous visual stimuli drawn from three dynamically switching distributions, representing different environmental contexts. We derive predictions for an ideal Bayesian observer that leverages knowledge about the statistical structure of the task to maximize decision accuracy, including knowledge about the dynamics of the environment. We show that its decisions are biased by the dynamically changing task context. The magnitude of this decision bias depends on the observer's continually evolving belief about the current context. The model therefore predicts that decision bias will grow not only as the context is indicated more reliably, but also as the stability of the environment increases and as the number of trials since the last context switch grows. Analysis of human choice data validates all three predictions, suggesting that the brain leverages knowledge of the statistical structure of environmental change when interpreting ambiguous sensory signals.
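A minimal sketch of the belief-updating computation such an ideal observer could perform in a switching environment. The three contexts, the per-trial switch probability, and the Gaussian likelihoods below are illustrative assumptions, not the paper's fitted parameters; the point is only that belief in the current context sharpens with a run of consistent measurements, which is what drives the growing decision bias.

```python
# Sketch: ideal-observer belief update over discrete contexts (assumed parameters).
import numpy as np

N_CONTEXTS = 3
P_SWITCH = 0.1  # per-trial probability that the context changes (assumed)

# Transition matrix: stay with prob 1 - P_SWITCH, otherwise switch uniformly.
T = np.full((N_CONTEXTS, N_CONTEXTS), P_SWITCH / (N_CONTEXTS - 1))
np.fill_diagonal(T, 1.0 - P_SWITCH)

# Each context is a Gaussian prior over stimulus orientation (degrees, assumed).
context_means = np.array([-30.0, 0.0, 30.0])
context_sd = 10.0
sensory_sd = 15.0  # measurement noise (assumed)

def gauss(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def update_belief(belief, measurement):
    """One trial: propagate belief through possible context switches,
    then reweight by how well each context explains the measurement."""
    predicted = T.T @ belief
    likelihood = gauss(measurement, context_means,
                       np.sqrt(context_sd**2 + sensory_sd**2))
    posterior = predicted * likelihood
    return posterior / posterior.sum()

belief = np.ones(N_CONTEXTS) / N_CONTEXTS      # flat prior over contexts
for m in [25.0, 31.0, 28.0]:                   # a run of noisy measurements
    belief = update_belief(belief, m)
    print(np.round(belief, 3))                 # belief sharpens trial by trial
```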


Subject(s)
Brain , Decision Making , Humans , Bayes Theorem , Uncertainty , Bias
2.
Adv Neural Inf Process Syst ; 35: 22628-22642, 2022.
Article in English | MEDLINE | ID: mdl-38435074

ABSTRACT

Humans learn from visual inputs at multiple timescales, both rapidly and flexibly acquiring visual knowledge over short periods, and robustly accumulating online learning progress over longer periods. Modeling these powerful learning capabilities is an important problem for computational visual cognitive science, and models that could replicate them would be of substantial utility in real-world computer vision settings. In this work, we establish benchmarks for both real-time and life-long continual visual learning. Our real-time learning benchmark measures a model's ability to match the rapid visual behavior changes of real humans over the course of minutes and hours, given a stream of visual inputs. Our life-long learning benchmark evaluates the performance of models in a purely online learning curriculum obtained directly from child visual experience over the course of years of development. We evaluate a spectrum of recent deep self-supervised visual learning algorithms on both benchmarks, finding that none of them perfectly match human performance, though some algorithms perform substantially better than others. Interestingly, algorithms embodying recent trends in self-supervised learning, including BYOL, SwAV, and MAE, are substantially worse on our benchmarks than an earlier generation of self-supervised algorithms such as SimCLR and MoCo-v2. We present analysis indicating that the failure of these newer algorithms is primarily due to their inability to handle the kind of sparse low-diversity datastreams that naturally arise in the real world, and that actively leveraging memory through negative sampling, a mechanism eschewed by these newer algorithms, appears useful for facilitating learning in such low-diversity environments. We also illustrate a complementarity between the short and long timescales in the two benchmarks, showing how requiring a single learning algorithm to be locally context-sensitive enough to match real-time learning changes while stable enough to avoid catastrophic forgetting over the long term induces a trade-off that human-like algorithms may have to straddle. Taken together, our benchmarks establish a quantitative way to directly compare learning between neural network models and human learners, show how choices in the mechanism by which such algorithms handle sample comparison and memory strongly impact their ability to match human learning abilities, and expose an open problem space for identifying more flexible and robust visual self-supervision algorithms.
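For readers unfamiliar with the negative-sampling mechanism the abstract credits, here is an InfoNCE-style contrastive objective of the kind used by SimCLR and MoCo-v2: an anchor embedding is pulled toward an augmented view of the same input and pushed away from a bank of negatives. The embedding dimension, bank size, and temperature are assumptions for the sketch, not benchmark settings.

```python
# Sketch: contrastive loss with explicit negative sampling (illustrative shapes).
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Pull anchor toward its positive; push it away from negatives."""
    def normalize(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)

    a, p, n = normalize(anchor), normalize(positive), normalize(negatives)
    logits = np.concatenate([[a @ p], n @ a]) / temperature  # 1 positive + K negatives
    logits -= logits.max()                                   # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                                 # positive sits at index 0

rng = np.random.default_rng(0)
emb_dim, n_negatives = 128, 256
anchor = rng.normal(size=emb_dim)
positive = anchor + 0.1 * rng.normal(size=emb_dim)   # augmented view of same input
negatives = rng.normal(size=(n_negatives, emb_dim))  # drawn from a memory bank
print(f"loss: {info_nce(anchor, positive, negatives):.3f}")
```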

3.
Nat Commun ; 12(1): 5982, 2021 10 13.
Article in English | MEDLINE | ID: mdl-34645787

ABSTRACT

Many sensory-driven behaviors rely on predictions about future states of the environment. Visual input typically evolves along complex temporal trajectories that are difficult to extrapolate. We test the hypothesis that spatial processing mechanisms in the early visual system facilitate prediction by constructing neural representations that follow straighter temporal trajectories. We recorded V1 population activity in anesthetized macaques while presenting static frames taken from brief video clips, and developed a procedure to measure the curvature of the associated neural population trajectory. We found that V1 populations straighten naturally occurring image sequences, but entangle artificial sequences that contain unnatural temporal transformations. We show that these effects arise in part from computational mechanisms that underlie the stimulus selectivity of V1 cells. Together, our findings reveal that the early visual system uses a set of specialized computations to build representations that can support prediction in the natural environment.
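A minimal sketch of what "curvature of a neural population trajectory" means operationally: the mean angle between successive difference vectors of the population response sequence, with 0 degrees indicating a perfectly straight trajectory. The paper's estimator additionally corrects for neural variability, which is omitted here; the data below are synthetic placeholders.

```python
# Sketch: discrete curvature of a population trajectory (noise correction omitted).
import numpy as np

def mean_curvature(trajectory):
    """trajectory: (n_frames, n_neurons) array of population responses.
    Returns the average angle (degrees) between consecutive segments."""
    diffs = np.diff(trajectory, axis=0)
    diffs /= np.linalg.norm(diffs, axis=1, keepdims=True)
    cosines = np.sum(diffs[:-1] * diffs[1:], axis=1)
    angles = np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))
    return angles.mean()

rng = np.random.default_rng(1)
straight = np.outer(np.linspace(0, 1, 10), rng.normal(size=50))  # line in state space
tangled = rng.normal(size=(10, 50))                              # random walk
print(mean_curvature(straight), mean_curvature(tangled))         # ~0 vs ~90 degrees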


Subject(s)
Anticipation, Psychological/physiology , Nerve Net/physiology , Visual Cortex/physiology , Visual Perception/physiology , Anesthesia, General , Animals , Craniotomy/methods , Electrodes , Macaca fascicularis , Photic Stimulation/methods , Stereotaxic Techniques , Video Recording
4.
J Neurophysiol ; 125(6): 2125-2134, 2021 06 01.
Article in English | MEDLINE | ID: mdl-33909494

ABSTRACT

Visual systems evolve to process the stimuli that arise in the organism's natural environment, and hence, to fully understand the neural computations in the visual system, it is important to measure behavioral and neural responses to natural visual stimuli. Here, we measured psychometric and neurometric functions in the macaque monkey for detection of a windowed sine-wave target in uniform backgrounds and in natural backgrounds of various contrasts. The neurometric functions were obtained by near-optimal decoding of voltage-sensitive-dye-imaging (VSDI) responses at the retinotopic scale in primary visual cortex (V1). The results were compared with previous human psychophysical measurements made under the same conditions. We found that human and macaque behavioral thresholds followed the generalized Weber's law as a function of contrast, and that both the slopes and the intercepts of the threshold as a function of background contrast match each other up to a single scale factor. We also found that the neurometric thresholds followed the generalized Weber's law with slopes and intercepts matching the behavioral slopes and intercepts up to a single scale factor. We conclude that the human and macaque abilities to detect targets in natural backgrounds are affected in the same way by background contrast, that these effects are consistent with population decoding at the retinotopic scale by downstream circuits, and that the macaque monkey is an appropriate animal model for understanding the neural mechanisms in humans for detecting targets in natural backgrounds. Finally, we discuss limitations of the current study and potential next steps.

NEW & NOTEWORTHY We measured macaque detection performance in natural images and compared their performance to the detection sensitivity of neurophysiological responses recorded in the primary visual cortex (V1), and to the performance of human subjects. We found that 1) human and macaque behavioral performances are in quantitative agreement and 2) are consistent with near-optimal decoding of V1 population responses.
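The generalized Weber's law summarized here amounts to a linear rise of detection threshold with background contrast, threshold(C) = intercept + slope * C, with the human/macaque comparison made on the fitted (intercept, slope) pairs up to a single scale factor. A sketch of that fitting step follows; the contrast and threshold values are invented placeholders purely to show the computation, not measured data.

```python
# Sketch: fitting the generalized Weber's law (placeholder data, not measurements).
import numpy as np

background_contrast = np.array([0.0, 0.05, 0.10, 0.20, 0.40])
thresholds = np.array([0.02, 0.03, 0.045, 0.08, 0.15])  # illustrative only

# Ordinary least squares for threshold = intercept + slope * contrast.
X = np.column_stack([np.ones_like(background_contrast), background_contrast])
intercept, slope = np.linalg.lstsq(X, thresholds, rcond=None)[0]
print(f"intercept={intercept:.4f}, slope={slope:.4f}")

# The abstract's claim: human and macaque (intercept, slope) pairs agree
# after scaling one observer's thresholds by a single factor k, i.e.
# (k * intercept, k * slope) matches the other observer's fit.
```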


Subject(s)
Contrast Sensitivity/physiology , Depth Perception/physiology , Discrimination, Psychological/physiology , Pattern Recognition, Visual/physiology , Perceptual Masking/physiology , Primary Visual Cortex/physiology , Sensory Thresholds/physiology , Animals , Behavior, Animal/physiology , Differential Threshold , Humans , Macaca , Species Specificity , Task Performance and Analysis , Voltage-Sensitive Dye Imaging
5.
Yonsei Med J ; 58(1): 187-194, 2017 Jan.
Article in English | MEDLINE | ID: mdl-27873513

ABSTRACT

PURPOSE: To estimate annual health care and productivity loss costs attributable to overweight or obesity in working asthmatic patients. MATERIALS AND METHODS: This study was conducted using the 2003-2013 Medical Expenditure Panel Survey (MEPS) in the United States. Patients aged 18 to 64 years with asthma were identified via self-reported diagnosis, a Clinical Classification Code of 128, or an ICD-9-CM code of 493.xx. All-cause health care costs were estimated using a generalized linear model with a log link function and a gamma distribution. Productivity loss costs were estimated in relation to hourly wages and missed work days, and a two-part model was used to adjust for patients with zero costs. To estimate the costs attributable to overweight or obesity in asthma patients, costs were estimated using the recycled prediction method. RESULTS: Among 11,670 working patients with a diagnosis of asthma, 4,428 (35.2%) were obese and 3,761 (33.0%) were overweight. The health care costs attributable to obesity and overweight in working asthma patients were estimated to be $878 [95% confidence interval (CI): $861-$895] and $257 (95% CI: $251-$262) per person per year, respectively, from 2003 to 2013. The productivity loss costs attributable to obesity and overweight among working asthma patients were $256 (95% CI: $253-$260) and $26 (95% CI: $26-$27) per person per year, respectively. CONCLUSION: Health care and productivity loss costs attributable to overweight and obesity in asthma patients are substantial. This study's results highlight the importance of effective public health and educational initiatives targeted at reducing overweight and obesity among patients with asthma, which may help lower the economic burden of asthma.
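For readers unfamiliar with recycled (counterfactual) prediction, the sketch below shows the core step on a toy log-linear cost model: predict every patient's cost with the weight-status indicator set to 1, then to 0, holding all other covariates fixed; the mean difference is the attributable cost. All coefficients and covariates here are invented for illustration; the study's actual model was a gamma GLM with a log link combined with a two-part model.

```python
# Sketch: recycled prediction on a toy log-linear cost model (invented numbers).
import numpy as np

rng = np.random.default_rng(2)
n = 1000
age = rng.uniform(18, 64, n)

# Toy fitted model: log(expected cost) = b0 + b1*age + b2*obese (assumed coefficients).
b0, b1, b2 = 5.0, 0.02, 0.3

def predict_cost(age, obese):
    return np.exp(b0 + b1 * age + b2 * obese)

# Predict everyone's cost as if obese, then as if not obese, with other
# covariates unchanged; the mean difference is the attributable cost.
cost_if_obese = predict_cost(age, np.ones(n))
cost_if_not = predict_cost(age, np.zeros(n))
print(f"attributable cost per person per year: ${np.mean(cost_if_obese - cost_if_not):.0f}")
```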


Subject(s)
Asthma/economics , Cost of Illness , Efficiency , Employment , Health Care Costs , Obesity/economics , Adult , Asthma/epidemiology , Asthma/therapy , Female , Health Expenditures , Humans , Male , Middle Aged , Obesity/epidemiology , Obesity/therapy , Overweight/economics , Overweight/epidemiology , Overweight/therapy , United States/epidemiology , Young Adult
6.
Elife ; 5: 2016 07 21.
Article in English | MEDLINE | ID: mdl-27441501

ABSTRACT

Understanding the neural basis of behaviour requires studying brain activity in behaving subjects using complementary techniques that measure neural responses at multiple spatial scales, and developing computational tools for understanding the mapping between these measurements. Here we report the first results of widefield imaging of genetically encoded calcium indicator (GCaMP6f) signals from V1 of behaving macaques. This technique provides a robust readout of visual population responses at the columnar scale over multiple mm² and over several months. To determine the quantitative relation between the widefield GCaMP signals and the locally pooled spiking activity, we developed a computational model that sums the responses of V1 neurons characterized by prior single unit measurements. The measured tuning properties of the GCaMP signals to stimulus contrast, orientation and spatial position closely match the predictions of the model, suggesting that widefield GCaMP signals are linearly related to the summed local spiking activity.
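A minimal sketch of the summation model the abstract describes: the widefield signal at a pixel is modeled as a spatially weighted sum of the spiking responses of nearby orientation-tuned V1 neurons. The tuning curves, contrast-response function, and pooling kernel below are generic illustrative assumptions, not the paper's fitted components drawn from single-unit measurements.

```python
# Sketch: widefield signal as a weighted sum of local V1 spiking (assumed tuning).
import numpy as np

rng = np.random.default_rng(3)
n_neurons = 200
pref_ori = rng.uniform(0, 180, n_neurons)     # preferred orientations (deg)
positions = rng.uniform(-1, 1, n_neurons)     # cortical distance from pixel (mm)

def spike_rate(stim_ori, contrast):
    """Orientation-tuned, contrast-saturating rates for each model neuron."""
    d = np.deg2rad(2 * (stim_ori - pref_ori))
    tuning = np.exp(np.cos(d) - 1)                # von Mises orientation tuning
    crf = contrast**2 / (contrast**2 + 0.15**2)   # Naka-Rushton contrast response
    return tuning * crf

# Pixel weights fall off with cortical distance (assumed pooling kernel).
weights = np.exp(-positions**2 / (2 * 0.3**2))

def widefield_signal(stim_ori, contrast):
    return weights @ spike_rate(stim_ori, contrast)

for c in (0.1, 0.3, 1.0):
    print(f"contrast {c:.1f}: signal {widefield_signal(45.0, c):.2f}")
```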


Subject(s)
Behavior, Animal , Brain Mapping/methods , Brain/physiology , Calcium/analysis , Optical Imaging/methods , Animals , Computer Simulation , Genes, Reporter , Macaca