Results 1 - 9 of 9
1.
Ann N Y Acad Sci ; 1534(1): 45-68, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38528782

ABSTRACT

This paper considers neural representation through the lens of active inference, a normative framework for understanding brain function. It delves into how living organisms employ generative models to minimize the discrepancy between predictions and observations (as scored with variational free energy). The ensuing analysis suggests that the brain learns generative models to navigate the world adaptively, not (or not solely) to understand it. Different living organisms may possess an array of generative models, spanning from those that support action-perception cycles to those that underwrite planning and imagination; namely, from explicit models that entail variables for predicting concurrent sensations, like objects, faces, or people, to action-oriented models that predict action outcomes. It then elucidates how generative models and belief dynamics might link to neural representation and the implications of different types of generative models for understanding an agent's cognitive capabilities in relation to its ecological niche. The paper concludes with open questions regarding the evolution of generative models and the development of advanced cognitive abilities, and the gradual transition from pragmatic to detached neural representations. The analysis on offer foregrounds the diverse roles that generative models play in cognitive processes and the evolution of neural representation.
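
Under Gaussian assumptions, minimizing variational free energy reduces to descending precision-weighted prediction errors. The following is a minimal sketch of that scheme for a linear-Gaussian generative model; it illustrates the general principle, not code from the paper, and the mapping, precisions, and observation are assumed values.

```python
import numpy as np

# Minimal sketch: free-energy minimization for a linear-Gaussian generative
# model. Prior: x ~ N(mu_prior, sigma_p^2); likelihood: y ~ N(g*x, sigma_y^2).
# All names and values are illustrative assumptions, not from the paper.

g = 2.0                       # sensory mapping of the generative model
mu_prior, sigma_p = 0.5, 1.0  # prior belief and its standard deviation
sigma_y = 0.5                 # sensory noise
y = 1.8                       # observed sensation

def free_energy(mu):
    """Variational free energy (up to constants): precision-weighted errors."""
    eps_y = (y - g * mu) / sigma_y      # sensory prediction error
    eps_p = (mu - mu_prior) / sigma_p   # prior prediction error
    return 0.5 * (eps_y**2 + eps_p**2)

mu, lr = mu_prior, 0.1
for _ in range(100):                    # gradient descent on free energy
    dF = -g * (y - g * mu) / sigma_y**2 + (mu - mu_prior) / sigma_p**2
    mu -= lr * dF

print(f"posterior belief mu = {mu:.3f}, free energy = {free_energy(mu):.3f}")
```

The belief settles between the prior and the observation, weighted by their respective precisions.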


Subject(s)
Brain , Cognition , Humans , Sensation , Learning
2.
Proc Natl Acad Sci U S A ; 120(51): e2309058120, 2023 Dec 19.
Article in English | MEDLINE | ID: mdl-38085784

ABSTRACT

Performing goal-directed movements requires mapping goals from extrinsic (workspace-relative) to intrinsic (body-relative) coordinates and then to motor signals. Mainstream approaches based on optimal control realize the mappings by minimizing cost functions, which is computationally demanding. Instead, active inference uses generative models to produce sensory predictions, which allows a cheaper inversion to the motor signals. However, devising generative models to control complex kinematic chains like the human body is challenging. We introduce an active inference architecture that affords a simple but effective mapping from extrinsic to intrinsic coordinates via inference and easily scales up to drive complex kinematic chains. Rich goals can be specified in both intrinsic and extrinsic coordinates using attractive or repulsive forces. The proposed model reproduces sophisticated bodily movements and paves the way for computationally efficient and biologically plausible control of actuated systems.
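
As a rough illustration of the extrinsic-to-intrinsic mapping via inference, the sketch below inverts the forward kinematics of a two-link planar arm by descending the prediction error on the end-effector position (an attractive force in extrinsic coordinates). It is a toy stand-in for the paper's architecture; the link lengths, target, and gain are assumptions.

```python
import numpy as np

# Sketch: mapping an extrinsic goal (end-effector position) to intrinsic
# coordinates (joint angles) by inverting a forward/generative model.
# A two-link planar arm stands in for a complex kinematic chain; all
# lengths, targets, and gains are illustrative assumptions.

L1, L2 = 1.0, 0.8                       # link lengths

def forward(q):
    """Generative model: predict the end-effector position from joint angles."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q):
    """Partial derivatives of the predicted position w.r.t. the joint angles."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

target = np.array([1.2, 0.9])           # extrinsic goal (attractive force)
q = np.array([0.3, 0.3])                # initial belief over joint angles

for _ in range(200):                    # inference: descend the prediction error
    err = target - forward(q)           # error in extrinsic coordinates
    q += 0.1 * jacobian(q).T @ err      # pull the intrinsic belief along it

print("joint angles:", np.round(q, 3), "-> reached:", np.round(forward(q), 3))
```

Because the goal enters only as a prediction error, repulsive goals or goals in intrinsic coordinates can be added as extra error terms on the same belief.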


Subject(s)
Algorithms , Movement , Humans , Biomechanical Phenomena , Motivation
3.
Biomimetics (Basel) ; 8(5), 2023 Sep 21.
Article in English | MEDLINE | ID: mdl-37754196

ABSTRACT

Depth estimation is an ill-posed problem; objects of different shapes or dimensions, even if at different distances, may project to the same image on the retina. Our brain uses several cues for depth estimation, including monocular cues such as motion parallax and binocular cues such as diplopia. However, it remains unclear how the computations required for depth estimation are implemented in biologically plausible ways. State-of-the-art approaches to depth estimation based on deep neural networks implicitly describe the brain as a hierarchical feature detector. Instead, in this paper we propose an alternative approach that casts depth estimation as a problem of active inference. We show that depth can be inferred by inverting a hierarchical generative model that simultaneously predicts the eyes' projections from a 2D belief over an object. Model inversion consists of a series of biologically plausible homogeneous transformations based on Predictive Coding principles. Under the plausible assumption of a nonuniform foveal resolution, depth estimation favors an active vision strategy that fixates the object with the eyes, rendering the depth belief more accurate. This strategy is not realized by first fixating on a target and then estimating the depth; instead, it combines the two processes through action-perception cycles, with a mechanism similar to that of saccades during object recognition. The proposed approach requires only local (top-down and bottom-up) message passing, which can be implemented in biologically plausible neural circuits.
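
To give a flavor of the inference at the heart of this scheme, the sketch below infers the depth of a fixated object by predicting both eyes' projections from a depth belief and descending the resulting prediction errors. The pinhole-eye geometry, normalized focal length, and learning rate are toy assumptions and do not reproduce the paper's hierarchical model.

```python
import numpy as np

# Sketch: depth inference by inverting a generative model that predicts the
# two eyes' projections of a fixated object (assumed straight ahead at (0, z)).
# Geometry and gains are illustrative assumptions, not the paper's model.

f = 1.0                                   # focal length (normalized units)
eyes = (-0.05, 0.05)                      # lateral eye positions (baseline 0.1)
z_true = 2.0                              # hidden depth of the object

def project(z, eye_x):
    """Predicted horizontal projection of the point (0, z) in one eye."""
    return f * (0.0 - eye_x) / z

obs = [project(z_true, e) for e in eyes]  # observed retinal projections

z, lr = 1.0, 20.0                         # initial depth belief and gain
for _ in range(2000):                     # predictive-coding-style updates
    for e, o in zip(eyes, obs):
        err = o - project(z, e)           # bottom-up prediction error
        dpred_dz = f * e / z**2           # top-down sensitivity of the prediction
        z += lr * dpred_dz * err          # belief update from the weighted error

print(f"inferred depth = {z:.3f} (true depth = {z_true})")
```

The update uses only local top-down predictions and bottom-up errors, in the spirit of the message passing the paper describes.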

4.
Front Comput Neurosci ; 17: 1128694, 2023.
Article in English | MEDLINE | ID: mdl-37021085

ABSTRACT

We present a normative computational theory of how the brain may support visually guided, goal-directed actions in dynamically changing environments. It extends the Active Inference theory of cortical processing, according to which the brain maintains beliefs over the environmental state, and motor control signals try to fulfill the corresponding sensory predictions. We propose that the neural circuitry in the Posterior Parietal Cortex (PPC) computes flexible intentions, or motor plans from a belief over targets, to dynamically generate goal-directed actions, and we develop a computational formalization of this process. A proof-of-concept agent embodying visual and proprioceptive sensors and an actuated upper limb was tested on target-reaching tasks. The agent behaved correctly under various conditions, including static and dynamic targets, different forms of sensory feedback, sensory precisions, intention gains, and movement policies; limit conditions were identified as well. Active Inference driven by dynamic and flexible intentions can thus support goal-directed behavior in constantly changing environments, and the PPC might host its core intention mechanism. More broadly, the study provides a normative computational basis for research on goal-directed behavior in end-to-end settings and further advances mechanistic theories of active biological systems.
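
As a loose illustration of how an intention can drive both belief updating and action, here is a minimal active-inference loop for a 1D "hand": the intention pulls the belief toward a moving target, and action moves the hand to cancel the proprioceptive prediction error. The gains, dynamics, and target trajectory are assumptions for illustration, not the paper's model.

```python
import numpy as np

# Sketch: a minimal active-inference control loop with a flexible intention.
# The belief mu is attracted to the target (intention) and corrected by the
# proprioceptive observation; action moves the hand to fulfill the belief.
# The 1D hand, gains, and target dynamics are illustrative assumptions.

dt, T = 0.01, 2000
k_sens, k_int, k_act = 5.0, 2.0, 5.0    # sensory, intention, and action gains

hand, mu = 0.0, 0.0                     # true hand position and belief about it

for step in range(T):
    target = 1.0 + 0.3 * np.sin(0.005 * step)  # dynamically moving target
    eps_prop = hand - mu                       # proprioceptive prediction error
    mu += dt * (k_sens * eps_prop + k_int * (target - mu))  # belief update
    hand += dt * k_act * (mu - hand)           # action cancels the error

print(f"target = {target:.3f}, hand = {hand:.3f}")  # the hand tracks the target
```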

5.
Sci Rep ; 11(1): 15919, 2021 Aug 5.
Article in English | MEDLINE | ID: mdl-34354144

ABSTRACT

The present study used steady-state visual evoked potentials (SSVEPs) to examine the spatio-temporal dynamics of reading morphologically complex words and test the neurophysiological activation pattern elicited by stems and suffixes. Three different types of target words were presented to proficient readers in a delayed naming task: truly suffixed words (e.g., farmer), pseudo-suffixed words (e.g., corner), and non-suffixed words (e.g., cashew). Embedded stems and affixes were flickered at two different frequencies (18.75 Hz and 12.50 Hz, respectively). The stem data revealed an earlier SSVEP peak in the truly suffixed and pseudo-suffixed conditions compared to the non-suffixed condition, thus providing evidence for the form-based activation of embedded stems during reading. The suffix data also showed a dissociation in the SSVEP response between suffixes and non-suffixes with an additional activation boost for truly suffixed words. The observed differences are discussed in the context of current models of complex word recognition.
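
For readers unfamiliar with frequency tagging, the sketch below shows the basic readout: a signal containing responses at the two tagging frequencies used in the study (18.75 Hz for stems, 12.50 Hz for suffixes) is decomposed with an FFT and its amplitude read at those frequencies. The simulated EEG, response amplitudes, and noise level are assumptions.

```python
import numpy as np

# Sketch: recovering frequency-tagged (SSVEP) responses from a noisy signal.
# The tagging frequencies match the study; the simulated signal is assumed.

fs, dur = 500.0, 4.0                      # sampling rate (Hz) and duration (s)
t = np.arange(0, dur, 1 / fs)
f_stem, f_suffix = 18.75, 12.50           # tagging frequencies from the study

rng = np.random.default_rng(0)
eeg = (0.8 * np.sin(2 * np.pi * f_stem * t)       # stem-driven response
       + 0.5 * np.sin(2 * np.pi * f_suffix * t)   # suffix-driven response
       + rng.normal(0.0, 1.0, t.size))            # background noise

amp = np.abs(np.fft.rfft(eeg)) * 2 / t.size       # single-sided amplitude
freqs = np.fft.rfftfreq(t.size, 1 / fs)

for label, f in (("stem", f_stem), ("suffix", f_suffix)):
    idx = int(np.argmin(np.abs(freqs - f)))       # nearest frequency bin
    print(f"{label} ({f} Hz): amplitude = {amp[idx]:.2f}")
```

With a 4 s window the frequency resolution is 0.25 Hz, so both tagging frequencies fall exactly on FFT bins.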


Subject(s)
Evoked Potentials, Visual/physiology , Reaction Time/physiology , Reading , Adult , Female , Humans , Language , Male , Semantics , Spatio-Temporal Analysis , Visual Acuity/physiology
6.
Brain Lang ; 192: 1-14, 2019 May.
Article in English | MEDLINE | ID: mdl-30826643

ABSTRACT

The present study explored the possibility of using Steady-State Visual Evoked Potentials (SSVEPs) as a tool to investigate the core mechanisms of visual word recognition. In particular, we investigated three benchmark effects of reading aloud: lexicality (words vs. pseudowords), frequency (high-frequency vs. low-frequency words), and orthographic familiarity ('familiar' versus 'unfamiliar' pseudowords). We found that words and pseudowords elicited robust SSVEPs. Words showed larger SSVEPs than pseudowords, and high-frequency words showed larger SSVEPs than low-frequency words. SSVEPs were not sensitive to orthographic familiarity. We further localized the neural generators of the SSVEP effects. The lexicality effect was located in areas associated with early levels of visual processing, i.e., the right occipital lobe and the right precuneus. Pseudowords produced more activation than words in left sensorimotor areas, the rolandic operculum, the insula, the supramarginal gyrus, and the right temporal gyrus. These areas are devoted to speech processing and/or spelling-to-sound conversion. The frequency effect involved the left temporal pole and orbitofrontal cortex, areas previously implicated in semantic processing and stimulus-response associations respectively, and the right postcentral and inferior parietal gyri, possibly indicating the involvement of the right attentional network.


Subject(s)
Evoked Potentials, Visual , Reading , Speech , Adult , Attention , Brain Mapping , Cerebral Cortex/physiology , Cognition , Female , Humans , Male , Recognition, Psychology , Semantics
7.
PLoS Comput Biol ; 14(9): e1006316, 2018 Sep.
Article in English | MEDLINE | ID: mdl-30222746

ABSTRACT

While the neurobiology of simple and habitual choices is relatively well known, our current understanding of goal-directed choices and planning in the brain is still limited. Theoretical work suggests that goal-directed computations can be productively associated with model-based (reinforcement learning) computations, yet a detailed mapping between computational processes and neuronal circuits remains to be fully established. Here we report a computational analysis that aligns Bayesian nonparametrics and model-based reinforcement learning (MB-RL) with the functioning of the hippocampus (HC) and the ventral striatum (vStr), a neuronal circuit that is increasingly recognized as an appropriate model system for understanding goal-directed (spatial) decisions and planning mechanisms in the brain. We test the MB-RL agent in a contextual conditioning task that depends on intact hippocampal and ventral striatal (shell) function and show that it solves the task while exhibiting key behavioral and neuronal signatures of the HC-vStr circuit. Our simulations also explore the benefits of biological forms of look-ahead prediction (forward sweeps) during both learning and control. This article thus contributes to filling the gap between our current understanding of computational algorithms and biological realizations of (model-based) reinforcement learning.
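
As a loose illustration of model-based learning with look-ahead, here is a Dyna-style sketch in which the agent learns a one-step model from experience and replays simulated transitions ("forward sweeps") to update its values. This is a stand-in for, not a reproduction of, the paper's Bayesian nonparametric MB-RL agent; the chain task and all parameters are assumptions.

```python
import numpy as np

# Sketch: Dyna-style model-based RL with simulated "forward sweeps".
# A 5-state chain task stands in for the contextual conditioning task;
# all parameters and update rules are illustrative assumptions.

n_states, n_actions = 5, 2           # chain world: action 1 moves right
gamma, alpha, sweeps = 0.9, 0.5, 10  # discount, learning rate, sweeps per step

Q = np.zeros((n_states, n_actions))
model = {}                           # learned one-step model: (s, a) -> (r, s')
rng = np.random.default_rng(0)

def step(s, a):
    """True environment: deterministic chain with reward at the last state."""
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return (1.0 if s2 == n_states - 1 else 0.0), s2

for episode in range(50):
    s = 0
    while s != n_states - 1:
        # epsilon-greedy action selection
        a = int(rng.integers(n_actions)) if rng.random() < 0.1 else int(Q[s].argmax())
        r, s2 = step(s, a)
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])  # real experience
        model[(s, a)] = (r, s2)
        for _ in range(sweeps):      # forward sweeps over the learned model
            (ms, ma), (mr, ms2) = list(model.items())[rng.integers(len(model))]
            Q[ms, ma] += alpha * (mr + gamma * Q[ms2].max() - Q[ms, ma])
        s = s2

print(np.round(Q.max(axis=1), 2))    # values grow toward the goal (terminal stays 0)
```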


Subject(s)
Brain/physiology , Hippocampus/physiology , Spatial Navigation , Ventral Striatum/physiology , Algorithms , Animals , Bayes Theorem , Behavior, Animal , Brain Mapping , Computer Simulation , Conditioning, Classical , Decision Making/physiology , Humans , Learning/physiology , Machine Learning , Maze Learning , Medical Informatics , Mice , Neurobiology , Reinforcement, Psychology , Software
8.
Behav Brain Sci ; 40: e191, 2017 Jan.
Article in English | MEDLINE | ID: mdl-29342650

ABSTRACT

We provide an emergentist perspective on the computational mechanism underlying numerosity perception, its development, and the role of inhibition, based on our deep neural network model. We argue that the influence of continuous visual properties does not challenge the notion of number sense, but reveals limit conditions for the computation that yields invariance in numerosity perception. Alternative accounts should be formalized in a computational model.


Subject(s)
Cognition , Visual Perception
9.
PLoS One ; 5(10), 2010 Oct 1.
Article in English | MEDLINE | ID: mdl-20957204

ABSTRACT

Many authors have proposed that facial expressions, by conveying the emotional states of the person we are interacting with, influence interaction behavior. We aimed to verify how specifically an individual's facial expressions of emotion (both their valence and their relevance/specificity to the purpose of the action) affect the execution of an action directed at that individual. In addition, we investigated whether and how the effects of emotions on action execution are modulated by participants' empathic attitudes. We used a kinematic approach to analyze the simulation of feeding others, which consisted of recording the "feeding trajectory" with a computer mouse. Actors could express different highly arousing emotions, namely happiness, disgust, or anger, or a neutral expression. Response time was sensitive to the interaction between valence and relevance/specificity of emotion: disgust elicited faster responses. In addition, happiness induced slower feeding times and longer times to peak velocity, but only in blocks where it alternated with expressions of disgust. The kinematic profiles described how the specificity of the emotional context for feeding exerts its effect, namely by modulating accuracy requirements. An early acceleration in kinematic feeding profiles, relative to neutral, occurred when actors expressed positive emotions (happiness) in blocks with feeding-specific negative emotions (disgust). On the other hand, the final part of the action was slower when feeding happy faces than neutral faces, confirming increased accuracy requirements and motor control. These kinematic effects were modulated by participants' empathic attitudes. In conclusion, the social dimension of emotions, that is, their ability to modulate others' action planning/execution, strictly depends on their relevance and specificity to the purpose of the action. This finding argues against a strict distinction between social and nonsocial emotions.
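
To make the kinematic measures concrete, the sketch below extracts peak velocity and time to peak velocity, the kind of markers reported above, from a sampled 2D trajectory. The synthetic minimum-jerk trajectory, pixel units, and sampling rate are assumptions, not the study's data.

```python
import numpy as np

# Sketch: kinematic markers from a sampled 2D "feeding trajectory".
# The minimum-jerk trajectory and sampling rate are illustrative assumptions.

fs = 100.0                                    # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)                 # one 1-second movement
tau = t / t[-1]
s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5    # minimum-jerk position profile
xy = np.outer(s, [300.0, 200.0])              # straight path to (300, 200) px

speed = np.linalg.norm(np.diff(xy, axis=0), axis=1) * fs  # speed (px/s)
i_peak = int(speed.argmax())

print(f"peak velocity: {speed[i_peak]:.1f} px/s")
print(f"time to peak velocity: {t[i_peak] + 0.5 / fs:.3f} s")
```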


Subject(s)
Behavior , Emotions , Face , Adult , Female , Humans , Male