Results 1 - 5 of 5
1.
Cortex ; 141: 128-143, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34049255

ABSTRACT

Autobiographical memory (AM) has largely been investigated as the ability to recollect specific events from an individual's past. However, how we retrieve real-life routine episodes, and how the retrieval of these episodes changes with the passage of time, remains unclear. Here, we asked participants to wear a camera that automatically captured pictures of their routine life over one week, and we implemented a deep neural network-based algorithm to identify picture sequences that represented episodic events. Each participant then returned to the lab to retrieve AMs for single episodes cued by the selected pictures 1 week, 2 weeks and 6-14 months after encoding, while scalp electroencephalographic (EEG) activity was recorded. Participants were more accurate at recognizing pictured scenes depicting their own past than pictured scenes encoded in the lab, and memory recollection of personally experienced events decreased rapidly with the passage of time. Retrieval cued by real-life pictures elicited a strong, positive ERP old/new effect over frontal regions, and the magnitude of this ERP effect was similar across memory tests over time. By contrast, recognition memory induced a frontal theta power decrease that was seen mostly when memories were tested 1 and 2 weeks, but not 6-14 months, after encoding. We discuss the implications for neuroscientific accounts of episodic retrieval and the potential clinical benefits of developing individual-based AM exploration strategies.
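For illustration only, the frontal theta-band effect described above could be quantified along the following lines. This is a minimal sketch based on Welch spectra, not the authors' time-frequency pipeline; the data layout, sampling rate and variable names are assumptions.

```python
import numpy as np
from scipy.signal import welch

def frontal_theta_power(trials, sfreq, band=(4.0, 8.0)):
    """Mean theta-band (4-8 Hz) power per trial for a single frontal EEG channel.

    trials: (n_trials, n_samples) array of single-trial voltages (hypothetical layout).
    sfreq:  sampling rate in Hz.
    """
    # Welch power spectral density, computed trial by trial along the time axis.
    freqs, psd = welch(trials, fs=sfreq, nperseg=min(256, trials.shape[-1]), axis=-1)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return psd[:, in_band].mean(axis=-1)

# The reported effect corresponds to lower theta power for correctly recognized
# "old" (personally experienced) items than for "new" items, e.g.:
#   frontal_theta_power(old_trials, 500.0).mean() < frontal_theta_power(new_trials, 500.0).mean()
```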


Subject(s)
Memory, Episodic; Cues; Electroencephalography; Humans; Mental Recall; Recognition, Psychology
2.
IEEE Trans Image Process ; 30: 1476-1486, 2021.
Article in English | MEDLINE | ID: mdl-33338018

ABSTRACT

Recently, self-supervised learning has proved effective for learning representations of events suitable for temporal segmentation in image sequences, where events are understood as sets of temporally adjacent images that are semantically perceived as a whole. However, although this approach does not require expensive manual annotations, it is data-hungry and suffers from domain-adaptation problems. As an alternative, in this work we propose a novel approach for learning event representations named Dynamic Graph Embedding (DGE). The assumption underlying our model is that a sequence of images can be represented by a graph that encodes both semantic and temporal similarity. The key novelty of DGE is to learn the graph and its embedding jointly. At its core, DGE iterates over two steps: 1) updating the graph representing the semantic and temporal similarity of the data based on the current data representation, and 2) updating the data representation to take into account the current graph structure. The main advantage of DGE over state-of-the-art self-supervised approaches is that it requires no training set, instead learning iteratively from the data itself a low-dimensional embedding that reflects its temporal and semantic similarity. Experimental results on two benchmark datasets of real image sequences captured at regular time intervals demonstrate that the proposed DGE yields event representations effective for temporal segmentation. In particular, it achieves robust temporal segmentation on the EDUBSeg and EDUBSeg-Desc benchmark datasets, outperforming the state of the art. Additional experiments on two Human Motion Segmentation benchmark datasets demonstrate the generalization capabilities of the proposed DGE.
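The two-step alternation at the core of DGE can be sketched as follows. This is an illustrative toy, not the authors' implementation: the Gaussian k-nearest-neighbor affinity, the fixed temporal links and the single neighborhood-averaging update are assumptions standing in for the paper's actual graph and embedding updates.

```python
import numpy as np

def dge_sketch(features, dim=16, n_iters=10, k=5, temporal_weight=0.5, seed=0):
    """Toy alternation between graph construction and embedding updates.

    features: (n_frames, n_features) per-image descriptors.
    Returns a (n_frames, dim) embedding reflecting semantic and temporal similarity.
    """
    n = features.shape[0]
    rng = np.random.default_rng(seed)
    # Initialize the embedding with a random linear projection (assumption).
    emb = features @ rng.standard_normal((features.shape[1], dim))

    for _ in range(n_iters):
        # Step 1: rebuild the graph from the current embedding.
        # Semantic affinity: Gaussian kernel on pairwise squared distances,
        # sparsified to the k strongest links per node.
        d2 = ((emb[:, None, :] - emb[None, :, :]) ** 2).sum(-1)
        W = np.exp(-d2 / (np.median(d2) + 1e-8))
        keep = np.zeros_like(W, dtype=bool)
        np.put_along_axis(keep, np.argsort(-W, axis=1)[:, 1:k + 1], True, axis=1)
        W = W * (keep | keep.T)
        # Temporal affinity: link temporally adjacent frames.
        T = np.zeros_like(W)
        i = np.arange(n - 1)
        T[i, i + 1] = T[i + 1, i] = 1.0
        A = (1.0 - temporal_weight) * W + temporal_weight * T

        # Step 2: update the embedding to respect the current graph
        # (one step of degree-normalized neighborhood averaging).
        deg = A.sum(axis=1, keepdims=True) + 1e-8
        emb = 0.5 * emb + 0.5 * (A @ emb) / deg

    return emb
```

Event boundaries could then be placed at the time steps where consecutive rows of the returned embedding are most dissimilar.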

3.
Vision Res ; 126: 308-317, 2016 Sep.
Article in English | MEDLINE | ID: mdl-25824454

ABSTRACT

We present a computational model that computes and integrates, in a nonlocal fashion, several configural cues for automatic figure-ground segregation. Our working hypothesis is that the figural status of each pixel is a nonlocal function of several geometric shape properties and can be estimated without explicitly relying on object boundaries. The methodology is grounded in two elements: multi-directional linear voting and nonlinear diffusion. A first estimate of the figural status of each pixel is obtained through a voting process, in which several differently oriented line-shaped neighborhoods vote to express their belief about the figural status of the pixel. A nonlinear diffusion process is then applied to enforce the coherence of figural status estimates among perceptually homogeneous regions. Computer simulations fit human perception and match the experimental evidence that several cues cooperate in defining figure-ground segregation. The results of this work suggest that figure-ground segregation involves feedback from cells with larger receptive fields in higher visual cortical areas.
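A toy sketch of the two elements, oriented-neighborhood voting followed by nonlinear diffusion, is given below. The input cue map, the line kernels and the Perona-Malik-style conductance are assumptions used only to illustrate the overall scheme, not the paper's model.

```python
import numpy as np
from scipy.ndimage import convolve

def line_kernel(length, angle_deg):
    """Normalized line-shaped neighborhood at a given orientation."""
    k = np.zeros((length, length))
    c = length // 2
    t = np.deg2rad(angle_deg)
    for r in np.linspace(-c, c, 2 * length):
        y = int(round(c + r * np.sin(t)))
        x = int(round(c + r * np.cos(t)))
        if 0 <= y < length and 0 <= x < length:
            k[y, x] = 1.0
    return k / k.sum()

def figural_estimate(cue_map, n_orient=8, length=15, n_diff=30, kappa=0.1, dt=0.2):
    """Toy figure-ground estimate: multi-directional voting + nonlinear diffusion.

    cue_map: 2D array in [0, 1], larger values meaning 'more figure-like'
             according to some local configural cue (assumption).
    """
    # Voting: each oriented line-shaped neighborhood contributes its mean cue value.
    votes = [convolve(cue_map, line_kernel(length, a), mode='reflect')
             for a in np.linspace(0.0, 180.0, n_orient, endpoint=False)]
    f = np.mean(votes, axis=0)

    # Nonlinear (Perona-Malik-style) diffusion: smooth estimates within
    # homogeneous regions while limiting flow across strong gradients.
    for _ in range(n_diff):
        gy, gx = np.gradient(f)
        c = np.exp(-(gx ** 2 + gy ** 2) / kappa ** 2)   # edge-stopping conductance
        f = f + dt * (np.gradient(c * gx, axis=1) + np.gradient(c * gy, axis=0))
    return f
```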


Subject(s)
Form Perception/physiology; Pattern Recognition, Visual/physiology; Visual Perception/physiology; Cues; Humans; Models, Theoretical; Photic Stimulation/methods; Visual Cortex/physiology
4.
PLoS Comput Biol ; 9(4): e1003038, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23633942

ABSTRACT

Aggregates of misfolded proteins are a hallmark of many age-related diseases. Recently, they have been linked to aging in Escherichia coli (E. coli), where protein aggregates accumulate at the old pole region of the aging bacterium. Because of the potential of E. coli as a model organism, elucidating aging and protein aggregation in this bacterium may pave the way to significant advances in our global understanding of aging. A first obstacle along this path is to decipher the mechanisms by which protein aggregates are targeted to specific intracellular locations. Here, using an integrated approach based on individual-based modeling, time-lapse fluorescence microscopy and automated image analysis, we show that the movement of aging-related protein aggregates in E. coli is purely diffusive (Brownian). Using single-particle tracking of protein aggregates in live E. coli cells, we estimated the average size and diffusion constant of the aggregates. Our results provide evidence that the aggregates diffuse passively within the cell, with diffusion constants that depend on their size in agreement with the Stokes-Einstein law. However, aggregate displacements along the cell's long axis are confined to a region that roughly corresponds to the nucleoid-free space at the cell pole, confirming the importance of increased macromolecular crowding in the nucleoids. We then used 3D individual-based modeling to show that these three ingredients (diffusion, aggregation and diffusion hindrance in the nucleoids) are necessary and sufficient to reproduce the available experimental data on aggregate localization in the cells. Taken together, our results strongly support the hypothesis that the localization of aging-related protein aggregates at the poles of E. coli results from the coupling of passive diffusion-aggregation with spatially non-homogeneous macromolecular crowding. They further support the importance of "soft" intracellular structuring (based on macromolecular crowding) in diffusion-based protein localization in E. coli.
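The diffusion analysis this abstract relies on, estimating a diffusion constant from the mean squared displacement of a single-particle track and relating it to aggregate size through the Stokes-Einstein law, can be made concrete with a short sketch. The lag range, temperature and effective viscosity values below are placeholders, not the study's parameters.

```python
import numpy as np

K_B = 1.380649e-23  # Boltzmann constant (J/K)

def diffusion_constant_2d(track_xy, dt, max_lag=20):
    """Estimate D from a 2D trajectory via the mean squared displacement.

    track_xy: (n_frames, 2) positions in meters; dt: frame interval in seconds.
    For 2D Brownian motion, MSD(tau) = 4 * D * tau, so D is the MSD slope / 4.
    """
    lags = np.arange(1, min(max_lag, len(track_xy) // 2))
    msd = np.array([np.mean(np.sum((track_xy[l:] - track_xy[:-l]) ** 2, axis=1))
                    for l in lags])
    slope = np.polyfit(lags * dt, msd, 1)[0]
    return slope / 4.0

def stokes_einstein_radius(D, temperature=310.0, viscosity=1e-2):
    """Hydrodynamic radius implied by Stokes-Einstein: r = k_B*T / (6*pi*eta*D).

    The effective cytoplasmic viscosity (Pa*s) is a placeholder value (assumption).
    """
    return K_B * temperature / (6.0 * np.pi * viscosity * D)
```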


Subject(s)
Escherichia coli/metabolism; Organelles/metabolism; Cell Nucleus/metabolism; Computational Biology/methods; Computer Simulation; Diffusion; Escherichia coli Proteins/metabolism; Image Processing, Computer-Assisted; Microscopy, Fluorescence; Protein Binding; Protein Folding; Protein Transport
5.
J Comput Neurosci ; 35(2): 125-54, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23588587

ABSTRACT

Otolith end organs of vertebrates sense linear accelerations of the head and gravitation. The hair cells on their epithelia are responsible for transduction. In mammals, the striola, which runs parallel to the line where hair cells reverse their polarization, is a narrow region centered on a curve with nonzero curvature and torsion. The striolar region has been shown to be functionally different from the rest of the epithelium, being involved in a phasic vestibular pathway. We propose a mathematical and computational model that explains why this distinctive geometry is necessary for the striola to carry out its function. Our hypothesis, grounded in the biophysics of the hair cells and the physiology of their afferent neurons, is that striolar afferents collect information from several type I hair cells to detect the jerk of the head over a large domain of acceleration directions. This predicts a mean number of two calyces per afferent neuron, as measured in rodents. The domain of acceleration directions sensed by our striolar model is compatible with the experimental results obtained in monkeys when all afferents are considered. The main result of our study is therefore that phasic and tonic vestibular afferents cover the same geometrical fields, but in different dynamical and frequency domains.
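The central quantity here, jerk as the time derivative of linear acceleration, and the idea of a striolar afferent pooling roughly two type I calyces, can be illustrated with a minimal sketch. The rectified-sum readout and all names are simplifying assumptions, not the paper's model.

```python
import numpy as np

def afferent_jerk_response(accel, dt, calyx_dirs):
    """Toy readout of a phasic striolar afferent pooling a few type I hair cells.

    accel:      (n_samples, 3) head linear acceleration (m/s^2).
    dt:         sample spacing in seconds.
    calyx_dirs: (n_calyces, 3) unit polarization vectors of the pooled hair cells,
                typically two per afferent as reported for rodents.
    """
    # Jerk is the time derivative of linear acceleration.
    jerk = np.gradient(accel, dt, axis=0)
    # Each calyx responds to the jerk component along its polarization vector;
    # the afferent sums the rectified contributions (simplifying assumption).
    drive = jerk @ np.asarray(calyx_dirs, dtype=float).T
    return np.maximum(drive, 0.0).sum(axis=1)
```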


Subject(s)
Otolithic Membrane/physiology; Sensation/physiology; Acceleration; Algorithms; Animals; Biophysics; Computer Simulation; Hair Cells, Auditory, Inner/physiology; Hair Cells, Auditory, Inner/ultrastructure; Models, Neurological; Neural Pathways/physiology; Neurons, Afferent/physiology; Otolithic Membrane/cytology; Otolithic Membrane/ultrastructure; Rats; Saccule and Utricle/physiology; Vestibule, Labyrinth/physiology