Results 1 - 8 of 8
1.
Article in English | MEDLINE | ID: mdl-30222585

ABSTRACT

This paper introduces an event-based, luminance-free algorithm for line and segment detection from the output of asynchronous event-based neuromorphic retinas. These recent biomimetic vision sensors are composed of autonomous pixels, each asynchronously generating visual events that encode relative changes in pixel illumination at high temporal resolution. This frame-free approach yields increased energy efficiency and real-time operation, making these sensors especially suitable for applications such as autonomous robotics. The proposed algorithm is based on an iterative event-based weighted least-squares fit and is consequently well suited to the high temporal resolution and asynchronous acquisition of neuromorphic cameras: the parameters of a current line are updated for each event attributed to it (i.e., spatio-temporally close), while the contribution of older events is implicitly forgotten according to a speed-tuned exponentially decaying function. A detection occurs when a measure of activity, an implicit count of the contributing events using the same decay function, exceeds a given threshold. The speed-tuned decay function is based on a measure of the apparent motion, i.e., the optical flow computed around each event; this ensures that the algorithm behaves independently of the edges' dynamics. Line segments are then extracted from the lines, allowing the corresponding endpoints to be tracked. We provide experiments showing the accuracy of the algorithm and study the influence of the apparent velocity and relative orientation of the observed edges. Finally, evaluations of its computational efficiency show that the algorithm is suitable for high-speed applications such as vision-based robotic navigation.
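
The decayed weighted least-squares update described above can be sketched in a few lines of Python. This is a hedged illustration, not the paper's implementation: the class name, the slope-intercept line parametrization, and all constants are assumptions, and the segment-extraction stage is omitted.

import math

class EventLineTracker:
    """One line hypothesis updated event by event (illustrative sketch).
    Decayed least-squares moments serve both as the fit and as the
    activity measure; the time constant is speed-tuned."""

    def __init__(self, tau_scale=1e4, activity_threshold=20.0):
        self.tau_scale = tau_scale      # assumed: tau = tau_scale / speed
        self.threshold = activity_threshold
        self.t_last = None
        # exponentially decayed moments: count, sum x, sum y, sum x^2, sum xy
        self.S = self.Sx = self.Sy = self.Sxx = self.Sxy = 0.0

    def update(self, x, y, t, speed):
        """Fold one attributed event (pixel x, y at time t, with local
        optical-flow speed in px/s) into the fit, decaying older events."""
        if self.t_last is not None:
            tau = self.tau_scale / max(speed, 1e-6)   # speed-tuned decay
            k = math.exp(-(t - self.t_last) / tau)
            self.S *= k; self.Sx *= k; self.Sy *= k
            self.Sxx *= k; self.Sxy *= k
        self.t_last = t
        self.S += 1.0
        self.Sx += x; self.Sy += y
        self.Sxx += x * x; self.Sxy += x * y

    def line(self):
        """Return (slope, intercept) if the decayed activity exceeds the
        detection threshold and the fit is well conditioned, else None."""
        if self.S < self.threshold:
            return None
        denom = self.S * self.Sxx - self.Sx ** 2
        if abs(denom) < 1e-9:
            return None   # near-vertical or underconstrained configuration
        a = (self.S * self.Sxy - self.Sx * self.Sy) / denom
        b = (self.Sy - a * self.Sx) / self.S
        return a, b

A caller would feed time-ordered events (with their flow speed) to update() and poll line() for a detection.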

2.
Front Psychol ; 9: 83, 2018.
Article in English | MEDLINE | ID: mdl-29515472

ABSTRACT

Highlights:
- The kinematics of hand movements (spatial use, curvature, acceleration, and velocity) of infants interacting with their mothers are significantly associated with age in cohorts of typical and at-risk infants.
- Hand movements differ significantly at 5-6 months of age, depending on the context: relating either to an object or to a person.
- Environmental and developmental factors shape the developmental trajectories of hand movements in different cohorts: environment for infants with visually impaired mothers; stage of development for premature infants and those with West syndrome; and both factors for infants with orality disorders.
- The curvature of hand movements specifically reflects atypical development in infants with West syndrome when developmental age is considered.

In this observational, longitudinal study, we aimed to discriminate between typical and atypical developmental trajectory patterns of at-risk infants in an interactive setting, on the assumption that hand movements (HM) reflect preverbal communication and its disorders. We examined the developmental trajectories of HM in five cohorts of at-risk infants and one control cohort, followed from ages 2 to 10 months: 25 infants with West syndrome (WS), 13 born preterm (PB), 16 with an orality disorder (OD), 14 with visually impaired mothers (VIM), 7 with early hospitalization (EH), and 19 typically developing infants (TD). Video-recorded data were collected in three structured interactive contexts. Descriptors of hand motion were used to examine the extent to which HM were associated with age and cohort. We obtained four principal results: (i) the kinematics of HM (spatial use, curvature, acceleration, and velocity) were significantly associated with age in all cohorts; (ii) HM differed significantly at 5-6 months of age in TD infants, depending on the context; (iii) environmental and developmental factors shaped the developmental trajectories of HM in different cohorts: environment for VIM, development for PB and WS, and both factors for OD; and (iv) the curvature of HM showed atypical development in WS infants when developmental age was considered. These findings support the importance of using the kinematics of HM to identify developmental disorders very early in an interactive context, enabling early prevention and intervention for at-risk infants.
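
For readers interested in the descriptors, the following Python sketch shows one standard way to derive velocity, acceleration, and planar curvature from a sampled 2D hand trajectory. The function name and finite-difference scheme are illustrative assumptions; the study's exact computation is not specified here.

import numpy as np

def hand_kinematics(xy, dt):
    """Velocity, acceleration, and curvature profiles of a hand trajectory.
    xy: (N, 2) array of sampled 2D positions; dt: sampling period in seconds.
    Illustrative only, not the study's pipeline."""
    vx, vy = np.gradient(xy[:, 0], dt), np.gradient(xy[:, 1], dt)
    ax, ay = np.gradient(vx, dt), np.gradient(vy, dt)
    speed = np.hypot(vx, vy)
    accel = np.hypot(ax, ay)
    # planar curvature: |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2)
    curvature = np.abs(vx * ay - vy * ax) / np.maximum(speed, 1e-9) ** 3
    return speed, accel, curvature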

3.
Front Neurosci ; 10: 594, 2016.
Article in English | MEDLINE | ID: mdl-28101001

ABSTRACT

This paper introduces an event-based, luminance-free feature computed from the output of asynchronous event-based neuromorphic retinas. The feature maps the distribution of the optical flow along the contours of the moving objects in the visual scene into a matrix. Asynchronous event-based neuromorphic retinas are composed of autonomous pixels, each asynchronously generating "spiking" events that encode relative changes in pixel illumination at high temporal resolution. The optical flow is computed at each event and integrated, locally or globally, into a grid defined in a speed-and-direction coordinate frame, using speed-tuned temporal kernels. The latter ensure that the resulting feature equitably represents the distribution of the normal motion along the current moving edges, whatever their respective dynamics. The usefulness and generality of the proposed feature are demonstrated in pattern-recognition applications: local corner detection and global gesture recognition.
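
A minimal sketch of the accumulation step, assuming a fixed speed x direction binning and an exponential temporal kernel whose time constant shrinks with edge speed; the function name, bin counts, and constants are assumptions, not the paper's API.

import math
import numpy as np

def update_flow_histogram(hist, t_last, event, v_max=1000.0, tau_scale=1e4):
    """Fold one event's optical flow into a speed x direction histogram.
    hist: (n_speed, n_dir) array; event: (t seconds, speed px/s, direction rad).
    The whole grid decays before each update, so older flow fades away and
    fast edges are forgotten faster (speed-tuned kernel)."""
    t, speed, direction = event
    n_speed, n_dir = hist.shape
    tau = tau_scale / max(speed, 1e-6)
    hist *= math.exp(-(t - t_last) / tau)
    si = min(int(speed / v_max * n_speed), n_speed - 1)
    di = int((direction % (2 * math.pi)) / (2 * math.pi) * n_dir) % n_dir
    hist[si, di] += 1.0
    return hist, t

hist = np.zeros((8, 16))   # 8 speed bins x 16 direction bins, both assumed
hist, t_last = update_flow_histogram(hist, 0.0, (2e-3, 150.0, 0.8))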

4.
Neural Netw ; 66: 91-106, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25828960

ABSTRACT

This paper introduces an event-based, luminance-free method to detect and match corner events from the output of asynchronous event-based neuromorphic retinas. The method relies on the space-time properties of moving edges. Asynchronous event-based neuromorphic retinas are composed of autonomous pixels, each asynchronously generating "spiking" events that encode relative changes in pixel illumination at high temporal resolution. Corner events are defined as the spatiotemporal locations where the aperture problem can be solved by intersecting several geometric constraints in the events' spatiotemporal space. A regularization process provides the required constraints, i.e., the motion attributes of the edges with respect to their spatiotemporal locations, using local geometric properties of visual events. Experimental results on several real scenes show the stability and robustness of the detection and matching.


Subject(s)
Image Processing, Computer-Assisted/methods; Models, Neurological; Visual Fields; Image Processing, Computer-Assisted/instrumentation
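
The intersection-of-constraints idea in this entry's abstract can be illustrated as follows: each edge event yields an aperture-limited normal-flow constraint n . v = s, and a corner is a location where several such constraints pin down the full velocity. The Python sketch below uses a plain least-squares solve and a condition-number test as stand-ins for the paper's regularization; names and thresholds are assumptions.

import numpy as np

def full_flow_from_constraints(normals, normal_speeds, cond_max=50.0):
    """Solve n . v = s over several constraints (least squares).
    normals: (k, 2) unit edge normals; normal_speeds: (k,) speeds along them;
    assumes k >= 2. A well-conditioned system marks a corner event; an
    ill-conditioned one means the local edges are nearly parallel, so the
    aperture problem stays unresolved and None is returned."""
    A = np.asarray(normals, dtype=float)
    b = np.asarray(normal_speeds, dtype=float)
    if np.linalg.cond(A) > cond_max:     # threshold is an assumption
        return None
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v                             # full 2D velocity at the corner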
5.
IEEE Trans Neural Netw Learn Syst ; 26(12): 3045-59, 2015 Dec.
Article in English | MEDLINE | ID: mdl-25794399

ABSTRACT

Object tracking is an important step in many artificial vision tasks. Current state-of-the-art implementations remain too computationally demanding for the problem to be solved in real time under high dynamics. This paper presents a novel real-time method for visual part-based tracking of complex objects from the output of an asynchronous event-based camera. It extends the pictorial-structures model introduced by Fischler and Elschlager 40 years ago with a new formulation of the problem that allows dynamic processing of the visual input in real time, at high temporal resolution, on a conventional PC. The method represents an object as a set of basic elements linked by springs. These basic elements are simple trackers capable of following an ellipse-like target at several kilohertz on a conventional computer. For each incoming event, the method updates the elastic connections established between the trackers and maintains, in real time, the geometric structure corresponding to the tracked object. This provides high temporal elasticity, adapting to projective deformations of the tracked object in the focal plane. The elastic energy of this virtual mechanical system provides a quality criterion for tracking and can be used to determine whether the measured deformations are caused by the perspective projection of the perceived object or by occlusions. Experiments on real-world data show the robustness of the method in the context of dynamic face tracking.
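
The quality criterion reduces to the elastic energy of the spring network. A minimal sketch, assuming a simple Hooke's-law energy over pairwise spring links between part trackers; the data layout and function name are illustrative, not the paper's code.

import math

def elastic_energy(positions, springs):
    """Total elastic energy of part trackers linked by springs.
    positions: {tracker_id: (x, y)} current tracker centers;
    springs: iterable of (id_a, id_b, rest_length, stiffness).
    High energy flags deformations beyond plausible perspective
    changes (e.g., occlusion of part of the tracked object)."""
    energy = 0.0
    for a, b, rest, k in springs:
        xa, ya = positions[a]
        xb, yb = positions[b]
        d = math.hypot(xb - xa, yb - ya)
        energy += 0.5 * k * (d - rest) ** 2   # Hooke's law per link
    return energy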

6.
Front Neurosci ; 9: 46, 2015.
Article in English | MEDLINE | ID: mdl-25759637

ABSTRACT

Bio-inspired asynchronous event-based vision sensors are currently introducing a paradigm shift in visual information processing. These new sensors rely on a stimulus-driven principle of light acquisition similar to biological retinas. They are event-driven and fully asynchronous, thereby reducing redundancy and encoding exact times of input signal changes, leading to a very precise temporal resolution. Approaches for higher-level computer vision often rely on the reliable detection of features in visual frames, but similar definitions of features for the novel dynamic and event-based visual input representation of silicon retinas have so far been lacking. This article addresses the problem of learning and recognizing features for event-based vision sensors, which capture properties of truly spatiotemporal volumes of sparse visual event information. A novel computational architecture for learning and encoding spatiotemporal features is introduced based on a set of predictive recurrent reservoir networks, competing via winner-take-all selection. Features are learned in an unsupervised manner from real-world input recorded with event-based vision sensors. It is shown that the networks in the architecture learn distinct and task-specific dynamic visual features, and can predict their trajectories over time.
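
As a rough illustration of the architecture's ingredients, the sketch below combines a tiny leaky echo-state reservoir with an online linear readout and a winner-take-all rule that lets only the best-matching reservoir adapt. Reconstruction error stands in for the paper's prediction error, and all sizes, rates, and names are assumptions, not the authors' network.

import numpy as np

rng = np.random.default_rng(0)

class Reservoir:
    """Tiny leaky echo-state reservoir with an online linear readout."""
    def __init__(self, n_in, n_res=100, leak=0.3, lr=1e-3):
        self.W_in = rng.normal(0.0, 0.5, (n_res, n_in))
        W = rng.normal(0.0, 1.0, (n_res, n_res))
        # scale recurrent weights to spectral radius 0.9 (echo-state property)
        self.W = 0.9 * W / np.max(np.abs(np.linalg.eigvals(W)))
        self.W_out = np.zeros((n_in, n_res))
        self.x = np.zeros(n_res)
        self.leak, self.lr = leak, lr

    def step(self, u):
        """Advance the state on input u; return the readout error."""
        pre = self.W_in @ u + self.W @ self.x
        self.x = (1 - self.leak) * self.x + self.leak * np.tanh(pre)
        return u - self.W_out @ self.x

    def adapt(self, err):
        """Least-mean-squares update of the readout only."""
        self.W_out += self.lr * np.outer(err, self.x)

def wta_step(reservoirs, u):
    """Winner-take-all: only the reservoir that matches u best learns,
    so the bank specializes into distinct dynamic features."""
    errs = [r.step(u) for r in reservoirs]
    winner = int(np.argmin([np.linalg.norm(e) for e in errs]))
    reservoirs[winner].adapt(errs[winner])
    return winner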

7.
Front Neurosci ; 8: 9, 2014.
Article in English | MEDLINE | ID: mdl-24570652

ABSTRACT

Reliable and fast sensing of the environment is a fundamental requirement for autonomous mobile robotic platforms. Unfortunately, the frame-based acquisition paradigm at the basis of mainstream artificial perception systems is limited by low temporal dynamics and a redundant data flow, leading to high computational costs. Conventional sensing and the associated computation are therefore ill-suited to the design of high-speed, sensor-based reactive control for mobile applications, which pose strict limits on energy consumption and computational load. This paper introduces a fast obstacle-avoidance method based on the output of an asynchronous event-based, time-encoded imaging sensor. The proposed method relies on an event-based time-to-contact (TTC) computation derived from event-based visual motion flow. The approach is event-based in the sense that every incoming event contributes to the computation, allowing fast avoidance responses. The method is validated indoors on a mobile robot, comparing the event-based TTC with a TTC obtained from a laser range finder and showing that event-based sensing offers new perspectives for mobile robot sensing.
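
The per-event TTC computation can be sketched under a pure-translation assumption: flow expands radially from the focus of expansion (FOE), and TTC is the ratio of radial distance to radial flow speed. The function below is illustrative only; the paper's flow estimation and FOE handling are not reproduced.

import math

def event_ttc(x, y, vx, vy, foe=(0.0, 0.0)):
    """Time to contact from one event's optical flow (vx, vy) at pixel (x, y),
    assuming pure translation toward a known FOE. Returns seconds, or
    infinity when the event is at the FOE or not on an expanding flow."""
    rx, ry = x - foe[0], y - foe[1]
    r = math.hypot(rx, ry)
    if r < 1e-9:
        return math.inf
    radial_speed = (vx * rx + vy * ry) / r   # flow projected on the radial axis
    if radial_speed <= 0.0:
        return math.inf                      # contracting: no collision course
    return r / radial_speed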

8.
IEEE Int Conf Rehabil Robot ; 2011: 5975439, 2011.
Article in English | MEDLINE | ID: mdl-22275639

ABSTRACT

This paper presents an embedded 3D body-motion capture system for an assistive walking robot. A 3D camera and infrared sensors are mounted on a wheeled walker. We compare the positions of the human articular joints computed by our embedded system with those obtained from an accurate reference system using body-worn markers, the Codamotion. The results validate our approach.


Subject(s)
Robotics/instrumentation; Robotics/methods; Self-Help Devices; Walking/physiology; Humans; Models, Theoretical
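
The validation in this entry amounts to comparing time-synchronized joint positions between the embedded system and the Codamotion reference. A minimal sketch of such a comparison, assuming aligned (T, J, 3) position arrays; the function name and the per-joint RMSE metric are assumptions, not the paper's exact protocol.

import numpy as np

def joint_rmse(est, ref):
    """Per-joint RMSE between two time-synchronized capture systems.
    est, ref: (T, J, 3) arrays of J joint positions over T frames (meters)."""
    d = np.linalg.norm(est - ref, axis=-1)   # (T, J) Euclidean errors
    return np.sqrt((d ** 2).mean(axis=0))    # (J,) RMSE per joint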