Results 1 - 8 of 8
2.
Sci Rep ; 11(1): 7827, 2021 04 09.
Article in English | MEDLINE | ID: mdl-33837223

ABSTRACT

Humans recognize individual faces regardless of variation in facial view. View-tuned face neurons in the inferior temporal (IT) cortex are regarded as the neural substrate for view-invariant face recognition. This study approximated the visual features encoded by these neurons as combinations of local orientations and colors originating from natural image fragments. The resulting features reproduced the preference of these neurons for particular facial views. We also found that faces of one identity were separable from faces of other identities in a space where each axis represented one of these features. These results suggest that view-invariant face representation is established by combining view-sensitive visual features. The face representation with these features suggests that, with respect to view-invariant face representation, the seemingly complex and deeply layered ventral visual pathway can be approximated by a shallow network composed of a layer of low-level processing for local orientations and colors (V1/V2 level) and a layer that detects particular sets of low-level elements derived from natural image fragments (IT level).
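The two-layer architecture described above can be caricatured in a few lines of NumPy: an orientation-filter layer (V1/V2 level) followed by a max-pooling detector layer (IT level). This is a minimal sketch, not the authors' model; the filter parameters, the bar stimulus, and the use of a global max as a stand-in for fragment detectors are all illustrative assumptions.

```python
import numpy as np

def gabor_bank(size=9, n_orient=4, sigma=2.0, wavelength=4.0):
    """Small bank of orientation-tuned, Gabor-like filters (the V1/V2-level layer)."""
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    bank = []
    for k in range(n_orient):
        theta = np.pi * k / n_orient
        xr = xs * np.cos(theta) + ys * np.sin(theta)
        g = np.exp(-(xs**2 + ys**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)
        bank.append(g - g.mean())   # zero-mean so flat image regions give no response
    return bank

def conv2d_valid(img, kernel):
    """Plain 'valid' 2-D cross-correlation (no external dependencies)."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def shallow_features(img, bank):
    """Layer 1: local-orientation responses; Layer 2: a max over positions per
    orientation, standing in for detectors of fragment-derived element sets."""
    return np.array([np.max(np.abs(conv2d_valid(img, k))) for k in bank])

# A vertical bar should excite the vertically tuned (theta = 0) channel most.
img = np.zeros((32, 32))
img[:, 15:17] = 1.0
feats = shallow_features(img, gabor_bank())
```

In this toy setup, identities would occupy points in the `feats` space; the abstract's claim is that such view-sensitive axes, combined, already separate identities across views.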


Subject(s)
Facial Recognition/physiology , Recognition, Psychology/physiology , Temporal Lobe/physiology , Visual Cortex/physiology , Visual Pathways/physiology , Animals , Brain Mapping , Face , Macaca fuscata , Nerve Net/physiology , Neurons/physiology
3.
PLoS One ; 13(9): e0201192, 2018.
Article in English | MEDLINE | ID: mdl-30235218

ABSTRACT

Despite a large body of research on the response properties of neurons in the inferior temporal (IT) cortex, studies to date have not produced quantitative feature descriptions that can predict responses to arbitrary objects. This gap prevents a thorough understanding of object representation in the IT cortex. Here we propose a fragment-based approach for finding quantitative feature descriptions of face neurons in the IT cortex. The method is driven by the assumption that features can be recovered from a set of natural image fragments if the set is sufficiently large. To find a feature in the set, we compared the object responses predicted from each fragment with the responses of neurons to those objects, and searched for the fragment showing the highest correlation with the neural object responses. The object responses of each fragment were predicted by normalizing the Euclidean distance between the fragment and each object to the range 0 to 1, such that a smaller distance gives a higher value. The distance was calculated in a space where images were transformed into a local-orientation representation by a Gabor filter and a local max operation. The method allowed us to find features with an average correlation coefficient between predicted and neural responses of 0.68 (104 object stimuli) from among 560,000 feature candidates, reliably explaining differential responses among faces as well as a general preference for faces over non-face objects. Furthermore, the predicted responses of the resulting features to novel object images were significantly correlated with the neural responses to these images. Identification of features comprising specific, moderately complex combinations of local orientations and colors enabled us to predict responses to upright and inverted faces, suggesting a possible mechanism for face inversion effects.
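The distance-normalization and correlation-search steps can be sketched directly. This is a toy reconstruction with synthetic data: the vectors stand in for images after the Gabor/local-max transform (which is not implemented here), the pool of 200 candidates stands in for the 560,000, and the "neural" responses are generated from a hidden fragment so the search has a recoverable answer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vectors standing in for images after the Gabor-filter +
# local-max transform described in the abstract.
n_objects, dim = 40, 64
objects = rng.normal(size=(n_objects, dim))
fragments = rng.normal(size=(200, dim))   # toy stand-in for the 560,000 candidates

def predict(fragment, objs):
    """Predicted responses: Euclidean distance from the fragment to each object,
    normalized to [0, 1] so that a smaller distance gives a higher value."""
    d = np.linalg.norm(objs - fragment, axis=1)
    return 1.0 - (d - d.min()) / (d.max() - d.min())

# Synthetic 'neural' responses generated from one hidden fragment (index 17),
# so the search below has a known ground truth.
neural = predict(fragments[17], objects)

# Search: keep the fragment whose predicted responses correlate best with the
# neural object responses.
corrs = np.array([np.corrcoef(predict(f, objects), neural)[0, 1] for f in fragments])
best = int(np.argmax(corrs))
```

The search recovers the generating fragment because its predictions correlate perfectly with the (here, synthetic) neural data; with real recordings the best correlation was 0.68 on average.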


Subject(s)
Neurons/cytology , Neurons/physiology , Temporal Lobe/cytology , Temporal Lobe/physiology , Visual Perception/physiology , Animals , Macaca mulatta , Male
4.
J Neurosci Methods ; 244: 26-32, 2015 Apr 15.
Article in English | MEDLINE | ID: mdl-24797225

ABSTRACT

BACKGROUND: For a self-paced motor imagery based brain-computer interface (BCI), the system must recognize both the occurrence of a motor imagery and its type. However, because detecting the occurrence of a motor imagery is difficult, motor imagery based BCI studies have generally focused on the cued motor imagery paradigm. NEW METHOD: In this paper, we present a novel hybrid BCI system that uses near-infrared spectroscopy (NIRS) and electroencephalography (EEG) together to achieve an online self-paced motor imagery based BCI. We designed a unique sensor frame that records NIRS and EEG simultaneously for the realization of our system. Based on this hybrid system, we propose an analysis method that detects the occurrence of a motor imagery with the NIRS system and classifies its type with the EEG system. RESULTS: An online experiment demonstrated that our hybrid system achieved a true positive rate of about 88% and a false positive rate of about 7%, with an average response time of 10.36 s. COMPARISON WITH EXISTING METHOD(S): To our knowledge, no previous report has explored a hemodynamic brain switch for self-paced motor imagery based BCI with a hybrid EEG and NIRS system. CONCLUSIONS: Our experimental results show that the hybrid system is sufficiently reliable for use in a practical self-paced motor imagery based BCI.
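The two-stage decision logic (NIRS as a brain switch, EEG as a type classifier) can be sketched as follows. Everything here is a hedged simplification: the threshold switch, the band-power inputs, and the function names are hypothetical; the actual system uses trained detectors on the recorded signals.

```python
import numpy as np

def nirs_switch(hbo_window, threshold=0.5):
    """Stage 1 (NIRS 'brain switch'): declare that *some* motor imagery is
    occurring when the mean oxygenated-hemoglobin change in the window exceeds
    a threshold. The threshold value here is arbitrary."""
    return float(np.mean(hbo_window)) > threshold

def eeg_classify(left_hemi_power, right_hemi_power):
    """Stage 2 (EEG): classify the imagery type from mu-band power
    lateralization; imagining one hand desynchronizes (suppresses) power over
    the contralateral hemisphere."""
    return "right-hand" if left_hemi_power < right_hemi_power else "left-hand"

def hybrid_bci(hbo_window, left_hemi_power, right_hemi_power):
    """Self-paced operation: output 'rest' unless the NIRS switch fires; only
    then is the EEG classifier consulted."""
    if not nirs_switch(hbo_window):
        return "rest"
    return eeg_classify(left_hemi_power, right_hemi_power)
```

The point of the split is that the slow but specific hemodynamic signal gates the fast but occurrence-ambiguous EEG decision, which is why "rest" is a first-class output in a self-paced system.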


Subject(s)
Brain Waves/physiology , Brain-Computer Interfaces , Brain/physiology , Hemoglobins/metabolism , Imagination/physiology , Movement , Self-Control , Adult , Brain Mapping , Electroencephalography , Humans , Male , Online Systems , Spectroscopy, Near-Infrared , Young Adult
5.
IEEE Trans Biomed Eng ; 61(2): 453-62, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24021635

ABSTRACT

We present a novel human-machine interface, called GOM-Face, and its application to humanoid robot control. The GOM-Face bases its interfacing on three electric potentials measured on the face: 1) the glossokinetic potential (GKP), which accompanies tongue movement; 2) the electrooculogram (EOG), which accompanies eye movement; and 3) the electromyogram (EMG), which accompanies teeth clenching. Each potential has been used individually for assistive interfacing to provide persons with limb motor disabilities, or even complete quadriplegia, with an alternative communication channel. However, to the best of our knowledge, GOM-Face is the first interface that exploits all of these potentials together. We resolved the interference between GKP and EOG by extracting discriminative features from two covariance matrices: a tongue-movement-only data matrix and an eye-movement-only data matrix. With this feature extraction method, GOM-Face can detect four kinds of horizontal tongue or eye movements with an accuracy of 86.7% within 2.77 s. We demonstrated the applicability of GOM-Face to humanoid robot control: users were able to communicate with the robot by selecting from a predefined menu using eye and tongue movements.
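One standard way to extract discriminative features from two condition-specific covariance matrices, as the abstract describes, is a generalized eigendecomposition: find spatial filters that maximize tongue-related variance relative to eye-related variance. This sketch uses synthetic data and is only an assumed reading of the method, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n_ch = 8

# Hypothetical multichannel recordings: tongue-only data given spatial
# structure via a random mixing matrix, eye-only data left unstructured.
A = rng.normal(size=(n_ch, n_ch))
tongue = rng.normal(size=(5000, n_ch)) @ A   # tongue-movement-only data matrix
eye = rng.normal(size=(5000, n_ch))          # eye-movement-only data matrix

C_tongue = np.cov(tongue.T)
C_eye = np.cov(eye.T)

# Spatial filters maximizing w' C_tongue w / w' C_eye w: solve the generalized
# eigenproblem C_tongue w = lambda * C_eye w.
evals, evecs = np.linalg.eig(np.linalg.solve(C_eye, C_tongue))
order = np.argsort(evals.real)[::-1]
W = evecs.real[:, order[:2]]   # the two most tongue-discriminative filters

var_tongue = float(np.var(tongue @ W[:, 0]))
var_eye = float(np.var(eye @ W[:, 0]))
```

Projections onto the leading filters carry much more tongue-related than eye-related variance, which is the sense in which the two sources stop interfering.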


Subject(s)
Electromyography/methods , Electrooculography/methods , Man-Machine Systems , Robotics/instrumentation , Signal Processing, Computer-Assisted/instrumentation , Tongue/physiology , Adult , Bite Force , Evoked Potentials, Motor/physiology , Eye Movements/physiology , Female , Humans , Male , Quadriplegia/rehabilitation , Self-Help Devices , Tooth/physiology , Young Adult
6.
Article in English | MEDLINE | ID: mdl-24110173

ABSTRACT

The steady-state somatosensory evoked potential (SSSEP) is a recently developed brain-computer interface (BCI) paradigm that uses the brain's response to tactile stimulation at a specific frequency. Thus far, spatial information has not been examined in depth in SSSEP BCI, because frequency information was regarded as the main concern of SSSEP analysis. However, given that the somatosensory cortex areas corresponding to different body parts are well clustered, spatial information could also benefit SSSEP analysis. Based on this assumption, we apply the common spatial pattern (CSP) method, the spatial feature extraction method most widely used in the motor imagery BCI paradigm, to SSSEP BCI. Experimental results show that our approach, in which two CSP methods are applied to the signal of each frequency band, improves performance from 70% to 75%.
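The CSP step itself is standard and can be shown in full: learn spatial filters from two class covariances, then take log-variance of the filtered trials as features. The synthetic "SSSEP-like" data below (each class boosting a different channel) is purely illustrative; the paper applies this per frequency band to real EEG.

```python
import numpy as np

rng = np.random.default_rng(2)
n_ch = 4

def csp_filters(cov_a, cov_b, n_filt=2):
    """CSP spatial filters: generalized eigenvectors of the two class
    covariances. The extreme eigenvectors maximize variance for one class
    while minimizing it for the other."""
    evals, evecs = np.linalg.eig(np.linalg.solve(cov_a + cov_b, cov_a))
    order = np.argsort(evals.real)
    keep = np.r_[order[:n_filt // 2], order[-(n_filt // 2):]]
    return evecs.real[:, keep]

def csp_features(trial, W):
    """Log-variance of the spatially filtered trial: the standard CSP feature."""
    return np.log(np.var(trial @ W, axis=0))

def make_trial(cls):
    # Synthetic trial: class 0 boosts channel 0, class 1 boosts channel 3.
    x = rng.normal(size=(200, n_ch))
    x[:, 0 if cls == 0 else 3] *= 3.0
    return x

trials = [(make_trial(c), c) for c in [0, 1] * 20]
cov = [np.zeros((n_ch, n_ch)), np.zeros((n_ch, n_ch))]
for x, c in trials:
    cov[c] += np.cov(x.T) / 20

W = csp_filters(cov[0], cov[1])
feats = np.array([csp_features(x, W) for x, _ in trials])
labels = np.array([c for _, c in trials])
pred = (feats[:, 0] > feats[:, 1]).astype(int)   # filter 0 is the class-1-dominant one
acc = float(np.mean(pred == labels))
```

Running one such CSP per stimulation-frequency band, as the abstract describes, simply repeats this procedure on band-pass-filtered copies of the signal.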


Subject(s)
Brain-Computer Interfaces , Electroencephalography , Evoked Potentials, Somatosensory/physiology , Adult , Algorithms , Humans , Male , Physical Stimulation
7.
IEEE Trans Biomed Eng ; 59(1): 290-9, 2012 Jan.
Article in English | MEDLINE | ID: mdl-22049361

ABSTRACT

Glossokinetic potentials (GKPs) are electric potential responses generated by tongue movement. In this study, we use GKPs to automatically detect and estimate tongue position, and develop a tongue-machine interface. We show that a specific configuration of electrode placement yields discriminative GKPs that vary depending on the direction of the tongue. We develop a linear model to determine the direction of the tongue from GKPs, seeking linear features that are robust to baseline drift by maximizing the ratio of intertask covariance to intersession covariance. We apply our method to wheelchair control, developing a tongue-machine interface referred to as tongue-rudder. A teeth-clenching detection system using electromyography was also implemented in order to assign teeth clenching as the stop command. Experiments on offline cursor control and online wheelchair control confirm the unique advantages of our method: 1) noninvasiveness, 2) fine controllability, and 3) the ability to integrate with other EEG-based interface systems.
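The intertask/intersession covariance ratio can be made concrete with a small simulation: a task-related spatial pattern plus a session-wide baseline drift along a different direction. The construction of the two covariance matrices below is an assumed, simplified reading of the criterion; the directions, session counts, and noise levels are all toy choices.

```python
import numpy as np

rng = np.random.default_rng(3)
n_ch = 4
task_dir = np.array([1.0, 0.0, 0.0, 0.0])   # hypothetical task-related pattern
drift_dir = np.array([0.0, 0.0, 1.0, 0.0])  # hypothetical baseline-drift pattern

def session(task_sign, drift):
    """One session: a signed task component plus session-wide drift and noise."""
    n = 500
    x = 0.1 * rng.normal(size=(n, n_ch))
    x += task_sign * np.outer(rng.normal(size=n) + 1.0, task_dir)
    x += drift * drift_dir
    return x

# Three sessions with increasing baseline drift; each has both task conditions.
sessions = [(session(+1, d), session(-1, d)) for d in (0.0, 2.0, 5.0)]

# Intertask covariance: from condition-mean differences within each session.
diffs = np.array([a.mean(0) - b.mean(0) for a, b in sessions])
C_task = diffs.T @ diffs
# Intersession covariance: from session means (this is where the drift lives).
means = np.array([np.vstack([a, b]).mean(0) for a, b in sessions])
C_sess = np.cov(means.T) + 1e-3 * np.eye(n_ch)   # small ridge for invertibility

# Maximize w' C_task w / w' C_sess w via a generalized eigenproblem.
evals, evecs = np.linalg.eig(np.linalg.solve(C_sess, C_task))
w = evecs.real[:, np.argmax(evals.real)]
```

The winning filter `w` aligns with the task direction and nearly ignores the drift direction, which is the drift-robustness property the abstract claims.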


Subject(s)
Algorithms , Electroencephalography/methods , Evoked Potentials, Motor/physiology , Pattern Recognition, Automated/methods , Tongue/physiology , User-Computer Interface , Wheelchairs , Brain Mapping/methods , Humans , Motor Cortex/physiology , Movement/physiology , Reproducibility of Results , Sensitivity and Specificity , Tongue/innervation
8.
Exp Neurobiol ; 20(4): 189-96, 2011 Dec.
Article in English | MEDLINE | ID: mdl-22355264

ABSTRACT

In this study, we characterize the hemodynamic changes in the main olfactory bulb (MOB) of anesthetized Sprague-Dawley (SD) rats during the presentation of two different odorants, using near-infrared spectroscopy (NIRS; ISS Imagent). Odorants were presented for 10 seconds with clean air via an automatic odor stimulator: (i) plain air as a reference (Blank), (ii) 2-heptanone (HEP), and (iii) isopropylbenzene (IB). Our results indicate that plain air did not change the concentrations of oxygenated (Δ[HbO2]) or deoxygenated hemoglobin (Δ[Hbr]), whereas HEP and IB induced strong changes. Furthermore, these odor-specific changes showed regional differences within the MOB. Our results suggest that NIRS technology might be a useful tool for identifying various odorants non-invasively in animals with superb olfactory systems.
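The Δ[HbO2] and Δ[Hbr] quantities reported by NIRS instruments are conventionally derived from optical-density changes at two wavelengths via the modified Beer-Lambert law. The sketch below shows that conversion; the extinction coefficients, pathlength factor, and separation are illustrative stand-ins, not the Imagent's calibrated values.

```python
import numpy as np

# Extinction coefficients for HbO2 and Hbr at two NIRS wavelengths, in
# 1/(mM*cm). Illustrative, literature-style values only.
E = np.array([[0.69, 3.84],    # ~690 nm: [HbO2, Hbr]
              [2.53, 1.80]])   # ~830 nm: [HbO2, Hbr]
DPF = 6.0   # differential pathlength factor (assumed)
d_cm = 3.0  # source-detector separation in cm (assumed)

def mbll(delta_od):
    """Modified Beer-Lambert law: convert optical-density changes at the two
    wavelengths into concentration changes [dHbO2, dHbr] in mM."""
    return np.linalg.solve(E, delta_od) / (DPF * d_cm)

# Forward-simulate an activation-like change (HbO2 up, Hbr down), then recover it.
true_dc = np.array([0.02, -0.005])
delta_od = E @ true_dc * DPF * d_cm
recovered = mbll(delta_od)
```

Two wavelengths straddling the hemoglobin isosbestic point (~800 nm) make the 2x2 system well conditioned, which is why both chromophore changes can be separated from a single measurement pair.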
