Results 1 - 20 of 51
1.
JCI Insight ; 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38900571

ABSTRACT

Men who have sex with men (MSM) with HIV are at high risk for squamous intraepithelial lesions (SILs) and anal cancer. Identifying the local immunological mechanisms involved in the development of anal dysplasia could aid treatment and diagnostics. Here we studied 111 anal biopsies obtained from 101 MSM with HIV who participated in an anal screening program. We first assessed multiple immune subsets by flow cytometry, in addition to histological examination, in a discovery cohort (n = 54). Selected molecules were further evaluated by immunohistochemistry in a validation cohort (n = 47). Pathological samples were characterized by the presence of resident memory T cells with low expression of CD103 and by changes in natural killer cell subsets affecting residency and activation. Furthermore, potentially immunosuppressive subsets, including CD15+CD16+ mature neutrophils, gradually increased as the anal lesion progressed. Immunohistochemistry confirmed the association between the presence of CD15 in the epithelium and SIL diagnosis, with a sensitivity of 80% and a specificity of 71% (AUC 0.762) for detecting high-grade SIL. A complex immunological environment with imbalanced proportions of resident effectors and immunosuppressive subsets characterizes pathological samples. Neutrophil infiltration, determined by CD15 staining, may represent a valuable pathological marker associated with the grade of dysplasia.
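A brief, hedged sketch of how the reported diagnostic metrics (sensitivity, specificity, AUC) relate to a binary marker such as epithelial CD15 staining; the numbers below are hypothetical placeholders, not data from the study.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Hypothetical example data: 1 = high-grade SIL on histology, 0 = not
y_true = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
# CD15 staining score in the epithelium (e.g., fraction of positive cells)
cd15_score = np.array([0.9, 0.7, 0.8, 0.3, 0.6, 0.2, 0.4, 0.1, 0.5, 0.2])

# Dichotomize at an illustrative cutoff to get sensitivity/specificity
y_pred = (cd15_score >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate

# AUC uses the continuous score directly
auc = roc_auc_score(y_true, cd15_score)
print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} AUC={auc:.3f}")
```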

2.
Curr Opin HIV AIDS ; 19(2): 69-78, 2024 03 01.
Article in English | MEDLINE | ID: mdl-38169333

ABSTRACT

PURPOSE OF REVIEW: The complex nature and distribution of the HIV reservoir in the tissues of people with HIV remain one of the major obstacles to achieving the elimination of HIV persistence. Challenges include the tissue-specific states of latency and viral persistence, which translate into high levels of reservoir heterogeneity. Moreover, the best strategies to reach and eliminate these reservoirs may differ based on the intrinsic characteristics of each cellular and anatomical reservoir. RECENT FINDINGS: While the major focus has been on lymphoid tissues and follicular T helper cells, evidence of viral persistence in HIV and non-HIV antigen-specific CD4+ T cells and in macrophages resident in multiple tissues that provide long-term protection presents new challenges in the quest for an HIV cure. Considering the microenvironments where these cellular reservoirs persist opens new avenues for the delivery of drugs and immunotherapies to target these niches. New tools, such as single-cell RNA sequencing, CRISPR screens, mRNA technology and tissue organoids, are developing quickly and providing detailed information about the complex nature of tissue reservoirs. SUMMARY: Targeting persistence in tissue reservoirs represents a complex but essential step towards achieving an HIV cure. Combinatorial strategies capable of reaching and reactivating the multiple long-lived reservoirs in the body, applied particularly during the early phases of infection to limit the initial reservoirs, may lead the way.


Subject(s)
HIV Infections , Humans , HIV Infections/drug therapy , Virus Latency , CD4-Positive T-Lymphocytes
3.
Open Biol ; 13(1): 220200, 2023 01.
Article in English | MEDLINE | ID: mdl-36629019

ABSTRACT

Microglia are very sensitive to changes in their environment and respond through morphological, functional and metabolic adaptations. To depict the modifications microglia undergo under healthy and pathological conditions, we developed freely available image analysis scripts to quantify microglial morphologies and phagocytosis. Neuron-glia cultures, in which microglia express the reporter tdTomato, were exposed to excitotoxicity or excitotoxicity + inflammation and analysed 8 h later. Neuronal death was assessed by SYTOX staining of nuclear debris, and phagocytosis was measured through the engulfment of SYTOX+ particles by microglia. We identified seven morphologies: round, hypertrophic, fried egg, bipolar and three 'inflamed' morphologies. We generated a classifier able to separate them and assign one of the seven classes to each microglial cell in sample images. In control cultures, round and hypertrophic morphologies were predominant. Excitotoxicity had a limited effect on the composition of the populations. By contrast, excitotoxicity + inflammation promoted an enrichment in inflamed morphologies and increased the percentage of phagocytosing microglia. Our data suggest that inflammation is critical for promoting phenotypic changes in microglia. We also validated our tools for the segmentation of microglia in brain slices and performed morphometry with the obtained masks. Our method is versatile and useful for correlating microglial subpopulations and behaviour with environmental changes.
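A hedged sketch of the general pipeline the abstract outlines: morphometric features extracted from segmented cells and fed to a classifier that assigns one of the seven morphologies. Feature choices, class encoding and training data here are illustrative assumptions, not the authors' released scripts.

```python
import numpy as np
from skimage.measure import label, regionprops
from sklearn.ensemble import RandomForestClassifier

def morphology_features(mask):
    """Shape descriptors for each segmented microglial cell in a binary mask."""
    feats = []
    for region in regionprops(label(mask)):
        feats.append([
            region.area,
            region.perimeter,
            region.eccentricity,
            region.solidity,          # area / convex area, low for ramified cells
            region.major_axis_length / max(region.minor_axis_length, 1e-6),
        ])
    return np.array(feats)

# Hypothetical training set: feature rows and one of seven morphology labels
X_train = np.random.rand(70, 5)
y_train = np.repeat(np.arange(7), 10)   # round, hypertrophic, fried egg, bipolar, inflamed 1-3

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Assign a class to every cell in a new (here synthetic) binary image
new_mask = np.zeros((64, 64), dtype=bool)
new_mask[10:30, 10:25] = True
print(clf.predict(morphology_features(new_mask)))
```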


Subject(s)
Microglia , Phagocytosis , Humans , Microglia/metabolism , Inflammation/metabolism , Cell Death , Neurons/metabolism
4.
Mach Learn Appl ; 10, 2022 Dec 15.
Article in English | MEDLINE | ID: mdl-36578375

ABSTRACT

The cosmetic outcome of the breast after breast-conserving therapy is essential for evaluating breast cancer treatment and informing patients' choice of therapy. This prompts the need for objective and efficient methods of breast cosmesis evaluation. However, current evaluation methods rely on ratings from a small group of physicians or on semi-automated pipelines, making the process time-consuming and the results inconsistent. To address this, we propose: (1) a fully automatic machine learning breast cosmetic evaluation algorithm leveraging state-of-the-art deep learning for breast detection and contour annotation; (2) a novel set of breast cosmesis features; and (3) a new breast cosmetic dataset consisting of 3,000+ images from three clinical trials with human annotations of both the breast components and their cosmesis scores. We show that our fully automatic framework achieves performance comparable to the state of the art without the need for human input, leading to a more objective, low-cost and scalable solution for breast cosmetic evaluation in breast cancer treatment.
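A minimal, hypothetical sketch of the kind of cosmesis feature such a pipeline might compute once breast contours have been detected upstream; the symmetry measure and contour format are illustrative assumptions, not the paper's feature set.

```python
import numpy as np

def contour_area(points):
    """Polygon area of a closed contour given as an (N, 2) array (shoelace formula)."""
    x, y = points[:, 0], points[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def asymmetry_feature(left_contour, right_contour):
    """Relative area difference between the two detected breast contours (0 = perfectly symmetric)."""
    a_l, a_r = contour_area(left_contour), contour_area(right_contour)
    return abs(a_l - a_r) / max(a_l, a_r)

# Hypothetical contours produced by an upstream detection/segmentation model
left = np.array([[0, 0], [10, 0], [10, 12], [0, 12]], dtype=float)
right = np.array([[0, 0], [10, 0], [10, 10], [0, 10]], dtype=float)
print(f"area asymmetry: {asymmetry_feature(left, right):.2f}")
```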

5.
Int J Comput Vis ; 129(4): 942-959, 2021 Apr.
Article in English | MEDLINE | ID: mdl-34211258

ABSTRACT

The performance of computer vision algorithms is near or superior to that of humans on visual problems including object recognition (especially of fine-grained categories), segmentation, and 3D object reconstruction from 2D views. Humans are, however, capable of higher-level image analyses. A clear example, involving theory of mind, is our ability to determine whether a perceived behavior or action was performed intentionally or not. In this paper, we derive an algorithm that can infer whether the behavior of an agent in a scene is intentional or unintentional based on its 3D kinematics, using knowledge of self-propelled motion, Newtonian motion and their relationship. We show how the addition of this basic knowledge leads to a simple, unsupervised algorithm. To test the derived algorithm, we constructed three dedicated datasets, ranging from abstract geometric animations to realistic videos of agents performing intentional and non-intentional actions. Experiments on these datasets show that our algorithm can recognize whether an action is intentional or not, even without training data. Quantitatively, its performance is comparable to that of various supervised baselines, and qualitatively it produces sensible intentionality segmentations.
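A compact sketch of the idea the abstract describes: an unsupervised test of whether observed 3D kinematics deviate from passive Newtonian motion (here, ballistic motion under gravity). The threshold and the toy trajectories are illustrative assumptions.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity, m/s^2

def is_intentional(trajectory, dt, accel_threshold=2.0):
    """Label a 3D trajectory (T, 3) as intentional if its acceleration
    deviates from what passive, gravity-driven motion would predict."""
    velocity = np.gradient(trajectory, dt, axis=0)
    acceleration = np.gradient(velocity, dt, axis=0)
    # Self-propelled motion shows acceleration unexplained by gravity alone
    residual = np.linalg.norm(acceleration - G, axis=1)
    return residual.mean() > accel_threshold

dt = 0.1
t = np.arange(0, 2, dt)[:, None]

# Passive: projectile under gravity -> small residual -> unintentional
passive = np.hstack([2 * t, np.zeros_like(t), 5 * t - 0.5 * 9.81 * t ** 2])
# Self-propelled: agent keeps changing direction -> large residual -> intentional
active = np.hstack([np.sin(3 * t), np.cos(3 * t), 0.2 * t])

print(is_intentional(passive, dt), is_intentional(active, dt))
```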

6.
Article in English | MEDLINE | ID: mdl-33090835

ABSTRACT

The "spatial congruency bias" is a behavioral phenomenon where 2 objects presented sequentially are more likely to be judged as being the same object if they are presented in the same location (Golomb, Kupitz, & Thiemann, 2014), suggesting that irrelevant spatial location information may be bound to object representations. Here, we examine whether the spatial congruency bias extends to higher-level object judgments of facial identity and expression. On each trial, 2 real-world faces were sequentially presented in variable screen locations, and subjects were asked to make same-different judgments on the facial expression (Experiments 1-2) or facial identity (Experiment 3) of the stimuli. We observed a robust spatial congruency bias for judgments of facial identity, yet a more fragile one for judgments of facial expression. Subjects were more likely to judge 2 faces as displaying the same expression if they were presented in the same location (compared to in different locations), but only when the faces shared the same identity. On the other hand, a spatial congruency bias was found when subjects made judgments on facial identity, even across faces displaying different facial expressions. These findings suggest a possible difference between the binding of facial identity and facial expression to spatial location. (PsycInfo Database Record (c) 2020 APA, all rights reserved).

7.
Dev Psychol ; 55(9): 1965-1981, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31464498

ABSTRACT

Computer vision algorithms have made tremendous advances in recent years. We now have algorithms that can detect and recognize objects, faces, and even facial actions in still images and video sequences. This is wonderful news for researchers who need to code facial articulations in large data sets of images and videos, because this task is time-consuming and can only be completed by expert coders, making it very expensive. The availability of computer algorithms that can automatically code facial actions in extremely large data sets also opens the door to studies in psychology and neuroscience that were not previously possible, for example, studying the development of the production of facial expressions from infancy to adulthood within and across cultures. Unfortunately, there is a lack of methodological understanding of how these algorithms should and should not be used, and of how to select the most appropriate algorithm for each study. This article aims to address this gap in the literature. Specifically, we present several methodologies for use in hypothesis-based and exploratory studies, explain how to select the computer algorithms that best fit the requirements of a given experimental design, and detail how to evaluate whether the automatic annotations provided by existing algorithms are trustworthy. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Subject(s)
Algorithms , Emotions/physiology , Facial Expression , Machine Learning/standards , Research Design/standards , Child , Female , Humans , Male
8.
Psychol Sci Public Interest ; 20(1): 1-68, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31313636

ABSTRACT

It is commonly assumed that a person's emotional state can be readily inferred from his or her facial movements, typically called emotional expressions or facial expressions. This assumption influences legal judgments, policy decisions, national security protocols, and educational practices; guides the diagnosis and treatment of psychiatric illness, as well as the development of commercial applications; and pervades everyday social interactions as well as research in other scientific fields such as artificial intelligence, neuroscience, and computer vision. In this article, we survey examples of this widespread assumption, which we refer to as the common view, and we then examine the scientific evidence that tests this view, focusing on the six most popular emotion categories used by consumers of emotion research: anger, disgust, fear, happiness, sadness, and surprise. The available scientific evidence suggests that people do sometimes smile when happy, frown when sad, scowl when angry, and so on, as proposed by the common view, more than what would be expected by chance. Yet how people communicate anger, disgust, fear, happiness, sadness, and surprise varies substantially across cultures, situations, and even across people within a single situation. Furthermore, similar configurations of facial movements variably express instances of more than one emotion category. In fact, a given configuration of facial movements, such as a scowl, often communicates something other than an emotional state. Scientists agree that facial movements convey a range of information and are important for social communication, emotional or otherwise. But our review suggests an urgent need for research that examines how people actually move their faces to express emotions and other social information in the variety of contexts that make up everyday life, as well as careful study of the mechanisms by which people perceive instances of emotion in one another. We make specific research recommendations that will yield a more valid picture of how people move their faces to express emotions and how they infer emotional meaning from facial movements in situations of everyday life. This research is crucial to provide consumers of emotion research with the translational information they require.


Subject(s)
Emotions , Facial Expression , Facial Recognition , Movement , Female , Humans , Interpersonal Relations , Judgment , Male , Psychomotor Performance
9.
Proc Natl Acad Sci U S A ; 116(15): 7169-7171, 2019 04 09.
Article in English | MEDLINE | ID: mdl-30898883

Subject(s)
Emotions
10.
IEEE Trans Pattern Anal Mach Intell ; 41(12): 2835-2845, 2019 12.
Article in English | MEDLINE | ID: mdl-30188814

ABSTRACT

Color is a fundamental image feature of facial expressions. For example, when we furrow our eyebrows in anger, blood rushes in and turns some areas of the face red; when we go white in fear, blood drains from the face. Surprisingly, these image properties have not been exploited to recognize the facial action units (AUs) associated with these expressions. Herein, we present the first system to recognize AUs and their intensities using these functional color changes. These color features are shown to be robust to changes in identity, gender, race, ethnicity, and skin color. Specifically, we identify the chromaticity changes defining the transition of an AU from inactive to active and use an innovative Gabor transform-based algorithm to gain invariance to the timing of these changes. Because these image changes are given by functions rather than vectors, we use functional classifiers to identify the most discriminant color features of an AU and its intensities. We demonstrate that, using these discriminant color features, one can achieve results superior to those of the state of the art. Finally, we define an algorithm that allows us to use the learned functional color representation in still images. This is done by learning the mapping between images and the identified functional color features in videos. Our algorithm runs in real time, i.e., 30 frames per second per CPU thread.
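A hedged sketch of the feature idea described here: a per-region chromaticity time series filtered with a 1D Gabor kernel to gain some invariance to when the color change occurs. Region definition, kernel parameters and the downstream classifier are assumptions, not the published system.

```python
import numpy as np

def chromaticity(rgb):
    """Convert an (T, 3) RGB time series to chromaticity (r, g), discarding intensity."""
    s = rgb.sum(axis=1, keepdims=True) + 1e-8
    return rgb[:, :2] / s

def gabor_kernel_1d(length=21, sigma=4.0, freq=0.1):
    """Real 1D Gabor kernel: a cosine carrier under a Gaussian envelope."""
    t = np.arange(length) - length // 2
    return np.exp(-t ** 2 / (2 * sigma ** 2)) * np.cos(2 * np.pi * freq * t)

def color_au_feature(rgb_series):
    """Energy of the Gabor-filtered chromaticity signal for one facial region."""
    chroma = chromaticity(rgb_series)
    kernel = gabor_kernel_1d()
    responses = [np.convolve(chroma[:, c] - chroma[:, c].mean(), kernel, mode="same")
                 for c in range(2)]
    return np.array([np.abs(r).mean() for r in responses])

# Hypothetical cheek-region color over 100 frames: redness rises mid-sequence (AU activation)
frames = np.tile([120.0, 90.0, 80.0], (100, 1))
frames[40:70, 0] += 25.0   # transient increase in the red channel
print(color_au_feature(frames))
```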


Subject(s)
Face , Image Processing, Computer-Assisted/methods , Machine Learning , Algorithms , Color , Emotions/classification , Emotions/physiology , Face/anatomy & histology , Face/diagnostic imaging , Face/physiology , Humans , Skin Pigmentation/physiology , Video Recording
11.
Comput Vis ECCV ; 11214: 835-851, 2018 Sep.
Article in English | MEDLINE | ID: mdl-30465044

ABSTRACT

Recent advances in Generative Adversarial Networks (GANs) have shown impressive results for the task of facial expression synthesis. The most successful architecture is StarGAN [4], which conditions the GAN's generation process on images of a specific domain, namely a set of images of persons sharing the same expression. While effective, this approach can only generate a discrete number of expressions, determined by the content of the dataset. To address this limitation, in this paper we introduce a novel GAN conditioning scheme based on Action Unit (AU) annotations, which describe, in a continuous manifold, the anatomical facial movements that define a human expression. Our approach allows controlling the magnitude of activation of each AU and combining several of them. Additionally, we propose a fully unsupervised strategy to train the model, which only requires images annotated with their activated AUs, and we exploit attention mechanisms that make our network robust to changing backgrounds and lighting conditions. Extensive evaluation shows that our approach goes beyond competing conditional generators, both in its capability to synthesize a much wider range of expressions governed by anatomically feasible muscle movements and in its capacity to deal with images in the wild.
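A minimal PyTorch sketch of the conditioning scheme described in the abstract: a continuous AU-activation vector broadcast to extra input channels of an image-to-image generator. The toy architecture below is a stand-in for illustration, not the paper's model.

```python
import torch
import torch.nn as nn

class AUConditionedGenerator(nn.Module):
    """Toy image-to-image generator conditioned on a continuous AU vector."""
    def __init__(self, num_aus=17):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + num_aus, 64, kernel_size=7, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, kernel_size=7, padding=3),
            nn.Tanh(),
        )

    def forward(self, image, au_vector):
        # Broadcast each AU activation to a full-resolution channel and concatenate
        b, _, h, w = image.shape
        au_maps = au_vector.view(b, -1, 1, 1).expand(b, au_vector.shape[1], h, w)
        return self.net(torch.cat([image, au_maps], dim=1))

gen = AUConditionedGenerator(num_aus=17)
face = torch.randn(1, 3, 128, 128)      # input face image
target_aus = torch.rand(1, 17)          # desired AU activations in [0, 1]
synthesized = gen(face, target_aus)
print(synthesized.shape)                # torch.Size([1, 3, 128, 128])
```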

12.
IEEE Trans Pattern Anal Mach Intell ; 40(12): 3059-3066, 2018 12.
Article in English | MEDLINE | ID: mdl-29990100

ABSTRACT

Three-dimensional shape reconstruction from 2D landmark points in a single image is a hallmark of human vision, but it has proven difficult for computer vision algorithms. We define a feed-forward deep neural network algorithm that can reconstruct 3D shapes from 2D landmark points almost perfectly (i.e., with extremely small reconstruction errors), even when these 2D landmarks come from a single image. Our experimental results show an improvement of up to two-fold over state-of-the-art computer vision algorithms; the 3D shape reconstruction error (measured as the Procrustes distance between the reconstructed shape and the ground truth) of human faces is , cars is .0022, human bodies is .022, and highly deformable flags is .0004. Our algorithm was also a top performer at the 2016 3D Face Alignment in the Wild Challenge competition (held in conjunction with the European Conference on Computer Vision, ECCV), which required the reconstruction of 3D face shape from a single image. The derived algorithm can be trained in a couple of hours, and testing runs at more than 1,000 frames/s on an i7 desktop. We also present an innovative data augmentation approach that allows us to train the system efficiently with a small number of samples. The system is also robust to noise (e.g., imprecise landmark points) and missing data (e.g., occluded or undetected landmark points).
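A short sketch of the error metric cited above: the Procrustes distance between a reconstructed 3D shape and the ground truth, computed here with SciPy on toy landmark sets.

```python
import numpy as np
from scipy.spatial import procrustes

# Ground-truth 3D landmarks (N, 3) and a hypothetical reconstruction of the same shape
ground_truth = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                         [0.5, 0.5, 1.0]], dtype=float)
reconstruction = ground_truth + np.random.normal(scale=0.01, size=ground_truth.shape)

# procrustes() removes translation, scale and rotation, then reports the
# residual sum of squared differences ("disparity") between the aligned shapes
_, _, disparity = procrustes(ground_truth, reconstruction)
print(f"Procrustes disparity: {disparity:.5f}")
```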


Subject(s)
Algorithms , Imaging, Three-Dimensional/methods , Neural Networks, Computer , Databases, Factual , Face/anatomy & histology , Humans , Video Recording
13.
Proc Natl Acad Sci U S A ; 115(14): 3581-3586, 2018 04 03.
Article in English | MEDLINE | ID: mdl-29555780

ABSTRACT

Facial expressions of emotion in humans are believed to be produced by contracting one's facial muscles, generally called action units. However, the surface of the face is also innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow observers to successfully transmit and visually interpret emotion even in the absence of facial muscle activation. To study this hypothesis, we address the following two questions. Are observable facial colors consistent within and differential between emotion categories and positive vs. negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism of the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis. We show that people successfully decode emotion using these color features, even in the absence of any facial muscle activation. We also demonstrate that this color signal is independent from that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion where facial color is an effective mechanism to visually transmit and decode emotion.


Subject(s)
Color , Emotions/physiology , Face/physiology , Facial Expression , Facial Muscles/physiology , Pattern Recognition, Visual , Adult , Female , Humans , Male , Young Adult
14.
Article in English | MEDLINE | ID: mdl-31244515

ABSTRACT

We present a scalable, weakly supervised clustering approach to learn facial action units (AUs) from large, freely available web images. Unlike most existing methods (e.g., CNNs) that rely on fully annotated data, our method exploits web images with inaccurate annotations. Specifically, we derive a weakly supervised spectral algorithm that learns an embedding space to couple image appearance and semantics. The algorithm has an efficient gradient update and scales up to large quantities of images with a stochastic extension. With the learned embedding space, we adopt rank-order clustering to identify groups of visually and semantically similar images, and we re-annotate these groups for training AU classifiers. Evaluation on the 1 million-image EmotioNet dataset demonstrates the effectiveness of our approach: (1) our learned annotations reach on average 91.3% agreement with human annotations on 7 common AUs, (2) classifiers trained with re-annotated images perform comparably to, and sometimes even better than, their supervised CNN-based counterparts, and (3) our method offers intuitive outlier/noise pruning instead of forcing an annotation onto every image. Code is available.

15.
Curr Opin Psychol ; 17: 27-33, 2017 10.
Article in English | MEDLINE | ID: mdl-28950969

ABSTRACT

Facial expressions of emotion are produced by contracting and relaxing the muscles of the face. I hypothesize that the human visual system solves the inverse problem of production; that is, to interpret emotion, the visual system attempts to identify the underlying muscle activations. I show converging computational, behavioral and imaging evidence in favor of this hypothesis. I detail the computations performed by the human visual system to achieve the decoding of these facial actions and identify a brain region where these computations likely take place. The resulting computational model explains how humans readily classify emotions into categories as well as continuous variables. This model also predicts the existence of a large number of previously unknown facial expressions, including compound emotions, affect attributes and mental states that are regularly used by people. I provide evidence in favor of this prediction.


Subject(s)
Brain/physiology , Emotions , Facial Recognition/physiology , Brain/diagnostic imaging , Computer Simulation , Emotions/physiology , Facial Expression , Humans , Models, Neurological , Models, Psychological
16.
Curr Dir Psychol Sci ; 26(3): 263-269, 2017 Jun.
Article in English | MEDLINE | ID: mdl-29307959

ABSTRACT

Faces are one of the most important means of communication in humans. For example, a short glance at a person's face provides information on identity and emotional state. What are the computations the brain uses to solve these problems so accurately and seemingly effortlessly? This article summarizes current research on computational modeling, a technique used to answer this question. Specifically, my research studies the hypothesis that this algorithm is tasked to solve the inverse problem of production. For example, to recognize identity, our brain needs to identify shape and shading image features that are invariant to facial expression, pose and illumination. Similarly, to recognize emotion, the brain needs to identify shape and shading features that are invariant to identity, pose and illumination. If one defines the physics equations that render an image under different identities, expressions, poses and illuminations, then gaining invariance to these factors is readily resolved by computing the inverse of this rendering function. I describe our current understanding of the algorithms used by our brains to resolve this inverse problem. I also discuss how these results are driving research in computer vision to design computer systems that are as accurate, robust and efficient as humans.

17.
Int J Colorectal Dis ; 32(2): 255-264, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27757541

ABSTRACT

PURPOSE: Patients with locally advanced rectal cancer and a pathologic complete response to neoadjuvant chemoradiation therapy have lower rates of recurrence than those who do not achieve a complete response. However, the influence of the pathologic response on surgical complications and survival remains unclear. This study aimed to investigate the influence of neoadjuvant therapy for rectal cancer on postoperative morbidity and long-term survival. METHODS: This was a comparative study of consecutive patients who underwent laparoscopic total mesorectal excision for rectal cancer in two European tertiary hospitals between 2004 and 2014. Patients with and without a pathologic complete response were compared in terms of postoperative morbidity, mortality, and survival. RESULTS: Fifty patients with a complete response (ypT0N0) were compared with 141 patients with a non-complete response. No group differences were observed in postoperative mortality or morbidity rates. The median follow-up time was 57 months (range 1-121). Over this period, 11 (5.8%) patients, all of whom were in the non-complete response group, exhibited local recurrence. The 5-year overall survival and disease-free survival were significantly better in the complete response group: 92.5 vs. 75.3% (p = 0.004) and 89 vs. 73.4% (p = 0.002), respectively. CONCLUSIONS: The postoperative complication rate after laparoscopic total mesorectal excision is not associated with the grade of pathologic response to neoadjuvant chemoradiation therapy.


Subject(s)
Chemoradiotherapy , Laparoscopy , Neoadjuvant Therapy , Rectal Neoplasms/pathology , Rectal Neoplasms/therapy , Adult , Aged , Aged, 80 and over , Disease-Free Survival , Female , Humans , Male , Middle Aged , Morbidity , Neoplasm Staging , Postoperative Care , Rectal Neoplasms/epidemiology , Treatment Outcome
18.
J Neurosci ; 36(16): 4434-42, 2016 Apr 20.
Article in English | MEDLINE | ID: mdl-27098688

ABSTRACT

By combining different facial muscle actions, called action units, humans can produce an extraordinarily large number of facial expressions. Computational models and studies in cognitive science and social psychology have long hypothesized that the brain needs to visually interpret these action units to understand other people's actions and intentions. Surprisingly, no studies have identified the neural basis of the visual recognition of these action units. Here, using functional magnetic resonance imaging and an innovative machine learning analysis approach, we identify a consistent and differential coding of action units in the brain. Crucially, in a brain region thought to be responsible for the processing of changeable aspects of the face, multivoxel pattern analysis could decode the presence of specific action units in an image. This coding was found to be consistent across people, facilitating the estimation of the perceived action units on participants not used to train the multivoxel decoder. Furthermore, this coding of action units was identified when participants attended to the emotion category of the facial expression, suggesting an interaction between the visual analysis of action units and emotion categorization as predicted by the computational models mentioned above. These results provide the first evidence for a representation of action units in the brain and suggest a mechanism for the analysis of large numbers of facial actions and a loss of this capacity in psychopathologies. SIGNIFICANCE STATEMENT: Computational models and studies in cognitive and social psychology propound that visual recognition of facial expressions requires an intermediate step to identify visible facial changes caused by the movement of specific facial muscles. Because facial expressions are indeed created by moving one's facial muscles, it is logical to assume that our visual system solves this inverse problem. Here, using an innovative machine learning method and neuroimaging data, we identify for the first time a brain region responsible for the recognition of actions associated with specific facial muscles. Furthermore, this representation is preserved across subjects. Our machine learning analysis does not require mapping the data to a standard brain and may serve as an alternative to hyperalignment.
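A hedged sketch of the kind of multivoxel pattern analysis referred to here: a linear decoder trained on voxel patterns to predict whether a specific AU is present in the viewed image, with participants held out to test cross-subject generalization. Data shapes and classifier choice are assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, GroupKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical data: 200 trials x 500 voxels from a face-sensitive ROI
X = rng.normal(size=(200, 500))
y = rng.integers(0, 2, size=200)          # AU present (1) or absent (0) in the stimulus
subjects = np.repeat(np.arange(10), 20)   # 10 participants, 20 trials each

# Leaving whole participants out tests whether the AU code generalizes across people
decoder = make_pipeline(StandardScaler(), LinearSVC(dual=False))
scores = cross_val_score(decoder, X, y, cv=GroupKFold(n_splits=5), groups=subjects)
print(f"cross-subject decoding accuracy: {scores.mean():.2f}")
```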


Subject(s)
Brain/metabolism , Facial Expression , Facial Recognition/physiology , Photic Stimulation/methods , Adult , Brain Mapping/methods , Female , Humans , Magnetic Resonance Imaging/methods , Male
19.
Cognition ; 150: 77-84, 2016 May.
Article in English | MEDLINE | ID: mdl-26872248

ABSTRACT

Facial expressions of emotion are thought to have evolved from the development of facial muscles used in sensory regulation and later adapted to express moral judgment. Negative moral judgment includes the expressions of anger, disgust and contempt. Here, we study the hypothesis that these facial expressions of negative moral judgment have further evolved into a facial expression of negation regularly used as a grammatical marker in human language. Specifically, we show that people from different cultures expressing negation use the same facial muscles as those employed to express negative moral judgment. We then show that this nonverbal signal is used as a co-articulator in speech and that, in American Sign Language, it has been grammaticalized as a non-manual marker. Furthermore, this facial expression of negation exhibits the theta oscillation (3-8 Hz) universally seen in syllable and mouthing production in speech and signing. These results provide evidence for the hypothesis that some components of human language have evolved from facial expressions of emotion, and suggest an evolutionary route for the emergence of grammatical markers.


Subject(s)
Emotions/physiology , Facial Expression , Judgment , Photic Stimulation/methods , Adolescent , Adult , Female , Humans , Male , Young Adult
20.
IEEE Trans Pattern Anal Mach Intell ; 38(8): 1640-50, 2016 08.
Article in English | MEDLINE | ID: mdl-26415154

ABSTRACT

Automatic behavior analysis from video is a major topic in many areas of research, including computer vision, multimedia, robotics, biology, cognitive science, social psychology, psychiatry, and linguistics. Two major problems are of interest when analyzing behavior. First, we wish to automatically categorize observed behaviors into a discrete set of classes (i.e., classification). For example, to determine word production from video sequences in sign language. Second, we wish to understand the relevance of each behavioral feature in achieving this classification (i.e., decoding). For instance, to know which behavior variables are used to discriminate between the words apple and onion in American Sign Language (ASL). The present paper proposes to model behavior using a labeled graph, where the nodes define behavioral features and the edges are labels specifying their order (e.g., before, overlaps, start). In this approach, classification reduces to a simple labeled graph matching. Unfortunately, the complexity of labeled graph matching grows exponentially with the number of categories we wish to represent. Here, we derive a graph kernel to quickly and accurately compute this graph similarity. This approach is very general and can be plugged into any kernel-based classifier. Specifically, we derive a Labeled Graph Support Vector Machine (LGSVM) and a Labeled Graph Logistic Regressor (LGLR) that can be readily employed to discriminate between many actions (e.g., sign language concepts). The derived approach can be readily used for decoding too, yielding invaluable information for the understanding of a problem (e.g., to know how to teach a sign language). The derived algorithms allow us to achieve higher accuracy results than those of state-of-the-art algorithms in a fraction of the time. We show experimental results on a variety of problems and datasets, including multimodal data.
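A small sketch of the general idea with a deliberately simplified kernel: each labeled graph is summarized by a histogram of (node label, edge label, node label) triples, and the histogram dot product serves as a precomputed kernel for an SVM. The graph encoding and kernel are illustrative assumptions, not the paper's LGSVM or LGLR.

```python
import numpy as np
from collections import Counter
from sklearn.svm import SVC

def triple_histogram(graph):
    """Count (node label, edge label, node label) triples in a labeled graph.
    graph = {"nodes": {id: label}, "edges": [(i, j, edge_label), ...]}."""
    nodes = graph["nodes"]
    return Counter((nodes[i], lab, nodes[j]) for i, j, lab in graph["edges"])

def graph_kernel(g1, g2):
    """Dot product of triple histograms -> a simple, valid labeled-graph kernel."""
    h1, h2 = triple_histogram(g1), triple_histogram(g2)
    return float(sum(h1[t] * h2[t] for t in h1.keys() & h2.keys()))

# Toy behavioural graphs: nodes are behavioural features, edge labels are temporal relations
g_apple = {"nodes": {0: "hand_up", 1: "head_tilt"}, "edges": [(0, 1, "before")]}
g_onion = {"nodes": {0: "hand_up", 1: "brow_raise"}, "edges": [(0, 1, "overlaps")]}
graphs, labels = [g_apple, g_onion, g_apple, g_onion], [0, 1, 0, 1]

# Precomputed Gram matrix lets the kernel plug into any kernel-based classifier
gram = np.array([[graph_kernel(a, b) for b in graphs] for a in graphs])
clf = SVC(kernel="precomputed").fit(gram, labels)
print(clf.predict(gram))
```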


Subject(s)
Algorithms , Pattern Recognition, Automated , Support Vector Machine , Humans , Video Recording