Results 1 - 4 of 4
1.
Neurology; 87(20): 2146-2153, 2016 Nov 15.
Article in English | MEDLINE | ID: mdl-27770067

ABSTRACT

OBJECTIVE: To compare clinical rating scales of blepharospasm severity with involuntary eye closures measured automatically from patient videos with contemporary facial expression software. METHODS: We evaluated video recordings of a standardized clinical examination from 50 patients with blepharospasm in the Dystonia Coalition's Natural History and Biorepository study. Eye closures were measured on a frame-by-frame basis with software known as the Computer Expression Recognition Toolbox (CERT). The proportion of eye closure time was compared with 3 commonly used clinical rating scales: the Burke-Fahn-Marsden Dystonia Rating Scale, Global Dystonia Rating Scale, and Jankovic Rating Scale. RESULTS: CERT was reliably able to find the face, and its eye closure measure was correlated with all of the clinical severity ratings (Spearman ρ = 0.56, 0.52, and 0.56 for the Burke-Fahn-Marsden Dystonia Rating Scale, Global Dystonia Rating Scale, and Jankovic Rating Scale, respectively, all p < 0.0001). CONCLUSIONS: The results demonstrate that CERT has convergent validity with conventional clinical rating scales and can be used with video recordings to measure blepharospasm symptom severity automatically and objectively. Unlike EMG and kinematics, CERT requires only conventional video recordings and can therefore be more easily adopted for use in the clinic.
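The analysis above reduces a per-frame eye-closure measure to a proportion of closure time and compares it with clinical ratings via Spearman correlation. A minimal Python sketch of that computation follows; CERT itself is not invoked, and the frame scores, the threshold, and the ratings are synthetic stand-ins.

```python
# Minimal sketch: proportion of eye-closure time vs. clinical ratings.
# The frame scores, the 0.5 threshold, and the ratings are synthetic
# stand-ins, not CERT output.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

def eye_closure_proportion(frame_scores, threshold=0.5):
    """Fraction of video frames scored as eyes-closed."""
    return float(np.mean(np.asarray(frame_scores) > threshold))

n_patients = 50
closure_props = np.array(
    [eye_closure_proportion(rng.random(1000)) for _ in range(n_patients)]
)
ratings = rng.integers(0, 5, size=n_patients)  # e.g., a 0-4 severity item

rho, p = spearmanr(closure_props, ratings)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```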


Subjects
Blepharospasm/diagnostic imaging; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Severity of Illness Index; Software; Video Recording/methods; Adult; Aged; Dystonia/diagnostic imaging; Facial Expression; Female; Humans; Male; Middle Aged
2.
JMIR Mhealth Uhealth; 3(4): e108, 2015 Dec 31.
Article in English | MEDLINE | ID: mdl-26721413

ABSTRACT

BACKGROUND: Chronic diseases such as diabetes require high levels of medication adherence and patient self-management for optimal health outcomes. A novel sensing platform, the Digital Health Feedback System (Proteus Digital Health, Redwood City, CA), can for the first time detect medication ingestion events and physiological measures simultaneously, using an edible sensor, a personal monitor patch, and a paired mobile device. The Digital Health Feedback System (DHFS) generates a large amount of data. Visual analytics of this rich dataset may provide insights into longitudinal patterns of medication adherence in the natural setting and reveal previously unknown relationships between medication adherence and physiological measures. OBJECTIVE: Our aim was to use modern methods of visual analytics to represent continuous and discrete data from the DHFS, plotting multiple different data types simultaneously, to evaluate the potential of the DHFS to capture longitudinal patterns of medication-taking behavior and self-management in individual patients with type II diabetes. METHODS: Visualizations were generated using time-domain methods from oral metformin adherence and physiological data obtained through DHFS use in 5 patients with type II diabetes over 37-42 days. The DHFS captured at-home metformin adherence, heart rate, activity, and sleep/rest. A mobile glucose monitor captured glucose testing and level (mg/dl). Algorithms were developed to analyze data over varying time periods: across the entire study, daily, and weekly. Following visualization analysis, correlations between sleep/rest and medication ingestion were calculated across all subjects. RESULTS: A total of 197 subject-days, encompassing 141,840 data events, were analyzed. Individual continuous patch use varied between 87% and 98%. On average, the cohort took 78% (SD 12) of prescribed medication and took 77% (SD 26) within the prescribed ±2-hour time window. Average activity levels per subject ranged from 4000 to 12,000 steps per day. The combination of activity level and heart rate indicated different levels of cardiovascular fitness between subjects. Visualizations over the entire study captured the longitudinal pattern of missed doses (the majority of which took place in the evening), the timing of ingestions in individual subjects, and the range of medication ingestion timing, which varied from 1.5-2.4 hours (Subject 3) to 11 hours (Subject 2). Individual morning self-management patterns over the study period were obtained by combining the times of waking, metformin ingestion, and glucose measurement. Visualizations combining multiple data streams over a 24-hour period captured patterns of broad daily events: when subjects rose in the morning, tested their blood glucose, took their medications, went to bed, hours of sleep/rest, and level of activity during the day. Visualizations identified highly consistent daily patterns in Subject 3, the most adherent participant, and erratic daily patterns, including sleep/rest, in Subject 2, the least adherent subject. Correlation between sleep/rest and medication ingestion in each individual subject was evaluated. Subjects 2 and 4 showed a correlation between amount of sleep/rest over a 24-hour period and medication-taking the following day (Subject 2: r=.47, P<.02; Subject 4: r=.35, P<.05). In Subject 2, sleep/rest disruptions during the night were highly correlated (r=.47, P<.009) with missed doses the following day.
CONCLUSIONS: Visualizations integrating medication ingestion and physiological data from the DHFS over varying time intervals captured detailed individual longitudinal patterns of medication adherence and self-management in the natural setting. Visualizing multiple data streams simultaneously, as a data-rich representation, revealed information that would not have been apparent from plotting each data stream individually. Such analyses provide data far beyond traditional adherence summary statistics and may form the foundation of future personalized predictive interventions to drive longitudinal adherence and support optimal self-management in chronic diseases such as diabetes.
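Two of the summary statistics above (adherence within the prescribed ±2-hour window, and the correlation between nightly sleep/rest and next-day medication-taking) can be sketched in a few lines of pandas. The column names, timestamps, and values below are hypothetical, not the DHFS schema.

```python
# Hypothetical data layout: one row per prescribed dose, NaT = missed.
import pandas as pd
from scipy.stats import pearsonr

doses = pd.DataFrame({
    "prescribed": pd.to_datetime(["2015-06-01 08:00", "2015-06-02 08:00",
                                  "2015-06-03 08:00", "2015-06-04 08:00"]),
    "ingested": pd.to_datetime(["2015-06-01 08:40", "2015-06-02 11:30",
                                "2015-06-03 07:50", None]),
})

taken = doses["ingested"].notna()
on_time = (doses["ingested"] - doses["prescribed"]).abs() <= pd.Timedelta(hours=2)
print(f"adherence {taken.mean():.0%}, within +/-2 h window {on_time.mean():.0%}")

# Sleep/rest vs. next-day medication-taking, one row per day (toy values).
daily = pd.DataFrame({
    "sleep_hours": [7.5, 4.0, 6.8, 5.2, 8.1, 3.9, 7.0],
    "dose_taken_next_day": [1, 0, 1, 1, 1, 0, 1],
})
r, p = pearsonr(daily["sleep_hours"], daily["dose_taken_next_day"])
print(f"r = {r:.2f}, p = {p:.3f}")
```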

3.
Curr Biol; 24(7): 738-43, 2014 Mar 31.
Article in English | MEDLINE | ID: mdl-24656830

ABSTRACT

In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1-3]. Two motor pathways control facial movement [4-7]: a subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions, and a cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions they do not actually experience, and the simulation is successful enough to deceive most observers [8-11]. However, machine vision may be able to distinguish deceptive from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here, we show that human observers could not discriminate real from faked expressions of pain better than chance, and that even after training, their accuracy improved only to a modest 55%. By contrast, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system's superiority is attributable to its ability to differentiate the dynamics of genuine expressions from those of faked expressions. Thus, by revealing the dynamics of facial action through machine vision, our approach has the potential to elucidate behavioral fingerprints of the neural control systems involved in emotional signaling.
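The key idea, that genuine and faked expressions differ in movement dynamics rather than in static appearance, can be illustrated with a toy pipeline: summarize each per-frame intensity trace with simple temporal features and train a classifier on those summaries. This is not the authors' system; the traces are synthetic and the feature set is illustrative.

```python
# Toy version of dynamics-based classification. The two classes are
# synthetic traces given different temporal smoothness purely so the
# dynamics features carry signal; real facial-action traces would come
# from an expression-measurement tool.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

def dynamics_features(trace):
    """Summarize one intensity trace by its temporal dynamics."""
    velocity = np.diff(trace)
    return [trace.max(), np.abs(velocity).max(), velocity.std()]

def smooth(x, w):
    return np.convolve(x, np.ones(w) / w, mode="same")

class_a = [smooth(rng.random(120), 3) for _ in range(40)]   # jerkier dynamics
class_b = [smooth(rng.random(120), 15) for _ in range(40)]  # smoother dynamics

X = np.array([dynamics_features(t) for t in class_a + class_b])
y = np.array([1] * 40 + [0] * 40)
print("CV accuracy:", cross_val_score(LogisticRegression(), X, y, cv=5).mean())
```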


Subjects
Artificial Intelligence; Facial Expression; Pain; Pattern Recognition, Automated; Humans
4.
IEEE Trans Pattern Anal Mach Intell; 31(11): 2106-11, 2009 Nov.
Article in English | MEDLINE | ID: mdl-19762937

ABSTRACT

Machine learning approaches have produced some of the highest reported performances for facial expression recognition. To date, however, nearly all automatic facial expression recognition research has focused on optimizing performance on a few databases collected under controlled lighting conditions from a relatively small number of subjects. This paper explores whether current machine learning methods can be used to develop an expression recognition system that operates reliably in more realistic conditions. We examine the necessary characteristics of the training data set, image registration, feature representation, and machine learning algorithms. A new database, GENKI, is presented; it contains pictures, photographed by the subjects themselves, of thousands of different people in many real-world imaging conditions. Results suggest that human-level expression recognition accuracy in real-life illumination conditions is achievable with machine learning technology. However, the data sets currently used in the automatic expression recognition literature to evaluate progress may be overly constrained and could lead research toward locally optimal algorithmic solutions.
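As a minimal illustration of the register-then-classify pipeline the paper studies (face registration followed by expression classification), the sketch below uses OpenCV's bundled Haar cascades rather than a GENKI-trained detector; the image path is a placeholder.

```python
# Register the face, then classify the expression inside the face crop,
# using OpenCV's stock Haar cascades. "photo.jpg" is a placeholder path.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
smile_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_smile.xml")

img = cv2.imread("photo.jpg")
if img is None:
    raise SystemExit("supply an image path")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Step 1: registration, i.e., locate the face region.
for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
    roi = gray[y:y + h, x:x + w]
    # Step 2: classification within the registered crop. The scale and
    # neighbor settings below are common defaults, not tuned values.
    smiles = smile_cascade.detectMultiScale(roi, scaleFactor=1.7, minNeighbors=20)
    print("smiling" if len(smiles) else "not smiling")
```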


Subjects
Artificial Intelligence; Biometry/methods; Face/anatomy & histology; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Smile; Subtraction Technique; Algorithms; Computer Simulation; Humans; Image Enhancement/methods; Models, Biological; Reproducibility of Results; Sensitivity and Specificity