Results 1 - 4 of 4
1.
J Oleo Sci; 68(4): 369-378, 2019 Apr 01.
Article in English | MEDLINE | ID: mdl-30867391

ABSTRACT

Proton nuclear magnetic resonance (NMR) is useful for the analysis of biological samples such as serum. Free induction decays (FIDs) are the NMR signals that follow a radio-frequency pulse applied at the resonance frequency, and the short-time Fourier transform (STFT) is a basic method for time-frequency analysis. The purpose of this study was to ascertain whether the STFT of FIDs enables sensitive detection of changes and differences in serum properties. FIDs were obtained from serum collected from healthy male volunteers: young adults ≤ 40 years of age and seniors ≥ 65 years of age. Three analyses were applied to the FIDs: temporal changes in the instantaneous amplitude (time domain), the fast Fourier transform (frequency domain), and the STFT. The STFT-based spectrogram represented complex frequency components that changed dynamically over time, indicating that the spectrogram enables visualization of the features of an FID. Furthermore, the results of a partial least-squares discriminant analysis demonstrated that the STFT was superior to the other two methods for discriminating between serum from younger and older subjects. In conclusion, the STFT of FIDs obtained from proton NMR measurements was useful for evaluating similarities and dissimilarities among the FIDs of serum samples.
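A minimal sketch of the core step described above, assuming SciPy and a simulated FID rather than measured serum data; the sampling rate, decay constants, and window length are illustrative choices, not values from the paper.

```python
# Sketch: STFT spectrogram of a simulated free induction decay (FID).
import numpy as np
from scipy.signal import stft

fs = 4000.0                          # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)      # 1 s acquisition window

# Simulated FID: a sum of exponentially decaying sinusoids, the typical
# shape of a proton NMR signal after a radio-frequency pulse.
fid = (np.exp(-t / 0.3) * np.cos(2 * np.pi * 400 * t)
       + 0.5 * np.exp(-t / 0.1) * np.cos(2 * np.pi * 900 * t))

# Short-time Fourier transform; the window length trades time resolution
# against frequency resolution.
f, tau, Z = stft(fid, fs=fs, nperseg=256, noverlap=192)
spectrogram = np.abs(Z)              # time-frequency magnitudes, e.g. as input
                                     # features for a PLS discriminant analysis
print(spectrogram.shape)             # (frequency bins, time frames)
```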


Subjects
Diagnostic Tests, Routine/methods; Fourier Analysis; Magnetic Resonance Spectroscopy/methods; Protons; Serum; Adult; Humans; Male; Middle Aged; Serum/chemistry; Serum Albumin
2.
IEEE Trans Pattern Anal Mach Intell; 28(5): 738-52, 2006 May.
Article in English | MEDLINE | ID: mdl-16640260

ABSTRACT

We propose a system that is capable of detailed analysis of eye region images in terms of the position of the iris, degree of eyelid opening, and the shape, complexity, and texture of the eyelids. The system uses a generative eye region model that parameterizes the fine structure and motion of an eye. The structure parameters represent structural individuality of the eye, including the size and color of the iris, the width, boldness, and complexity of the eyelids, the width of the bulge below the eye, and the width of the illumination reflection on the bulge. The motion parameters represent movement of the eye, including the up-down position of the upper and lower eyelids and the 2D position of the iris. The system first registers the eye model to the input in a particular frame and individualizes it by adjusting the structure parameters. The system then tracks motion of the eye by estimating the motion parameters across the entire image sequence. Combined with image stabilization to compensate for appearance changes due to head motion, the system achieves accurate registration and motion recovery of eyes.
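As a rough illustration only (not the authors' implementation), the parameterization described above can be sketched as two containers: per-individual structure parameters fit once, and per-frame motion parameters re-estimated while tracking. All field names and the fit_motion callable are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class EyeStructure:
    """Structural individuality of an eye (fit once, then held fixed)."""
    iris_radius: float        # size of the iris
    iris_color: tuple         # mean color of the iris
    eyelid_width: float       # width of the eyelid contour
    eyelid_boldness: float    # edge strength of the eyelid
    eyelid_complexity: int    # e.g. single vs. double eyelid fold
    bulge_width: float        # width of the bulge below the eye
    reflection_width: float   # width of the illumination reflection on the bulge

@dataclass
class EyeMotion:
    """Per-frame motion parameters (estimated by tracking)."""
    upper_lid_pos: float      # up-down position of the upper eyelid
    lower_lid_pos: float      # up-down position of the lower eyelid
    iris_x: float             # 2D iris position, horizontal
    iris_y: float             # 2D iris position, vertical

def track_sequence(frames, structure, init_motion, fit_motion):
    """Sketch of the tracking loop: structure stays fixed after initialization,
    while motion parameters are re-estimated frame by frame. fit_motion stands
    in for the model-fitting optimizer, which is not reproduced here."""
    motions, motion = [], init_motion
    for frame in frames:
        motion = fit_motion(frame, structure, motion)
        motions.append(motion)
    return motions
```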


Subjects
Algorithms; Artificial Intelligence; Eye/anatomy & histology; Face/anatomy & histology; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Models, Anatomic; Pattern Recognition, Automated/methods; Computer Simulation; Humans; Information Storage and Retrieval/methods; Models, Biological; Photography/methods; Subtraction Technique
3.
Behav Res Methods Instrum Comput; 35(3): 420-8, 2003 Aug.
Article in English | MEDLINE | ID: mdl-14587550

ABSTRACT

Previous research in automatic facial expression recognition has been limited to recognizing gross expression categories (e.g., joy or anger) in posed facial behavior under well-controlled conditions (e.g., frontal pose and minimal out-of-plane head motion). We have developed a system that detects a discrete and important facial action (e.g., eye blinking) in spontaneously occurring facial behavior measured with a nonfrontal pose, moderate out-of-plane head motion, and occlusion. The system recovers three-dimensional motion parameters, stabilizes facial regions, extracts motion and appearance information, and recognizes discrete facial actions in spontaneous facial behavior. We tested the system on video data from a two-person interview. The 10 subjects were ethnically diverse, action units occurred during speech, and out-of-plane motion and occlusion from head motion and glasses were common. The video data were originally collected to answer substantive questions in psychology and represent a substantial challenge to automated action unit recognition. In the analysis of blinks, the system achieved 98% accuracy.
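A minimal sketch of the kind of downstream step the abstract describes, under the assumption that head motion has already been compensated: once eye regions are stabilized, a blink shows up as a brief burst of appearance change that even a simple frame-difference score can flag. The function names and threshold are illustrative, not from the paper.

```python
import numpy as np

def blink_score(stabilized_eye_frames):
    """Mean absolute intensity change between consecutive stabilized eye crops.
    Expects a (T, H, W) array of grayscale frames; returns a length T-1 score."""
    frames = np.asarray(stabilized_eye_frames, dtype=float)
    return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

def detect_blinks(stabilized_eye_frames, threshold=8.0):
    """Flag frames whose change score exceeds a (hypothetical) threshold."""
    score = blink_score(stabilized_eye_frames)
    return np.flatnonzero(score > threshold) + 1   # indices of candidate blink frames
```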


Subjects
Blinking; Facial Expression; Software; Adult; Algorithms; Artificial Intelligence; Electronic Data Processing/methods; Humans; Male; Video Recording
4.
Int J Imaging Syst Technol; 13(1): 85-94, 2003.
Article in English | MEDLINE | ID: mdl-26819494

ABSTRACT

This paper presents a method to recover the full motion (three rotations and three translations) of the head from an input video using a cylindrical head model. Given an initial reference template of the head image and the corresponding head pose, the head model is created and full head motion is recovered automatically. The robustness of the approach is achieved by a combination of three techniques. First, we use the iteratively re-weighted least squares (IRLS) technique in conjunction with the image gradient to accommodate non-rigid motion and occlusion. Second, while tracking, the templates are dynamically updated to diminish the effects of self-occlusion and gradual lighting changes and to maintain accurate tracking even when the face moves out of view of the camera. Third, to minimize the error accumulation inherent in the use of dynamic templates, we re-register images to a reference template whenever the head pose is close to that in the template. The performance of the method, which runs in real time, was evaluated in three separate experiments using image sequences (both synthetic and real) for which ground-truth head motion was known. The real sequences included pitch and yaw as large as 40° and 75°, respectively. The average recovery accuracy of the 3D rotations was about 3°. In a further test, the method was used as part of a facial expression analysis system intended for use with spontaneous facial behavior, in which moderate head motion is common. The image data consisted of 1 minute of video from each of 10 subjects engaged in a two-person interview. The method successfully stabilized face and eye images, allowing 98% accuracy in automatic blink recognition.
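A minimal sketch of the IRLS idea behind the first technique, not the authors' code: given a precomputed Jacobian J of pixel intensities with respect to the six motion parameters (obtained from the image gradient and the cylindrical model) and the per-pixel residual r, the parameter update is re-solved with weights that shrink for large residuals, so pixels affected by non-rigid motion or occlusion contribute less. The weighting scheme and iteration count here are assumptions for illustration.

```python
import numpy as np

def irls_update(J, r, iters=5, eps=1e-6):
    """Robust update of the 6 motion parameters by iteratively re-weighted
    least squares: minimize sum_i w_i * (J_i @ dp - r_i)^2, re-estimating the
    weights from the residuals on each pass (inverse-residual weighting)."""
    n = J.shape[0]
    w = np.ones(n)                       # start with uniform pixel weights
    dp = np.zeros(J.shape[1])
    for _ in range(iters):
        Jw = J * w[:, None]              # apply weights to the normal equations
        dp = np.linalg.lstsq(Jw.T @ J, Jw.T @ r, rcond=None)[0]
        e = r - J @ dp                   # residual after the current update
        w = 1.0 / (np.abs(e) + eps)      # large residuals get small weights
    return dp                            # increment for the motion parameters
```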
