Results 1 - 20 of 21
1.
Proc Nutr Soc ; 76(3): 283-294, 2017 Aug.
Article in English | MEDLINE | ID: mdl-27938425

ABSTRACT

For nutrition practitioners and researchers, assessing dietary intake of children and adults with a high level of accuracy continues to be a challenge. Developments in mobile technologies have created a role for images in the assessment of dietary intake. The objective of this review was to examine peer-reviewed published papers covering development, evaluation and/or validation of image-assisted or image-based dietary assessment methods from December 2013 to January 2016. Images taken with handheld devices or wearable cameras have been used to assist traditional dietary assessment methods for portion size estimations made by dietitians (image-assisted methods). Image-assisted approaches can supplement either dietary records or 24-h dietary recalls. In recent years, image-based approaches integrating application technology for mobile devices have been developed (image-based methods). Image-based approaches aim at capturing all eating occasions by images as the primary record of dietary intake, and therefore follow the methodology of food records. The present paper reviews several image-assisted and image-based methods and their benefits and challenges, followed by details on an image-based mobile food record. Mobile technology offers a wide range of feasible options for dietary assessment, which are easier to incorporate into daily routines. The presented studies illustrate that image-assisted methods can improve the accuracy of conventional dietary assessment methods by adding eating occasion detail via pictures captured by an individual (dynamic images). All of the studies reduced underreporting with the help of images compared with results from traditional assessment methods. Studies with larger sample sizes are needed to better delineate attributes with regard to age of user, degree of error and cost.


Subjects
Diet Records , Diet/adverse effects , Internet , Mobile Applications , Portion Size , Adult , Biomedical Research/methods , Biomedical Research/trends , Cell Phone , Child , Child Nutritional Physiological Phenomena , Computers, Handheld , Congresses as Topic , Dietetics/methods , Dietetics/trends , Humans , Nutrition Assessment , Nutritional Sciences/methods , Nutritional Sciences/trends , Photography/instrumentation , Photography/trends , Societies, Scientific , Video Recording/instrumentation , Video Recording/trends
2.
J Hum Nutr Diet ; 27 Suppl 1: 82-8, 2014 Jan.
Article in English | MEDLINE | ID: mdl-23489518

ABSTRACT

The use of image-based dietary assessment methods shows promise for improving dietary self-report among children. The Technology Assisted Dietary Assessment (TADA) food record application is a self-administered food record specifically designed to address the burden and human error associated with conventional methods of dietary assessment. Users would take images of foods and beverages at all eating occasions using a mobile telephone or mobile device with an integrated camera [e.g. Apple iPhone, Apple iPod Touch (Apple Inc., Cupertino, CA, USA); Nexus One (Google, Mountain View, CA, USA)]. Once taken, the images are transferred to a back-end server for automated analysis. The first step in this process is image analysis (i.e. segmentation, feature extraction and classification), which allows for automated food identification. Portion size estimation is also automated via segmentation and geometric shape template modeling. The results of the automated food identification and volume estimation can be indexed with the Food and Nutrient Database for Dietary Studies to provide a detailed diet analysis for use in epidemiological or intervention studies. Data collected during controlled feeding studies in a camp-like setting have allowed for formative evaluation and validation of the TADA food record application. This review summarises the system design and the evidence-based development of image-based methods for dietary assessment among children.
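The pipeline sketched in this abstract (segment, extract features, classify, estimate volume, look up nutrients) can be illustrated with generic scikit-image and scikit-learn components. This is only a hedged sketch, not the TADA implementation: the pretrained classifier, the pixels_per_cm scale (normally obtained from a fiducial marker), the dome-shaped volume template and the NUTRIENTS_PER_CM3 table are all illustrative assumptions.

    # Illustrative image-based food record pipeline (not the actual TADA system):
    # segment -> extract features -> classify -> estimate portion -> look up nutrients.
    import numpy as np
    from skimage import io, color, segmentation, measure

    NUTRIENTS_PER_CM3 = {"rice": {"kcal": 1.3}, "broccoli": {"kcal": 0.35}}  # assumed table

    def region_features(rgb, mask):
        """Simple color features (Lab mean and spread) for one segmented region."""
        pix = color.rgb2lab(rgb)[mask]
        return np.concatenate([pix.mean(axis=0), pix.std(axis=0)])

    def analyze_food_image(path, classifier, pixels_per_cm):
        """classifier: any pretrained scikit-learn-style estimator (assumed given)."""
        rgb = io.imread(path)[..., :3] / 255.0
        # Graph-based segmentation into candidate food regions (labels shifted to start at 1).
        labels = segmentation.felzenszwalb(rgb, scale=200, sigma=0.8, min_size=500) + 1
        report = []
        for region in measure.regionprops(labels):
            mask = labels == region.label
            food = classifier.predict([region_features(rgb, mask)])[0]
            area_cm2 = region.area / pixels_per_cm ** 2
            # Crude geometric template: a half-ellipsoid whose height is 0.3x its radius.
            radius = np.sqrt(area_cm2 / np.pi)
            volume_cm3 = (2.0 / 3.0) * np.pi * radius ** 2 * (0.3 * radius)
            kcal = NUTRIENTS_PER_CM3.get(food, {}).get("kcal", 0.0) * volume_cm3
            report.append({"food": food, "volume_cm3": volume_cm3, "kcal": kcal})
        return report

In the real system the identification and volume steps are far more elaborate (automated feature extraction, geometric shape templates and indexing against the Food and Nutrient Database for Dietary Studies); the sketch only mirrors the order of operations.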


Subjects
Cell Phone , Diet Records , Diet , Life Style , Mobile Applications , Nutrition Assessment , Photography , Adolescent , Diet Surveys , Feeding Behavior , Humans , Portion Size , Self Report
3.
ISSCS 2013 (2013) ; 2013, 2013 Jul.
Article in English | MEDLINE | ID: mdl-28573257

ABSTRACT

There is a health crisis in the US related to diet that is further exacerbated by our aging population and sedentary lifestyles. Six of the ten leading causes of death in the United States can be directly linked to diet. Dietary assessment, the process of determining what someone eats during the course of a day, is essential for understanding the link between diet and health. We are developing imaging-based tools to automatically obtain accurate estimates of what foods a user consumes. Accurate food segmentation is essential for identifying food items and estimating food portion sizes. In this paper, we present a quantitative evaluation of automatic image segmentation methods for food image analysis used for dietary assessment. The experiments indicate that local variation is more suitable for food image segmentation in general dietary assessment studies, where the acquired food images have complex backgrounds.
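The "local variation" method evaluated here is generally attributed to Felzenszwalb-Huttenlocher graph-based segmentation, for which scikit-image ships an implementation. A minimal comparison against another off-the-shelf method might look like the sketch below; the input file name and all parameter values are placeholders, not values from the paper.

    # Hedged comparison of two generic segmentation methods on a food photograph.
    from skimage import io, segmentation

    rgb = io.imread("meal.jpg")[..., :3]        # placeholder image path

    # Graph-based ("local variation"-style) segmentation.
    seg_lv = segmentation.felzenszwalb(rgb, scale=300, sigma=0.9, min_size=400)

    # A clustering-based alternative for comparison (SLIC superpixels).
    seg_slic = segmentation.slic(rgb, n_segments=40, compactness=10, start_label=1)

    print("local-variation regions:", seg_lv.max() + 1)
    print("SLIC regions:", seg_slic.max())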

4.
J Microsc ; 245(2): 148-60, 2012 Feb.
Article in English | MEDLINE | ID: mdl-22092443

ABSTRACT

Digital image analysis is a fundamental component of quantitative microscopy. However, intravital microscopy presents many challenges for digital image analysis. In general, microscopy volumes are inherently anisotropic, suffer from decreasing contrast with tissue depth, lack object edge detail and characteristically have low signal levels. Intravital microscopy introduces the additional problem of motion artefacts, resulting from respiratory motion and heartbeat from specimens imaged in vivo. This paper describes an image registration technique for use with sequences of intravital microscopy images collected in time-series or in 3D volumes. Our registration method involves both rigid and nonrigid components. The rigid registration component corrects global image translations, whereas the nonrigid component manipulates a uniform grid of control points defined by B-splines. Each control point is optimized by minimizing a cost function consisting of two parts: a term to define image similarity, and a term to ensure deformation grid smoothness. Experimental results indicate that this approach is promising based on the analysis of several image volumes collected from the kidney, lung and salivary gland of living rodents.
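The cost structure described here (a rigid translation followed by a control-point deformation that balances image similarity against grid smoothness) can be sketched in a few lines. The version below is an illustrative stand-in, not the authors' implementation: bilinear upsampling of a coarse displacement grid replaces the B-spline parameterization, the similarity term is plain SSD, and the Powell optimizer is chosen only to keep the example short (it is slow).

    # Sketch of rigid + nonrigid registration: remove a global translation, then
    # optimize a coarse displacement grid under SSD similarity + smoothness.
    import numpy as np
    from scipy import ndimage, optimize
    from skimage.registration import phase_cross_correlation

    def register(fixed, moving, grid=(5, 5), smooth_weight=0.1):
        # Rigid step: global translation estimated by phase correlation.
        shift, _, _ = phase_cross_correlation(fixed, moving)
        moving = ndimage.shift(moving, shift)

        yy, xx = np.meshgrid(np.arange(fixed.shape[0]),
                             np.arange(fixed.shape[1]), indexing="ij")
        zoom = (fixed.shape[0] / grid[0], fixed.shape[1] / grid[1])

        def warp(params):
            d = params.reshape(2, *grid)                # control-point displacements
            dy = ndimage.zoom(d[0], zoom, order=1)      # bilinear stand-in for B-splines
            dx = ndimage.zoom(d[1], zoom, order=1)
            return ndimage.map_coordinates(moving, [yy + dy, xx + dx], order=1)

        def cost(params):
            similarity = np.mean((warp(params) - fixed) ** 2)
            d = params.reshape(2, *grid)
            smoothness = sum(np.sum(np.diff(d, axis=a) ** 2) for a in (1, 2))
            return similarity + smooth_weight * smoothness

        res = optimize.minimize(cost, np.zeros(2 * grid[0] * grid[1]), method="Powell")
        return warp(res.x)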


Subjects
Artifacts , Image Processing, Computer-Assisted/methods , Lung/physiology , Microscopy, Fluorescence, Multiphoton/methods , Motion , Salivary Glands/physiology , Spleen/physiology , Animals , Imaging, Three-Dimensional/methods , Lung/ultrastructure , Mice , Microscopy, Video/methods , Rats , Reproducibility of Results , Salivary Glands/ultrastructure , Sensitivity and Specificity , Spleen/ultrastructure
5.
IEEE Int Conf Multimed Expo Workshops ; 2012: 424-428, 2012 Jul.
Article in English | MEDLINE | ID: mdl-28573157

ABSTRACT

Traditional dietary assessment methods, consisting of written and orally reported methods, are not widely acceptable or feasible for everyday monitoring. The development of built-in cameras for mobile devices provides a new way of collecting dietary information by acquiring images of foods and beverages. The ability of image analysis techniques to automatically segment and identify food items from food images becomes imperative. Food images, usually consisting of plates, bowls and glasses, are often affected by lighting and specular highlights, which present difficulties for image analysis. In this paper, we propose a novel single-image specular highlight removal method to detect and remove specular highlights in food images. We use independent component analysis (ICA) to separate the specular and diffuse components from the original image using only one image. This paper describes the details of the proposed model and also presents experimental results on food images to demonstrate the effectiveness of our approach.
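As a loose illustration of the single-image ICA idea (and only that: the channel choice, the component-assignment heuristic and all thresholds below are assumptions rather than the method in the paper), two derived channels can be treated as linear mixtures of a diffuse and a specular component and unmixed with FastICA.

    # Rough single-image ICA separation sketch: unmix an intensity channel and a
    # chroma channel into two independent components and pick the one that
    # behaves like a specular layer. Purely illustrative.
    import numpy as np
    from skimage import io
    from sklearn.decomposition import FastICA

    rgb = io.imread("plate.jpg")[..., :3].astype(float) / 255.0   # placeholder path
    h, w, _ = rgb.shape

    # Highlights are bright and nearly achromatic, so pair overall intensity
    # with a chroma measure as the two observed signals per pixel.
    intensity = rgb.mean(axis=2)
    chroma = rgb.max(axis=2) - rgb.min(axis=2)
    observations = np.stack([intensity.ravel(), chroma.ravel()], axis=1)

    sources = FastICA(n_components=2, random_state=0).fit_transform(observations)

    # Heuristic: the component most correlated with intensity over the brightest
    # pixels is taken to be the specular layer.
    bright = intensity.ravel() > np.percentile(intensity, 95)
    corr = [abs(np.corrcoef(sources[bright, k], intensity.ravel()[bright])[0, 1])
            for k in range(2)]
    specular_map = sources[:, int(np.argmax(corr))].reshape(h, w)
    # A full method would sign- and scale-correct this map before subtracting it
    # from the image to recover the diffuse component.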

6.
Eur J Clin Nutr ; 63 Suppl 1: S50-7, 2009 Feb.
Article in English | MEDLINE | ID: mdl-19190645

ABSTRACT

BACKGROUND: Information on dietary intake provides some of the most valuable insights for mounting intervention programmes for the prevention of chronic diseases. With the growing concern about adolescent overweight, the need to accurately measure diet becomes imperative. Assessment among adolescents is problematic as this group has irregular eating patterns and less enthusiasm for recording food intake. SUBJECTS/METHODS: We used qualitative and quantitative techniques among adolescents to assess their preferences for dietary assessment methods. RESULTS: Dietary assessment methods using technology, for example a personal digital assistant (PDA) or a disposable camera, were preferred over the pen-and-paper food record. CONCLUSIONS: There was a strong preference for using methods that incorporate technology, such as capturing images of food. This suggests that for adolescents, dietary methods that incorporate technology may improve cooperation and accuracy. Current computing technology includes higher-resolution images, improved memory capacity and faster processors that allow small mobile devices to process information not previously possible. Our goal is to develop, implement and evaluate a mobile device (for example, PDA, mobile phone) food record that will translate to an accurate account of daily food and nutrient intake among adolescents. This mobile computing device will include digital images, a nutrient database and image analysis for identification and quantification of food consumption. Mobile computing devices provide a unique vehicle for collecting dietary information that reduces the burden on record keepers. Images of food can be marked with a variety of input methods that link each item to image processing and analysis for estimating the amount of food. Comparing images taken before and after foods are eaten can provide an estimate of the amount of food consumed. The initial stages and potential of this project are described.


Subjects
Data Collection/instrumentation , Diet Records , Diet Surveys , Technology , Adolescent , Asian People , Child , Computer Peripherals , Data Collection/methods , Diet/psychology , Female , Focus Groups/methods , Humans , Male , Photography , Pilot Projects , Surveys and Questionnaires
7.
Conf Proc IEEE Eng Med Biol Soc ; 2005: 6532-5, 2005.
Article in English | MEDLINE | ID: mdl-17281766

ABSTRACT

This paper presents a comparison of feature selection methods for unified detection of breast cancers in mammograms. A set of features, including curvilinear features, texture features, Gabor features, and multi-resolution features, was extracted from regions of 512x512 pixels containing normal tissue or breast cancer. Adaptive floating search and genetic algorithms were used for feature selection, and linear discriminant analysis (LDA) was used to classify cancer regions versus normal regions. Performance is evaluated using Az, the area under the ROC curve. On a dataset consisting of 296 normal regions and 164 cancer regions (53 masses, 56 spiculated lesions, and 55 calcifications), adaptive floating search achieved Az = 0.96, compared with Az = 0.93 for the CHC genetic algorithm and Az = 0.90 for the simple genetic algorithm.
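The selection-and-evaluation protocol (a sequential search wrapped around LDA, scored by Az) is easy to sketch with scikit-learn. Note the hedge: scikit-learn's SequentialFeatureSelector performs plain forward selection, not the adaptive floating search used in the paper, and the synthetic data below merely stand in for the extracted region features.

    # Forward sequential feature selection around LDA, scored by Az (ROC AUC).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=460, n_features=40, n_informative=8,
                               random_state=0)   # placeholder for the region features

    lda = LinearDiscriminantAnalysis()
    selector = SequentialFeatureSelector(lda, n_features_to_select=10,
                                         direction="forward", scoring="roc_auc", cv=5)
    selector.fit(X, y)

    az = cross_val_score(lda, selector.transform(X), y, scoring="roc_auc", cv=5).mean()
    print("selected features:", np.flatnonzero(selector.get_support()))
    print("cross-validated Az: %.3f" % az)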

8.
IEEE Trans Image Process ; 9(10): 1731-44, 2000.
Article in English | MEDLINE | ID: mdl-18262912

ABSTRACT

In this paper we present new results relative to the "expectation-maximization/maximization of the posterior marginals" (EM/MPM) algorithm for simultaneous parameter estimation and segmentation of textured images. The EM/MPM algorithm uses a Markov random field model for the pixel class labels and alternately approximates the MPM estimate of the pixel class labels and estimates parameters of the observed image model. The goal of the EM/MPM algorithm is to minimize the expected value of the number of misclassified pixels. We present new theoretical results in this paper which show that the algorithm can be expected to achieve this goal, to the extent that the EM estimates of the model parameters are close to the true values of the model parameters. We also present new experimental results demonstrating the performance of the EM/MPM algorithm.
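A compact way to see the EM/MPM structure is under a Gaussian observation model with a Potts (MRF) label prior: a Gibbs sampler approximates the posterior marginals of the pixel labels (the MPM step), and the class means and variances are re-estimated from those marginals (the EM step). The sketch below is illustrative only; the interaction parameter, sweep counts and class count are assumed values, and the per-pixel loops make it suitable only for small images.

    import numpy as np

    def em_mpm(image, n_classes=3, beta=1.5, em_iters=5, sweeps=8, seed=0):
        """Toy EM/MPM-style segmentation of a 2-D grayscale image."""
        rng = np.random.default_rng(seed)
        h, w = image.shape
        classes = np.arange(n_classes)
        means = np.quantile(image, np.linspace(0.1, 0.9, n_classes))
        var = np.full(n_classes, image.var() / n_classes + 1e-6)
        labels = rng.integers(n_classes, size=(h, w))

        for _ in range(em_iters):
            counts = np.zeros((n_classes, h, w))          # marginal label counts
            for _ in range(sweeps):                       # MPM step: Gibbs sweeps
                for i in range(h):
                    for j in range(w):
                        nbrs = [labels[y, x] for y, x in
                                ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                                if 0 <= y < h and 0 <= x < w]
                        agree = np.array([sum(n == k for n in nbrs) for k in classes])
                        logp = (-0.5 * (image[i, j] - means) ** 2 / var
                                - 0.5 * np.log(var) + beta * agree)
                        p = np.exp(logp - logp.max())
                        labels[i, j] = rng.choice(classes, p=p / p.sum())
                counts[labels, np.arange(h)[:, None], np.arange(w)] += 1
            post = counts / counts.sum(axis=0)            # approximate marginals
            # EM step: re-estimate the Gaussian parameters from the soft assignments.
            wgt = post.reshape(n_classes, -1)
            pix = image.ravel()
            means = wgt @ pix / wgt.sum(axis=1)
            var = wgt @ pix ** 2 / wgt.sum(axis=1) - means ** 2 + 1e-6
        return post.argmax(axis=0)                        # MPM label estimate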

9.
IEEE Trans Image Process ; 8(3): 408-20, 1999.
Article in English | MEDLINE | ID: mdl-18262883

ABSTRACT

We present a new algorithm for segmentation of textured images using a multiresolution Bayesian approach. The new algorithm uses a multiresolution Gaussian autoregressive (MGAR) model for the pyramid representation of the observed image, and assumes a multiscale Markov random field model for the class label pyramid. The models used in this paper incorporate correlations between different levels of both the observed image pyramid and the class label pyramid. The criterion used for segmentation is the minimization of the expected value of the number of misclassified nodes in the multiresolution lattice. The estimate which satisfies this criterion is referred to as the "multiresolution maximization of the posterior marginals" (MMPM) estimate, and is a natural extension of the single-resolution "maximization of the posterior marginals" (MPM) estimate. Previous multiresolution segmentation techniques have been based on the maximum a posteriori (MAP) estimation criterion, which has been shown to be less appropriate for segmentation than the MPM criterion. It is assumed that the number of distinct textures in the observed image is known. The parameters of the MGAR model (the means, prediction coefficients, and prediction error variances of the different textures) are unknown. A modified version of the expectation-maximization (EM) algorithm is used to estimate these parameters. The parameters of the Gibbs distribution for the label pyramid are assumed to be known. Experimental results demonstrating the performance of the algorithm are presented.
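A much-simplified coarse-to-fine stand-in for the multiresolution idea can be written with a Gaussian pyramid: segment the coarsest level first, then propagate the labels down as the initialization at each finer level. k-means on pixel intensities replaces the MGAR/MMPM machinery here, so this illustrates only the pyramid structure, not the algorithm in the paper.

    # Coarse-to-fine segmentation sketch over a Gaussian pyramid (illustrative only).
    import numpy as np
    from skimage.transform import pyramid_gaussian, resize
    from sklearn.cluster import KMeans

    def coarse_to_fine_segmentation(image, n_classes=4, levels=3):
        pyramid = list(pyramid_gaussian(image, max_layer=levels - 1, channel_axis=None))
        labels = None
        for level in reversed(pyramid):                   # coarsest -> finest
            feats = level.reshape(-1, 1)
            if labels is None:
                km = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit(feats)
            else:
                # Upsample the coarser labels and reuse their class means as the
                # initialization at this finer resolution.
                up = resize(labels.astype(float), level.shape, order=0,
                            preserve_range=True).astype(int)
                init = np.array([[level[up == k].mean() if np.any(up == k)
                                  else np.quantile(level, (k + 0.5) / n_classes)]
                                 for k in range(n_classes)])
                km = KMeans(n_clusters=n_classes, init=init, n_init=1).fit(feats)
            labels = km.labels_.reshape(level.shape)
        return labels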

10.
IEEE Trans Med Imaging ; 14(2): 318-27, 1995.
Article in English | MEDLINE | ID: mdl-18215835

ABSTRACT

Reports the diagnostic performance of observers in detecting abnormalities in computer-generated mammogram-like images. A mathematical model of the human breast is defined in which breast tissues are simulated by spheres of different sizes and densities. Images are generated by casting rays from a specified source, through the model, and onto an image plane. Observer performance when using two viewing modalities (stereo versus mono) is compared. In the stereo viewing mode, images are presented to the observer (wearing liquid-crystal display glasses), such that the left eye sees the left image only and the right eye sees the right image only. In this way, the images can be fused by the observer to obtain a sense of depth. In the mono viewing mode, identical images are presented to the left and right eyes so that no binocular disparities will be produced by the images. Observer response data are evaluated using receiver operating characteristic (ROC) analysis to characterize any difference in detectability of abnormalities (in either the density or the arrangement of simulated tissue densities) using the two viewing modes. The authors' experimental results indicate the clear superiority of stereo viewing for detection of arrangement abnormalities. For detection of density abnormalities, the performance of the two viewing modes is similar. These preliminary results suggest that stereomammography may permit easier detection of certain tissue abnormalities, perhaps providing a route to earlier tumor detection in cases of breast cancer.

11.
IEEE Trans Image Process ; 4(2): 177-85, 1995.
Article in English | MEDLINE | ID: mdl-18289969

ABSTRACT

Presents a new algorithm that utilizes mathematical morphology for pyramidal coding of color images. The authors obtain lossy color image compression by using block truncation coding at the pyramid levels to attain reduced data rates. The pyramid approach is attractive due to low computational complexity, simple parallel implementation, and the ability to produce acceptable color images at moderate data rates. In many applications, the progressive transmission capability of the algorithm is very useful. The authors show experimental results for color images at data rates of 1.89 bits/pixel.
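The block truncation coding step applied at each pyramid level is the easiest part of the scheme to reproduce from the abstract alone: every block is quantized to two levels chosen to preserve the block mean and standard deviation. The sketch below covers only that grayscale step; the morphological pyramid and the color handling described in the paper are omitted.

    # Mean/variance-preserving block truncation coding for a grayscale image.
    import numpy as np

    def btc_block(block):
        """Encode and immediately reconstruct one block with two-level BTC."""
        m, s = block.mean(), block.std()
        bitmap = block >= m                          # 1 bit per pixel
        q, n = bitmap.sum(), block.size
        if q in (0, n):                              # flat block: a single level suffices
            return np.full_like(block, m, dtype=float)
        low = m - s * np.sqrt(q / (n - q))           # reconstruction levels chosen to
        high = m + s * np.sqrt((n - q) / q)          # preserve the block mean and variance
        return np.where(bitmap, high, low)

    def btc_image(image, block=4):
        out = np.empty_like(image, dtype=float)
        for i in range(0, image.shape[0], block):
            for j in range(0, image.shape[1], block):
                out[i:i + block, j:j + block] = btc_block(image[i:i + block, j:j + block])
        return out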

12.
J Electrocardiol ; 23 Suppl: 192-7, 1990.
Article in English | MEDLINE | ID: mdl-2090741

ABSTRACT

Electrocardiographic (ECG) signals are frequently corrupted by impulsive noise due to muscle activities, and background normalization is often needed to correct for patient motion and respiration. Nonlinear signal processing methods are effective alternatives to conventional linear filtering methods when dealing with impulsive noise or noise types that are difficult to characterize. The class of nonlinear filtering methods studied in this article operate by moving a window of finite width along the input data sequence. At each position, the filter output is obtained from the input samples inside the window. Nonlinear operators differ from linear filters in that the output is not a simple linear combination of the input samples. Three classes of nonlinear operators--median filters, morphologic operators, and the alpha-trimmed mean filter--are briefly introduced and algorithms using them for ECG signal processing are presented. Empirical results indicate that the nonlinear operators are good candidates for impulsive noise suppression and background normalization in ECG signal processing.
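Each of the three operator families named above has an off-the-shelf or easily written form. The sketch below applies a running median, a grayscale opening/closing pair, and an alpha-trimmed mean to a synthetic impulse-corrupted signal; the signal, window lengths and trimming fraction are placeholders rather than values from the article.

    # Median, morphological, and alpha-trimmed-mean filtering of a noisy 1-D signal.
    import numpy as np
    from scipy.signal import medfilt
    from scipy.ndimage import grey_opening, grey_closing
    from numpy.lib.stride_tricks import sliding_window_view

    rng = np.random.default_rng(0)
    t = np.linspace(0, 2, 1000)
    clean = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 15 * t)
    noisy = clean + (rng.random(t.size) < 0.02) * rng.normal(0, 3, t.size)   # impulses

    median_out = medfilt(noisy, kernel_size=5)

    # Opening suppresses positive impulses, closing suppresses negative ones.
    morph_out = grey_closing(grey_opening(noisy, size=5), size=5)

    def alpha_trimmed_mean(x, window=9, alpha=0.4):
        """Average each window after discarding the alpha fraction of extreme samples."""
        pad = window // 2
        xw = sliding_window_view(np.pad(x, pad, mode="edge"), window)
        xs = np.sort(xw, axis=1)
        k = int(alpha * window / 2)
        return xs[:, k:window - k].mean(axis=1)

    trimmed_out = alpha_trimmed_mean(noisy)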


Subjects
Algorithms , Electrocardiography/methods , Signal Processing, Computer-Assisted , Filtration/methods , Humans
13.
IEEE Trans Biomed Eng ; 36(2): 262-73, 1989 Feb.
Article in English | MEDLINE | ID: mdl-2917772

ABSTRACT

A new approach to impulsive noise suppression and background normalization of digitized electrocardiogram signals is presented using mathematical morphological operators that incorporate the shape information of a signal. A brief introduction to these nonlinear signal processing operators, as well as a detailed description of the new algorithm, is presented. Empirical results show that the new algorithm has good performance in impulsive noise suppression and background normalization.
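The usual way morphological operators are applied to this problem is to estimate the baseline with an opening followed by a closing, using a structuring element longer than the QRS complex, and subtract it from the signal. The sketch below follows that standard recipe; the element length and the synthetic test signal are assumptions, not parameters from the paper.

    # Morphological baseline estimation and removal for a 1-D signal.
    import numpy as np
    from scipy.ndimage import grey_opening, grey_closing

    def remove_baseline(ecg, element_len=201):
        """Opening-then-closing with a long flat element estimates the slow baseline."""
        baseline = grey_closing(grey_opening(ecg, size=element_len), size=element_len)
        return ecg - baseline, baseline

    # Placeholder signal: a drifting baseline plus a crude 1 Hz spike train of "beats".
    fs, seconds = 360, 10
    t = np.arange(fs * seconds) / fs
    drift = 0.5 * np.sin(2 * np.pi * 0.3 * t)
    beats = np.zeros_like(t)
    beats[::fs] = 1.0
    corrected, baseline = remove_baseline(drift + beats)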


Subjects
Algorithms , Electrocardiography/methods , Mathematical Computing , Signal Processing, Computer-Assisted
14.
IEEE Trans Med Imaging ; 8(1): 104-6, 1989.
Article in English | MEDLINE | ID: mdl-18230506

ABSTRACT

A technique is presented that can be used in the visualization and analysis of cardiac wall motion abnormalities by digital two-dimensional echocardiography. This technique is based on the use of a curvature function extracted from the endocardial boundary locations and can be used for shape or shape-change analysis of the heart. The authors refer to locations of high absolute curvature as landmarks. Identification of landmarks on the endocardial boundary provides a simplified but powerful description of the boundary that allows visualization and analysis of wall motion. This simplified description of the beating heart is an excellent tool for identifying infarcted areas of the heart.
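The landmark idea reduces to computing the curvature of a closed boundary and keeping the locations where its absolute value peaks. A small sketch on a synthetic bumpy contour (a placeholder for a traced endocardial border) follows; the peak-picking threshold is an assumed value.

    # Curvature of a closed contour and peak-picking of |curvature| as landmarks.
    import numpy as np
    from scipy.signal import find_peaks

    def periodic_gradient(v):
        vp = np.r_[v[-1], v, v[0]]               # wrap one sample on each side
        return np.gradient(vp)[1:-1]

    def closed_curvature(x, y):
        """Signed curvature k = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2)."""
        dx, dy = periodic_gradient(x), periodic_gradient(y)
        ddx, ddy = periodic_gradient(dx), periodic_gradient(dy)
        return (dx * ddy - dy * ddx) / np.power(dx ** 2 + dy ** 2, 1.5)

    theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
    x = 30 * np.cos(theta) + 3 * np.cos(5 * theta)    # bumpy ellipse-like border
    y = 20 * np.sin(theta) + 3 * np.sin(5 * theta)

    kappa = closed_curvature(x, y)
    landmarks, _ = find_peaks(np.abs(kappa), prominence=0.01)
    print("landmark indices along the boundary:", landmarks)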

15.
IEEE Trans Med Imaging ; 7(2): 81-90, 1988.
Article in English | MEDLINE | ID: mdl-18230456

ABSTRACT

Cardiac function is evaluated using echocardiographic analysis of shape attributes, such as the heart wall thickness or the shape change of the heart wall boundaries. This requires that the complete boundaries of the heart wall be detected from a sequence of two-dimensional ultrasonic images of the heart. The image segmentation process is made difficult since these images are plagued by poor intensity contrast and dropouts caused by the intrinsic limitations of the image formation process. Current studies often require trained operators to manually trace the heart walls. A review of previous work is presented, along with a discussion of how this problem can be viewed in the context of computer vision. A novel algorithm is presented for detecting the boundaries. This algorithm first detects spatially significant features based on the measurement of image intensity variations. Since the detection step suffers from false alarms and missing boundary points, further processing uses high-level knowledge about the heart wall to label the detected features for noise rejection and to fill in the missing points by interpolation.

16.
IEEE Trans Med Imaging ; 7(4): 313-20, 1988.
Article in English | MEDLINE | ID: mdl-18230484

ABSTRACT

The authors describe a novel algorithm, known as sequential edge linking (SEL), for the automatic definition of coronary arterial edges in cineangiograms. This algorithm is based on sequential tree searching of possible coronary artery boundary locations. Using a coronary artery phantom, the authors compared the results obtained using SEL with hand-traced boundaries. At a magnification of 2x, the results are generally good, with an average error of 1.7% of the diameter. Actual coronary artery images were also processed, and a similar comparison indicated that total areas were comparable but the hand-drawn stenoses were, on average, 7% greater than the unobstructed diameter. Based on these data, it is concluded that the SEL algorithm is an accurate method for fully automatic definition of coronary artery dimensions.
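A greatly simplified stand-in for sequential boundary tracking (not the SEL algorithm itself, which keeps multiple candidate paths in a tree search with a path metric) is a greedy tracker that repeatedly steps to the unvisited neighbour with the strongest gradient magnitude.

    # Greedy edge tracking along the strongest-gradient neighbours (illustrative only).
    import numpy as np
    from scipy import ndimage

    def greedy_edge_track(image, seed, steps=200):
        grad = ndimage.gaussian_gradient_magnitude(image.astype(float), sigma=2)
        h, w = grad.shape
        path, visited = [seed], {seed}
        y, x = seed
        for _ in range(steps):
            candidates = [(grad[ny, nx], (ny, nx))
                          for ny in range(max(y - 1, 0), min(y + 2, h))
                          for nx in range(max(x - 1, 0), min(x + 2, w))
                          if (ny, nx) not in visited]
            if not candidates:
                break
            _, (y, x) = max(candidates)
            path.append((y, x))
            visited.add((y, x))
        return path

    # Example on a synthetic bright disc whose rim stands in for a vessel edge.
    yy, xx = np.mgrid[:128, :128]
    disc = ((yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2).astype(float)
    boundary = greedy_edge_track(disc, seed=(64, 24))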

17.
IEEE Trans Med Imaging ; 6(4): 292-6, 1987.
Article in English | MEDLINE | ID: mdl-18244036

ABSTRACT

Although two-dimensional echocardiography (2-D echo) is a useful technique for evaluation of global and regional left ventricular function, its main limitation is the inability to easily extract reliable and accurate quantitative information throughout all phases of the cardiac cycle. We sought to develop suitable automated techniques for the objective determination of endocardial and epicardial borders in two-dimensional echocardiographic images. To test algorithms for the automatic detection of myocardial borders we constructed a cardiac ultrasound phantom consisting of 16 echogenic annuli of known dimensions embedded in a material of low echogenicity which allowed imaging without partial volume effects. An algorithm based on Gaussian filtering followed by a difference gradient operator was applied to detect edges in the 2-D echo images of these annuli. The radii of the automatically determined inner borders were within 0.44 mm root-mean-squared error over a range of 15-25 mm true radius. This lower boundary for the error in our approach to automatic placement of myocardial borders in 2-D echocardiograms suggests the potential to extract more information concerning left ventricular function than is available with current techniques.
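The edge detector described here is just Gaussian smoothing followed by a difference (gradient) operator. A minimal version on a synthetic echogenic annulus, which is roughly what the phantom images contain, might look like the following; the noise level, smoothing sigma and threshold are assumed values.

    # Gaussian smoothing followed by a gradient operator on a synthetic annulus.
    import numpy as np
    from scipy import ndimage

    size, r_inner, r_outer = 256, 60, 90
    yy, xx = np.mgrid[:size, :size] - size // 2
    radius = np.hypot(yy, xx)
    annulus = ((radius > r_inner) & (radius < r_outer)).astype(float)
    annulus += np.random.default_rng(0).normal(0, 0.2, annulus.shape)   # speckle-like noise

    smoothed = ndimage.gaussian_filter(annulus, sigma=3)
    gy, gx = np.gradient(smoothed)                        # difference gradient operator
    edge_strength = np.hypot(gy, gx)
    edges = edge_strength > 0.5 * edge_strength.max()     # simple assumed threshold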

18.
Comput Biomed Res ; 18(6): 587-604, 1985 Dec.
Article in English | MEDLINE | ID: mdl-4075790

ABSTRACT

Extracellular glycosaminoglycans, when precipitated by tannic acid, appear in electron micrographs as amorphous reticulate masses or fragments, sometimes finely beaded and often associated with collagen fibrils. An algorithm for automatic classification, segmentation, and quantification of the amount of tannic acid-precipitable material (TAPM) and collagen in electron microscopic images is presented. Small patches of a region are initially located and the patch boundaries are traced using a binary contour tracing algorithm. The patches are then grown out and merged together to form one large area. This area is classified using a two-dimensional feature vector into one of two classes: a region with TAPM and collagen, or one with cell bodies and/or processes. Once these areas are classified and segmented, the distribution of TAPM is measured. The algorithm was tested on several images displaying varying amounts and configurations of TAPM, with good results. It may also be adapted to process other electron microscopic images containing elements of interest that have complex or amorphous forms.


Subjects
Extracellular Matrix/ultrastructure , Glycosaminoglycans/analysis , Animals , Collagen/analysis , Computers , Extracellular Matrix/analysis , Fixatives , Histocytochemistry/methods , Hydrolyzable Tannins , Image Enhancement/methods , Mathematics , Mice , Microscopy, Electron/methods , Palate/embryology , Palate/ultrastructure
19.
J Histochem Cytochem ; 33(4): 261-7, 1985 Apr.
Article in English | MEDLINE | ID: mdl-3980979

ABSTRACT

A computer-assisted method for objectively identifying and displaying the distribution of molecules that can only be positively identified by a combination of staining characteristics and susceptibility to specific enzymatic digestion or chemical degradation is presented. The visual image of an enzymatically digested tissue section is subtracted from that of an adjacent buffer-incubated control section and the distribution of the extracellular molecules removed from the tissue section displayed. Photomicrographs are taken using white light and narrow bandwidth filters of wavelengths at or near the maximum absorbance for the dye products used to visualize the extracellular matrix and cells. Each negative is standardized using reference gray levels. The cell and matrix images of both digested and undigested sections are then registered. The locations of cells in both control and digested sections are identified and set to an undefined gray level value in the matrix images. The cell-removed images of the control and digested sections are then registered and the difference in gray levels between the two images calculated and displayed. The validity of results obtained is primarily dependent on the soundness of the histological visualization and digestion techniques used, but is independent of investigator interpretation.


Subjects
Computers , Extracellular Space/analysis , Histocytochemistry/methods , Animals , Hyaluronic Acid/analysis , In Vitro Techniques , Mice , Palate/analysis , Photomicrography/methods
20.
Am J Cardiol ; 52(3): 384-9, 1983 Aug.
Article in English | MEDLINE | ID: mdl-6869291

ABSTRACT

Quantitative studies of left ventricular function using 2-dimensional echocardiography have been limited because of a lack of computerized methods to automatically analyze the echocardiographic images. Previous computer efforts have been directed at digitizing the video output of the 2-D echocardiogram, but this digitizing method has significant limitations. A direct digitization method that produces improvement in signal-to-noise ratio and, subsequently, improved automatic detection of endocardial and epicardial borders, was developed. With definition of these edges, left ventricular global and regional analysis is possible frame by frame so that dynamic changes in cardiac function may be assessed throughout the cardiac cycle. Further technologic advances in 2-D echocardiographic acquisition and image processing should allow computer processing of 2-D echocardiographic data in real time.


Subjects
Computers , Echocardiography/instrumentation , Humans