Results 1 - 5 of 5
1.
Sci Rep ; 14(1): 1665, 2024 01 18.
Article in English | MEDLINE | ID: mdl-38238423

ABSTRACT

The first step in any dietary monitoring system is the automatic detection of eating episodes. To detect eating episodes, either sensor data or images can be used, and either method can result in false-positive detection. This study aims to reduce the number of false positives in the detection of eating episodes by a wearable sensor, Automatic Ingestion Monitor v2 (AIM-2). Thirty participants wore the AIM-2 for two days each (pseudo-free-living and free-living). The eating episodes were detected by three methods: (1) recognition of solid foods and beverages in images captured by AIM-2; (2) recognition of chewing from the AIM-2 accelerometer sensor; and (3) hierarchical classification to combine confidence scores from image and accelerometer classifiers. The integration of image- and sensor-based methods achieved 94.59% sensitivity, 70.47% precision, and 80.77% F1-score in the free-living environment, which is significantly better than either of the original methods (8% higher sensitivity). The proposed method successfully reduces the number of false positives in the detection of eating episodes.
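The hierarchical fusion of confidence scores described above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: it shows one plausible scheme in which a candidate eating episode is accepted only when a weighted combination of the image-classifier and chewing-classifier confidences clears a threshold, so a sensor false positive with no food visible in the images is rejected. The weights and threshold are assumptions.

```python
# Hypothetical sketch of confidence-score fusion for eating-episode detection.
# The exact hierarchical classifier in the paper is not reproduced here; this
# shows the general idea of combining image and accelerometer confidences.

def fuse_confidences(image_conf: float, chew_conf: float,
                     w_image: float = 0.5, threshold: float = 0.5) -> bool:
    """Return True if the fused confidence indicates an eating episode."""
    fused = w_image * image_conf + (1.0 - w_image) * chew_conf
    return fused >= threshold

# A candidate episode supported by both modalities is kept:
assert fuse_confidences(0.9, 0.8) is True
# A chewing-like motion with no food visible in the images is rejected:
assert fuse_confidences(0.1, 0.6) is False
```

In practice the weights would be tuned on labeled pseudo-free-living data, which is what allows the combined detector to trade a small precision cost for the reported gain in sensitivity.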


Subjects
Diet , Mastication , Humans , Physiological Monitoring , Recognition (Psychology) , Mental Processes
2.
IEEE Sens J ; 23(5): 5391-5400, 2023 Mar 01.
Article in English | MEDLINE | ID: mdl-37799776

ABSTRACT

Automatic food portion size estimation (FPSE) with minimal user burden is a challenging task. Most of the existing FPSE methods use fiducial markers and/or virtual models as dimensional references. An alternative approach is to estimate the dimensions of the eating containers prior to estimating the portion size. In this article, we propose a wearable sensor system (the automatic ingestion monitor integrated with a ranging sensor) and a related method for the estimation of dimensions of plates and bowls. The contributions of this study are: 1) the model eliminates the need for fiducial markers; 2) the camera system [automatic ingestion monitor version 2 (AIM-2)] is not restricted in terms of positioning relative to the food item; 3) our model accounts for radial lens distortion caused by lens aberrations; 4) a ranging sensor directly gives the distance between the sensor and the eating surface; 5) the model is not restricted to circular plates; and 6) the proposed system implements a passive method that can be used for assessment of container dimensions with minimum user interaction. The error rates (mean ± std. dev) for dimension estimation were 2.01% ± 4.10% for plate widths/diameters, 2.75% ± 38.11% for bowl heights, and 4.58% ± 6.78% for bowl diameters.
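The core geometric idea, that a ranging sensor removes the need for a fiducial marker, can be illustrated with the pinhole camera model: once the sensor-to-surface distance is known, a measured pixel extent converts directly into a metric size. This is a minimal sketch under simplifying assumptions (fronto-parallel surface, no lens distortion); the focal length and pixel values are illustrative, not from the paper.

```python
# Hypothetical sketch: with the distance Z to the eating surface measured by
# a ranging sensor, the pinhole model maps pixels to millimetres without any
# marker in the scene. Real systems must also correct radial distortion.

def metric_width(pixel_width: float, distance_mm: float, focal_px: float) -> float:
    """Real-world width (mm) of an object spanning `pixel_width` pixels
    on a fronto-parallel surface at `distance_mm` from the camera."""
    return pixel_width * distance_mm / focal_px

# A plate spanning 600 px, imaged from 400 mm with an 800 px focal length:
assert metric_width(600, 400, 800) == 300.0  # 300 mm, a large dinner plate
```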

3.
Sensors (Basel) ; 23(2)2023 Jan 04.
Article in English | MEDLINE | ID: mdl-36679357

ABSTRACT

Sensor-based food intake monitoring has become one of the fastest-growing fields in dietary assessment. Researchers are exploring imaging-sensor-based food detection, food recognition, and food portion size estimation. A major problem that is still being tackled in this field is the segmentation of regions of food when multiple food items are present, mainly when similar-looking foods (similar in color and/or texture) are present. Food image segmentation is a relatively under-explored area compared with other fields. This paper proposes a novel approach to food imaging consisting of two imaging sensors: color (Red-Green-Blue) and thermal. Furthermore, we propose a multimodal four-dimensional (RGB-T) image segmentation method using a k-means clustering algorithm to segment regions of similar-looking food items in multiple combinations of hot, cold, and warm (at room temperature) foods. Six food combinations of two food items each were used to capture RGB and thermal image data. RGB and thermal data were superimposed to form a combined RGB-T image, and three sets of data (RGB, thermal, and RGB-T) were tested. A bootstrapped optimization of within-cluster sum of squares (WSS) was employed to determine the optimal number of clusters for each case. The combined RGB-T data achieved better results compared with RGB and thermal data used individually. The mean ± standard deviation (std. dev.) of the F1 score for RGB-T data was 0.87 ± 0.1, compared with 0.66 ± 0.13 and 0.64 ± 0.39 for RGB and thermal data, respectively.
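The RGB-T clustering step can be sketched as follows: each pixel becomes a (R, G, B, T) feature vector, and k-means groups pixels so that two visually identical foods at different temperatures land in different clusters. This is an assumed minimal implementation (deterministic farthest-point initialization, plain Euclidean distance), not the authors' code, and it omits the bootstrapped WSS selection of k.

```python
# Minimal sketch (assumed, not the paper's implementation) of k-means over
# 4-D RGB-T pixel vectors. The thermal channel separates foods that RGB
# alone cannot distinguish.
import numpy as np

def kmeans_rgbt(pixels: np.ndarray, k: int, iters: int = 20) -> np.ndarray:
    """Cluster an (N, 4) float array of RGB-T pixels; returns N labels."""
    # deterministic farthest-point initialization
    centers = [pixels[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(pixels - c, axis=1) for c in centers], axis=0)
        centers.append(pixels[d.argmax()])
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        # assign each pixel to its nearest center (Euclidean in RGB-T space)
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):  # recompute centers; keep old center if cluster is empty
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels

# Two same-colored foods, one hot (T=200) and one near room temperature (T=80):
hot = np.tile([120.0, 90.0, 60.0, 200.0], (50, 1))
warm = np.tile([120.0, 90.0, 60.0, 80.0], (50, 1))
labels = kmeans_rgbt(np.vstack([hot, warm]), k=2)
assert (labels[:50] == labels[0]).all() and labels[0] != labels[50]
```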


Subjects
Algorithms , Cold Temperature , Cluster Analysis , Recognition (Psychology) , Multimodal Imaging , Color
4.
Sensors (Basel) ; 22(9)2022 Apr 26.
Article in English | MEDLINE | ID: mdl-35590990

ABSTRACT

Imaging-based methods of food portion size estimation (FPSE) promise higher accuracies compared to traditional methods. Many FPSE methods require dimensional cues (fiducial markers, finger-references, object-references) in the scene of interest and/or manual human input (wireframes, virtual models). This paper proposes a novel passive, standalone, multispectral, motion-activated, structured light-supplemented, stereo camera for food intake monitoring (FOODCAM) and an associated methodology for FPSE that does not need a dimensional reference given a fixed setup. The proposed device integrated a switchable band (visible/infrared) stereo camera with a structured light emitter. The volume estimation methodology focused on the 3-D reconstruction of food items based on the stereo image pairs captured by the device. The FOODCAM device and the methodology were validated using five food models with complex shapes (banana, brownie, chickpeas, French fries, and popcorn). Results showed that the FOODCAM was able to estimate food portion sizes with an average accuracy of 94.4%, which suggests that the FOODCAM can potentially be used as an instrument in diet and eating behavior studies.
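The triangulation step behind the stereo 3-D reconstruction can be sketched in one line: for a calibrated, rectified stereo pair, the depth of a matched point follows from its disparity as Z = f·B/d. This is a generic stereo relation, not the FOODCAM pipeline itself, and the focal length, baseline, and disparity values below are illustrative assumptions.

```python
# Hedged sketch of stereo triangulation, the geometric basis of 3-D food
# reconstruction from the device's stereo image pairs. Assumes a calibrated,
# rectified pair; structured light would supply matches on textureless food.

def depth_from_disparity(focal_px: float, baseline_mm: float,
                         disparity_px: float) -> float:
    """Depth (mm) of a point matched across a rectified stereo pair."""
    return focal_px * baseline_mm / disparity_px

# A point with 105 px disparity, 700 px focal length, 60 mm baseline:
assert depth_from_disparity(700.0, 60.0, 105.0) == 400.0  # 400 mm away
```

Integrating such per-pixel depths over the food's footprint is what turns the reconstruction into a portion-size (volume) estimate.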


Subjects
Photography , Portion Size , Diet , Feeding Behavior , Food , Humans , Photography/methods
5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 2736-2740, 2021 11.
Article in English | MEDLINE | ID: mdl-34891816

ABSTRACT

Tracking an individual's food intake provides useful insight into their eating habits. Technological advancements in wearable sensors, such as the automatic capture of food images from wearable cameras, have made the tracking of food intake efficient and feasible. For accurate food intake monitoring, an automated food detection technique is needed to recognize foods from unstaged real-world images. This work presents a novel food detection and segmentation pipeline to detect the presence of food in images acquired from an egocentric wearable camera and subsequently segment the food image. An ensemble of YOLOv5 detection networks is trained to detect and localize food items among other objects present in captured images. The model achieves an overall 80.6% mean average precision on four objects: Food, Beverage, Screen, and Person. After object detection, predicted food objects that were sufficiently sharp were considered for segmentation. The Normalized Graph Cut algorithm was used to segment the different parts of the food, resulting in an average IoU of 82%.

Clinical relevance: The automatic monitoring of food intake using wearable devices can play a pivotal role in the treatment and prevention of eating disorders, obesity, malnutrition, and other related issues. It can aid in understanding the pattern of nutritional intake and making personalized adjustments to lead a healthy life.
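The sharpness gate that decides which detected food crops proceed to segmentation can be approximated as below. The paper does not state its exact criterion; this sketch assumes a common proxy, the variance of a Laplacian response, where blurry crops have weak second derivatives and therefore a low score.

```python
# Assumed sharpness criterion (variance of a 4-neighbour Laplacian), a common
# proxy for focus quality; not necessarily the paper's exact gating rule.
import numpy as np

def laplacian_sharpness(gray: np.ndarray) -> float:
    """Variance of a 4-neighbour Laplacian over a grayscale crop."""
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

# A crop with strong edges scores higher than a flat (defocus-like) one:
edges = np.zeros((32, 32)); edges[:, 16:] = 255.0
flat = np.full((32, 32), 128.0)
assert laplacian_sharpness(edges) > laplacian_sharpness(flat) == 0.0
```

Crops scoring above a tuned threshold would then be passed to the graph-cut segmentation stage; the rest are discarded as too blurred to segment reliably.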


Subjects
Food , Wearable Electronic Devices , Algorithms , Eating , Feeding Behavior , Humans