Results 1 - 3 of 3
1.
Sensors (Basel) ; 24(7)2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38610258

ABSTRACT

In this paper, we propose a method for estimating the amount of food intake based on both color and depth images. Two pairs of color and depth images are captured pre- and post-meal. The pre- and post-meal color images are used to detect food types and food regions with Mask R-CNN. The post-meal color image is spatially transformed to align the food region locations between the pre- and post-meal color images, and the same transformation is applied to the post-meal depth image. The pixel values of the post-meal depth image are compensated to reflect the 3D position changes caused by the transformation. In both the pre- and post-meal depth images, a space volume for each food region is calculated by dividing the space between the food surface and the camera into multiple tetrahedra. The food intake amount is estimated as the difference between the space volumes calculated from the pre- and post-meal depth images. Simulation results verify that the proposed method estimates the food intake amount with an error of at most 2.2%.
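As a rough illustration of the volume step only, the Python/NumPy sketch below computes the space volume between the camera and a food surface by splitting every 2x2 pixel quad inside a food-region mask into two triangles and summing the tetrahedra they form with the camera origin. The pinhole intrinsics (fx, fy, cx, cy), the mask, and all names are illustrative assumptions; the Mask R-CNN detection, image alignment, and depth compensation described in the abstract are omitted.

    import numpy as np

    def backproject(depth, fx, fy, cx, cy):
        """Back-project a depth map to 3D points in the camera frame."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel column / row indices
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        return np.stack([x, y, depth], axis=-1)         # (h, w, 3)

    def space_volume(depth, food_mask, fx, fy, cx, cy):
        """Sum the volumes of the tetrahedra formed by the camera origin and
        the two surface triangles of every 2x2 pixel quad inside food_mask."""
        pts = backproject(depth, fx, fy, cx, cy)
        h, w = depth.shape
        total = 0.0
        for v in range(h - 1):
            for u in range(w - 1):
                if not (food_mask[v, u] and food_mask[v, u + 1]
                        and food_mask[v + 1, u] and food_mask[v + 1, u + 1]):
                    continue
                p00, p01 = pts[v, u], pts[v, u + 1]
                p10, p11 = pts[v + 1, u], pts[v + 1, u + 1]
                # tetrahedron with apex at the camera origin: volume = |det| / 6
                total += abs(np.linalg.det(np.stack([p00, p01, p11]))) / 6.0
                total += abs(np.linalg.det(np.stack([p00, p11, p10]))) / 6.0
        return total

    # The camera-to-surface space grows once food is removed, so the intake
    # volume is the post-meal space volume (after alignment) minus the pre-meal one:
    # intake = space_volume(depth_post_aligned, food_mask, fx, fy, cx, cy) \
    #        - space_volume(depth_pre, food_mask, fx, fy, cx, cy)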


Subject(s)
Deep Learning , Computer Simulation , Food , Postprandial Period , Eating
2.
Sensors (Basel) ; 22(24)2022 Dec 09.
Article in English | MEDLINE | ID: mdl-36560023

ABSTRACT

In this paper, we propose an intra-picture prediction method for depth video based on block clustering with a neural network. The proposed method addresses the problem that a block containing two or more clusters degrades the performance of intra prediction for depth video. The proposed network consists of a spatial feature prediction network and a clustering network. The spatial feature prediction network exploits spatial features in the vertical and horizontal directions and contains a 1D CNN layer and a fully connected layer. The 1D CNN layer extracts vertical and horizontal spatial features from the top and left reference pixels, respectively. Although a 1D CNN is designed for time-series data, it can also extract spatial features by treating the pixel order along a direction as a time axis. The fully connected layer predicts the spatial features of the block to be coded from the extracted features. The clustering network finds clusters from the spatial features produced by the spatial feature prediction network. It consists of four CNN layers: the first three combine the vertical and horizontal spatial features, and the last outputs the probabilities that each pixel belongs to each cluster. The pixels of the block are predicted by the representative values of the clusters, computed as the average of the reference pixels belonging to each cluster. To support intra prediction for various block sizes, the block is scaled to the network input size and the prediction result is scaled back to the original size. For network training, the mean squared error between the original and predicted blocks is used as the loss function, and a penalty on output values far from either end is added to encourage clear clustering. Simulation results show a bit-rate saving of up to 12.45% under the same distortion condition compared with the latest video coding standard.
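The sketch below is one possible PyTorch reading of the two-part network, under assumptions the abstract does not pin down (block size 16, two clusters, channel counts, kernel sizes): SpatialFeatureNet stands in for the 1D CNN plus fully connected stage applied to one row of reference pixels, and ClusteringNet stands in for the four-layer CNN that turns the two feature maps into per-pixel cluster probabilities. All class and parameter names are illustrative, not from the paper.

    import torch
    import torch.nn as nn

    class SpatialFeatureNet(nn.Module):
        """1D CNN over one row of reference pixels, followed by a fully
        connected layer that predicts an n x n spatial feature map (sketch)."""
        def __init__(self, n=16, channels=8):
            super().__init__()
            self.n = n
            self.conv = nn.Conv1d(1, channels, kernel_size=3, padding=1)
            self.fc = nn.Linear(channels * n, n * n)

        def forward(self, ref_row):                      # ref_row: (B, 1, n)
            f = torch.relu(self.conv(ref_row))           # (B, channels, n)
            return self.fc(f.flatten(1)).view(-1, 1, self.n, self.n)

    class ClusteringNet(nn.Module):
        """Four conv layers: the first three combine the vertical and horizontal
        feature maps, the last outputs per-pixel cluster probabilities (sketch)."""
        def __init__(self, clusters=2):
            super().__init__()
            self.layers = nn.Sequential(
                nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, clusters, 3, padding=1),
            )

        def forward(self, feat_v, feat_h):               # each (B, 1, n, n)
            x = torch.cat([feat_v, feat_h], dim=1)       # (B, 2, n, n)
            return torch.softmax(self.layers(x), dim=1)  # cluster probabilities per pixel

    # Each pixel is then predicted by the representative value (mean of the
    # reference pixels) of its most probable cluster. Training would minimize
    # the MSE between the original and predicted blocks plus a term such as
    # sum(p * (1 - p)) that pushes probabilities toward 0 or 1 (one plausible
    # reading of the penalty on output values far from both ends).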


Subject(s)
Deep Learning , Neural Networks, Computer , Computer Simulation , Cluster Analysis
3.
Sensors (Basel) ; 19(4)2019 Feb 20.
Article in English | MEDLINE | ID: mdl-30791636

ABSTRACT

In this paper, a virtual touch sensor using a depth camera is proposed. Touch regions are detected by finding connected regions of pixels that lie within a certain distance of a touch surface. A touch point is detected in each touch region as the pixel whose neighboring pixels are closest to the surface. Touch path errors caused by noise in the depth picture are corrected through a filter with a weight parameter that allows the path to respond quickly even to sudden changes. The virtual touch sensor is implemented using the proposed touch point detection and touch path correction methods. In the virtual touch sensor, the touch surface and the pixels of the depth picture can be regarded as a virtual touch panel and virtual touch units, respectively, so the sensor can be applied to a wide range of touch interfaces. Simulation results demonstrate a touch-pen interface implemented with the virtual touch sensor.
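A simplified Python/SciPy sketch of the detection and smoothing steps follows, under assumptions not fixed by the abstract: the touch threshold (touch_mm), the use of the smallest local mean distance as a stand-in for "the pixel whose neighboring pixels are closest to the surface", and a fixed-weight exponential filter in place of the paper's adaptive weighting. All names are illustrative.

    import numpy as np
    from scipy import ndimage

    def detect_touch_points(depth, surface_depth, touch_mm=10.0):
        """Find one touch point per connected region of pixels lying within
        touch_mm of the touch surface (surface_depth: calibrated per-pixel
        depth of the surface, same shape and units as depth)."""
        dist = surface_depth - depth                    # height of each pixel above the surface
        touch_mask = (dist >= 0) & (dist <= touch_mm)
        labels, n = ndimage.label(touch_mask)
        local = ndimage.uniform_filter(dist, size=3)    # mean distance over a 3x3 neighborhood
        points = []
        for k in range(1, n + 1):
            ys, xs = np.nonzero(labels == k)
            i = np.argmin(local[ys, xs])                # pixel whose neighborhood is closest to the surface
            points.append((int(xs[i]), int(ys[i])))
        return points

    def smooth_path(points, alpha=0.5):
        """Fixed-weight exponential filter over a touch path; alpha trades
        noise suppression against responsiveness to sudden changes."""
        out, prev = [], None
        for p in points:
            p = np.asarray(p, dtype=float)
            prev = p if prev is None else alpha * p + (1.0 - alpha) * prev
            out.append(tuple(prev))
        return out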


Subject(s)
Biosensing Techniques/methods , Touch/physiology , User-Computer Interface , Algorithms , Humans