Results 1 - 4 of 4
1.
Appl Opt ; 55(32): 9006-9016, 2016 Nov 10.
Article in English | MEDLINE | ID: mdl-27857283

ABSTRACT

With the knowledge of how edges vary in the presence of a Gaussian blur, a method that uses low-order Tchebichef moments is proposed to estimate the blur parameters: sigma (σ) and size (w). The differences between the Tchebichef moments of the original and reblurred images are used as feature vectors to train an extreme learning machine for estimating the blur parameters (σ, w). The effectiveness of the proposed method is examined using cross-database validation. The estimated blur parameters are then used in a split Bregman-based image restoration algorithm. A comparative analysis of the proposed method with three existing methods is carried out using all the images from the LIVE database. The results show that, in most cases, the proposed method outperforms the three existing methods in terms of visual quality as evaluated by the structural similarity index.
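The moment-difference feature described above can be sketched as follows. This is a minimal illustration, not the paper's exact pipeline: the Tchebichef (discrete Chebyshev) basis is built here by orthogonalizing the monomial basis on the pixel grid, and the reblur sigma, moment order, and function names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def tchebichef_basis(N, order):
    # Orthonormal discrete Tchebichef (Chebyshev) polynomials up to `order`,
    # obtained by orthogonalizing the monomials 1, x, x^2, ... on {0..N-1}.
    x = np.arange(N, dtype=float)
    V = np.vander(x, order + 1, increasing=True)  # columns: 1, x, x^2, ...
    Q, _ = np.linalg.qr(V)                        # columns are orthonormal
    return Q                                      # shape (N, order + 1)

def moment_difference_features(img, sigma_reblur=1.0, order=3):
    # Feature vector: difference between the low-order 2-D moments of the
    # image and of its Gaussian-reblurred copy (parameters are illustrative).
    reblurred = gaussian_filter(img, sigma=sigma_reblur)
    Ty = tchebichef_basis(img.shape[0], order)
    Tx = tchebichef_basis(img.shape[1], order)
    m_orig = Ty.T @ img @ Tx          # (order+1) x (order+1) moment matrix
    m_blur = Ty.T @ reblurred @ Tx
    return (m_orig - m_blur).ravel()
```

A constant image is unchanged by reblurring, so its feature vector is (numerically) zero, while textured images produce non-trivial features; in the paper these features would then be fed to the extreme learning machine.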

2.
Biomed Opt Express ; 6(11): 4610-8, 2015 Nov 01.
Article in English | MEDLINE | ID: mdl-26601022

ABSTRACT

In this paper, facial images from video sequences are used to obtain heart rate readings. A video camera captures the facial images of eight subjects whose heart rates vary dynamically between 81 and 153 BPM. Principal component analysis (PCA) is used to recover the blood volume pulse (BVP), from which the heart rate is estimated. An important consideration for accurate dynamic heart rate estimation is determining the shortest video duration that achieves it. This duration is chosen as the point at which the six principal components (PCs) are least correlated among themselves; the first PC is then used to obtain the heart rate. The results of the proposed method are compared with readings from a Polar heart rate monitor. Experimental results show that the proposed method estimates dynamic heart rates with lower computational requirements than the existing method. The mean absolute error and the standard deviation of the absolute errors between the experimental and actual readings are 2.18 BPM and 1.71 BPM, respectively.
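The core PCA-and-spectral-peak step can be sketched as below. This is a simplified illustration of the general approach, assuming mean colour traces have already been extracted from the face region; the band limits, channel weights, and function name are assumptions, and the window-selection criterion from the abstract is omitted.

```python
import numpy as np

def estimate_heart_rate(traces, fps):
    # traces: (n_samples, n_channels) mean colour signals from the face ROI.
    # PCA via SVD recovers a candidate blood-volume-pulse component; the
    # dominant spectral peak of that component gives the heart rate.
    X = traces - traces.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    bvp = U[:, 0] * S[0]                       # first principal component
    freqs = np.fft.rfftfreq(len(bvp), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(bvp))
    band = (freqs >= 0.75) & (freqs <= 3.0)    # 45-180 BPM plausibility band
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak                         # beats per minute
```

Restricting the peak search to a physiologically plausible band keeps low-frequency illumination drift and high-frequency noise from being mistaken for the pulse.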

3.
Biomed Opt Express ; 6(7): 2466-80, 2015 Jul 01.
Article in English | MEDLINE | ID: mdl-26203374

ABSTRACT

This paper shows how dynamic heart rate measurements, typically obtained from sensors mounted near the heart, can also be obtained from video sequences. Two experiments are carried out in which a video camera captures the facial images of seven subjects. The first experiment measures the subjects' increasing heart rates (79 to 150 beats per minute (BPM)) while cycling, whereas the second measures decreasing heart rates (153 to 88 BPM). Independent component analysis (ICA) is combined with mutual information to ensure that accuracy is not compromised when short video durations are used. During both experiments, heart rate readings from a Polar heart rate monitor are also recorded for comparison with the proposed method. Overall, the experimental results show that the proposed method can measure dynamic heart rates with a root mean square error (RMSE) of 1.88 BPM and a correlation coefficient of 0.99.
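A mutual-information score over separated components, of the kind this abstract combines with ICA, can be sketched as follows. This is an illustrative histogram-based estimator under assumed bin counts, not the paper's exact measure; independent components score near zero, dependent ones score high.

```python
import numpy as np

def pairwise_mutual_information(components, bins=16):
    # Mean mutual information (in nats) over all pairs of component signals,
    # estimated from 2-D histograms. `bins` is an illustrative choice; small
    # sample sizes bias the estimate upward.
    n = components.shape[1]
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            p_xy, _, _ = np.histogram2d(components[:, i], components[:, j],
                                        bins=bins)
            p_xy /= p_xy.sum()                      # joint distribution
            p_x = p_xy.sum(axis=1, keepdims=True)   # marginals
            p_y = p_xy.sum(axis=0, keepdims=True)
            nz = p_xy > 0
            total += np.sum(p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz]))
            pairs += 1
    return total / pairs
```

In a pipeline like the one described, a low score across the separated components suggests the ICA outputs within a candidate window are well separated, which is the kind of criterion used to justify a short video duration.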

4.
IEEE Trans Image Process ; 24(7): 2197-211, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25823037

ABSTRACT

In this paper, we propose a new method to enhance the quality of a depth video online, mediated by a so-called static structure of the captured scene. The static and dynamic regions of the input depth frame are robustly separated by a layer assignment procedure, in which the dynamic part stays in front while the static part fits and helps to update this structure through a novel online variational generative model with added spatial refinement. The dynamic content is enhanced spatially, while the static region is substituted by the updated static structure so as to favor long-range spatiotemporal enhancement. The proposed method thus enforces long-range temporal consistency in the static region while preserving necessary depth variations in the dynamic content, producing flicker-free and spatially optimized depth videos with reduced motion blur and depth distortion. Our experimental results show that the proposed method is effective in both static and dynamic indoor scenes and is compatible with depth videos captured by Kinect and time-of-flight cameras. We also demonstrate that it achieves excellent performance in comparison with existing spatiotemporal approaches. In addition, our enhanced depth videos and static structures can serve as effective cues for various applications, including depth-aided background subtraction and novel view synthesis, with satisfactory results and few visual artifacts.
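The static/dynamic split and substitution idea can be illustrated with a toy per-pixel update. This replaces the paper's variational generative model with a simple running average and a fixed depth threshold; `alpha` and `tau` are assumed illustrative values, not parameters from the paper.

```python
import numpy as np

def enhance_depth_frame(frame, static, alpha=0.1, tau=50.0):
    # Toy static/dynamic separation: pixels whose depth stays within `tau`
    # (depth units) of the running static structure are treated as static,
    # update the structure with rate `alpha`, and are substituted by it;
    # the remaining pixels are kept as dynamic content.
    is_static = np.abs(frame - static) < tau
    static = np.where(is_static, (1 - alpha) * static + alpha * frame, static)
    out = np.where(is_static, static, frame)
    return out, static
```

Substituting the slowly updated structure back into static pixels suppresses frame-to-frame flicker there, while large deviations (moving objects) pass through unchanged, mirroring the behaviour the abstract describes at a much coarser level.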


Subject(s)
Algorithms , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Photography/methods , Video Recording/methods , Online Systems , Reproducibility of Results , Sensitivity and Specificity , User-Computer Interface