Results 1 - 2 of 2
1.
IEEE Trans Image Process; 33: 3823-3838, 2024.
Article in English | MEDLINE | ID: mdl-38875090

ABSTRACT

Image deblurring for camera shake is a long-standing problem in computer vision. A promising direction is patch-wise non-uniform deblurring, in which a linear transformation model is typically established between the blur kernels of different patches so that poorly estimated kernels can be re-estimated. However, a linear model struggles to describe the nonlinear transformation relationships between blur kernels. A key observation is that the inertial measurement unit (IMU) provides motion data of the camera, which helps characterize the blur kernel. This paper presents a new IMU-assisted method for re-estimating poorly estimated blur kernels. The method uses IMU motion data to establish a nonlinear transformation relationship model between the blur kernels of different patches. An optimization problem is then solved to re-estimate poorly estimated kernels by combining this relationship model with neighboring well-estimated kernels. Experimental results demonstrate that this blur kernel re-estimation method outperforms existing methods.
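
The core idea, transferring a well-estimated kernel to a neighboring patch via an IMU-derived transformation, can be sketched in a few lines. The following is a minimal, hypothetical Python illustration, not the paper's implementation: it assumes a pinhole camera with intrinsics K, a gyro-derived relative rotation R, and treats the patch as if it were at the image center; the names rotational_homography and warp_kernel are made up for this sketch, and the warp-then-normalize step stands in for the paper's relationship model and optimization.

# Sketch: warp a well-estimated neighbor kernel toward a poorly estimated
# patch using an IMU-derived rotational homography, then renormalize.
# Assumptions (not from the paper): pinhole intrinsics K with principal
# point at the origin, kernel-centered coordinates (i.e., the patch is
# treated as lying at the image center), SciPy available.
import numpy as np
from scipy.ndimage import map_coordinates

def rotational_homography(K, R):
    # Homography induced by a pure camera rotation: H = K R K^-1.
    return K @ R @ np.linalg.inv(K)

def warp_kernel(kernel, H):
    # Resample the kernel under H using inverse mapping, acting about
    # the kernel center, then renormalize to unit mass.
    h, w = kernel.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    pts = np.stack([xs - w / 2, ys - h / 2, np.ones_like(xs)])
    src = np.linalg.inv(H) @ pts.reshape(3, -1)
    src = src[:2] / src[2]
    coords = np.stack([src[1] + h / 2, src[0] + w / 2])  # (row, col)
    warped = map_coordinates(kernel, coords, order=1,
                             mode='constant').reshape(h, w)
    s = warped.sum()
    return warped / s if s > 0 else warped

# Toy usage: a horizontal streak kernel, warped by a 2-degree in-plane
# rotation reported by the IMU between the two patches.
K = np.array([[800., 0., 0.], [0., 800., 0.], [0., 0., 1.]])
t = np.deg2rad(2.0)
R = np.array([[np.cos(t), -np.sin(t), 0.],
              [np.sin(t),  np.cos(t), 0.],
              [0., 0., 1.]])
good = np.zeros((15, 15))
good[7, 3:12] = 1.0
good /= good.sum()
reestimated = warp_kernel(good, rotational_homography(K, R))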

2.
IEEE Trans Image Process; 31: 4336-4351, 2022.
Article in English | MEDLINE | ID: mdl-35727783

ABSTRACT

Distinguishing dynamic foreground objects from a mostly static background is a fundamental problem in many computer vision and computer graphics tasks. This paper presents a novel online video background identification method assisted by an inertial measurement unit (IMU). Based on the fact that the background motion of a video essentially reflects the 3D camera motion, we leverage IMU data to achieve robust camera motion estimation and identify background feature points by investigating only a few historical frames. We observe that the displacement of the 2D projection of a scene point caused by camera rotation is depth-invariant, and that rotation estimated from IMU data can be quite accurate. We therefore analyze 2D feature points by decomposing their 2D motion into two components: a rotation projection and a translation projection. After establishing the 3D camera rotations, our method generates the depth-dependent 2D feature point movement induced by the camera's 3D translation. By examining the disparity between the inter-frame offset of each feature point and the projection of the estimated 3D camera motion, we identify the background feature points. In experiments, our online method runs at 30 FPS with only one frame of latency and outperforms state-of-the-art background identification and other relevant methods. Our method directly yields better camera motion estimation, benefiting applications such as online video stabilization, SLAM, and image stitching.
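
The rotation/translation decomposition lends itself to a compact sketch. Below is a minimal, hypothetical Python illustration, assuming a calibrated pinhole camera (intrinsics K), a gyro-derived inter-frame rotation R, and precomputed feature matches; the median-based residual test and the pixel threshold tol are assumptions standing in for the paper's per-point disparity analysis, not its actual method. The key fact it exploits is the one stated in the abstract: the rotational component of a point's displacement is depth-invariant, so it can be predicted exactly from the IMU and subtracted out.

# Sketch: rotation-compensated background identification.
# Assumptions (not from the paper): pinhole intrinsics K, gyro-derived
# inter-frame rotation R, matches prev_pts -> curr_pts; the robust-center
# test below is a crude surrogate for the paper's disparity analysis.
import numpy as np

def rotation_flow(prev_pts, K, R):
    # Depth-invariant displacement predicted by pure camera rotation:
    # x' ~ K R K^-1 x in homogeneous coordinates, independent of depth.
    H = K @ R @ np.linalg.inv(K)
    x = np.hstack([prev_pts, np.ones((len(prev_pts), 1))]).T  # 3 x N
    xp = H @ x
    return (xp[:2] / xp[2]).T - prev_pts                      # N x 2

def classify_background(prev_pts, curr_pts, K, R, tol=1.5):
    # Residual motion after removing the rotation projection; background
    # residuals cluster around the shared translation projection, while
    # independently moving objects fall far from that cluster.
    residual = (curr_pts - prev_pts) - rotation_flow(prev_pts, K, R)
    center = np.median(residual, axis=0)
    dist = np.linalg.norm(residual - center, axis=1)
    return dist < tol                                         # True = background

# Toy usage with synthetic data: 50 tracked points, 5 of them foreground.
K = np.array([[700., 0., 320.], [0., 700., 240.], [0., 0., 1.]])
t = np.deg2rad(1.0)
R = np.array([[1., 0., 0.],
              [0., np.cos(t), -np.sin(t)],
              [0., np.sin(t),  np.cos(t)]])  # small pitch from the gyro
rng = np.random.default_rng(0)
prev_pts = rng.uniform([0, 0], [640, 480], size=(50, 2))
curr_pts = prev_pts + rotation_flow(prev_pts, K, R) + rng.normal(0, 0.3, (50, 2))
curr_pts[:5] += 12.0  # five foreground points with independent motion
mask = classify_background(prev_pts, curr_pts, K, R)
print(mask.sum(), "of", len(mask), "points labelled background")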
