1.
Opt Express ; 32(11): 18931-18943, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38859039

ABSTRACT

Wavefront aberration describes the deviation of a wavefront in an imaging system from a desired ideal shape, such as a plane or a sphere; it may be caused by a variety of factors, including imperfections in optical equipment, atmospheric turbulence, and the physical properties of the imaging subjects and medium. Measuring the wavefront aberration of an imaging system is a crucial part of modern optics and optical engineering, with applications in adaptive optics, optical testing, microscopy, laser system design, and ophthalmology. While dedicated wavefront sensors aim to measure the phase of light directly, they often have drawbacks such as higher cost and lower spatial resolution than regular intensity measurements. In this paper, we introduce a lightweight and practical learning-based method, named LWNet, to recover the wavefront aberration of an imaging system from a single intensity measurement. Specifically, LWNet takes a measured point spread function (PSF) as input and recovers the wavefront aberration with a two-stage network. The first stage estimates an initial wavefront aberration via supervised learning; the second stage further refines it via self-supervised learning, enforcing the statistical priors and physical constraints of wavefront aberrations through Zernike decomposition. For supervised training, we created a synthetic PSF-wavefront aberration dataset via ray tracing of 88 lenses. Experimental results show that, even though it is trained only on simulated data, LWNet works well for wavefront aberration estimation on real imaging systems and consistently outperforms prior learning-based methods.
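The abstract gives no implementation details, so the sketch below is a hypothetical illustration of the kind of Zernike-decomposition prior the second stage enforces: it samples a few low-order Zernike modes on the unit pupil and least-squares fits their coefficients to a wavefront. Mode definitions follow the standard OSA/ANSI normalization; the function names and shapes are illustrative assumptions, not the authors' code.

import numpy as np

# Hypothetical sketch: project a wavefront onto low-order Zernike modes,
# the kind of physics prior enforced in LWNet's second stage.

def zernike_basis(x, y):
    """Return a stack of low-order Zernike modes sampled on a grid."""
    r2 = x**2 + y**2
    return np.stack([
        np.ones_like(x),                    # piston
        2.0 * x,                            # tilt (x)
        2.0 * y,                            # tilt (y)
        np.sqrt(3.0) * (2.0 * r2 - 1.0),    # defocus
        np.sqrt(6.0) * (x**2 - y**2),       # vertical astigmatism
        np.sqrt(6.0) * 2.0 * x * y,         # oblique astigmatism
    ])

def fit_zernike(wavefront, x, y):
    """Least-squares Zernike coefficients for a wavefront on the pupil."""
    mask = x**2 + y**2 <= 1.0               # restrict to the unit pupil
    basis = zernike_basis(x, y)
    A = basis[:, mask].T                    # (n_pixels, n_modes)
    coeffs, *_ = np.linalg.lstsq(A, wavefront[mask], rcond=None)
    return coeffs

# Usage: fit a synthetic defocus-dominated wavefront.
u = np.linspace(-1.0, 1.0, 128)
x, y = np.meshgrid(u, u)
w = 0.7 * np.sqrt(3.0) * (2.0 * (x**2 + y**2) - 1.0)  # pure defocus
print(fit_zernike(w, x, y))  # ~[0, 0, 0, 0.7, 0, 0]

Projecting an estimated wavefront onto a truncated basis of this kind keeps it within the span of physically meaningful aberration modes.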

2.
IEEE Trans Pattern Anal Mach Intell ; 41(9): 2236-2250, 2019 09.
Article in English | MEDLINE | ID: mdl-30004870

ABSTRACT

We study the problem of synthesizing a number of likely future frames from a single input image. In contrast to traditional methods that tackle this problem deterministically or non-parametrically, we propose to model future frames probabilistically. Our probabilistic model makes it possible to sample and synthesize many possible future frames from a single input image. To synthesize realistic movement of objects, we propose a novel network structure, the Cross Convolutional Network, which encodes image and motion information as feature maps and convolutional kernels, respectively. In experiments, our model performs well on synthetic data, such as 2D shapes and animated game sprites, as well as on real-world video frames. We analyze the learned network representations and show that the network implicitly learns a compact encoding of object appearance and motion. We also demonstrate a few of its applications, including visual analogy-making and video extrapolation.
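As a hypothetical illustration of the cross convolution idea (the shapes and names below are assumptions, not taken from the paper), the sketch applies per-sample kernels predicted by a motion branch to feature maps produced by an image branch, via a grouped convolution:

import torch
import torch.nn.functional as F

# Hypothetical sketch of "cross convolution": one branch encodes the
# image as feature maps, another encodes motion as per-sample
# convolution kernels, and the kernels are convolved with the maps.

def cross_convolve(feature_maps, kernels):
    """Apply per-sample, per-channel kernels to per-sample feature maps.

    feature_maps: (B, C, H, W) image-branch activations
    kernels:      (B, C, k, k) motion-branch predicted kernels
    """
    b, c, h, w = feature_maps.shape
    k = kernels.shape[-1]
    # Fold batch into channels so a grouped conv applies each sample's
    # kernels only to that sample's feature maps.
    x = feature_maps.reshape(1, b * c, h, w)
    weight = kernels.reshape(b * c, 1, k, k)
    out = F.conv2d(x, weight, padding=k // 2, groups=b * c)
    return out.reshape(b, c, h, w)

# Usage with random tensors standing in for the two encoder branches.
feats = torch.randn(4, 32, 64, 64)     # image branch output
kerns = torch.randn(4, 32, 5, 5)       # motion branch output
print(cross_convolve(feats, kerns).shape)  # torch.Size([4, 32, 64, 64])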

3.
IEEE Trans Pattern Anal Mach Intell ; 40(8): 1799-1813, 2018 08.
Article in English | MEDLINE | ID: mdl-28796608

ABSTRACT

We propose a novel method for template matching in unconstrained environments. Its essence is the Best-Buddies Similarity (BBS), a useful, robust, and parameter-free similarity measure between two sets of points. BBS is based on counting the number of Best-Buddies Pairs (BBPs): pairs of points, one in the source set and one in the target set, that are mutual nearest neighbours, i.e., each point is the nearest neighbour of the other. BBS has several key features that make it robust against complex geometric deformations and high levels of outliers, such as those arising from background clutter and occlusions. We study these properties, provide a statistical analysis that justifies them, and demonstrate the consistent success of BBS on a challenging real-world dataset using different types of features.
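The BBP-counting rule is simple enough to sketch directly. The minimal example below uses plain Euclidean distance on raw points and normalizes by the smaller set size, one common convention; the paper's patch features and exact normalization are abstracted away.

import numpy as np

# Hypothetical sketch of Best-Buddies Similarity: count pairs of points
# that are mutual nearest neighbours between two sets.

def best_buddies_similarity(P, Q):
    """P: (n, d) source points, Q: (m, d) target points."""
    # Pairwise squared Euclidean distances, shape (n, m).
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)
    nn_pq = d2.argmin(axis=1)   # nearest Q-point for each P-point
    nn_qp = d2.argmin(axis=0)   # nearest P-point for each Q-point
    # (i, j) is a best-buddies pair iff j = nn_pq[i] and i = nn_qp[j].
    bbp = (nn_qp[nn_pq] == np.arange(len(P))).sum()
    return bbp / min(len(P), len(Q))

# Usage: identical sets give the maximal score of 1.0.
pts = np.random.rand(50, 3)
print(best_buddies_similarity(pts, pts))  # 1.0

Because a point can be a best buddy of at most one other point, outliers with no mutual match simply contribute nothing to the count, which is the source of the measure's robustness.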

4.
IEEE Trans Image Process ; 21(9): 4180-9, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22645268

ABSTRACT

3-D technologies are considered the next generation of multimedia applications. Currently, one of the challenges faced by 3-D applications is the shortage of 3-D resources. To address this problem, many 3-D modeling methods have been proposed to recover 3-D geometry directly from 2-D images. However, existing single-view modeling methods either require intensive user interaction or are restricted to a specific kind of object. In this paper, we propose a novel 3-D modeling approach that recovers the 3-D geometry of a symmetric object from a single image with minimal user interaction. Symmetry is one of the most common properties of natural and man-made objects. Given a single view of a symmetric object, the user marks some symmetry lines and depth-discontinuity regions on the image. Our algorithm first finds a set of planes that approximately fit the object, and then generates a rough 3-D point cloud via an optimization procedure. The occluded part of the object is further recovered using the symmetry information. Experimental results on various indoor and outdoor objects show that the proposed system can obtain 3-D models from single images with only a little user interaction.
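The symmetry-completion step can be illustrated in isolation. The sketch below is a hypothetical fragment of such a pipeline (the plane parameters, names, and shapes are assumptions, not the paper's code): it mirrors visible 3-D points across a symmetry plane to recover the occluded half.

import numpy as np

# Hypothetical sketch of symmetry completion: reflect visible 3-D points
# across a symmetry plane given as (normal n, offset d) with n.x = d.

def reflect_across_plane(points, n, d):
    """Mirror (N, 3) points across the plane n.x = d (n need not be unit)."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    dist = points @ n - d                  # signed distance to the plane
    return points - 2.0 * dist[:, None] * n

# Usage: complete a half point cloud that is symmetric about x = 0.
visible = np.random.rand(100, 3) * [1.0, 2.0, 2.0]   # all x >= 0
mirrored = reflect_across_plane(visible, n=[1.0, 0.0, 0.0], d=0.0)
full = np.vstack([visible, mirrored])
print(full.shape)  # (200, 3)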
