Results 1 - 6 of 6

1.
Opt Lett; 48(5): 1304-1307, 2023 Mar 01.
Article in English | MEDLINE | ID: mdl-36857274

ABSTRACT

Light transport contains all light information between a light source and an image sensor. As an important application of light transport, dual photography has been a popular research topic, but it is challenged by long acquisition times, a low signal-to-noise ratio, and the storage and processing of a large number of measurements. In this Letter, we propose a novel hardware setup that combines a flying-spot micro-electro-mechanical system (MEMS) modulated projector with an event camera to implement dual photography for 3D scanning in both line-of-sight (LoS) and non-line-of-sight (NLoS) scenes with a transparent object. In particular, we achieve depth extraction from the LoS scenes and 3D reconstruction of the object in an NLoS scene using event light transport.
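
A minimal sketch of the dual-photography relationship the Letter builds on: by Helmholtz reciprocity, the light-transport matrix T that maps projector pixels to camera pixels also yields, via its transpose, the scene as seen from the projector. The resolutions and the random sparse stand-in for T below are hypothetical placeholders; in practice T is measured, e.g., by scanning the flying-spot projector.

    import numpy as np

    rng = np.random.default_rng(0)
    n_proj, n_cam = 64 * 64, 48 * 48   # hypothetical resolutions

    # Light-transport matrix T: the camera records c = T @ p for a
    # projector pattern p. A random sparse stand-in is used here.
    T = rng.random((n_cam, n_proj)) * (rng.random((n_cam, n_proj)) < 0.01)

    p = rng.random(n_proj)   # projector illumination pattern
    c = T @ p                # primal image recorded by the camera

    # Dual photography: the transpose of T renders the virtual view from
    # the projector's position under a chosen camera-side illumination.
    illum_at_camera = rng.random(n_cam)
    dual_image = T.T @ illum_at_camera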

2.
Opt Express; 24(4): 3479-87, 2016 Feb 22.
Article in English | MEDLINE | ID: mdl-26907006

ABSTRACT

Microelectromechanical system (MEMS) mirrors have extended vision capabilities onto small, low-power platforms. However, the field-of-view (FOV) of these MEMS mirrors is usually less than 90°, and any increase in the scanning angle comes with design and fabrication trade-offs in power, size, speed, and stability. Techniques are therefore needed that increase the scanning range while maintaining a small form factor. In this paper, we exploit our recent breakthrough that has enabled the immersion of MEMS mirrors in liquid. While allowing the MEMS mirror to move, the liquid additionally provides a "Snell's window" effect and enables an enlarged FOV (≈ 150°). We present an optimized MEMS mirror design and use it to demonstrate applications in extreme wide-angle structured light.
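
As a worked example of the Snell's-window enlargement: refraction at the liquid-air interface bends exiting rays away from the normal, so a modest scan half-angle inside the liquid maps to a much larger half-angle in air. The refractive index and internal scan angle below are assumed values chosen to illustrate the reported ≈ 150° FOV, not numbers taken from the paper.

    import math

    n_liquid = 1.4     # assumed refractive index of the immersion liquid
    theta_in = 43.6    # assumed scan half-angle inside the liquid, degrees

    # Snell's law at the liquid-air interface:
    #   n_liquid * sin(theta_in) = 1.0 * sin(theta_out)
    s = n_liquid * math.sin(math.radians(theta_in))
    theta_out = math.degrees(math.asin(min(s, 1.0)))
    print(f"half-angle in air: {theta_out:.1f} deg; "
          f"full FOV: {2 * theta_out:.1f} deg")
    # With these assumed values the full FOV comes out near 150 degrees.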

3.
IEEE Trans Image Process; 24(7): 2083-97, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25794389

ABSTRACT

Scene appearance from the point of view of a light source is called a reciprocal or dual view. Since there exists a large diversity in illumination, these virtual views may be nonperspective and multiviewpoint in nature. In this paper, we demonstrate the use of occluding masks to recover these dual views, which we term shadow cameras. We first show how to render a single reciprocal scene view by swapping the camera and light source positions. We then extend this technique to multiple views, both by building a virtual shadow camera array and by exploiting area sources. We also capture nonperspective views such as orthographic, cross-slit, and a pushbroom variant, while introducing novel applications such as converting between camera projections and removing refractive and catadioptric distortions. Finally, since a shadow camera is artificial, we can manipulate any of its intrinsic parameters, such as camera skew, to create perspective distortions. We demonstrate a variety of indoor and outdoor results and show a rendering application for capturing the light field of a light source.
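
One way to read the occluding-mask construction is as difference imaging: each occluder position blocks the light that would pass through one "pixel" of the dual view, so subtracting the occluded image from an unoccluded reference isolates that pixel's contribution. The function below is an illustrative sketch under that reading, not the paper's exact pipeline; the shapes and names are assumptions.

    import numpy as np

    def dual_view_from_shadows(I_ref, shadow_stack):
        # I_ref        : (H, W) image of the scene with no occluder present.
        # shadow_stack : (N, H, W) images, one per occluder position;
        #                position i blocks the light that would pass
        #                through dual-view pixel i.
        # Returns a length-N vector of dual-view pixel brightnesses.
        diff = I_ref[None, :, :] - shadow_stack  # light removed per occluder
        return diff.reshape(diff.shape[0], -1).sum(axis=1)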

4.
IEEE Trans Image Process; 24(3): 823-35, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25532175

ABSTRACT

One popular technique for multimodal imaging is generalized assorted pixels (GAP), where an assorted pixel array on the image sensor allows for multimodal capture. Unfortunately, GAP is limited in its applicability because it requires multimodal filters that are compatible with semiconductor fabrication processes, and because it results in a fixed multimodal imaging configuration. In this paper, we advocate for generalized assorted camera (GAC) arrays for multimodal imaging, i.e., a camera array with filters of different characteristics placed in front of each camera aperture. GAC provides three distinct advantages over GAP: ease of implementation; flexible, application-dependent imaging, since the filters are external and can be changed; and depth information that enables novel applications (e.g., postcapture refocusing). The primary challenge in GAC arrays is that, since the different modalities are obtained from different viewpoints, accurate and efficient cross-channel registration is needed. Traditional approaches such as sum-of-squared differences, sum-of-absolute differences, and mutual information all result in multimodal registration errors. Here, we propose a robust cross-channel matching cost function, based on aligning normalized gradients, which allows us to compute cross-channel subpixel correspondences for scenes exhibiting nontrivial geometry. We highlight the promise of GAC arrays with our cross-channel normalized gradient cost for several applications, such as low-light imaging, postcapture refocusing, skin perfusion imaging using color + near infrared, and hyperspectral imaging.
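
A sketch of a cross-channel cost in the spirit of the normalized-gradient idea the abstract describes: gradients are normalized to unit magnitude so that modality-dependent contrast differences are suppressed, and patches are compared on the normalized fields. The finite-difference operator, epsilon, and squared-difference aggregation are illustrative choices, not the paper's exact formulation.

    import numpy as np

    def normalized_gradients(img, eps=1e-6):
        # Finite-difference gradients, normalized to unit magnitude so that
        # only edge position/orientation is compared across modalities.
        gy, gx = np.gradient(img.astype(float))
        mag = np.sqrt(gx**2 + gy**2) + eps
        return gx / mag, gy / mag

    def cross_channel_cost(patch_a, patch_b):
        # Matching cost between patches from two different modalities:
        # squared differences of their normalized gradient fields.
        ax, ay = normalized_gradients(patch_a)
        bx, by = normalized_gradients(patch_b)
        return np.sum((ax - bx)**2 + (ay - by)**2)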

5.
IEEE Trans Pattern Anal Mach Intell; 35(12): 2982-96, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24136435

ABSTRACT

Achieving computer vision on microscale devices is a challenge. On these platforms, the power and mass constraints are severe enough that even the most common computations (matrix manipulations, convolution, etc.) are difficult. This paper proposes and analyzes a class of miniature vision sensors that can help overcome these constraints. These sensors reduce power requirements through template-based optical convolution, and they enable a wide field-of-view within a small form factor through a refractive optical design. We describe the trade-offs between the field-of-view, volume, and mass of these sensors, and we provide analytic tools to navigate the design space. We demonstrate milliscale prototypes for computer vision tasks such as locating edges, tracking targets, and detecting faces. Finally, we utilize photolithographic fabrication tools to further miniaturize the optical designs and demonstrate fiducial detection onboard a small autonomous air vehicle.
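
A digital reference for the template-based optical convolution these sensors perform: each sensor reading is proportional to the scene correlated with a fixed optical template, so the power-hungry convolution happens in the optics rather than in a processor. The 3x3 edge template below is an illustrative choice, not a design from the paper.

    import numpy as np
    from scipy.signal import correlate2d

    # Illustrative edge-detection template (Sobel-like, assumed).
    template = np.array([[-1.0, 0.0, 1.0],
                         [-2.0, 0.0, 2.0],
                         [-1.0, 0.0, 1.0]])

    def optical_convolution_reference(scene):
        # Each output sample corresponds to one sensor reading as the
        # template is swept across the field of view.
        return correlate2d(scene, template, mode="valid")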

6.
IEEE Trans Pattern Anal Mach Intell; 31(8): 1375-85, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19542573

ABSTRACT

A new technique, called "appearance clustering," is proposed for scene analysis. The key result of this approach is that scene points can be clustered according to their surface normals, even when the geometry, material, and lighting are all unknown. This is achieved by analyzing an image sequence of a scene as it is illuminated by a smoothly moving distant light source. In such a scenario, the brightness measurements at each pixel form a "continuous appearance profile." When the source path follows an unstructured trajectory (obtained, say, by smoothly hand-waving a light source), the locations of the extrema of the appearance profile provide a strong cue for the scene point's surface normal. Based on this observation, a simple transformation of the appearance profiles and a distance metric are introduced that, together, can be used with any unsupervised clustering algorithm to obtain isonormal clusters of a scene. We support our algorithm empirically with comprehensive simulations of the Torrance-Sparrow and Oren-Nayar analytic BRDFs, as well as with experiments on 25 materials from the MERL database of measured BRDFs. The method is also demonstrated on 45 examples from the CUReT database, obtaining clusters on scenes with real textures, such as artificial grass and ceramic tile, as well as anisotropic materials, such as satin and velvet. We also show results of applying our algorithm to indoor and outdoor scenes containing a variety of complex geometry and materials. As an example application, isonormal clusters are used for lighting-consistent texture transfer. Our algorithm is simple and does not require any complex lighting setup for data collection.
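
A hedged sketch of the clustering pipeline the abstract outlines: each pixel's appearance profile is transformed and then grouped with an off-the-shelf unsupervised clusterer. The zero-mean, unit-norm transformation and k-means below are illustrative stand-ins; the paper introduces its own profile transformation and distance metric.

    import numpy as np
    from sklearn.cluster import KMeans

    def isonormal_clusters(frames, k):
        # frames : (T, H, W) pixel brightness over the light source's path.
        # k      : number of clusters (distinct surface normals) to recover.
        T, H, W = frames.shape
        profiles = frames.reshape(T, -1).T    # one profile per pixel
        # Zero-mean, unit-norm profiles: Euclidean distance then behaves
        # like a correlation distance between appearance profiles.
        profiles = profiles - profiles.mean(axis=1, keepdims=True)
        profiles /= np.linalg.norm(profiles, axis=1, keepdims=True) + 1e-9
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(profiles)
        return labels.reshape(H, W)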
