1.
Sci Rep ; 12(1): 3804, 2022 03 09.
Article in English | MEDLINE | ID: mdl-35264622

ABSTRACT

Fully autonomous drones have been demonstrated to find lost or injured persons under strongly occluding forest canopy. Airborne optical sectioning (AOS), a novel synthetic aperture imaging technique, together with deep-learning-based classification enables high detection rates under realistic search-and-rescue conditions. We demonstrate that false detections can be significantly suppressed and true detections boosted by combining classifications from multiple AOS integral images rather than from single integral images. This improves classification rates especially in the presence of occlusion. To make this possible, we modified the AOS imaging process to support large overlaps between subsequent integrals, enabling real-time, on-board scanning and processing at ground speeds of up to 10 m/s.
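The gain from combining classifications can be illustrated with a minimal score-fusion sketch in Python/NumPy (a hypothetical illustration, not the authors' pipeline; the fuse_detections helper and the plain averaging rule are assumptions):

    import numpy as np

    def fuse_detections(scores, threshold=0.5):
        # Average the confidence scores that a classifier assigned to the same
        # ground location in several overlapping integral images, then threshold
        # once. Spurious single-image responses are averaged away, while
        # consistent true detections accumulate evidence.
        fused = float(np.mean(scores))
        return fused, fused >= threshold

    print(fuse_detections([0.9, 0.1, 0.2, 0.15]))   # one outlier -> rejected
    print(fuse_detections([0.8, 0.75, 0.9, 0.85]))  # consistent  -> accepted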


Subjects
Forests, Histological Techniques, Humans, Optical Imaging, Optical Phenomena
2.
Opt Express ; 29(1): 342-345, 2021 Jan 04.
Article in English | MEDLINE | ID: mdl-33362118

ABSTRACT

This feature issue of Optics Express is organized in conjunction with the 2020 OSA conference on 3D Image Acquisition and Display: Technology, Perception and Applications, which was held virtually in Vancouver from 22 to 26 June 2020 as part of the Imaging and Sensing Congress 2020. The feature issue presents 29 articles based on the topics and scope of the 3D conference. This review provides a summary of these articles.

3.
Sci Rep ; 10(1): 7254, 2020 04 29.
Article in English | MEDLINE | ID: mdl-32350304

ABSTRACT

We describe how a new and low-cost aerial scanning technique, airborne optical sectioning (AOS), can support ornithologists in nesting observation. After capturing thermal and color images during a seven-minute drone flight over a 40 × 12 m patch of the nesting site of Austria's largest heron population, we were able to identify, classify, and localize a total of 65 herons and 27 nests in a sparse 3D reconstruction of the forest. AOS is a synthetic aperture imaging technique that removes occlusion caused by leaves and branches. It registers recorded images to a common 3D coordinate system to support the reconstruction and analysis of the entire forest volume, which is impossible with conventional 2D or 3D imaging techniques. The recorded data are published with open access.
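A minimal sketch of the occlusion-removing integral step in Python/NumPy, assuming the individual images have already been registered to the common 3D coordinate system mentioned above (the function name and the plain averaging are illustrative, not the published AOS implementation):

    import numpy as np

    def integral_image(registered_images):
        # Average a stack of single images that were registered onto a common
        # synthetic focal plane. Occluders such as leaves and branches land at
        # different positions in each registered image and blur away, while the
        # focused plane (here: herons and nests) stays sharp.
        return np.mean(np.stack(registered_images, axis=0).astype(float), axis=0)

    # Toy usage with placeholder images of equal size:
    frames = [np.random.rand(64, 64) for _ in range(30)]
    aos_integral = integral_image(frames)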


Subjects
Birds/physiology, Imaging, Three-Dimensional/methods, Nesting Behavior, Animals, Austria, Birds/classification, Species Specificity
4.
IEEE Comput Graph Appl ; 39(3): 8-15, 2019.
Article in English | MEDLINE | ID: mdl-31021742

ABSTRACT

Synthetic apertures sample the signal of a wide-aperture sensor with either arrays of static smaller-aperture sensors or a single moving one, whose individual signals are computationally combined to increase resolution, depth of field, frame rate, contrast, and signal-to-noise ratio. This principle has been used for radar, telescopes, microscopes, sonar, ultrasound, laser, and optical imaging. With airborne optical sectioning (AOS), we apply camera drones for synthetic aperture imaging to uncover the ruins of a 19th-century fortification system that is concealed by dense forest and shrubs. Compared to alternative airborne scanning technologies (such as LiDAR), AOS is cheaper, delivers surface color information, achieves higher sampling resolutions, and (in contrast to photogrammetry) does not suffer from inaccurate correspondence matches and long processing times.
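The shift-and-add principle behind such synthetic apertures can be sketched as follows (Python/NumPy; integer-pixel shifts via np.roll are a simplification of the sub-pixel warps a real pipeline would use, and the offsets are assumed to be known from the drone poses):

    import numpy as np

    def shift_and_add(images, offsets_px):
        # Refocus the synthetic aperture onto a chosen plane: shift every camera
        # image by its (dy, dx) parallax offset for that plane and average.
        # Wider sampling baselines give a shallower synthetic depth of field and
        # stronger suppression of out-of-plane occluders (forest and shrubs).
        acc = np.zeros_like(images[0], dtype=float)
        for img, (dy, dx) in zip(images, offsets_px):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        return acc / len(images)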

5.
Opt Express ; 26(22): 29253-29261, 2018 Oct 29.
Article in English | MEDLINE | ID: mdl-30470091

ABSTRACT

We present a 300 µm thick optical Söller collimator realized by X-ray lithography on a PMMA wafer which, when paired with luminescent concentrator films, forms the first complete prototype of a short-distance, flexible, scalable imaging system that is less than 1 mm thick. We describe two ways of increasing the light-gathering ability of the collimator by using hexagonal aperture cells and embedded micro-lenses, evaluate a new micro-lens aperture array (MLAA) for proof of concept, and analyze the optical imaging properties of flexible MLAAs when realized as thin films.

6.
Sci Rep ; 7(1): 13981, 2017 10 25.
Article in English | MEDLINE | ID: mdl-29070847

ABSTRACT

We explain how volumetric light-field excitation can be converted into a process that entirely avoids 3D reconstruction, deconvolution, and calibration of optical elements, while better accounting for scattering in the probe. For spatially static probes, this is achieved by an efficient (one-time) light-transport sampling and light-field factorization. Individual probe particles (and arbitrary combinations thereof) can subsequently be excited in a dynamically controlled way, while volumetric reconstruction of the entire probe remains possible in real time from a single light-field recording.
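A much-simplified sketch of targeted excitation from a sampled light transport (Python with SciPy assumed; the diagonal-dominant toy matrix stands in for the one-time light-transport sampling, and the non-negative least-squares solve is one possible stand-in, not the authors' factorization):

    import numpy as np
    from scipy.optimize import nnls  # non-negative least squares

    def excitation_pattern(T, target_index):
        # With response = T @ illumination, solve for a non-negative illumination
        # pattern that concentrates excitation on one selected probe particle.
        desired = np.zeros(T.shape[0])
        desired[target_index] = 1.0
        pattern, _residual = nnls(T, desired)
        return pattern

    rng = np.random.default_rng(0)
    T = 0.1 * rng.random((50, 40)) + np.eye(50, 40)  # toy transport matrix
    p = excitation_pattern(T, target_index=7)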

7.
Opt Express ; 25(16): 18526-18536, 2017 Aug 07.
Article in English | MEDLINE | ID: mdl-29041052

ABSTRACT

This article reports our investigation of the potential of optical Söller collimators in combination with luminescent concentrators for lens-less, short-distance, and shape-independent thin-film imaging. We discuss optical imaging capabilities and limitations, and present first prototypes and results. Modern 3D laser lithography and deep X-ray lithography support the manufacturing of extremely fine collimator structures that pave the way for flexible and scalable thin-film cameras that are far thinner than 1 mm (including optical imaging and color sensor layers).

8.
Opt Express ; 25(16): 19084, 2017 Aug 07.
Article in English | MEDLINE | ID: mdl-29041101

ABSTRACT

This publisher's note amends the author list of [Opt. Express 25, 18526 (2017)].

9.
Opt Express ; 25(3): 2694-2702, 2017 Feb 06.
Article in English | MEDLINE | ID: mdl-29519111

ABSTRACT

We wrap a thin-film luminescent concentrator (LC) - a flexible and transparent plastic foil doped with fluorescent dye particles - around an object to obtain images of the object under varying synthetic lighting conditions and without lenses. These images can then be used for computational relighting and depth reconstruction. An LC is an efficient two-dimensional light guide that allows photons to be collected over a wide solid angle and through multiple overlapping integration areas simultaneously. We show that conventional photodetectors achieve a higher signal-to-noise ratio when equipped with an LC than in direct measurements. Efficient light guidance in combination with computational imaging approaches, such as the one presented in this article, can lead to novel optical sensors that collect light in a structured way over a wide solid angle, rather than unstructured through narrow apertures. This enables flexible, scalable, transparent, and lens-less thin-film image and depth sensors.
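Because light transport is linear, the computational relighting mentioned above reduces to weighted sums of the captured images; a minimal sketch (Python/NumPy, with hypothetical basis images):

    import numpy as np

    def relight(basis_images, weights):
        # An image under any new lighting that is a weighted mix of the captured
        # basis lightings is the same weighted mix of the captured images.
        stack = np.stack(basis_images, axis=0).astype(float)
        w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
        return (w * stack).sum(axis=0)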

10.
Sci Rep ; 6: 29193, 2016 07 01.
Article in English | MEDLINE | ID: mdl-27363565

ABSTRACT

We explain how to concentrate light simultaneously at multiple selected volumetric positions by means of a 4D illumination light field. First, to select target objects, a 4D imaging light field is captured. A light field mask is then computed automatically for this selection to avoid illumination of the remaining areas. With one-photon illumination, simultaneous generation of complex volumetric light patterns becomes possible. As a full light-field can be captured and projected simultaneously at the desired exposure and excitation times, short readout and lighting durations are supported.
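A minimal sketch of deriving the illumination mask from the captured imaging light field (Python/NumPy; the per-ray target segmentation passed in as target_labels is an assumption of this sketch, not a prescribed selection method):

    import numpy as np

    def masked_illumination(imaging_light_field, target_labels, selected_ids):
        # Keep only rays whose label belongs to a selected target object, so that
        # projecting the masked 4D light field concentrates light on the targets
        # while the remaining volume stays unilluminated.
        mask = np.isin(target_labels, list(selected_ids)).astype(float)
        return imaging_light_field * mask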

11.
Opt Express ; 23(7): 9397-406, 2015 Apr 06.
Article in English | MEDLINE | ID: mdl-25968770

ABSTRACT

We present a thin-film sensor that optically measures the Radon transform of an image focused onto it. Measuring and classifying directly in Radon space, rather than in image space, is fast and yields robust and high classification rates. We explain how the number of integral measurements required for a given classification task can be reduced by several orders of magnitude. Our experiments achieve classification rates of 98%-99% for complex hand gesture and motion detection tasks with as few as 10 photosensors. Our findings have the potential to stimulate further research towards a new generation of application-oriented classification sensors for use in areas such as biometry, security, diagnostics, surface inspection, and human-computer interfaces.
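To illustrate classification from only a handful of integral measurements, here is a hedged feature-extraction sketch (Python with SciPy assumed; the angles and coarse binning are illustrative and do not reproduce the sensor's actual optical layout):

    import numpy as np
    from scipy.ndimage import rotate

    def radon_features(image, angles_deg=(0, 45, 90, 135), n_bins=4):
        # A few Radon-space measurements: rotate the image, sum along columns
        # (a line-integral projection), and coarsely bin each projection.
        # 4 angles x 4 bins mimics roughly 16 integral photosensor readings
        # instead of a full pixel image; a classifier is then trained on these.
        feats = []
        for a in angles_deg:
            proj = rotate(image, a, reshape=False, order=1).sum(axis=0)
            feats.extend(chunk.sum() for chunk in np.array_split(proj, n_bins))
        return np.asarray(feats)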

12.
Opt Express ; 23(26): 33713-20, 2015 Dec 28.
Article in English | MEDLINE | ID: mdl-26832034

ABSTRACT

We present a fully transparent, scalable, and flexible color image sensor that consists of stacked thin-film luminescent concentrators (LCs). At each layer, it measures a Radon transform of the corresponding LC's spectral responses. Color images are then reconstructed through inverse Radon transforms that are obtained using machine learning. A high sampling rate in Radon space allows encoding multiple exposures to cope with under- and overexposed cases in one recording. Thus, our sensor simultaneously measures multiple spectral responses in different LC layers and multiple exposures in different Radon coefficients per layer. We also show that machine learning enables adequate three-channel image reconstruction from the response of only two LC layers.
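The learned inverse mapping can be sketched as a ridge regression from Radon coefficients to pixels (Python/NumPy; a generic linear stand-in for the machine-learned reconstruction, under the assumption of paired training data):

    import numpy as np

    def learn_reconstruction(measurements, images, lam=1e-2):
        # measurements: (n_samples, n_radon_coeffs), images: (n_samples, n_pixels).
        # Closed-form ridge regression gives a matrix W that maps a new vector of
        # Radon coefficients to a flattened image estimate: image ~= m @ W.
        M = np.asarray(measurements, float)
        Y = np.asarray(images, float)
        return np.linalg.solve(M.T @ M + lam * np.eye(M.shape[1]), M.T @ Y)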

13.
IEEE Comput Graph Appl ; 34(5): 98-102, 2014.
Article in English | MEDLINE | ID: mdl-25248203

ABSTRACT

The LumiConSense sensor employs a thin luminescent-concentrator film, which allows lensless multifocal imaging and depth estimation at interactive rates.

14.
Opt Express ; 22(8): 8928-42, 2014 Apr 21.
Article in English | MEDLINE | ID: mdl-24787782

ABSTRACT

We present a fully transparent and flexible light-sensing film that, based on a single thin-film luminescent concentrator layer, supports simultaneous multi-focal image reconstruction and depth estimation without additional optics. By sampling the two-dimensional light fields propagated inside the film layer under various focal conditions, an entire focal image stack can be computed from only one recording and subsequently used for depth estimation. The transparency and flexibility of our sensor unlock the potential of lensless multilayer imaging and depth sensing with arbitrary sensor shapes, enabling novel human-computer interfaces.
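Once a focal image stack is available, per-pixel depth can be estimated with a simple focus measure; a hedged sketch (Python/NumPy; gradient energy as the sharpness measure is one common choice, not necessarily the one used here):

    import numpy as np

    def depth_from_focal_stack(focal_stack, focus_depths):
        # focal_stack: (n_slices, H, W). For every pixel, pick the focal slice
        # with the highest local gradient energy and report its focus distance.
        stack = np.asarray(focal_stack, dtype=float)
        gy, gx = np.gradient(stack, axis=(1, 2))
        sharpness = gy ** 2 + gx ** 2
        best_slice = sharpness.argmax(axis=0)           # (H, W) slice indices
        return np.asarray(focus_depths)[best_slice]     # per-pixel depth map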

15.
Opt Express ; 22(24): 29531-43, 2014 Dec 01.
Article in English | MEDLINE | ID: mdl-25606886

ABSTRACT

LumiConSense, a transparent, flexible, scalable, and disposable thin-film image sensor, has the potential to lead to new human-computer interfaces that are unconstrained in shape and sensing distance. In this article we make four new contributions: (1) a new real-time image reconstruction method that significantly enhances image quality compared to previous approaches; (2) the efficient combination of image reconstruction and shift-invariant linear image processing operations; (3) various hardware and software prototypes that realize the above contributions and demonstrate the current potential of our sensor for real-time applications; and finally, (4) a further, higher-quality offline reconstruction algorithm.
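Contribution (2) exploits that both steps are linear; a minimal sketch of folding a shift-invariant filter into the reconstruction operator (Python/NumPy, with both operators written as matrices for clarity):

    import numpy as np

    def fold_filter_into_reconstruction(R, F):
        # Reconstruction x = R @ m and filtering y = F @ x (F being a convolution
        # expressed as a matrix) are both linear, so they can be pre-combined once
        # and applied to every measurement vector with a single matrix product:
        # y = (F @ R) @ m.
        return F @ R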


Subjects
Image Processing, Computer-Assisted/instrumentation, Luminescence, Algorithms, Calibration, Nonlinear Dynamics, Tomography
16.
Opt Express ; 21(4): 4796-810, 2013 Feb 25.
Article in English | MEDLINE | ID: mdl-23482014

ABSTRACT

Most image sensors are planar, opaque, and inflexible. We present a novel image sensor that is based on a luminescent concentrator (LC) film which absorbs light from a specific portion of the spectrum. The absorbed light is re-emitted at a lower frequency and transported to the edges of the LC by total internal reflection. The light transport is measured at the border of the film by line scan cameras. With these measurements, images that are focused onto the LC surface can be reconstructed. Thus, our image sensor is fully transparent, flexible, scalable and, due to its low cost, potentially disposable.
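Conceptually, reconstruction from the edge measurements amounts to inverting a linear forward model; a hedged sketch (Python/NumPy, assuming a calibrated transport matrix T that maps image pixels to line-scan samples):

    import numpy as np

    def reconstruct_from_edges(T, edge_measurements, lam=1e-3):
        # Forward model: m = T @ f, where f is the flattened image focused onto
        # the LC surface and m are the line-scan samples along the film border.
        # Tikhonov-regularized least squares keeps the inversion stable.
        A = T.T @ T + lam * np.eye(T.shape[1])
        f = np.linalg.solve(A, T.T @ np.asarray(edge_measurements, float))
        return f  # reshape to the sensor resolution as needed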


Subjects
Disposable Equipment, Image Enhancement/instrumentation, Luminescent Measurements/instrumentation, Membranes, Artificial, Transducers, Equipment Design, Equipment Failure Analysis
18.
IEEE Trans Vis Comput Graph ; 17(6): 857-70, 2011 Jun.
Article in English | MEDLINE | ID: mdl-20714022

ABSTRACT

In this paper, we show that optical inverse tone-mapping (OITM) in light microscopy can improve the visibility of specimens, both when observed directly through the oculars and when imaged with a camera. In contrast to previous microscopy techniques, we premodulate the illumination based on the local modulation properties of the specimen itself. We explain how the modulation of uniform white light by a specimen can be estimated in real time, even though the specimen is continuously but not uniformly illuminated. This information is processed and back-projected constantly, allowing the illumination to be adjusted on the fly if the specimen is moved or the focus or magnification of the microscope is changed. The contrast of the specimen's optical image can be enhanced, and high-intensity highlights can be suppressed. A formal pilot study with users indicates that this optimizes the visibility of spatial structures when observed through the oculars. We also demonstrate that the signal-to-noise (S/N) ratio in digital images of the specimen is higher if captured under an optimized rather than a uniform illumination. In contrast to advanced scanning techniques that maximize the S/N ratio using multiple measurements, our approach is fast because it requires only two images. This can improve image analysis in digital microscopy applications with real-time capturing requirements.
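In spirit, the premodulation inverts the specimen's estimated attenuation of the uniform illumination; a strongly simplified sketch (Python/NumPy; the target level and clamping are assumptions, and the real system estimates the modulation continuously and back-projects it in real time):

    import numpy as np

    def premodulated_illumination(modulation, target=0.6, eps=1e-3):
        # The observed intensity is roughly illumination * modulation, so
        # projecting target / modulation (clamped to the projector range [0, 1])
        # pushes the optical image toward a uniform level: dark structures get
        # more light, bright highlights get less.
        m = np.clip(np.asarray(modulation, float), eps, 1.0)
        return np.clip(target / m, 0.0, 1.0)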

19.
IEEE Trans Vis Comput Graph ; 14(1): 97-108, 2008.
Article in English | MEDLINE | ID: mdl-17993705

ABSTRACT

Recent radiometric compensation techniques make it possible to project images onto colored and textured surfaces. This is realized with projector-camera systems by scanning the projection surface on a per-pixel basis. Using the captured information, a compensation image is calculated that neutralizes geometric distortions and color blending caused by the underlying surface. As a result, the brightness and the contrast of the input image are reduced compared to a conventional projection onto a white canvas. If the input image is not manipulated in its intensities, the compensation image can contain values that are outside the dynamic range of the projector. These will lead to clipping errors and to visible artifacts on the surface. In this article, we present an innovative algorithm that dynamically adjusts the content of the input images before radiometric compensation is carried out. This reduces the perceived visual artifacts while simultaneously preserving a maximum of luminance and contrast. The algorithm is implemented entirely on the GPU and is the first of its kind to run in real-time.
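The per-pixel compensation itself, and the clipping problem the proposed algorithm addresses, can be sketched as follows (Python/NumPy; a textbook-style simplification with a scalar ambient term, not the article's GPU implementation):

    import numpy as np

    def compensation_image(desired, surface_reflectance, ambient=0.0):
        # The camera observes roughly C * S + E for projected value C, surface
        # reflectance S, and ambient contribution E, so C = (desired - E) / S
        # neutralizes surface color and ambient light. Values outside the
        # projector's [0, 1] range must be clipped, producing exactly the visible
        # artifacts that the content-adapting algorithm reduces.
        S = np.clip(np.asarray(surface_reflectance, float), 1e-3, None)
        C = (np.asarray(desired, float) - ambient) / S
        return np.clip(C, 0.0, 1.0)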


Subjects
Colorimetry/methods, Computer Graphics, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Imaging, Three-Dimensional/methods, Information Storage and Retrieval/methods, Lighting/methods, Algorithms, Color, Computer Systems, Numerical Analysis, Computer-Assisted