Results 1 - 13 of 13
1.
Sensors (Basel) ; 23(21)2023 Oct 24.
Article in English | MEDLINE | ID: mdl-37960374

ABSTRACT

One of the challenges of using Time-of-Flight (ToF) sensors for dimensioning objects is that the depth information suffers from issues such as low resolution, self-occlusions, noise, and multipath interference, which distort the shape and size of objects. In this work, we successfully apply a superquadric fitting framework for dimensioning cuboid and cylindrical objects from point cloud data generated using a ToF sensor. Our work demonstrates that an average error of less than 1 cm is possible for a box with the largest dimension of about 30 cm and a cylinder with the largest dimension of about 20 cm that are each placed 1.5 m from a ToF sensor. We also quantify the performance of dimensioning objects using various object orientations, ground plane surfaces, and model fitting methods. For cuboid objects, our results show that the proposed superquadric fitting framework is able to achieve absolute dimensioning errors between 4% and 9% using the bounding technique and between 8% and 15% using the mirroring technique across all tested surfaces. For cylindrical objects, our results show that the proposed superquadric fitting framework is able to achieve absolute dimensioning errors between 2.97% and 6.61% when the object is in a horizontal orientation and between 8.01% and 13.13% when the object is in a vertical orientation using the bounding technique across all tested surfaces.
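The core of such a fitting framework is the superquadric inside-outside function, whose exponents smoothly interpolate between cuboid-like and cylindrical shapes and whose semi-axes give the recovered dimensions. The sketch below is a minimal illustration of that function, not the paper's implementation; the function name and parameterization are ours:

```python
import numpy as np

def superquadric_io(points, a, eps1, eps2):
    """Superquadric inside-outside function F(x, y, z).

    F < 1 inside the surface, F == 1 on it, F > 1 outside.
    points: (N, 3) array; a: semi-axis lengths (a1, a2, a3);
    eps1, eps2: shape exponents (both small -> cuboid-like,
    eps1 small with eps2 == 1 -> cylinder-like).
    """
    x, y, z = (np.abs(np.atleast_2d(points)) / np.asarray(a, dtype=float)).T
    xy = (x ** (2.0 / eps2) + y ** (2.0 / eps2)) ** (eps2 / eps1)
    return xy + z ** (2.0 / eps1)

# A fitted superquadric reports object dimensions as twice its semi-axes,
# e.g. semi-axes (0.15, 0.10, 0.05) m describe a 30 x 20 x 10 cm box.
dims = 2.0 * np.array([0.15, 0.10, 0.05])
```

A fitter would minimize a residual of this function over the point cloud (after segmenting the object and estimating its pose); the bounding and mirroring techniques mentioned above differ in how they compensate for the self-occluded back side of the object.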

2.
Sensors (Basel) ; 23(19)2023 Sep 23.
Article in English | MEDLINE | ID: mdl-37836877

ABSTRACT

The behavior of multicamera interference in infrared (IR)-based 3D images (e.g., depth maps) is not well understood. When multicamera interference is present in a 3D image, the number of zero-value pixels increases, resulting in a loss of depth information. In this work, we demonstrate a framework for synthetically generating direct and indirect multicamera interference using a combination of a probabilistic model and ray tracing. Our mathematical model predicts the locations and probabilities of zero-value pixels in depth maps that contain multicamera interference, and thus where depth information may be lost when multicamera interference is present. We compare the proposed synthetic 3D interference images with controlled 3D interference images captured in our laboratory. The proposed framework achieves an average root mean square error (RMSE) of 0.0625, an average peak signal-to-noise ratio (PSNR) of 24.1277 dB, and an average structural similarity index measure (SSIM) of 0.9007 for predicting direct multicamera interference, and an average RMSE of 0.0312, an average PSNR of 26.2280 dB, and an average SSIM of 0.9064 for predicting indirect multicamera interference. The proposed framework can be used to develop and test interference mitigation techniques that will be crucial for the successful proliferation of such 3D imaging devices.
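The RMSE and PSNR figures reported above can be computed as below for depth maps normalized to [0, 1]; SSIM is omitted here because it requires a windowed implementation (e.g., scikit-image's `structural_similarity`). This is a generic metric sketch, not the paper's evaluation code:

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two equally shaped images."""
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))

def psnr(a, b, peak=1.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    e = rmse(a, b)
    return float("inf") if e == 0.0 else 20.0 * np.log10(peak / e)
```

As a sanity check, an RMSE of 0.0625 on [0, 1] data corresponds to a PSNR of 20 log10(1/0.0625) ≈ 24.08 dB, the same order as the averages reported above (the two averages are taken separately, so they need not match exactly).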

3.
Sensors (Basel) ; 22(3)2022 Feb 04.
Article in English | MEDLINE | ID: mdl-35161927

ABSTRACT

Synthetically creating motion blur in two-dimensional (2D) images is a well-understood process and has been used in image processing for developing deblurring systems. There are, however, no well-established techniques for synthetically generating arbitrary motion blur within three-dimensional (3D) images, such as depth maps and point clouds, since their behavior is not as well understood. As a prerequisite, we previously developed a method for generating synthetic motion blur in a plane that is parallel to the sensor detector plane. In this work, as a major extension, we generalize that framework to synthetically generate linear and radial motion blur along planes at arbitrary angles with respect to the sensor detector plane. Our framework accurately captures the behavior of the real motion blur encountered with a Time-of-Flight (ToF) sensor. This work uses a probabilistic model that predicts the location of invalid pixels typically present within depth maps that contain real motion blur. More specifically, the probabilistic model considers different angles of motion paths and the velocity of an object with respect to the image plane of a ToF sensor. Extensive experimental results demonstrate how our framework can be applied to synthetically create radial, linear, and combined radial-linear motion blur. We quantify the accuracy of the synthetic generation method by comparing the resulting synthetic depth map to the experimentally captured depth map with motion. Our results indicate that our framework achieves an average Boundary F1 (BF) score of 0.7192 for invalid pixels for synthetic radial motion blur, an average BF score of 0.8778 for synthetic linear motion blur, and an average BF score of 0.62 for synthetic combined radial-linear motion blur.


Subjects
Algorithms, Statistical Models, Computer-Assisted Image Processing, Motion (Physics)
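The invalid-pixel masks that the BF score above compares can be illustrated with a much-simplified deterministic sketch: a linearly moving object makes a ToF pixel invalid (zero depth) wherever its motion footprint sweeps across a depth discontinuity. The paper's model is probabilistic and angle-aware; the function below, and all its names, are ours:

```python
import numpy as np

def linear_blur_invalid_mask(depth, shift_px, thresh=0.05):
    """Simplified sketch: flag pixels invalid where a horizontal motion of
    `shift_px` pixels makes the sensor average over conflicting depths.
    depth: 2D array in meters; returns a boolean mask of invalid pixels.
    """
    invalid = np.zeros_like(depth, dtype=bool)
    for s in range(1, shift_px + 1):
        shifted = np.roll(depth, s, axis=1)
        shifted[:, :s] = depth[:, :s]          # do not wrap around the border
        invalid |= np.abs(depth - shifted) > thresh
    return invalid

depth = np.full((8, 8), 2.0)                   # background plane at 2 m
depth[2:6, 2:6] = 1.0                          # foreground object at 1 m
mask = linear_blur_invalid_mask(depth, shift_px=2)
blurred = np.where(mask, 0.0, depth)           # zero-value pixels, as in real ToF blur
```

Only the pixels near the object's vertical edges (along the motion direction) are invalidated, which is the qualitative behavior the probabilistic model above predicts quantitatively.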
4.
Sensors (Basel) ; 19(19)2019 Sep 21.
Article in English | MEDLINE | ID: mdl-31546595

ABSTRACT

Accurate three-dimensional displacement measurements of bridges and other structures have received significant attention in recent years. The main challenges of such measurements include the cost and the need for a scalable array of instrumentation. This paper presents a novel Hybrid Inertial Vision-Based Displacement Measurement (HIVBDM) system that can measure three-dimensional structural displacements by using a monocular charge-coupled device (CCD) camera, a stationary calibration target, and an attached tilt sensor. The HIVBDM system does not require the camera to be stationary during the measurements; camera movements, i.e., rotations and translations, during the measurement process are compensated for by using a stationary calibration target in the field of view (FOV) of the camera. An attached tilt sensor is further used to refine the camera movement compensation and to better infer the global three-dimensional structural displacements. The HIVBDM system is evaluated on both short-term and long-term synthetic static structural displacements in experiments conducted in a simulated indoor environment. At a 9.75 m operating distance between the monitoring camera and the structure being monitored, the proposed HIVBDM system achieves an average Root Mean Square Error (RMSE) of 1.440 mm on in-plane structural translations and an average RMSE of 2.904 mm on out-of-plane structural translations.
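The in-plane part of the compensation idea reduces to simple arithmetic: the stationary target's apparent pixel motion is pure camera motion, so subtracting it from the structure's apparent motion leaves the true structural displacement. The sketch below illustrates only this step (the tilt-sensor refinement and out-of-plane handling are omitted; the function and its parameters are ours):

```python
import numpy as np

def compensated_displacement(structure_px, target_px, mm_per_px):
    """In-plane camera-motion compensation sketch.

    structure_px, target_px: apparent (x, y) pixel displacements of the
    structure and the stationary calibration target between two frames;
    mm_per_px: image scale at the operating distance.
    """
    dx = (structure_px[0] - target_px[0]) * mm_per_px
    dy = (structure_px[1] - target_px[1]) * mm_per_px
    return np.array([dx, dy])

# Camera drifted 3 px right; the structure appears to move 5 px right,
# so the true in-plane displacement is 2 px, scaled to millimeters.
d = compensated_displacement((5.0, 0.0), (3.0, 0.0), mm_per_px=0.8)
```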

5.
IEEE Trans Image Process ; 25(4): 1544-55, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26849862

ABSTRACT

Super resolution (SR) for real-life video sequences is a challenging problem due to the complex nature of the motion fields. In this paper, a novel blind SR method is proposed to improve the spatial resolution of video sequences when the overall point spread function of the imaging system, the motion fields, and the noise statistics are unknown. To estimate the blurs, a nonuniform interpolation SR method is first utilized to upsample the frames, and then the blurs are estimated through a multi-scale process. The blur estimation process is initially performed on a few emphasized edges and gradually on more edges as the iterations continue. For faster convergence, the blur is estimated in the filter domain rather than the pixel domain. The high-resolution frames are estimated using a cost function whose fidelity and regularization terms are of the Huber-Markov random field type, which preserves edges and fine details. The fidelity term is adaptively weighted at each iteration using a masking operation to suppress artifacts due to inaccurate motions. Very promising results are obtained for real-life videos containing detailed structures, complex motions, fast-moving objects, deformable regions, or severe brightness changes. The proposed method outperforms the state of the art in all performed experiments through both subjective and objective evaluations. The results are available online at http://lyle.smu.edu/~rajand/Video_SR/.
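The edge-preserving behavior of Huber-Markov regularization comes from the Huber penalty: quadratic near zero, so noise in smooth regions is smoothed away, but only linear in the tails, so large gradients (edges) are not over-penalized. Below is the standard Huber function as a sketch, not the paper's exact cost:

```python
import numpy as np

def huber(x, t):
    """Huber penalty: x**2 for |x| <= t, 2*t*|x| - t**2 beyond, with
    matched value and slope at |x| = t. Applied to image gradients, small
    (noise) gradients incur a quadratic cost while edge gradients incur
    only a linear one."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= t, x ** 2, 2.0 * t * np.abs(x) - t ** 2)
```

Summing `huber` over the gradients of a candidate high-resolution frame gives a convex regularizer, which is what makes the alternating estimation of image and blur tractable.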

6.
J Chem Inf Model ; 54(10): 3033-43, 2014 Oct 27.
Article in English | MEDLINE | ID: mdl-25207854

ABSTRACT

A limitation of traditional molecular dynamics (MD) is that reaction rates are difficult to compute, owing to the rarity of observing transitions between metastable states: high energy barriers trap the system in these states. Recently, the weighted ensemble (WE) family of methods has emerged, which can flexibly and efficiently sample conformational space without being trapped and allows calculation of unbiased rates. However, while WE can sample correctly and efficiently, a scalable implementation applicable to interesting biomolecular systems has not been available. We provide here a GPLv2 implementation, called AWE-WQ, of a WE algorithm using the master/worker distributed computing Work Queue (WQ) framework. AWE-WQ is scalable to thousands of nodes and supports dynamic allocation of computer resources, heterogeneous resource usage (such as central processing units (CPUs) and graphics processing units (GPUs) concurrently), seamless heterogeneous cluster usage (i.e., campus grids and cloud providers), and arbitrary MD codes such as GROMACS, while ensuring that all statistics are unbiased. We applied AWE-WQ to a 34-residue protein, simulating 1.5 ms over 8 months with a peak aggregate performance of 1000 ns/h. Comparison was done with a 200 µs simulation collected on a GPU over a similar timespan. The folding and unfolding rates were of comparable accuracy.


Subjects
Algorithms, Computer Systems, Molecular Dynamics Simulation, Proteins/chemistry, Protein Folding, Protein Tertiary Structure, Protein Unfolding, Thermodynamics, Tryptophan/chemistry
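The core WE step that keeps sampling unbiased is resampling: each conformational bin is held at a fixed number of walkers by splitting heavy walkers and merging light ones, while the total probability weight in the bin is conserved exactly. The sketch below shows only this bookkeeping, not AWE-WQ itself; real WE merges pairs by a weighted random choice of the surviving state, whereas this deterministic sketch keeps the heavier one:

```python
def resample_bin(walkers, target):
    """Split/merge one bin's walkers, given as (state, weight) pairs, to
    exactly `target` walkers while conserving the bin's total weight."""
    walkers = sorted(walkers, key=lambda w: w[1])
    while len(walkers) > target:                 # merge the two lightest
        (s1, w1), (s2, w2) = walkers[0], walkers[1]
        keep = s2 if w2 >= w1 else s1            # sketch: keep the heavier state
        walkers = [(keep, w1 + w2)] + walkers[2:]
        walkers.sort(key=lambda w: w[1])
    while len(walkers) < target:                 # split the heaviest in two
        s, w = walkers.pop()
        walkers += [(s, w / 2.0), (s, w / 2.0)]
        walkers.sort(key=lambda w: w[1])
    return walkers

out = resample_bin([("a", 0.5), ("b", 0.3), ("c", 0.1), ("d", 0.1)], target=2)
```

Because weight is only ever moved between walkers, never created or destroyed, fluxes between bins remain unbiased estimators of the transition rates.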
7.
IEEE Trans Image Process ; 22(6): 2101-14, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23314775

ABSTRACT

This paper presents, for the first time, a unified blind method for multi-image super-resolution (MISR or SR), single-image blur deconvolution (SIBD), and multi-image blur deconvolution (MIBD) of low-resolution (LR) images degraded by linear space-invariant (LSI) blur, aliasing, and additive white Gaussian noise (AWGN). The proposed approach is based on alternating minimization (AM) of a new cost function with respect to the unknown high-resolution (HR) image and blurs. The regularization term for the HR image is based upon the Huber-Markov random field (HMRF) model, which is a type of variational integral that exploits the piecewise smooth nature of the HR image. The blur estimation process is supported by an edge-emphasizing smoothing operation, which improves the quality of blur estimates by enhancing strong soft edges toward step edges, while filtering out weak structures. The parameters are updated gradually so that the number of salient edges used for blur estimation increases at each iteration. For better performance, the blur estimation is done in the filter domain rather than the pixel domain, i.e., using the gradients of the LR and HR images. The regularization term for the blur is Gaussian (L2 norm), which allows for fast noniterative optimization in the frequency domain. We accelerate the processing time of SR reconstruction by separating the upsampling and registration processes from the optimization procedure. Simulation results on both synthetic and real-life images (from a novel computational imager) confirm the robustness and effectiveness of the proposed method.


Subjects
Algorithms, Computer-Assisted Image Processing/methods, Animals, Computer Simulation, Diagnostic Imaging, Humans, Markov Chains, Imaging Phantoms, Photography
8.
Appl Opt ; 51(4): A48-58, 2012 Feb 01.
Article in English | MEDLINE | ID: mdl-22307129

ABSTRACT

The design, development, and field-test results of a visible-band, folded, multiresolution, adaptive computational imaging system based on the Processing Arrays of Nyquist-limited Observations to Produce a Thin Electro-optic Sensor (PANOPTES) concept are presented. The architectural layout that enables this imager to be adaptive is described, and the control system that ensures reliable field-of-view steering for precision and accuracy in subpixel target registration is explained. A digital superresolution algorithm introduced to obtain high-resolution imagery from field tests conducted in both nighttime and daytime imaging conditions is discussed. The digital superresolution capability of this adaptive PANOPTES architecture is demonstrated via results in which resolution enhancement by a factor of 4 over the detector Nyquist limit is achieved.


Subjects
Image Enhancement/instrumentation, Computer-Assisted Image Interpretation/instrumentation, Microelectromechanical Systems/instrumentation, Photography/instrumentation, Transducers, Equipment Design, Equipment Failure Analysis, Pilot Projects, Reproducibility of Results, Sensitivity and Specificity
9.
Proc IEEE Int Conf Escience ; 2012: 1-8, 2012 Oct.
Article in English | MEDLINE | ID: mdl-25540799

ABSTRACT

Molecular modeling is a field that traditionally has large computational costs. Until recently, most simulation techniques relied on long trajectories, which inherently have poor scalability. A new class of methods has been proposed that requires only a large number of short calculations and minimal communication between computer nodes. We consider one of the more accurate variants, called Accelerated Weighted Ensemble Dynamics (AWE), for which distributed computing can be made efficient. We implemented AWE using the Work Queue framework for task management and applied it to an all-atom protein model (Fip35 WW domain). We can run with excellent scalability by simultaneously utilizing heterogeneous resources from multiple computing platforms such as clouds (Amazon EC2, Microsoft Azure), dedicated clusters, and grids, on multiple architectures (CPU/GPU, 32/64-bit), and in a dynamic environment in which processes are regularly added to or removed from the pool. This has allowed us to achieve an aggregate sampling rate of over 500 ns/hour; by comparison, a single process typically achieves 0.1 ns/hour.

10.
Appl Opt ; 47(10): B86-103, 2008 Apr 01.
Article in English | MEDLINE | ID: mdl-18382554

ABSTRACT

A framework is proposed for optimal joint design of the optical and reconstruction filters in a computational imaging system. First, a technique for the design of a physically unconstrained system is proposed whose performance serves as a universal bound on any realistic computational imaging system. Increasing levels of constraints are then imposed to emulate a physically realizable optical filter. The proposed design employs a generalized Benders' decomposition method to yield multiple globally optimal solutions to the nonconvex optimization problem. Structured, closed-form solutions for the design of observation and reconstruction filters, in terms of the system input and noise autocorrelation matrices, are presented. Numerical comparison with a state-of-the-art optical system shows the advantage of joint optimization and concurrent design.
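The kind of closed-form reconstruction filter referred to above, expressed in terms of the observation matrix and the input and noise autocorrelation matrices, has the classical linear minimum mean square error (LMMSE, Wiener) form. The sketch below shows that generic solution, not the paper's jointly optimized design:

```python
import numpy as np

def lmmse_filter(H, Rx, Rn):
    """Closed-form LMMSE reconstruction filter W = Rx H^T (H Rx H^T + Rn)^-1
    for observations y = H x + n, with zero-mean x and n having
    autocorrelation matrices Rx and Rn. The estimate is x_hat = W y."""
    H = np.asarray(H, dtype=float)
    return Rx @ H.T @ np.linalg.inv(H @ Rx @ H.T + Rn)

# Identity optics and vanishing noise recover the input exactly: W -> I.
W = lmmse_filter(np.eye(3), np.eye(3), 1e-12 * np.eye(3))
```

Joint design, as in the abstract, additionally optimizes over H itself under physical-realizability constraints, which is what makes the problem nonconvex and motivates the Benders' decomposition approach.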

11.
Appl Opt ; 47(10): B128-38, 2008 Apr 01.
Article in English | MEDLINE | ID: mdl-18385774

ABSTRACT

The performance of uniform and nonuniform detector arrays for application to the PANOPTES (processing arrays of Nyquist-limited observations to produce a thin electro-optic sensor) flat camera design is analyzed for measurement noise environments including quantization noise and Gaussian and Poisson processes. Image data acquired from a commercial camera with 8 bit and 14 bit output options are analyzed, and estimated noise levels are computed. Noise variances estimated from the measurement values are used in the optimal linear estimators for superresolution image reconstruction.
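For the quantization-noise component of such an analysis, the standard model gives a noise variance of delta squared over 12, where delta is the quantizer step size, so moving from 8 bit to 14 bit output shrinks the variance by a factor of (2^6)^2 = 4096. A minimal sketch of that comparison (the function name and unit full-scale assumption are ours):

```python
def quantization_noise_var(bits, full_scale=1.0):
    """Variance of uniform quantization noise, delta**2 / 12, where
    delta = full_scale / 2**bits is the quantizer step size."""
    delta = full_scale / (2 ** bits)
    return delta ** 2 / 12.0

var8 = quantization_noise_var(8)
var14 = quantization_noise_var(14)
ratio = var8 / var14   # 14-bit output has (2**6)**2 = 4096x lower variance
```

Variances estimated this way (together with the Gaussian and Poisson components) are what feed the optimal linear estimators used for the superresolution reconstruction above.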

12.
Appl Opt ; 45(13): 2859-70, 2006 May 01.
Article in English | MEDLINE | ID: mdl-16639434

ABSTRACT

Algorithms that use optical system diversity to improve multiplexed image reconstruction from multiple low-resolution images are analyzed and demonstrated. Compared with systems using identical imagers, systems that add lower-resolution imagers can achieve improved accuracy and computational efficiency. The diverse system is not sensitive to boundary conditions and can take full advantage of improvements that decrease noise and allow an increased number of bits per pixel to represent spatial information in a scene.

13.
Appl Opt ; 45(13): 2884-92, 2006 May 01.
Article in English | MEDLINE | ID: mdl-16639436

ABSTRACT

A thin, agile, multiresolution computational imaging sensor architecture, termed PANOPTES (processing arrays of Nyquist-limited observations to produce a thin electro-optic sensor), which utilizes arrays of microelectromechanical mirrors to adaptively redirect the fields of view of multiple low-resolution subimagers, is described. An information theory-based algorithm adapts the system and restores the image. The modulation transfer function (MTF) effects of utilizing micromirror arrays to steer imaging systems are analyzed, and computational methods for combining data collected from systems with differing MTFs are presented.
