Results 1 - 20 of 25
1.
Diagnostics (Basel); 13(16), 2023 Aug 12.
Article in English | MEDLINE | ID: mdl-37627918

ABSTRACT

Retinal volume computation is one of the critical steps in grading pathologies and evaluating the response to a treatment. We propose a deep-learning-based visualization tool to calculate the fluid volume in retinal optical coherence tomography (OCT) images. The pathologies under consideration are Intraretinal Fluid (IRF), Subretinal Fluid (SRF), and Pigment Epithelial Detachment (PED). We develop a binary classification model for each of these pathologies using the Inception-ResNet-v2 model and a small variant of it. For visualization, we use several standard Class Activation Mapping (CAM) techniques, namely Grad-CAM, Grad-CAM++, Score-CAM, Ablation-CAM, and Self-Matching CAM, to visualize the pathology-specific regions in the image, and we develop a novel Ensemble-CAM technique for robust visualization of OCT images. In addition, we demonstrate a graphical user interface that takes the visualization heat maps as input and calculates the fluid volume in the OCT C-scans. The volume is computed using both a region-growing algorithm and a selective thresholding technique and is compared with the ground-truth volume based on expert annotation. We compare the results obtained using the standard Inception-ResNet-v2 model with those of the small variant, which has half the number of trainable parameters. This study shows the relevance and usefulness of deep-learning-based visualization techniques for reliable volumetric analysis.
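The heat-map fusion and selective-thresholding volume computation can be sketched in a few lines. This is a minimal stand-in, not the authors' implementation: the fusion rule (a pixelwise mean over the individual CAM maps) and the names `ensemble_cam` and `fluid_volume` are assumptions for illustration.

```python
def ensemble_cam(heatmaps):
    """Fuse several CAM heatmaps by a pixelwise mean (a simple fusion
    rule; the paper's Ensemble-CAM combination may differ)."""
    h, w = len(heatmaps[0]), len(heatmaps[0][0])
    n = len(heatmaps)
    return [[sum(m[i][j] for m in heatmaps) / n for j in range(w)]
            for i in range(h)]

def fluid_volume(slices, threshold, voxel_volume_mm3):
    """Selective thresholding: count voxels whose fused activation
    exceeds `threshold` across all B-scan slices of a C-scan and
    scale by the per-voxel volume."""
    count = sum(1 for s in slices for row in s for v in row if v >= threshold)
    return count * voxel_volume_mm3
```

In practice the threshold and voxel volume would come from the scanner geometry and a validation sweep against the expert-annotated ground truth.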

3.
Sci Data; 10(1): 70, 2023 Feb 3.
Article in English | MEDLINE | ID: mdl-36737439

ABSTRACT

We introduce Cháksu, a retinal fundus image database for the evaluation of computer-assisted glaucoma prescreening techniques. The database contains 1345 color fundus images acquired using three brands of commercially available fundus cameras. Each image is provided with outlines of the optic disc (OD) and optic cup (OC) as smooth closed contours, along with a normal-versus-glaucomatous decision by five expert ophthalmologists. In addition, segmentation ground-truths of the OD and OC are provided by fusing the expert annotations using the mean, median, majority, and Simultaneous Truth and Performance Level Estimation (STAPLE) algorithms. The performance indices show that the ground-truth agreement with the experts is best with the STAPLE algorithm, followed by majority, median, and mean. The vertical, horizontal, and area cup-to-disc ratios are provided based on the expert annotations. Image-wise glaucoma decisions are also provided based on majority voting among the experts. Cháksu is the largest Indian-ethnicity-specific fundus image database with expert annotations and would aid the development of artificial-intelligence-based glaucoma diagnostics.
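Of the four fusion rules mentioned, majority voting is the simplest to write down. A minimal sketch over binary masks follows; the function name and the list-of-lists mask format are assumptions, and STAPLE itself (an iterative EM estimator) is not shown.

```python
def majority_fusion(masks):
    """Fuse binary expert segmentations by per-pixel majority vote:
    a pixel is foreground if more than half the experts marked it."""
    n = len(masks)
    h, w = len(masks[0]), len(masks[0][0])
    return [[1 if sum(m[i][j] for m in masks) * 2 > n else 0
             for j in range(w)] for i in range(h)]
```

With five annotators, as in the database, a pixel needs at least three votes to enter the fused ground truth.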


Subjects
Glaucoma; Optic Disk; Humans; Algorithms; Artificial Intelligence; Fundus Oculi; Glaucoma/diagnostic imaging; Optic Disk/diagnostic imaging
4.
Nat Nanotechnol; 18(4): 380-389, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36690737

ABSTRACT

Neuromorphic cameras are a new class of dynamic-vision-inspired sensors that encode the rate of change of intensity as events. They asynchronously record intensity changes as spikes, independently of the other pixels in the receptive field, resulting in sparse measurements. This sparsity makes them ideal for imaging dynamic processes, such as the stochastic emission of isolated single molecules. Here we show the application of neuromorphic detection to localize nanoscale fluorescent objects below the diffraction limit, with a precision below 20 nm. We demonstrate a combination of neuromorphic detection with segmentation and deep-learning approaches to localize and track fluorescent particles below 50 nm with millisecond temporal resolution. Furthermore, we show that combining information from events resulting from the rate of change of intensities improves the classical limit of centroid estimation of single fluorescent objects by nearly a factor of two. Additionally, we validate that using post-processed data from the neuromorphic detector at defined windows of temporal integration allows a better evaluation of the fractalized diffusion of single-particle trajectories. Our observations and analysis are useful for event sensing by nonlinear neuromorphic devices to improve real-time particle localization approaches at the nanoscale.
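As a toy illustration of event-based localization, the position of an isolated emitter can be estimated as the centroid of the events it generates inside a temporal integration window. This sketch ignores the segmentation and deep-learning stages of the paper; the `(x, y, timestamp)` event format and the function name are assumptions.

```python
def event_centroid(events, t0, t1):
    """Estimate a particle's position as the centroid of neuromorphic
    events (x, y, timestamp) inside the integration window [t0, t1).
    Returns None if no events fall in the window."""
    window = [(x, y) for (x, y, t) in events if t0 <= t < t1]
    if not window:
        return None
    n = len(window)
    return (sum(x for x, _ in window) / n, sum(y for _, y in window) / n)
```

Varying the window [t0, t1) is the "defined windows of temporal integration" idea: shorter windows trade localization precision for temporal resolution.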

5.
IEEE Trans Pattern Anal Mach Intell; 44(12): 8992-9010, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34699349

ABSTRACT

Low-rank plus sparse matrix decomposition (LSD) is an important problem in computer vision and machine learning. It has been solved using convex relaxations of the matrix rank and the l0 pseudo-norm, namely, the nuclear norm and the l1-norm, respectively. Convex approximations are known to result in biased estimates; to overcome this, nonconvex regularizers such as the weighted nuclear norm and the weighted Schatten p-norm have been proposed. However, works employing these regularizers have used heuristic weight-selection strategies. We propose the weighted minimax-concave penalty (WMCP) as the nonconvex regularizer and show that it admits an equivalent representation that enables weight adaptation. Similarly, an equivalent representation of the weighted matrix gamma norm (WMGN) enables weight adaptation for the low-rank part. The optimization algorithms are based on the alternating direction method of multipliers. We show that the optimization frameworks relying on the two penalties, WMCP and WMGN, coupled with a novel iterative weight-update strategy, result in accurate low-rank plus sparse matrix decomposition. The algorithms are also shown to satisfy descent properties and convergence guarantees. On the applications front, we consider the problem of foreground-background separation in video sequences. Simulation experiments and validations on standard datasets, namely, I2R, CDnet 2012, and BMC 2012, show that the proposed techniques outperform the benchmark techniques.
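The minimax-concave penalty is attractive because its proximal operator is the firm-shrinkage rule, which, unlike the soft threshold associated with the l1-norm, leaves large entries unbiased. A scalar sketch follows; the WMCP of the paper additionally adapts a per-entry weight, which is omitted here.

```python
import math

def mcp_prox(x, lam, gamma):
    """Firm shrinkage: the proximal operator of the (unweighted)
    minimax-concave penalty with threshold `lam` and concavity
    parameter `gamma` > 1.  Small entries are zeroed, mid-range
    entries are shrunk, and entries above gamma*lam pass unchanged."""
    a = abs(x)
    if a <= lam:
        return 0.0
    if a <= gamma * lam:
        return math.copysign(gamma * (a - lam) / (gamma - 1.0), x)
    return float(x)
```

Inside an ADMM loop, this operator would replace the soft-thresholding step used by the convex l1 formulation for the sparse component.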

6.
Ultrasonics; 108: 106183, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32652324

ABSTRACT

A fundamental challenge in non-destructive evaluation using ultrasound is to accurately estimate the thicknesses of the different layers or cracks present in the object under examination, which implicitly corresponds to accurately localizing the point sources of reflections in the measured signal. Conventional signal processing techniques cannot overcome the axial-resolution limit of the ultrasound imaging system, which is determined by the wavelength of the transmitted pulse. In this paper, starting from the solution to the 1-D wave equation, we show that the ultrasound reflections can be effectively modeled as finite-rate-of-innovation (FRI) signals. The FRI modeling approach is a new paradigm in signal processing. Apart from allowing the signals to be sampled below the Nyquist rate, the FRI framework also transforms the reconstruction problem into one of parametric estimation, which we solve using high-resolution parametric estimation techniques. We demonstrate the axial super-resolution capability (resolution below the theoretical limit) of the proposed technique on both simulated and experimental data. A comparison of the FRI technique with time-domain and Fourier-domain sparse recovery techniques shows that the FRI technique is more robust. We also assess the resolvability of the proposed technique under different noise conditions on data simulated using the Field II software and show that the reconstruction technique is robust to noise. For experimental validation, we consider Teflon sheets and agarose phantoms of varying thicknesses. The experimental results show that the FRI technique is capable of super-resolving by a factor of three below the theoretical limit.
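The parametric-estimation step at the heart of FRI recovery can be illustrated with the classical annihilating (Prony) filter: for a noise-free sum of K exponentials, a length-(K+1) filter that annihilates the samples has the exponential parameters as the roots of its z-transform. This is a toy sketch under that noise-free assumption, not the paper's full pipeline (which starts from a pulse-stream model and adds denoising); the function name is illustrative.

```python
import numpy as np

def annihilating_filter_params(y, K):
    """Recover the K exponential parameters u_k from samples
    y[n] = sum_k a_k * u_k**n via the annihilating filter: the filter
    h (length K+1) satisfying sum_i h[i] * y[n - i] = 0 for all valid
    n has the u_k as the roots of its polynomial."""
    N = len(y)
    # Convolution system A @ h = 0, rows [y[n], y[n-1], ..., y[n-K]].
    A = np.array([[y[n - i] for i in range(K + 1)]
                  for n in range(K, N)])
    # h is the right singular vector of the smallest singular value.
    _, _, Vh = np.linalg.svd(A)
    h = Vh[-1].conj()
    return np.roots(h)
```

For ultrasound, the recovered parameters encode the echo delays, i.e., the layer boundaries, independently of the pulse bandwidth.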

7.
PLoS One; 15(5): e0231677, 2020.
Article in English | MEDLINE | ID: mdl-32421691

ABSTRACT

Retinal oximetry is an important screening tool for the early detection of retinal pathologies that manifest as changes in the vasculature, and it also serves as a useful indicator of body-wide vascular abnormalities. We present an automatic technique for measuring oxygen saturation in retinal arterioles and venules using dual-wavelength retinal oximetry images. The technique is based on segmenting an optic-disc-centered, ring-shaped region of interest and subsequently analyzing the oxygen saturation levels within it. We show that the two dominant peaks in the histogram of oxygen saturation levels correspond to arteriolar and venular saturations, from which the arterio-venous saturation difference (AVSD) can be calculated. For evaluation, we use a normative database of Asian Indian eyes containing 44 dual-wavelength retinal oximetry images. Validation against expert manual annotations of arterioles and venules shows that the proposed technique yields an average arteriolar oxygen saturation (SatO2) of 87.48%, venular SatO2 of 57.41%, and AVSD of 30.07%, compared with the expert ground-truth averages of 89.41%, 56.32%, and 33.09%, respectively. The results are highly consistent across the dataset, indicating that the automated technique is an accurate alternative to the manual procedure.
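The two-peak histogram analysis lends itself to a compact sketch: bin the per-pixel saturation values from the ring region, pick the two dominant, well-separated bins, and read off the venular peak, the arteriolar peak, and their difference. The bin width, the separation constraint, and the function name are assumptions, not the paper's parameters.

```python
def avsd_from_saturations(sat_values, bin_width=2.0, min_separation=10.0):
    """Locate the two dominant peaks in the histogram of oxygen
    saturation values: the higher-saturation peak is taken as
    arteriolar, the lower as venular, and AVSD is their difference."""
    lo = min(sat_values)
    bins = {}
    for v in sat_values:
        b = int((v - lo) // bin_width)
        bins[b] = bins.get(b, 0) + 1
    # Pick the most populated bin, then the next most populated one
    # that is at least `min_separation` away from it.
    ranked = sorted(bins, key=lambda b: -bins[b])
    first = ranked[0]
    second = next(b for b in ranked
                  if abs(b - first) * bin_width >= min_separation)
    centers = sorted(lo + (b + 0.5) * bin_width for b in (first, second))
    venular, arteriolar = centers
    return arteriolar, venular, arteriolar - venular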


Subjects
Oximetry/methods; Retinal Vessels/diagnostic imaging; Arterioles/diagnostic imaging; Arterioles/metabolism; Female; Humans; Male; Oxygen/metabolism; Oxygen Consumption; Retina/physiology; Retinal Vessels/metabolism; Venules/diagnostic imaging; Venules/metabolism
8.
Sci Rep; 9(1): 7099, 2019 May 8.
Article in English | MEDLINE | ID: mdl-31068608

ABSTRACT

We present a novel and fully automated fundus image processing technique for glaucoma prescreening based on the rim-to-disc ratio (RDR). The technique accurately segments the optic disc and optic cup and then computes the RDR, based on which it is possible to differentiate a normal fundus from a glaucomatous one. The technique performs a further categorization into normal, moderately glaucomatous, or severely glaucomatous classes following the disc-damage-likelihood scale (DDLS). To the best of our knowledge, this is the first engineering attempt at using the RDR and DDLS for glaucoma severity assessment. The segmentation of the optic disc and cup is based on an active disc whose parameters are optimized to maximize the local contrast. The optimization is performed efficiently by means of a multiscale representation, accelerated gradient descent, and Green's theorem. Validations are performed on several publicly available databases as well as on data provided by manufacturers of some commercially available fundus imaging devices. The segmentation and classification performance is assessed against expert clinician annotations in terms of sensitivity, specificity, accuracy, and the Jaccard and Dice similarity indices. The results show that RDR-based automated glaucoma assessment is about 8% to 10% more accurate than a cup-to-disc ratio (CDR) based system. An ablation study carried out using the ground-truth expert outlines alone for classification showed that the RDR is superior to the CDR by 5.28% in two-stage classification and by about 3.21% in three-stage severity grading.
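To make the RDR/CDR comparison concrete, here is a simplified computation from binary disc and cup masks. The paper's RDR follows the DDLS and is based on the narrowest rim width around the disc; this sketch measures only along the vertical diameter and is an illustrative assumption, not the authors' procedure.

```python
def vertical_extent(mask):
    """Number of rows spanned by the foreground of a binary mask."""
    rows = [i for i, row in enumerate(mask) if any(row)]
    return (max(rows) - min(rows) + 1) if rows else 0

def cdr_and_rdr(disc_mask, cup_mask):
    """Vertical cup-to-disc ratio, and a one-sided rim-width-to-disc-
    diameter ratio as a simplified RDR, from binary disc/cup masks."""
    d = vertical_extent(disc_mask)
    c = vertical_extent(cup_mask)
    cdr = c / d
    rdr = (d - c) / (2 * d)  # one-sided rim width over disc diameter
    return cdr, rdr
```

A thin rim (small RDR, large CDR) flags a suspect disc; the DDLS then grades severity by how narrow, or absent, the rim becomes.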


Subjects
Glaucoma/diagnostic imaging; Image Processing, Computer-Assisted/methods; Optic Disk/diagnostic imaging; Algorithms; Data Accuracy; Fundus Oculi; Humans; Sensitivity and Specificity; Severity of Illness Index; Software
9.
Annu Int Conf IEEE Eng Med Biol Soc; 2019: 2740-2743, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946461

ABSTRACT

Recent advances in unsupervised and generative deep learning models have shown promise for applications in biomedical signal processing. In this work, we present a portable, resource-constrained ultrasound (US) system trained using a Variational Autoencoder (VAE) network that performs compressive sensing on pre-beamformed RF signals. The encoder network compresses the RF data, which are then transmitted to the cloud. At the cloud, the decoder reconstructs the ultrasound image, which can be used for inference. The compression is performed at undersampling ratios of 1/2, 1/3, 1/5, and 1/10 without significant loss of resolution. We also compared the model with a state-of-the-art compressive-sensing reconstruction algorithm and observed significant improvements in terms of PSNR and MSE. The innovation in this approach resides in training with binary weights at the encoder, which shows its feasibility for hardware implementation at the edge. In the future, we plan to include a field-programmable gate array (FPGA) based design directly interfaced with sensors for real-time analysis of ultrasound images during medical procedures.


Subjects
Data Compression; Deep Learning; Algorithms; Signal Processing, Computer-Assisted; Ultrasonography
10.
J Neurophysiol; 119(3): 808-821, 2018 Mar 1.
Article in English | MEDLINE | ID: mdl-29118193

ABSTRACT

The gamma rhythm (30-80 Hz), often associated with high-level cortical functions, is believed to provide a temporal reference frame for spiking activity, for which it should have a stable center frequency and linear phase for an extended duration. However, recent studies that have estimated the power and phase of gamma as a function of time suggest that gamma occurs in short bursts and lacks the temporal structure required to act as a reference frame. Here, we show that the bursty appearance of gamma arises from the variability in the spectral estimator used in these studies. To overcome this problem, we use another duration estimator based on a matching pursuit algorithm that robustly estimates the duration of gamma in simulated data. Applying this algorithm to gamma oscillations recorded from implanted microelectrodes in the primary visual cortex of awake monkeys, we show that the median gamma duration is greater than 300 ms, which is three times longer than previously reported values. NEW & NOTEWORTHY Gamma oscillations (30-80 Hz) have been hypothesized to provide a temporal reference frame for coordination of spiking activity, but recent studies have shown that gamma occurs in very short bursts. We show that existing techniques have severely underestimated the rhythm duration, use a technique based on the Matching Pursuit algorithm, which provides a robust estimate of the duration, and show that the median duration of gamma is greater than 300 ms, much longer than previous estimates.
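The duration estimator can be illustrated with a single greedy matching-pursuit step over a small Gabor dictionary: pick the atom most correlated with the signal and read the burst duration off its window width. The dictionary parameters, the coarse center grid, and both function names are assumptions; the published method uses a much richer dictionary and a proper iterative decomposition.

```python
import math

def gabor_atom(n_samples, center, width, freq):
    """Unit-norm Gaussian-windowed cosine atom."""
    g = [math.exp(-0.5 * ((n - center) / width) ** 2)
         * math.cos(2 * math.pi * freq * (n - center))
         for n in range(n_samples)]
    norm = math.sqrt(sum(v * v for v in g))
    return [v / norm for v in g]

def best_atom_width(signal, widths, freq, step=8):
    """One greedy matching-pursuit selection: return the window width
    of the dictionary atom most correlated with the signal.  The
    selected width serves as the duration estimate."""
    best, best_w = -1.0, None
    n = len(signal)
    for w in widths:
        for c in range(0, n, step):
            atom = gabor_atom(n, c, w, freq)
            score = abs(sum(a * s for a, s in zip(atom, signal)))
            if score > best:
                best, best_w = score, w
    return best_w
```

Because the atom widths are explicit dictionary parameters, the estimate does not inherit the variance of a short-window spectral estimator, which is the point the paper makes against burst-detection approaches.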


Subjects
Gamma Rhythm; Signal Processing, Computer-Assisted; Visual Cortex/physiology; Algorithms; Animals; Female; Macaca radiata; Reproducibility of Results; Time Factors
11.
Sci Rep; 7(1): 9651, 2017 Aug 29.
Article in English | MEDLINE | ID: mdl-28851979

ABSTRACT

We present a novel method that breaks the resolution barrier in nuclear magnetic resonance (NMR) spectroscopy, allowing one to accurately estimate the chemical shift values of highly overlapping or broadened peaks. This problem is routinely encountered in NMR when peaks have large linewidths due to rapidly decaying signals, hindering its application. We address this problem using the notion of finite-rate-of-innovation (FRI) sampling, which is based on the premise that signals such as the NMR signal can be accurately reconstructed using fewer measurements than required by existing approaches. The FRI approach leads to super-resolution beyond the limits of contemporary NMR techniques. Using this method, we could measure for the first time small changes in chemical shifts during the formation of a gold nanorod-protein complex, facilitating the quantification of the strength of such interactions. The method thus opens up new possibilities for the application and acceleration of multidimensional NMR spectroscopy across a wide range of systems.

12.
SLAS Technol; 22(5): 565-572, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28395141

ABSTRACT

The erythrocyte sedimentation rate (ESR) is a commonly used test to screen for inflammatory conditions such as infections, autoimmune diseases, and cancers. However, it is a bulk macroscale test that requires a relatively large blood sample and takes a long time to run. Moreover, it provides no information regarding cell sizes or interactions, which can be highly variable. To overcome these drawbacks, we developed a microfluidic microscopy-based protocol that dynamically tracks settling red blood cells (RBCs) and quantifies the cell settling velocity as a surrogate for the ESR. We imaged individual cells in a vertical microfluidic channel and applied a hybrid cell detection and tracking algorithm to compute settling velocities. We combined eigenvalue background subtraction and centroid detection with the Kalman filter and the Hungarian assignment solver to increase accuracy and computational speed. Our algorithm is designed to track settling RBCs/aggregates in high-cellularity samples rather than single cells in suspension. Detection accuracy was 79.3%, which is comparable to state-of-the-art cell-tracking techniques. Compared with conventional ESR tests, our approach has the advantages of being automated, requiring only microliters of blood, and offering rapid turnaround.
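The frame-to-frame data-association step can be sketched as a minimum-cost assignment between detected centroids in consecutive frames. For the handful of tracked objects in a field of view, exhaustive search over permutations is exact; the paper uses the Hungarian solver (same optimum, polynomial time) together with a Kalman filter for motion prediction, both omitted in this sketch.

```python
import itertools

def match_detections(prev_pts, curr_pts):
    """Match detections across consecutive frames by minimizing total
    squared displacement, via exhaustive search over assignments.
    Returns m with m[i] = index in curr_pts assigned to prev_pts[i].
    Requires len(prev_pts) <= len(curr_pts)."""
    n = len(prev_pts)

    def cost(perm):
        return sum((prev_pts[i][0] - curr_pts[j][0]) ** 2 +
                   (prev_pts[i][1] - curr_pts[j][1]) ** 2
                   for i, j in enumerate(perm))

    return list(min(itertools.permutations(range(len(curr_pts)), n),
                    key=cost))
```

Once detections are linked into tracks, the per-track settling velocity is just the vertical displacement per frame interval.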


Subjects
Blood Sedimentation; Clinical Laboratory Techniques/methods; Microfluidics/methods; Microscopy/methods; Clinical Laboratory Techniques/instrumentation; Healthy Volunteers; Humans; Image Processing, Computer-Assisted/methods; Microfluidics/instrumentation; Microscopy/instrumentation; Optical Imaging/methods
13.
IEEE Trans Image Process; 25(3): 1451-64, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26841391

ABSTRACT

Standard approaches for ellipse fitting are based on minimizing the algebraic or geometric distance between the given data and a template ellipse. When the data are noisy and come from a partial ellipse, the state-of-the-art methods tend to produce biased ellipses. We rely on the sampling structure of the underlying signal and show that the x- and y-coordinate functions of an ellipse are finite-rate-of-innovation (FRI) signals and that their parameters are estimable from partial data. We consider both uniform and nonuniform sampling scenarios in the presence of noise and show that the data can be modeled as a sum of random amplitude-modulated complex exponentials. A low-pass filter is used to suppress noise and approximate the data as a sum of weighted complex exponentials. The annihilating filter used in FRI approaches is applied to estimate the sampling interval in closed form. We perform experiments on simulated and real data, and assess both objective and subjective performance in comparison with state-of-the-art ellipse fitting methods. The proposed method produces ellipses with less bias, and the mean-squared error is lower by about 2 to 10 dB. We show applications of ellipse fitting to iris images, starting from partial edge contours, and to free-hand ellipses drawn on a touch-screen tablet.

14.
PLoS One; 9(3): e89540, 2014.
Article in English | MEDLINE | ID: mdl-24603717

ABSTRACT

Objective identification and description of mimicked calls is a primary component of any study on avian vocal mimicry, but few studies have adopted a quantitative approach. We used spectral feature representations commonly used in human speech analysis, in combination with various distance metrics, to distinguish between mimicked and non-mimicked calls of the greater racket-tailed drongo, Dicrurus paradiseus, and cross-validated the results with human assessment of spectral similarity. We found that the automated method and human subjects performed similarly in terms of the overall number of correct matches of mimicked calls to putative model calls. However, the two methods misclassified different subsets of calls, and we achieved a maximum accuracy of 95% only when we combined the results of both methods. This study is the first to use Mel-frequency Cepstral Coefficients and Relative Spectral Amplitude-filtered Linear Predictive Coding coefficients to quantify vocal mimicry. Our findings also suggest that, in spite of several advances in automated methods of song analysis, corresponding cross-validation by humans remains essential.


Subjects
Auditory Perception/physiology; Imitative Behavior/physiology; Passeriformes/physiology; Vocalization, Animal/physiology; Algorithms; Animals; Humans; Signal Processing, Computer-Assisted; Sound Spectrography/methods; Species Specificity
15.
J Acoust Soc Am; 133(5): 2788-802, 2013 May.
Article in English | MEDLINE | ID: mdl-23654386

ABSTRACT

Transient signals, such as plosives in speech or castanets in audio, do not have a specific modulation or periodic structure in the time domain. However, in the spectral domain they exhibit a prominent modulation structure, which is a direct consequence of their narrow time localization. Based on this observation, a spectral-domain AM-FM model for transients is proposed. The spectral AM-FM model is built starting from real spectral zero-crossings, with the AM and FM corresponding to the spectral envelope (SE) and group delay (GD), respectively. Taking into account the modulation structure and spectral continuity, a local polynomial regression technique is proposed to estimate the GD function from the real spectral zeros. The SE is estimated from the phase function computed from the estimated GD. Since the GD estimation is parametric, the degree of smoothness can be controlled directly. Simulation results on synthetic transient signals generated using a beta density function are presented to analyze the noise robustness of the SEGD model. Three specific applications are considered: (1) SEGD-based modeling of castanet sounds; (2) appropriateness of the model for transient compression; and (3) determining glottal closure instants in speech using a short-time SEGD model of the linear prediction residue.


Subjects
Acoustics; Sound; Computer Simulation; Humans; Linear Models; Male; Motion; Music; Phonetics; Signal Processing, Computer-Assisted; Sound Spectrography; Speech Acoustics; Time Factors
16.
J Opt Soc Am A Opt Image Sci Vis; 29(10): 2080-91, 2012 Oct 1.
Article in English | MEDLINE | ID: mdl-23201655

ABSTRACT

We address the problem of high-resolution reconstruction in frequency-domain optical-coherence tomography (FDOCT). The traditional method employed uses the inverse discrete Fourier transform, which is limited in resolution due to the Heisenberg uncertainty principle. We propose a reconstruction technique based on zero-crossing (ZC) interval analysis. The motivation for our approach lies in the observation that, for a multilayered specimen, the backscattered signal may be expressed as a sum of sinusoids, and each sinusoid manifests as a peak in the FDOCT reconstruction. The successive ZC intervals of a sinusoid exhibit high consistency, with the intervals being inversely related to the frequency of the sinusoid. The statistics of the ZC intervals are used for detecting the frequencies present in the input signal. The noise robustness of the proposed technique is improved by using a cosine-modulated filter bank for separating the input into different frequency bands, and the ZC analysis is carried out on each band separately. The design of the filter bank requires the design of a prototype, which we accomplish using a Kaiser window approach. We show that the proposed method gives good results on synthesized and experimental data. The resolution is enhanced, and noise robustness is higher compared with the standard Fourier reconstruction.


Subjects
Image Processing, Computer-Assisted/methods; Tomography, Optical Coherence/methods; Glass; Onions; Plant Leaves; Signal-To-Noise Ratio
17.
J Opt Soc Am A Opt Image Sci Vis; 29(10): 2118-29, 2012 Oct 1.
Article in English | MEDLINE | ID: mdl-23201659

ABSTRACT

We propose a Riesz transform approach to the demodulation of digital holograms. The Riesz transform is a higher-dimensional extension of the Hilbert transform and is steerable to a desired orientation. Accurate demodulation of the hologram requires a reliable methodology by which quadrature-phase functions (or simply, quadratures) can be constructed. The Riesz transform, by itself, does not yield quadratures. However, one can start with the Riesz transform and construct the so-called vortex operator by employing the notion of quasi-eigenfunctions, and this approach results in accurate quadratures. The key advantage of using the vortex operator is that it effectively handles nonplanar fringes (interference patterns) and has the ability to compensate for the local orientation. Therefore, this method results in aberration-free holographic imaging even in the case when the wavefronts are not planar. We calibrate the method by estimating the orientation from a reference hologram, measured with an empty field of view. Demodulation results on synthesized planar as well as nonplanar fringe patterns show that the accuracy of demodulation is high. We also perform validation on real experimental measurements of Caenorhabditis elegans acquired with a digital holographic microscope.


Subjects
Algorithms; Holography/methods; Microscopy/methods; Animals; Caenorhabditis elegans; Image Processing, Computer-Assisted
18.
Opt Lett; 37(23): 4907-9, 2012 Dec 1.
Article in English | MEDLINE | ID: mdl-23202086

ABSTRACT

We address the reconstruction problem in frequency-domain optical-coherence tomography (FDOCT) from undersampled measurements within the framework of compressed sensing (CS). Specifically, we propose optimal sparsifying bases for accurate reconstruction by analyzing the backscattered signal model. Although one might expect Fourier bases to be optimal for the FDOCT reconstruction problem, it turns out that the optimal sparsifying bases are windowed cosine functions where the window is the magnitude spectrum of the laser source. Further, the windowed cosine bases can be phase locked, which allows one to obtain higher accuracy in reconstruction. We present experimental validations on real data. The findings reported in this Letter are useful for optimal dictionary design within the framework of CS-FDOCT.

19.
IEEE Trans Image Process; 21(3): 1258-71, 2012 Mar.
Article in English | MEDLINE | ID: mdl-21965208

ABSTRACT

We present a new class of continuously defined parametric snakes using a special kind of exponential splines as basis functions. We constrain our bases to have the shortest possible support, subject to some design constraints, in order to maximize efficiency. While the resulting snakes are versatile enough to provide a good approximation of any closed curve in the plane, their most important feature is that they admit ellipses within their span; thus, they can perfectly generate circular and elliptical shapes. These features are appropriate for delineating cross sections of cylinder-like conduits and for outlining blob-like objects. We address the implementation details and illustrate the capabilities of our snake on synthetic and real data.


Subjects
Image Processing, Computer-Assisted/methods; Cell Nucleus/ultrastructure; Child; Endocardium/anatomy & histology; HeLa Cells; Humans; Magnetic Resonance Imaging; Microscopy, Fluorescence
20.
J Opt Soc Am A Opt Image Sci Vis; 28(6): 983-92, 2011 Jun 1.
Article in English | MEDLINE | ID: mdl-21643382

ABSTRACT

We address the problem of exact complex-wave reconstruction in digital holography. We show that, by confining the object-wave modulation to one quadrant of the frequency domain, and by maintaining a reference-wave intensity higher than that of the object, one can achieve exact complex-wave reconstruction in the absence of noise. A feature of the proposed technique is that the zero-order artifact, which is commonly encountered in hologram reconstruction, can be completely suppressed in the absence of noise. The technique is noniterative and nonlinear. We also establish a connection between the reconstruction technique and homomorphic signal processing, which enables an interpretation of the technique from the perspective of deconvolution. Another key contribution of this paper is a direct link between the reconstruction technique and the two-dimensional Hilbert transform formalism proposed by Hahn. We show that this connection leads to explicit Hilbert transform relations between the magnitude and phase of the complex wave encoded in the hologram. We also provide results on simulated as well as experimental data to validate the accuracy of the reconstruction technique.


Subjects
Holography/methods; Image Processing, Computer-Assisted/methods; Nonlinear Dynamics; Phantoms, Imaging; Pollen; Reproducibility of Results; Signal Processing, Computer-Assisted; Taxus