Results 1 - 13 of 13
1.
IEEE Trans Image Process ; 10(4): 623-31, 2001.
Article in English | MEDLINE | ID: mdl-18249651

ABSTRACT

We develop a method for the formation of spotlight-mode synthetic aperture radar (SAR) images with enhanced features. The approach is based on a regularized reconstruction of the scattering field which combines a tomographic model of the SAR observation process with prior information regarding the nature of the features of interest. Compared to conventional SAR techniques, the method we propose produces images with increased resolution, reduced sidelobes, reduced speckle and easier-to-segment regions. Our technique effectively deals with the complex-valued, random-phase nature of the underlying SAR reflectivities. An efficient and robust numerical solution is achieved through extensions of half-quadratic regularization methods to the complex-valued SAR problem. We demonstrate the performance of the method on synthetic and real SAR scenes.
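As an illustrative aside, the regularized-reconstruction idea can be sketched in a greatly simplified, real-valued form: solve min ||Ax - y||² + λ||Dx||² in closed form (Tikhonov with a first-difference penalty). This is only a generic stand-in; the paper's half-quadratic treatment of complex-valued, random-phase SAR reflectivities is not reproduced here, and all names and sizes below are illustrative.

```python
import numpy as np

def regularized_reconstruct(A, y, lam):
    """Closed-form Tikhonov solution with a first-difference penalty."""
    n = A.shape[1]
    D = np.eye(n) - np.eye(n, k=1)   # first-difference operator
    return np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ y)

rng = np.random.default_rng(4)
n = 40
x_true = np.concatenate([np.zeros(20), np.ones(20)])  # piecewise-constant scene
A = rng.standard_normal((60, n)) / np.sqrt(n)         # toy observation operator
y = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = regularized_reconstruct(A, y, lam=0.1)
```

By optimality of the penalized objective, the reconstruction's data residual can never exceed that of the zero estimate.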

2.
IEEE Trans Image Process ; 10(7): 1118-28, 2001.
Article in English | MEDLINE | ID: mdl-18249684

ABSTRACT

Motivated by work in the area of dynamic magnetic resonance imaging (MRI), we develop a new approach to the problem of reduced-order MRI acquisition. Efforts in this field have concentrated on the use of Fourier and singular value decomposition (SVD) methods to obtain low-order representations of an entire image plane. We augment this work to the case of imaging an arbitrarily-shaped region of interest (ROI) embedded within the full image. After developing a natural error metric for this problem, we show that determining the minimal order required to meet a prescribed error level is in general intractable, but can be solved under certain assumptions. We then develop an optimization approach to the related problem of minimizing the error for a given order. Finally, we demonstrate the utility of this approach and its advantages over existing Fourier and SVD methods on a number of MRI images.
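For context, the low-order representation this work builds on can be sketched as a standard rank-k SVD truncation of a full image plane, with the approximation error measured for a given order. This is only the full-plane baseline the abstract contrasts against, not the authors' ROI-restricted method; sizes and names are illustrative.

```python
import numpy as np

def rank_k_approximation(image, k):
    """Best rank-k approximation of an image in the Frobenius norm."""
    U, s, Vt = np.linalg.svd(image, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))        # stand-in for an acquired image plane
err_full = np.linalg.norm(img - rank_k_approximation(img, 32))  # full order
err_low = np.linalg.norm(img - rank_k_approximation(img, 8))    # reduced order
```

Sweeping k and recording the error is the brute-force way to find the minimal order meeting a prescribed error level for the full plane; the paper shows the ROI version of this question is harder.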

3.
IEEE Trans Med Imaging ; 19(3): 243-55, 2000 Mar.
Article in English | MEDLINE | ID: mdl-10875708

ABSTRACT

Arterial diameter estimation from X-ray ciné angiograms is important for quantifying coronary artery disease (CAD) and for evaluating therapy. However, diameter measurement in vessel cross sections ≤1.0 mm is associated with large measurement errors. We present a novel diameter estimator which reduces both magnitude and variability of measurement error. We use a parametric nonlinear imaging model for X-ray ciné angiography and estimate unknown model parameters directly from the image data. Our technique allows us to exploit additional diameter information contained within the intensity profile amplitude, a feature which is overlooked by existing methods. This method uses a two-step procedure: the first step estimates the imaging model parameters directly from the angiographic frame and the second step uses these measurements to estimate the diameter of vessels in the same image. In Monte-Carlo simulation over a range of imaging conditions, our approach consistently produced lower estimation error and variability than conventional methods. With actual X-ray images, our estimator is also better than existing methods for the diameters examined (0.4-4.0 mm). These improvements are most significant in the range of narrow vessel widths associated with severe coronary artery disease.


Subjects
Cineangiography; Image Processing, Computer-Assisted/methods; Arterial Occlusive Diseases/diagnostic imaging; Computer Simulation; Humans; Phantoms, Imaging; Reproducibility of Results
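The model-fitting flavor of the two-step procedure can be illustrated in miniature: the densitometric profile of a cylindrical vessel is proportional to the chord length, so the diameter can be recovered by least-squares fitting over candidate radii. This toy omits the paper's full nonlinear imaging model (blur, noise, amplitude coupling); all values are illustrative.

```python
import numpy as np

def cylinder_profile(x, radius, amplitude):
    """Projection of a uniform cylinder: proportional to chord length."""
    inside = np.clip(radius**2 - x**2, 0.0, None)
    return amplitude * np.sqrt(inside)

x = np.linspace(-2.0, 2.0, 401)          # lateral position, arbitrary units
true_r = 0.5
data = cylinder_profile(x, true_r, 1.0)  # noiseless synthetic profile

# Step 2 in miniature: grid-search least-squares fit over candidate radii
candidates = np.linspace(0.1, 2.0, 191)
errors = [np.sum((data - cylinder_profile(x, r, 1.0))**2) for r in candidates]
est_r = candidates[int(np.argmin(errors))]
```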
4.
IEEE Trans Image Process ; 9(3): 456-68, 2000.
Article in English | MEDLINE | ID: mdl-18255416

ABSTRACT

This paper addresses the problem of both segmenting and reconstructing a noisy signal or image. The work is motivated by large problems arising in certain scientific applications, such as medical imaging. Two objectives for a segmentation and denoising algorithm are laid out: it should be computationally efficient and capable of generating statistics for the errors in the reconstruction and estimates of the boundary locations. The starting point for the development of a suitable algorithm is a variational approach to segmentation (Shah 1992). This paper then develops a precise statistical interpretation of a one dimensional (1-D) version of this variational approach to segmentation. The 1-D algorithm that arises as a result of this analysis is computationally efficient and capable of generating error statistics. A straightforward extension of this algorithm to two dimensions would incorporate recursive procedures for computing estimates of inhomogeneous Gaussian Markov random fields. Such procedures require an unacceptably large number of operations. To meet the objective of developing a computationally efficient algorithm, the use of previously developed multiscale statistical methods is investigated. This results in the development of an algorithm for segmenting and denoising which is not only computationally efficient but also capable of generating error statistics, as desired.
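A crude 1-D stand-in for joint segmentation and denoising: detect boundaries where the first difference exceeds a threshold, then denoise by averaging within each segment. This is only a sketch of the problem setup, not the variational/multiscale estimator the abstract describes; the threshold and signal are illustrative.

```python
import numpy as np

def segment_and_denoise(y, jump_threshold):
    """Split y at large first differences; replace each segment by its mean."""
    boundaries = [0] + [i + 1 for i in range(len(y) - 1)
                        if abs(y[i + 1] - y[i]) > jump_threshold] + [len(y)]
    out = np.empty_like(y)
    for a, b in zip(boundaries[:-1], boundaries[1:]):
        out[a:b] = np.mean(y[a:b])
    return out, boundaries[1:-1]   # denoised signal, interior boundaries

rng = np.random.default_rng(3)
truth = np.concatenate([np.zeros(50), 4.0 * np.ones(50)])  # one true edge at 50
noisy = truth + 0.2 * rng.standard_normal(100)
denoised, edges = segment_and_denoise(noisy, jump_threshold=2.0)
```

Unlike this toy, the paper's estimator also delivers error statistics for the reconstruction and the boundary locations.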

5.
IEEE Trans Image Process ; 7(6): 825-37, 1998.
Article in English | MEDLINE | ID: mdl-18276296

ABSTRACT

In this paper, we investigate the problems of anomaly detection and localization from noisy tomographic data. These are characteristic of a class of problems that cannot be optimally solved because they involve hypothesis testing over hypothesis spaces with extremely large cardinality. Our multiscale hypothesis testing approach addresses the key issues associated with this class of problems. A multiscale hypothesis test is a hierarchical sequence of composite hypothesis tests that discards large portions of the hypothesis space with minimal computational burden and zooms in on the likely true hypothesis. For the anomaly detection and localization problems, hypothesis zooming corresponds to spatial zooming - anomalies are successively localized to finer and finer spatial scales. The key challenges we address include how to hierarchically divide a large hypothesis space and how to process the data at each stage of the hierarchy to decide which parts of the hypothesis space deserve more attention. For the latter, we pose and solve a nonlinear optimization problem for a decision statistic that maximally disambiguates composite hypotheses. With no more computational complexity, our optimized statistic shows substantial improvement over conventional approaches. We provide examples that demonstrate this and quantify how much performance is sacrificed by the use of a suboptimal method as compared to that achievable if the optimal approach were computationally feasible.
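The hypothesis-zooming idea can be illustrated with a toy 1-D localizer: recursively test which half of the current interval carries more energy and discard the other half at each scale. The paper optimizes the decision statistic used at each stage; here a plain energy sum stands in for it, and the signal is illustrative.

```python
import numpy as np

def zoom_localize(signal):
    """Localize a single anomaly by hierarchically halving the search interval."""
    lo, hi = 0, len(signal)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        left = np.sum(signal[lo:mid]**2)
        right = np.sum(signal[mid:hi]**2)
        if left >= right:
            hi = mid   # keep left half
        else:
            lo = mid   # keep right half
    return lo

rng = np.random.default_rng(1)
sig = 0.1 * rng.standard_normal(64)
sig[37] += 5.0        # planted anomaly
loc = zoom_localize(sig)
```

Each stage discards half the remaining hypothesis space, so the cost is logarithmic in the number of candidate locations rather than linear.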

6.
IEEE Trans Image Process ; 6(1): 7-20, 1997.
Article in English | MEDLINE | ID: mdl-18282875

ABSTRACT

We present efficient multiscale approaches to the segmentation of natural clutter, specifically grass and forest, and to the enhancement of anomalies in synthetic aperture radar (SAR) imagery. The methods we propose exploit the coherent nature of SAR sensors. In particular, they take advantage of the characteristic statistical differences in imagery of different terrain types, as a function of scale, due to radar speckle. We employ a class of multiscale stochastic processes that provide a powerful framework for describing random processes and fields that evolve in scale. We build models representative of each category of terrain of interest (i.e., grass and forest) and employ them in directing decisions on pixel classification, segmentation, and anomalous behaviour. The scale-autoregressive nature of our models allows extremely efficient calculation of likelihoods for different terrain classifications over windows of SAR imagery. We subsequently use these likelihoods as the basis for both image pixel classification and grass-forest boundary estimation. In addition, anomaly enhancement is possible with minimal additional computation. Specifically, the residuals produced by our models in predicting SAR imagery from coarser scale images are theoretically uncorrelated. As a result, potentially anomalous pixels and regions are enhanced and pinpointed by noting regions whose residuals display a high level of correlation throughout scale. We evaluate the performance of our techniques through testing on 0.3-m resolution SAR data gathered with Lincoln Laboratory's millimeter-wave SAR.
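The coarse-to-fine prediction-residual idea can be sketched as follows: form a coarser image by 2x2 averaging, "predict" each fine-scale pixel from its coarse parent, and examine the residuals. The paper fits per-terrain scale-autoregressive coefficients; in this toy the prediction is simply the parent value, and the images are illustrative.

```python
import numpy as np

def coarsen(img):
    """One level of 2x2 block averaging (coarser scale of the image)."""
    return 0.25 * (img[::2, ::2] + img[1::2, ::2]
                   + img[::2, 1::2] + img[1::2, 1::2])

def prediction_residuals(img):
    """Fine-scale pixels minus their coarse-scale parent's value."""
    coarse = coarsen(img)
    parent = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
    return img - parent

flat = np.ones((8, 8))                 # perfectly predictable image
res_flat = prediction_residuals(flat)  # residuals should vanish
```

In the paper, pixels whose residuals stay correlated across scales are flagged as potential anomalies, since the model's residuals are theoretically uncorrelated for well-modeled terrain.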

7.
IEEE Trans Image Process ; 6(3): 463-78, 1997.
Article in English | MEDLINE | ID: mdl-18282941

ABSTRACT

We use a natural pixel-type representation of an object, originally developed for incomplete data tomography problems, to construct nearly orthonormal multiscale basis functions. The nearly orthonormal behavior of the multiscale basis functions results in a system matrix, relating the input (the object coefficients) and the output (the projection data), which is extremely sparse. In addition, the coarsest scale elements of this matrix capture any ill conditioning in the system matrix arising from the geometry of the imaging system. We exploit this feature to partition the system matrix by scales and obtain a reconstruction procedure that requires inversion of only a well-conditioned and sparse matrix. This enables us to formulate a tomographic reconstruction technique from incomplete data wherein the object is reconstructed at multiple scales or resolutions. In case of noisy projection data we extend our multiscale reconstruction technique to explicitly account for noise by calculating maximum a posteriori probability (MAP) multiscale reconstruction estimates based on a certain self-similar prior on the multiscale object coefficients. The framework for multiscale reconstruction presented can find application in regularization of imaging problems where the projection data are incomplete, irregular, and noisy, and in object feature recognition directly from projection data.

8.
Ultrasound Med Biol ; 22(1): 25-34, 1996.
Article in English | MEDLINE | ID: mdl-8928314

ABSTRACT

Arterial diameter is an important parameter of vascular physiology in vivo. Noninvasive measurements of arterial diameter can be used in the assessment of endothelium-dependent vasoreactivity (EDV) and arterial compliance. Measurements of EDV may serve for assessment of early atherosclerosis. The potential value of EDV measurements with specificity for individual subjects is a strong motivation for improvements in the ultrasonic measurement of arterial diameter. This article presents and evaluates new methods for the measurement and tracking of arterial diameter from B-mode images. B-mode images acquired in planes longitudinal to the vessel and in planes rotated slightly off of the vessel axis ("skew") are considered. The cross-sections of arteries in these planes are modeled as parabola pairs or as ellipses. For the brachial artery, the variance of caliper-based diameter estimates (0.0139 mm2) is twice as large as that of elliptical-model-based diameter estimates (0.0072 mm2) and five times as large as parabolic-model-based diameter estimates (0.0027 mm2). Diameter estimates from the skew and longitudinal planes perform equivalently in limited-motion quantitative comparisons. However, diameter estimates from skew planes are less sensitive to translational motions of the artery. Also, translational motions are unambiguously represented in the skew image, thus facilitating compensatory motions of the transducer. The methods described here are relatively simple to implement and may provide adequate resolution for noninvasive assessment of EDV with individual specificity.


Subjects
Algorithms; Brachial Artery/diagnostic imaging; Image Processing, Computer-Assisted; Arteriosclerosis/diagnostic imaging; Brachial Artery/anatomy & histology; Brachial Artery/physiology; Endothelium, Vascular/physiology; Humans; Ultrasonography/methods; Vascular Resistance
9.
Ultrasound Med Biol ; 22(1): 35-42, 1996.
Article in English | MEDLINE | ID: mdl-8928315

ABSTRACT

Measurements of endothelium-dependent vasoreactivity and arterial compliance are important metrics of vascular pathophysiology which may be used for the development and evaluation of therapeutic methods. The technique of ultrasonic echo tracking is applicable to measurements of endothelium-dependent vasoreactivity and arterial compliance. To evaluate the application of echo tracking to these measurements, we constructed a system based upon analog-to-digital conversion and storage of the radio frequency (RF) ultrasound signals. Off-line analysis of the RF data with various echo-tracking algorithms demonstrated two potential sources of error: tracking drift and RF transition regions. The tracking drift resulted from the slow accumulation of tracking error. The RF transition regions were associated with disparate motions of neighboring reflectors or the insonation of a new series of tissue layers. As a result of these sources of error, the application of echo tracking to endothelium-dependent vasoreactivity measurements is unlikely to outperform duplex ultrasound methods. The application of echo tracking to arterial compliance measurements via the arterial pressure/diameter relationship may produce variable results due to RF transition regions. Finally, the application of echo tracking to arterial compliance measurements via the pulse wave velocity is relatively insensitive to these sources of error because the pulse-wave velocity measurement depends upon the timing of the peak arterial distension, not on the absolute value of the distension.


Subjects
Algorithms; Arteries/diagnostic imaging; Endothelium, Vascular/physiology; Analog-Digital Conversion; Arteries/physiology; Blood Flow Velocity/physiology; Humans; Signal Processing, Computer-Assisted; Ultrasonography, Doppler/methods; Vascular Resistance/physiology
10.
IEEE Trans Med Imaging ; 15(1): 92-101, 1996.
Article in English | MEDLINE | ID: mdl-18215892

ABSTRACT

The authors represent the standard ramp filter operator of the filtered-back-projection (FBP) reconstruction in different bases composed of Haar and Daubechies compactly supported wavelets. The resulting multiscale representation of the ramp-filter matrix operator is approximately diagonal. The accuracy of this diagonal approximation becomes better as wavelets with larger numbers of vanishing moments are used. This wavelet-based representation enables the authors to formulate a multiscale tomographic reconstruction technique in which the object is reconstructed at multiple scales or resolutions. A complete reconstruction is obtained by combining the reconstructions at different scales. The authors' multiscale reconstruction technique has the same computational complexity as the FBP reconstruction method. It differs from other multiscale reconstruction techniques in that (1) the object is defined through a one-dimensional multiscale transformation of the projection domain, and (2) the authors explicitly account for noise in the projection data by calculating maximum a posteriori probability (MAP) multiscale reconstruction estimates based on a chosen fractal prior on the multiscale object coefficients. The computational complexity of this MAP solution is also the same as that of the FBP reconstruction. This result is in contrast to commonly used methods of statistical regularization, which result in computationally intensive optimization algorithms.

11.
IEEE Trans Image Process ; 5(3): 459-70, 1996.
Article in English | MEDLINE | ID: mdl-18285131

ABSTRACT

We describe a variational framework for the tomographic reconstruction of an image from the maximum likelihood (ML) estimates of its orthogonal moments. We show how these estimated moments and their (correlated) error statistics can be computed directly, and in a linear fashion from given noisy and possibly sparse projection data. Moreover, thanks to the consistency properties of the Radon transform, this two-step approach (moment estimation followed by image reconstruction) can be viewed as a statistically optimal procedure. Furthermore, by focusing on the important role played by the moments of projection data, we immediately see the close connection between tomographic reconstruction of nonnegative valued images and the problem of nonparametric estimation of probability densities given estimates of their moments. Taking advantage of this connection, our proposed variational algorithm is based on the minimization of a cost functional composed of a term measuring the divergence between a given prior estimate of the image and the current estimate of the image and a second quadratic term based on the error incurred in the estimation of the moments of the underlying image from the noisy projection data. We show that an iterative refinement of this algorithm leads to a practical algorithm for the solution of the highly complex equality constrained divergence minimization problem. We show that this iterative refinement results in superior reconstructions of images from very noisy data as compared with the classical filtered back-projection (FBP) algorithm.

12.
IEEE Trans Med Imaging ; 14(2): 249-58, 1995.
Article in English | MEDLINE | ID: mdl-18215828

ABSTRACT

The estimation of dynamically evolving ellipsoids from noisy lower-dimensional projections is examined. In particular, this work describes a model-based approach using geometric reconstruction and recursive estimation techniques to obtain a dynamic estimate of left-ventricular ejection fraction from a gated set of planar myocardial perfusion images. The proposed approach differs from current ejection fraction estimation techniques both in the imaging modality used and in the subsequent processing which yields a dynamic ejection fraction estimate. For this work, the left ventricle is modeled as a dynamically evolving three-dimensional (3-D) ellipsoid. The left-ventricular outline observed in the myocardial perfusion images is then modeled as a dynamic, two-dimensional (2-D) ellipsoid, obtained as the projection of the former 3-D ellipsoid. This data is processed in two ways: first, as a 3-D dynamic ellipsoid reconstruction problem; second, each view is considered as a 2-D dynamic ellipse estimation problem and then the 3-D ejection fraction is obtained by combining the effective 2-D ejection fractions of each view. The approximating ellipsoids are reconstructed using a Rauch-Tung-Striebel smoothing filter, which produces an ejection fraction estimate that is more robust to noise since it is based on the entire data set; in contrast, traditional ejection fraction estimates are based only on two frames of data. Further, numerical studies of the sensitivity of this approach to unknown dynamics and projection geometry are presented, providing a rational basis for specifying system parameters. This investigation includes estimation of ejection fraction from both simulated and real data.
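The smoothing machinery the abstract invokes can be sketched with a minimal scalar Kalman filter plus Rauch-Tung-Striebel backward pass. The paper applies this to evolving 3-D ellipsoid parameters; here the state is a single scalar, and all noise levels and names are illustrative assumptions.

```python
import numpy as np

def rts_smooth(z, F=1.0, Q=0.01, H=1.0, R=0.1, x0=0.0, P0=1.0):
    """Scalar Kalman filter followed by a Rauch-Tung-Striebel smoother."""
    n = len(z)
    x_f = np.zeros(n); P_f = np.zeros(n)   # filtered mean / variance
    x_p = np.zeros(n); P_p = np.zeros(n)   # predicted mean / variance
    x, P = x0, P0
    for k in range(n):
        xp, Pp = F * x, F * P * F + Q               # predict
        K = Pp * H / (H * Pp * H + R)               # Kalman gain
        x, P = xp + K * (z[k] - H * xp), (1 - K * H) * Pp
        x_p[k], P_p[k], x_f[k], P_f[k] = xp, Pp, x, P
    x_s = x_f.copy()
    for k in range(n - 2, -1, -1):                  # backward smoothing pass
        G = P_f[k] * F / P_p[k + 1]
        x_s[k] = x_f[k] + G * (x_s[k + 1] - x_p[k + 1])
    return x_f, x_s

rng = np.random.default_rng(5)
truth = np.linspace(1.0, 2.0, 30)                   # slowly evolving state
obs = truth + 0.3 * rng.standard_normal(30)
filtered, smoothed = rts_smooth(obs)
```

The smoother conditions every estimate on the entire data set, which is the source of the robustness claimed in the abstract; at the final time step the smoothed and filtered estimates coincide by construction.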

13.
IEEE Trans Image Process ; 3(6): 773-88, 1994.
Article in English | MEDLINE | ID: mdl-18296246

ABSTRACT

In the computation of dense optical flow fields, spatial coherence constraints are commonly used to regularize otherwise ill-posed problem formulations, providing spatial integration of data. We present a temporal, multiframe extension of the dense optical flow estimation formulation proposed by Horn and Schunck (1981) in which we use a temporal coherence constraint to yield the optimal fusing of data from multiple frames of measurements. Conceptually, standard Kalman filtering algorithms are applicable to the resulting multiframe optical flow estimation problem, providing a solution that is sequential and recursive in time. Experiments are presented to demonstrate that the resulting multiframe estimates are more robust to noise than those provided by the original, single-frame formulation. In addition, we demonstrate cases where the aperture problem of motion vision cannot be resolved satisfactorily without the temporal integration of data enabled by the proposed formulation. Practically, the large matrix dimensions involved in the problem prohibit exact implementation of the optimal Kalman filter. To overcome this limitation, we present a computationally efficient, yet near-optimal approximation of the exact filtering algorithm. This approximation has a precise interpretation as the sequential estimation of a reduced-order spatial model for the optical flow estimation error process at each time step and arises from an estimation-theoretic treatment of the filtering problem. Experiments also demonstrate the efficacy of this near-optimal filter.
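The temporal-fusion idea can be reduced to its simplest form: treat the flow at one pixel as a constant state and fuse noisy per-frame measurements with a scalar Kalman filter, so the estimate's uncertainty shrinks as frames accumulate. This is not the full multiframe Horn-Schunck formulation of the paper; names and noise levels are illustrative.

```python
import numpy as np

def kalman_fuse(measurements, meas_var, prior_mean=0.0, prior_var=10.0):
    """Sequentially fuse scalar measurements of a constant state."""
    mean, var = prior_mean, prior_var
    for z in measurements:
        gain = var / (var + meas_var)
        mean = mean + gain * (z - mean)   # update toward new measurement
        var = (1.0 - gain) * var          # uncertainty shrinks each frame
    return mean, var

rng = np.random.default_rng(2)
true_flow = 1.5
frames = true_flow + 0.3 * rng.standard_normal(20)  # 20 noisy frame measurements
est, est_var = kalman_fuse(frames, meas_var=0.09)
```

After fusing many frames, the posterior variance falls well below the single-measurement variance, which is the mechanism behind the noise robustness (and aperture-problem resolution) the multiframe formulation exploits.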
