Results 1 - 5 of 5
1.
IEEE Trans Image Process ; 21(12): 4819-29, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22692908

ABSTRACT

Global voting schemes based on the Hough transform (HT) have been widely used to robustly detect lines in images. However, since the votes do not take line connectivity into account, these methods do not deal well with cluttered images. On the other hand, the so-called local methods enforce connectivity but lack robustness to deal with challenging situations that occur in many realistic scenarios, e.g., when line segments cross or when long segments are corrupted. We address the critical limitations of the HT as a line segment extractor by incorporating connectivity in the voting process. This is done by only accounting for the contributions of edge points lying in increasingly larger neighborhoods and whose position and directional information agree with potential line segments. As a result, our method, which we call segment extraction by connectivity-enforcing HT (STRAIGHT), extracts the longest connected segments at each location of the image, thus also integrating into the HT voting process the usually separate step of individual segment extraction. The use of the Hough space mapping and a corresponding hierarchical implementation make our approach computationally feasible. We present experiments that illustrate, with synthetic and real images, how STRAIGHT succeeds in extracting complete segments in situations where current methods fail.
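The sketch below illustrates only one ingredient of this idea, namely Hough voting restricted to edge points whose gradient direction agrees with the candidate line orientation; it is a hypothetical simplification, not the authors' STRAIGHT implementation, and it omits the neighborhood-growing connectivity test and the hierarchical Hough-space mapping described in the abstract.

# Direction-constrained Hough voting: each edge point votes only for line
# orientations consistent with its local gradient. Simplified illustration;
# STRAIGHT's connectivity enforcement and hierarchical implementation are omitted.
import numpy as np

def directional_hough(edges, grad_angle, n_theta=180, angle_tol=np.deg2rad(10)):
    """edges: boolean edge map; grad_angle: gradient direction (radians) per pixel."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta))
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        # The line normal is parallel to the image gradient: vote only near that angle.
        normal = np.mod(grad_angle[y, x], np.pi)
        for t_idx, theta in enumerate(thetas):
            if min(abs(theta - normal), np.pi - abs(theta - normal)) > angle_tol:
                continue  # direction disagrees with this line orientation; skip the vote
            rho = x * np.cos(theta) + y * np.sin(theta)
            acc[int(round(rho)) + diag, t_idx] += 1
    return acc, thetas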

2.
IEEE Trans Image Process ; 20(10): 2896-911, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21518659

ABSTRACT

When comparing 2-D shapes, a key issue is their normalization. Translation and scale are easily taken care of by removing the mean and normalizing the energy. However, defining and computing the orientation of a 2-D shape is not so simple. In fact, although for elongated shapes the principal axis can be used to define one of two possible orientations, there is no such tool for general shapes. As we show in the paper, previous approaches fail to compute the orientation of even noiseless observations of simple shapes. We address this problem and show how to uniquely define the orientation of an arbitrary 2-D shape, in terms of what we call its Principal Moments. We start by showing that a small subset of these moments suffices to describe the underlying 2-D shape, i.e., that they form a compact representation, which is particularly relevant when dealing with large databases. Then, we propose a new method to efficiently compute the shape orientation: Principal Moment Analysis. Finally, we discuss how this method can further be applied to normalize gray-level images. Besides the theoretical proof of correctness, we describe experiments demonstrating robustness to noise and illustrating the method with real images.
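For context, the classical baseline mentioned in the abstract is the principal-axis orientation computed from second-order central moments, which only resolves orientation for elongated shapes. The sketch below implements that baseline; the paper's Principal Moment Analysis generalizes it and is not reproduced here.

# Principal-axis orientation from second-order central moments (classical baseline).
import numpy as np

def principal_axis_orientation(mask):
    """mask: 2-D boolean array of the shape's support. Returns an angle in radians."""
    ys, xs = np.nonzero(mask)
    xs = xs - xs.mean()          # remove the mean: translation normalization
    ys = ys - ys.mean()
    mu20 = np.mean(xs * xs)
    mu02 = np.mean(ys * ys)
    mu11 = np.mean(xs * ys)
    # Principal-axis angle; defined only up to pi, and degenerate when
    # mu20 == mu02 and mu11 == 0 (non-elongated shapes), which is exactly
    # the limitation the paper addresses.
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)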

3.
Article in English | MEDLINE | ID: mdl-19163933

ABSTRACT

The analysis of 3D SPECT brain images requires several pre-processing steps, such as registration, intensity normalization, and brain extraction. Registration is usually performed before intensity normalization, which requires robust registration methods, such as those based on the maximization of Mutual Information (MI), and these are computationally complex. In this paper, we propose a computationally simple method that performs the simultaneous registration and intensity normalization of SPECT brain perfusion images. The approach, which extends to 3D data a method originally proposed in [1] for 2D photographic images, estimates the intensity normalization parameters and the registration parameters in alternating steps. Our experiments with real SPECT images show that the proposed registration method leads to results similar to those obtained with more expensive algorithms, such as those based on the MI criterion.
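A schematic of the alternating idea described in the abstract, under simplifying assumptions that are mine rather than the paper's: a linear intensity model a*I + b and a translation-only registration found by brute-force SSD search. The paper's actual parameterization and optimization are not reproduced here.

# Alternating estimation of intensity-normalization and registration parameters.
import numpy as np
from scipy.ndimage import shift as nd_shift

def joint_register_normalize(moving, fixed, max_shift=3, n_iter=5):
    t = np.zeros(3)                      # current translation estimate (voxels)
    a, b = 1.0, 0.0                      # current intensity parameters
    for _ in range(n_iter):
        warped = nd_shift(moving, t, order=1)
        # Step 1: intensity parameters by least squares, registration held fixed.
        A = np.stack([warped.ravel(), np.ones(warped.size)], axis=1)
        (a, b), *_ = np.linalg.lstsq(A, fixed.ravel(), rcond=None)
        # Step 2: registration by exhaustive search over small translations,
        # intensity parameters held fixed.
        best, best_t = np.inf, t
        for dz in range(-max_shift, max_shift + 1):
            for dy in range(-max_shift, max_shift + 1):
                for dx in range(-max_shift, max_shift + 1):
                    cand = np.array([dz, dy, dx], dtype=float)
                    err = np.sum((a * nd_shift(moving, cand, order=1) + b - fixed) ** 2)
                    if err < best:
                        best, best_t = err, cand
        t = best_t
    return t, a, b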


Subject(s)
Algorithms , Artificial Intelligence , Brain/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Pattern Recognition, Automated/methods , Tomography, Emission-Computed, Single-Photon/methods , Humans , Image Enhancement/methods , Reproducibility of Results , Sensitivity and Specificity
4.
IEEE Trans Image Process ; 14(8): 1109-24, 2005 Aug.
Article in English | MEDLINE | ID: mdl-16121459

ABSTRACT

Layered video representations are increasingly popular; see [2] for a recent review. Segmentation of moving objects is a key step for automating such representations. Current motion segmentation methods either fail to segment moving objects in low-textured regions or are computationally very expensive. This paper presents a computationally simple algorithm that segments moving objects, even in low-texture/low-contrast scenes. Our method infers the moving object templates directly from the image intensity values, rather than computing the motion field as an intermediate step. Our model takes into account the rigidity of the moving object and the occlusion of the background by the moving object. We formulate the segmentation problem as the minimization of a penalized likelihood cost function and present an algorithm to estimate all the unknown parameters: the motions, the template of the moving object, and the intensity levels of the object and of the background pixels. The cost function combines a maximum likelihood estimation term with a term that penalizes large templates. The minimization algorithm performs two alternating steps, for which we derive closed-form solutions. A relaxation scheme improves convergence even when low texture makes it very challenging to segment the moving object from the background. Experiments demonstrate the good performance of our method.
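A highly simplified sketch of the alternating structure described in the abstract, under assumptions of my own: a static camera, pre-registered frames, and per-pixel constant intensity models. One step re-estimates the object and background intensities for a fixed template; the other re-estimates the binary template pixel-wise, with a penalty lam on template size. The paper's motion estimation, occlusion model, and relaxation scheme are not reproduced here.

# Alternating template / intensity estimation with a template-size penalty.
import numpy as np

def alternate_template_estimation(frames, lam=0.1, n_iter=10):
    """frames: (T, H, W) array of registered grayscale frames."""
    var = frames.var(axis=0)
    template = var > var.mean()            # crude initialization of the object mask
    for _ in range(n_iter):
        # Step 1: closed-form intensity estimates given the current template.
        obj_level = frames[:, template].mean() if template.any() else 0.0
        bg_level = frames[:, ~template].mean() if (~template).any() else 0.0
        # Step 2: closed-form template update given the intensity levels,
        # penalizing pixels assigned to the template by lam (favors small templates).
        cost_obj = ((frames - obj_level) ** 2).mean(axis=0) + lam
        cost_bg = ((frames - bg_level) ** 2).mean(axis=0)
        template = cost_obj < cost_bg
    return template, obj_level, bg_level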


Subject(s)
Algorithms , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Information Storage and Retrieval/methods , Movement , Pattern Recognition, Automated/methods , Subtraction Technique , Video Recording/methods , Computer Simulation , Models, Statistical , Numerical Analysis, Computer-Assisted , Signal Processing, Computer-Assisted
5.
IEEE Trans Pattern Anal Mach Intell ; 27(5): 822-7, 2005 May.
Article in English | MEDLINE | ID: mdl-15875804

ABSTRACT

The problem of inferring the 3D orientation of a camera from video sequences has mostly been addressed by first computing correspondences of image features. This intermediate step is now seen as the main bottleneck of those approaches. In this paper, we propose a new 3D orientation estimation method for urban (indoor and outdoor) environments that avoids correspondences between frames. The scene property exploited by our method is that many edges are oriented along three orthogonal directions; this is the recently introduced Manhattan world (MW) assumption. The main contributions of this paper are: the definition of equivalence classes of equiprojective orientations; the introduction of a new small-rotation model, which formalizes the fact that the camera moves smoothly; and the decoupling of the elevation and twist angle estimation from that of the compass angle. We build a probabilistic sequential orientation estimation method, based on an MW likelihood model, with the above contributions allowing a drastic reduction of the search space for each orientation estimate. We demonstrate the performance of our method using real video sequences.
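A sketch of a Manhattan-world alignment score for a candidate camera rotation, combined with a coarse search over small rotations around the previous estimate (the small-rotation idea from the abstract). This is a hypothetical simplification: it uses a hard best-axis assignment per edge instead of the paper's probabilistic MW likelihood, ignores the equiprojective equivalence classes and the elevation/twist/compass decoupling, and uses scipy's Rotation purely for convenience.

# Score candidate rotations by how well edge directions point at the three
# Manhattan vanishing points; search small rotations about the previous estimate.
import numpy as np
from scipy.spatial.transform import Rotation

def mw_score(R, K, edge_xy, edge_dir):
    """edge_xy: (x, y) pixel positions; edge_dir: unit 2-D edge direction vectors."""
    score = 0.0
    vps = (K @ R).T                          # rows: homogeneous vanishing points K*R*e_a
    for (x, y), d in zip(edge_xy, edge_dir):
        best = 0.0
        for v in vps:
            if abs(v[2]) < 1e-9:
                continue                     # vanishing point at infinity; skip for simplicity
            to_vp = np.array([v[0] / v[2] - x, v[1] / v[2] - y])
            n = np.linalg.norm(to_vp)
            if n > 1e-9:
                best = max(best, abs(np.dot(to_vp / n, d)))  # alignment with the edge direction
        score += best
    return score

def update_orientation(R_prev, K, edge_xy, edge_dir, max_deg=3.0, step_deg=1.0):
    """Search small rotations about the previous estimate; return the best-scoring one."""
    angles = np.arange(-max_deg, max_deg + 1e-9, step_deg)
    best_R, best_s = R_prev, -np.inf
    for rx in angles:
        for ry in angles:
            for rz in angles:
                R = Rotation.from_euler("xyz", [rx, ry, rz], degrees=True).as_matrix() @ R_prev
                s = mw_score(R, K, edge_xy, edge_dir)
                if s > best_s:
                    best_s, best_R = s, R
    return best_R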


Subject(s)
Algorithms , Artificial Intelligence , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Pattern Recognition, Automated/methods , Space Perception , Video Recording/methods , Cluster Analysis , Image Enhancement/methods , Information Storage and Retrieval/methods , Photography/methods , Subtraction Technique