Results 1 - 20 of 25
1.
IEEE Trans Image Process ; 33: 3161-3173, 2024.
Article in English | MEDLINE | ID: mdl-38683701

ABSTRACT

Ellipse detection is a challenging low-level task indispensable to many image analysis applications. Existing ellipse detection methods commonly encounter two fundamental issues. First, detection accuracy tends to be lower for a small ellipse than for a large one; this is the scale issue. Second, detection accuracy tends to be lower along the minor axis than along the major axis of the same ellipse; this is the anisotropy issue. To address both issues simultaneously, a novel anisotropic scale-invariant (ASI) ellipse detection methodology is proposed. Our basic idea is to perform ellipse detection in a transformed image space, referred to as the ellipse normalization (EN) space, in which the desired ellipse from the original image is 'normalized' to the unit circle. With the EN-space established, an analytical ellipse fitting scheme and a set of distance measures are developed. Theoretical justifications prove that both the ellipse fitting scheme and the distance measures are invariant to anisotropic scaling, so each ellipse can be detected with the same accuracy regardless of its size and ellipticity. By incorporating these components into two recent state-of-the-art algorithms, two ASI ellipse detectors are developed and used to verify the effectiveness of the proposed methodology.
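
The core of the EN-space idea is an affine map that carries a candidate ellipse onto the unit circle. Below is a minimal numpy sketch of such a normalizing transform (not the authors' code; the parameter names cx, cy, a, b, theta are illustrative assumptions):

import numpy as np

def en_transform(cx, cy, a, b, theta):
    """Return the 2x2 matrix M and center c so that q = M @ (p - c) maps points p
    on the ellipse (center c, semi-axes a and b, rotation theta) onto the unit circle."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, s], [-s, c]])          # rotate by -theta
    scale = np.diag([1.0 / a, 1.0 / b])        # anisotropic rescaling
    return scale @ rot, np.array([cx, cy])

# quick check: points sampled on the ellipse land on the unit circle
cx, cy, a, b, theta = 40.0, 25.0, 10.0, 3.0, 0.6
M, c0 = en_transform(cx, cy, a, b, theta)
t = np.linspace(0, 2 * np.pi, 200)
pts = np.stack([a * np.cos(t), b * np.sin(t)])                    # canonical ellipse
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
pts = (R @ pts).T + c0                                            # rotate + translate
q = (pts - c0) @ M.T
print(np.allclose(np.linalg.norm(q, axis=1), 1.0))                # True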

2.
IEEE Trans Image Process ; 31: 6175-6187, 2022.
Article in English | MEDLINE | ID: mdl-36126028

ABSTRACT

In this paper, a full-reference video quality assessment (VQA) model, called the hybrid spatiotemporal feature-based model (HSFM), is designed for the perceptual quality assessment of screen content videos (SCVs). SCVs have a hybrid structure containing both screen and natural scenes, which are perceived by the human visual system (HVS) with different visual effects. With this in mind, the three-dimensional Laplacian of Gaussian (3D-LOG) filter and three-dimensional natural scene statistics (3D-NSS) are exploited to extract the screen and natural spatiotemporal features from the reference and distorted SCV sequences separately. The similarities of these extracted features are then computed independently, yielding quality scores for the screen and natural scenes of the distorted SCV. An adaptive screen and natural quality fusion scheme driven by local video activity is then developed to combine these scores into the final VQA score of the distorted SCV under evaluation. Experimental results on the Screen Content Video Database (SCVD) and the Compressed Screen Content Video Quality (CSCVQ) database show that the proposed HSFM agrees more closely with the perceptual quality of SCVs as judged by the HVS than a variety of classical and recent IQA/VQA models.


Subject(s)
Algorithms , Databases, Factual , Humans , Video Recording/methods
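
As a rough illustration of the similarity-then-fusion pattern the HSFM abstract above describes, the following hedged Python sketch compares reference and distorted feature maps with an SSIM-style ratio and blends screen/natural scores with an activity-driven weight; the constant, the activity measure, and the weighting are assumptions, not the published formulas:

import numpy as np

def feature_similarity(f_ref, f_dst, c=1e-3):
    """SSIM-style similarity map between reference and distorted feature maps."""
    return (2.0 * f_ref * f_dst + c) / (f_ref ** 2 + f_dst ** 2 + c)

def fuse_scores(screen_score, natural_score, activity):
    """Blend the screen/natural quality scores with an activity-driven weight in [0, 1];
    higher activity leans on the natural-scene score in this toy version."""
    w = np.clip(activity, 0.0, 1.0)
    return (1.0 - w) * screen_score + w * natural_score

# toy usage with random "feature maps"
rng = np.random.default_rng(0)
f_ref, f_dst = rng.random((8, 8)), rng.random((8, 8))
screen_q = feature_similarity(f_ref, f_dst).mean()
natural_q = feature_similarity(f_ref ** 0.5, f_dst ** 0.5).mean()
print(fuse_scores(screen_q, natural_q, activity=0.3))
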
3.
IEEE Trans Image Process ; 31: 3765-3779, 2022.
Article in English | MEDLINE | ID: mdl-35604974

ABSTRACT

This paper proposes a new full-reference image quality assessment (IQA) model, called the spatial and geometry feature-based model (SGFM), for performing perceptual quality evaluation on light field (LF) images. Since an LF image describes both spatial and geometry information of the scene, spatial features are extracted from the sub-aperture images (SAIs) using the contourlet transform and exploited to reflect the spatial quality degradation of the LF image, while geometry features are extracted across adjacent SAIs using a 3D-Gabor filter and explored to describe the loss of viewing consistency of the LF image. These schemes are motivated by the fact that the human eyes are most sensitive to scale, direction, and contour from the spatial perspective, and to viewing-angle variations from the geometry perspective. These operations are applied to the reference and distorted LF images independently. The degree of similarity is then computed from the above-measured quantities to jointly arrive at the final IQA score of the distorted LF image. Experimental results on three commonly used LF IQA datasets show that the proposed SGFM is more consistent with the quality of LF images as perceived by the human visual system (HVS) than multiple classical and state-of-the-art IQA models.

4.
Article in English | MEDLINE | ID: mdl-32881686

ABSTRACT

Existing neural networks proposed for low-level image processing tasks are usually implemented by stacking convolution layers with limited kernel size. Each convolution layer involves context information from only a small local neighborhood; more contextual features can be explored as more convolution layers are adopted, but taking full advantage of long-range dependencies in this way is difficult and costly. We propose a novel non-local module, the Pyramid Non-local Block, to build connections between every pixel and all remaining pixels. The proposed module efficiently exploits pairwise dependencies between different scales of low-level structures. This is achieved by first learning a query feature map at full resolution and a pyramid of reference feature maps at downscaled resolutions; correlations with the multi-scale reference features are then exploited to enhance the pixel-level feature representation. The calculation procedure is economical in terms of memory consumption and computational cost. Based on the proposed module, we devise a Pyramid Non-local Enhanced Network for edge-preserving image smoothing, which achieves state-of-the-art performance in imitating three classical image smoothing algorithms. Additionally, the pyramid non-local block can be directly incorporated into convolutional neural networks for other image restoration tasks. We integrate it into two existing methods for image denoising and single-image super-resolution, achieving consistently improved performance.
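
The following PyTorch sketch gives one plausible reading of a pyramid non-local block: each full-resolution query position attends to reference features pooled at several coarser scales, which keeps the attention matrix small. It illustrates the idea only, not the authors' implementation; the channel sizes and scale list are assumptions:

import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidNonLocal(nn.Module):
    def __init__(self, channels, inner=32, scales=(2, 4, 8)):
        super().__init__()
        self.query = nn.Conv2d(channels, inner, 1)
        self.key = nn.Conv2d(channels, inner, 1)
        self.value = nn.Conv2d(channels, inner, 1)
        self.out = nn.Conv2d(inner, channels, 1)
        self.scales = scales

    def forward(self, x):
        b, _, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)           # (b, h*w, inner)
        keys, values = [], []
        for s in self.scales:                                   # downscaled references
            xs = F.adaptive_avg_pool2d(x, (max(h // s, 1), max(w // s, 1)))
            keys.append(self.key(xs).flatten(2))                # (b, inner, n_s)
            values.append(self.value(xs).flatten(2))
        k = torch.cat(keys, dim=2)                              # (b, inner, N)
        v = torch.cat(values, dim=2).transpose(1, 2)            # (b, N, inner)
        attn = torch.softmax(q @ k / k.shape[1] ** 0.5, dim=-1)
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                                  # residual connection

x = torch.randn(1, 16, 32, 32)
print(PyramidNonLocal(16)(x).shape)                             # torch.Size([1, 16, 32, 32])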

5.
Article in English | MEDLINE | ID: mdl-32997630

ABSTRACT

A new multi-scale deep learning (MDL) framework is proposed and exploited for conducting image interpolation in this paper. The core of the framework is a seeding network that needs to be designed for the targeted task. For image interpolation, a novel attention-aware inception network (AIN) is developed as the seeding network; it has two key stages: 1) feature extraction based on the low-resolution input image; and 2) feature-to-image mapping to enlarge the image's size or resolution. Note that the designed seeding network, AIN, needs to be trained with a matched training dataset at each scale. For that, multi-scale image patches are generated using our proposed pyramid cut, which outperforms the conventional image pyramid method by completely avoiding the aliasing issue. After training, the trained AINs are combined for processing the input image in the testing stage. Extensive experimental results obtained from seven image datasets (comprising 359 images in total) clearly show that the proposed MAIN consistently delivers highly accurate interpolated images.

6.
Article in English | MEDLINE | ID: mdl-32886610

ABSTRACT

Lossy compression introduces artifacts into the compressed image and degrades its visual quality. In recent years, many compression artifact removal methods based on convolutional neural networks (CNNs) have been developed with great success. However, these methods usually train a model for one specific quality factor or a small range of quality factors; if the quality factor of a test image does not match the assumed range, performance degrades. With this motivation, and with practical usage in mind, a highly robust compression artifact removal network is proposed in this paper. The proposed network is a single-model approach that can be trained to handle a wide range of quality factors while consistently delivering superior or comparable artifact removal performance. To demonstrate this, we focus on JPEG compression with quality factors ranging from 1 to 60. A key to the success of our network lies in the novel use of the quantization tables as part of the training data. Furthermore, the network has two parallel branches: the restoration branch and the global branch. The former effectively removes local artifacts, such as ringing. The latter extracts global features of the entire image that provide highly instrumental quality improvement, and it is especially effective in dealing with global artifacts, such as blocking and color shifting. Extensive experimental results on color and grayscale images clearly demonstrate the effectiveness of the proposed single-model approach for removing compression artifacts from decoded images.
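
One simple way to expose the quantization table to a network, in the spirit of the abstract above, is to tile the 8x8 luma table over the image and stack it as an extra input channel. The hedged sketch below uses Pillow to read the table; the normalization and channel layout are illustrative assumptions, not the paper's recipe:

import numpy as np
from PIL import Image

def image_with_qtable_channel(path):
    img = Image.open(path)                                   # must be a JPEG file
    rgb = np.asarray(img.convert("RGB"), dtype=np.float32) / 255.0
    qtable = np.asarray(img.quantization[0], dtype=np.float32).reshape(8, 8)
    h, w = rgb.shape[:2]
    tiled = np.tile(qtable, (h // 8 + 1, w // 8 + 1))[:h, :w] / 255.0
    return np.concatenate([rgb, tiled[..., None]], axis=-1)  # H x W x 4 input

# x = image_with_qtable_channel("example.jpg")               # shape (H, W, 4)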

7.
Article in English | MEDLINE | ID: mdl-32845839

ABSTRACT

In this paper, we make the first attempt to study subjective and objective quality assessment for screen content videos (SCVs). To that end, we construct the first large-scale video quality assessment (VQA) database specifically for SCVs, called the screen content video database (SCVD). The SCVD provides 16 reference SCVs, 800 distorted SCVs, and their corresponding subjective scores, and it is made publicly available for research use. The distorted SCVs are generated from each reference SCV using 10 distortion types with 5 degradation levels per type, and each distorted SCV is rated by at least 32 subjects in the subjective test. Furthermore, we propose the first full-reference VQA model for SCVs, called the spatiotemporal Gabor feature tensor-based model (SGFTM), to objectively evaluate the perceptual quality of distorted SCVs. This is motivated by the observation that the 3D-Gabor filter can well simulate the visual functions of the human visual system (HVS) in perceiving videos, being more sensitive to the edge and motion information that is often encountered in SCVs. Specifically, the proposed SGFTM exploits the 3D-Gabor filter to extract spatiotemporal Gabor feature tensors from the reference and distorted SCVs individually, measures their similarities, and combines them through the developed spatiotemporal feature tensor pooling strategy to obtain the final SGFTM score. Experimental results on the SCVD show that the proposed SGFTM is highly consistent with subjective perception of SCV quality and consistently outperforms multiple classical and state-of-the-art image/video quality assessment models.
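
For reference, a separably sampled 3D Gabor kernel (two spatial axes plus time) can be built as in the short numpy sketch below; the frequency, bandwidth, and orientation values are illustrative, and the imaginary part provides an odd-symmetric, edge/motion-sensitive response:

import numpy as np

def gabor_3d(size=9, f0=0.25, theta=0.0, sigma=2.0, sigma_t=2.0, ft=0.1):
    r = np.arange(size) - size // 2
    x, y, t = np.meshgrid(r, r, r, indexing="ij")
    u = x * np.cos(theta) + y * np.sin(theta)                     # spatial direction
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2) - t**2 / (2 * sigma_t**2))
    carrier = np.exp(2j * np.pi * (f0 * u + ft * t))              # spatiotemporal wave
    return envelope * carrier                                     # imaginary part is odd-symmetric

kernel = gabor_3d()
print(kernel.shape, np.iscomplexobj(kernel))                      # (9, 9, 9) True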

8.
Article in English | MEDLINE | ID: mdl-32149636

ABSTRACT

In this paper, a progressive collaborative representation (PCR) framework is proposed that can incorporate any existing color image demosaicing method to further boost its performance. The PCR consists of two phases: (i) offline training and (ii) online refinement. In phase (i), multiple training-and-refining stages are performed. In each stage, a new dictionary is established by learning from a large number of feature-patch pairs extracted from the demosaicked images of the current stage and their corresponding original full-color images. After training, a projection matrix is generated and exploited to refine the current demosaicked image; the updated image with improved quality is then used as the input for the next training-and-refining stage and processed in the same way. At the end of phase (i), all the projection matrices generated as described above are exploited in phase (ii) to conduct online refinement of the demosaicked test image. Extensive simulations conducted on two commonly used test datasets for evaluating demosaicing algorithms (i.e., IMAX and Kodak) clearly demonstrate that the proposed PCR framework consistently boosts the performance of every image demosaicing method we experimented with, in terms of both objective and subjective evaluations.
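
The per-stage projection matrix can be understood as a regularized least-squares mapping from feature patches of the current demosaicked image to patches of the ground truth. A hedged closed-form sketch follows (the ridge regularizer and its weight are common choices, not necessarily the exact PCR formulation):

import numpy as np

def learn_projection(X, Y, lam=1e-3):
    """X: d x n feature patches from the current demosaicked output,
    Y: d x n corresponding ground-truth patches.
    Returns P minimizing ||Y - P X||_F^2 + lam ||P||_F^2."""
    d = X.shape[0]
    return Y @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(d))

def refine(patches, P):
    """Apply the learned projection to refine the current demosaicked patches."""
    return P @ patches

rng = np.random.default_rng(1)
X = rng.standard_normal((64, 5000))            # e.g. 8x8 patches from the current stage
Y = X + 0.1 * rng.standard_normal(X.shape)     # corresponding ground-truth patches
P = learn_projection(X, Y)
print(refine(X[:, :3], P).shape)               # (64, 3)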

9.
Article in English | MEDLINE | ID: mdl-31478850

ABSTRACT

3D point clouds with associated attributes are considered a promising paradigm for immersive communication. However, the corresponding compression schemes for this medium are still in their infancy. Moreover, in contrast to conventional image/video compression, compressing 3D point cloud data is more challenging owing to its irregular structure. In this paper, we propose a novel and effective compression scheme for the attributes of voxelized 3D point clouds. In the first stage, an input voxelized 3D point cloud is divided into blocks of equal size. Then, to deal with the irregular structure of 3D point clouds, a geometry-guided sparse representation (GSR) is proposed to eliminate the redundancy within each block; it is formulated as an ℓ0-norm regularized optimization problem. An inter-block prediction scheme is also applied to remove the redundancy between blocks. Finally, by quantitatively analyzing the characteristics of the transform coefficients produced by the GSR, an effective entropy coding strategy tailored to the GSR is developed to generate the bitstream. Experimental results on various benchmark datasets show that the proposed compression scheme achieves better rate-distortion performance and visual quality than state-of-the-art methods.
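
An ℓ0-norm regularized representation of a block over a dictionary is typically approximated greedily; the sketch below uses plain orthogonal matching pursuit with a random dictionary, whereas the paper builds a geometry-guided one:

import numpy as np

def omp(D, y, sparsity):
    """Greedy approximation of min ||y - D a||_2 s.t. ||a||_0 <= sparsity."""
    residual, support = y.copy(), []
    a = np.zeros(D.shape[1])
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(D.T @ residual)))     # best-correlated atom
        support.append(idx)
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    a[support] = sol
    return a

rng = np.random.default_rng(2)
D = rng.standard_normal((32, 128))
D /= np.linalg.norm(D, axis=0)                            # unit-norm atoms
y = D[:, [3, 40]] @ np.array([1.5, -0.7])                 # a 2-sparse test signal
print(np.nonzero(omp(D, y, 2))[0])                        # typically [3 40]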

10.
IEEE Trans Image Process ; 27(9): 4465-4477, 2018 Sep.
Article in English | MEDLINE | ID: mdl-29897872

ABSTRACT

In this paper, a highly adaptive unsharp masking (UM) method is proposed, called blurriness-guided UM, or BUM in short. The proposed BUM exploits the estimated local blurriness as guidance information to perform pixel-wise enhancement. The consideration of local blurriness is motivated by the fact that enhancing a highly sharp or a highly blurred image region is undesirable, since this could easily yield unpleasant image artifacts due to over-enhancement or noise amplification, respectively. The proposed BUM algorithm has two key adaptations. First, the enhancement strength is adjusted for each pixel of the input image according to the degree of local blurriness measured in the region around that pixel. All such measurements collectively form the blurriness map, from which the scaling matrix is obtained using our proposed mapping process. Second, we also consider the type of layer-decomposition filter used to generate the base layer and the detail layer, since this helps to prevent over-enhancement artifacts; the layer-decomposition filter is considered from the viewpoint of edge-preserving versus non-edge-preserving filtering. Extensive simulations on various test images clearly demonstrate that the proposed BUM consistently yields enhanced images with better perceptual quality than those produced with a fixed enhancement strength or by other state-of-the-art adaptive UM methods.
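
A hedged sketch of the blurriness-guided idea: estimate a per-pixel sharpness proxy, map it to a per-pixel gain that is small for very blurred and very sharp regions, and add the scaled detail layer back. The proxy and the mapping below are illustrative stand-ins for the paper's measures:

import numpy as np
from scipy.ndimage import gaussian_filter

def bum_sketch(img, base_sigma=2.0, max_gain=1.5):
    base = gaussian_filter(img, base_sigma)                  # base layer
    detail = img - base                                      # detail layer
    # local gradient energy as a crude sharpness proxy (low -> blurred)
    gy, gx = np.gradient(gaussian_filter(img, 1.0))
    sharpness = gaussian_filter(np.hypot(gx, gy), 3.0)
    s = sharpness / (sharpness.max() + 1e-8)
    # suppress the gain where the region is very blurred or already very sharp
    gain = max_gain * 4.0 * s * (1.0 - s)
    return np.clip(img + gain * detail, 0.0, 1.0)

img = np.random.default_rng(3).random((64, 64))
print(bum_sketch(img).shape)                                 # (64, 64)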

11.
IEEE Trans Image Process ; 27(9): 4516-4528, 2018 Sep.
Article in English | MEDLINE | ID: mdl-29897876

ABSTRACT

In this paper, an accurate and efficient full-reference image quality assessment (IQA) model based on extracted Gabor features, called the Gabor feature-based model (GFM), is proposed for the objective evaluation of screen content images (SCIs). It is well known that Gabor filters are highly consistent with the response of the human visual system (HVS), and that the HVS is highly sensitive to edge information. Based on these facts, the imaginary part of the Gabor filter, which has odd symmetry and responds to edges, is applied to the luminance of the reference and distorted SCIs to extract their Gabor features. The local similarities of the extracted Gabor features and of two chrominance components, recorded in the LMN color space, are then measured independently. Finally, a Gabor-feature pooling strategy is employed to combine these measurements into the final evaluation score. Experimental results obtained on two large SCI databases show that the proposed GFM not only yields higher consistency with human perception in the assessment of SCIs but also requires lower computational complexity than classical and state-of-the-art IQA models. The source code for the proposed GFM will be available at http://smartviplab.org/pubilcations/GFM.html.
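
The edge-response-then-similarity pattern can be sketched as follows: filter the luminance of the reference and distorted images with the imaginary (odd-symmetric) part of a Gabor kernel and compare the response magnitudes with an SSIM-style ratio. The kernel parameters and the constant c are assumptions, not the published GFM settings:

import numpy as np
from scipy.signal import convolve2d

def gabor_imag(size=11, f0=0.2, theta=0.0, sigma=2.5):
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r, indexing="ij")
    u = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.sin(2 * np.pi * f0 * u)

def gabor_similarity(ref, dst, c=1e-4):
    k = gabor_imag()
    gr = np.abs(convolve2d(ref, k, mode="same", boundary="symm"))
    gd = np.abs(convolve2d(dst, k, mode="same", boundary="symm"))
    return ((2 * gr * gd + c) / (gr**2 + gd**2 + c)).mean()

rng = np.random.default_rng(4)
ref = rng.random((48, 48))
print(gabor_similarity(ref, ref))                             # 1.0 for identical inputs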

12.
IEEE Trans Image Process ; 26(10): 4818-4831, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28644808

ABSTRACT

In this paper, an accurate full-reference image quality assessment (IQA) model for assessing screen content images (SCIs), called the edge similarity model (ESIM), is proposed. It is inspired by the fact that the human visual system (HVS) is highly sensitive to the edges that are frequently encountered in SCIs; therefore, essential edge features are extracted and exploited for conducting IQA of SCIs. The key novelty of the proposed ESIM lies in the extraction and use of three salient edge features: edge contrast, edge width, and edge direction. The first two attributes are generated simultaneously from the input SCI based on a parametric edge model, while the last is derived directly from the input SCI. These three features are extracted from the reference SCI and the distorted SCI individually. The degree of similarity for each edge attribute is then computed independently, and the results are combined using our proposed edge-width pooling strategy to generate the final ESIM score. To evaluate the proposed ESIM model, a new and, to date, the largest SCI database (denoted SCID) was established in this work and made publicly available for download. The database contains 1800 distorted SCIs generated from 40 reference SCIs; for each SCI, nine distortion types are investigated, with five degradation levels per distortion type. Extensive simulation results clearly show that the proposed ESIM model is more consistent with the HVS's perception of distorted SCIs than multiple state-of-the-art IQA methods.

13.
IEEE Trans Image Process ; 24(12): 5879-91, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26441414

ABSTRACT

A recently developed demosaicing methodology, called residual interpolation (RI), has demonstrated superior performance over conventional color-component-difference interpolation. However, existing RI-based methods fail to fully exploit the potential of the RI strategy in reconstructing the most important channel, the G channel, since only the R and B channels are restored through the RI strategy. Because any reconstruction error introduced in the G channel is carried over into the demosaicing of the other two channels, the restoration of the G channel is highly instrumental to the quality of the final demosaiced image. In this paper, a novel iterative RI (IRI) process is developed to reconstruct a highly accurate G channel first; in essence, it can be viewed as an iterative refinement of the estimates of the missing pixel values in the G channel. The key novelty of the proposed IRI process is that all three channels mutually guide each other until a stopping criterion is met. Based on the restored G channel, the mosaiced R and B channels are then reconstructed using the existing RI method without iteration. Extensive simulations on two commonly used demosaicing test datasets demonstrate that our algorithm achieves the best performance in most cases compared with existing state-of-the-art demosaicing methods, in terms of both objective and subjective evaluations.

14.
IEEE Trans Image Process ; 23(3): 1408-18, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24723536

ABSTRACT

A directed graph (digraph) approach is proposed in this paper for identifying all the visual objects commonly present in two images under comparison. As a model, the directed graph is superior to the undirected graph, since each link of the graph carries two link weights with opposite orientations. However, this raises two main challenges: 1) how to compute the two link weights for each link, and 2) how to extract the subgraph from the digraph. For 1), a novel n-ranking process for computing the generalized median and a Gaussian link-weight mapping function are developed, which together map the established undirected graph to the digraph. To achieve this mapping, the proposed process and function are applied to each vertex independently to compute its directed link weights, considering not only the influences exerted by its immediately adjacent neighboring vertices (in terms of their link-weight values) but also offering other desirable merits, namely link-weight enhancement and reduced computational complexity. For 2), an evolutionary iterative process from non-cooperative game theory is exploited to handle the non-symmetric weighted adjacency matrix (see the sketch after the subject terms below). These two stages are conducted for each assumed scale-change factor, experimented over a range of possible values, one factor at a time. If there is a match for the scale-change factor under experiment, the common visual patterns with that scale-change factor are extracted. If more than one pattern is extracted, the proposed topological splitting method can further differentiate among them, provided that the visual objects are sufficiently far apart from each other. Extensive simulation results clearly demonstrate the superior performance of the proposed digraph approach compared with the undirected graph approach.


Subject(s)
Algorithms , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Subtraction Technique , Reproducibility of Results , Sensitivity and Specificity
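
The "evolutionary iterative process" for extracting a densely linked subgroup from a weighted adjacency matrix is commonly realized with replicator dynamics, which also accept the non-symmetric digraph weights mentioned in the abstract above. A compact, hedged sketch (the stopping rule and the toy graph are illustrative):

import numpy as np

def replicator_subgraph(A, iters=500, tol=1e-8):
    """A: non-negative (possibly non-symmetric) n x n link-weight matrix."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)                        # start from the barycenter
    for _ in range(iters):
        Ax = A @ x
        x_new = x * Ax / (x @ Ax)                  # replicator update
        if np.linalg.norm(x_new - x, 1) < tol:
            break
        x = x_new
    return x                                        # large entries ~ extracted subgraph

rng = np.random.default_rng(5)
A = rng.random((8, 8)) * 0.1
A[:4, :4] += 0.9                                    # a strongly linked 4-node group
np.fill_diagonal(A, 0.0)
print(np.round(replicator_subgraph(A), 3))          # mass concentrates on the first four vertices
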
15.
IEEE Trans Image Process ; 22(11): 4271-85, 2013 Nov.
Article in English | MEDLINE | ID: mdl-23846469

ABSTRACT

In this paper, a contrast-guided image interpolation method is proposed that incorporates contrast information into the interpolation process. Given the image under interpolation, four binary contrast-guided decision maps (CDMs) are generated and used to guide the interpolation filtering through two sequential stages: 1) the 45° and 135° CDMs for interpolating the diagonal pixels, and 2) the 0° and 90° CDMs for interpolating the row and column pixels. After edge detection is applied to the input image, a CDM is generated by evaluating the non-edge pixels near each detected edge and possibly re-classifying them as edge pixels. This decision is realized by solving two generalized diffusion equations over the computed directional variation (DV) fields using a derived numerical approach, which diffuses or spreads the contrast boundaries or edges, respectively; the amount of diffusion or spreading is proportional to the local contrast measured at each detected edge. The diffused DV fields are then thresholded to yield the binary CDMs, creating decision bands of variable width on each CDM. The two CDMs generated in each stage are exploited as guidance maps for the interpolation process: for each pixel declared an edge pixel on the CDM, 1-D directional filtering is applied to estimate the associated to-be-interpolated pixel along the direction indicated by the respective CDM; otherwise, 2-D directionless (isotropic) filtering is used to estimate the missing pixels associated with each declared non-edge pixel. Extensive simulation results clearly show that the proposed contrast-guided image interpolation is superior to other state-of-the-art edge-guided image interpolation methods. In addition, its computational complexity is relatively low compared with existing methods, making it attractive for real-time image applications.


Subject(s)
Algorithms , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Subtraction Technique , Numerical Analysis, Computer-Assisted , Reproducibility of Results , Sensitivity and Specificity , Signal Processing, Computer-Assisted
16.
IEEE Trans Image Process ; 19(8): 2171-89, 2010 Aug.
Article in English | MEDLINE | ID: mdl-20350854

ABSTRACT

Curve smoothing has two important applications in computer vision and image processing: 1) the curvature scale-space (CSS) technique for shape analysis, and 2) the Gaussian filter for noise suppression. In this paper, we study how planar curves converge as they are smoothed at increasing scales. First, two types of convergence behavior are clarified. The coined term shrinkage refers to the reduction of the arc length of a smoothed planar curve, which describes the convergence of the curve latitudinally; the coined term collapse refers to the movement of each point to its limiting position, which describes the convergence of the curve longitudinally. A systematic study of the shrinkage and collapse of three categories of curve models is then presented (a small numerical illustration of shrinkage follows the subject terms below). The corner models help to reveal how the local structures of planar curves collapse and what the smoothed curves may converge to. The sawtooth models allow us to gain insight into how noise is suppressed from noisy planar curves by the Gaussian filter. Our investigation of closed curves shows that each curve collapses to a point at its center of mass; however, different curves may yield different limiting shapes at the infinite scale. Finally, based on the derived results, the performance of the CSS technique in corner detection and shape representation is analyzed, and a fast implementation of the Gaussian filter for noise suppression is proposed.


Subject(s)
Algorithms , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Pattern Recognition, Automated/methods , Reproducibility of Results , Sensitivity and Specificity
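
The shrinkage notion from the abstract above can be reproduced numerically in a few lines: smooth the coordinate functions of a closed curve with Gaussian filters of increasing scale and watch the arc length decrease. The test curve and the scales below are arbitrary choices:

import numpy as np
from scipy.ndimage import gaussian_filter1d

def arc_length(x, y):
    dx, dy = np.diff(np.r_[x, x[0]]), np.diff(np.r_[y, y[0]])
    return np.sum(np.hypot(dx, dy))

t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
x = np.cos(t) + 0.2 * np.cos(9 * t)                # a wavy closed curve
y = np.sin(t) + 0.2 * np.sin(9 * t)

for sigma in (0, 2, 8, 32):
    xs = gaussian_filter1d(x, sigma, mode="wrap") if sigma else x
    ys = gaussian_filter1d(y, sigma, mode="wrap") if sigma else y
    print(sigma, round(arc_length(xs, ys), 3))     # arc length shrinks with scale
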
17.
IEEE Trans Pattern Anal Mach Intell ; 31(8): 1517-24, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19542584

ABSTRACT

The curvature scale-space (CSS) technique is suitable for extracting curvature features from objects with noisy boundaries. To detect corner points in a multiscale framework, Rattarangsi and Chin investigated the scale-space behavior of planar-curve corners. Unfortunately, their investigation was based on an incorrect assumption, viz., that planar curves have no shrinkage under evolution. In the present paper, this mistake is corrected. First, it is demonstrated that a planar curve may shrink nonuniformly as it evolves across increasing scales. Then, by taking into account the shrinkage effect of evolved curves, the CSS trajectory maps of various corner models are investigated and their properties are summarized. The scale-space trajectory of a corner may either persist, vanish, merge with a neighboring trajectory, or split into several trajectories. The scale-space trajectories of adjacent corners may attract each other when the corners have the same concavity, or repel each other when the corners have opposite concavities. Finally, we present a standard curvature measure for computing the CSS maps of digital curves, with which it is shown that planar-curve corners have consistent scale-space behavior in the digital case as in the continuous case.
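
For concreteness, the standard curvature measure underlying CSS maps of a digital curve can be sketched as follows: smooth the coordinate functions at scale sigma and evaluate kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2); tracking its zero crossings over increasing sigma traces the CSS trajectories. The sampling and test curve here are illustrative:

import numpy as np
from scipy.ndimage import gaussian_filter1d

def curvature(x, y, sigma):
    xs = gaussian_filter1d(x, sigma, mode="wrap")
    ys = gaussian_filter1d(y, sigma, mode="wrap")
    x1, y1 = np.gradient(xs), np.gradient(ys)
    x2, y2 = np.gradient(x1), np.gradient(y1)
    return (x1 * y2 - y1 * x2) / np.power(x1**2 + y1**2, 1.5)

t = np.linspace(0, 2 * np.pi, 300, endpoint=False)
x, y = 2 * np.cos(t), np.sin(t)                    # an ellipse-shaped test curve
print(curvature(x, y, sigma=3).shape)              # (300,)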

18.
IEEE Trans Image Process ; 16(2): 428-41, 2007 Feb.
Article in English | MEDLINE | ID: mdl-17269636

ABSTRACT

It has been well established that critically sampled boundary pre-/postfiltering operators can improve the coding efficiency and mitigate blocking artifacts in traditional discrete cosine transform-based block coders at low bit rates. In these systems, both the prefilter and the postfilter are square matrices. This paper proposes undersampled boundary pre- and postfiltering modules, in which the pre-/postfilters are rectangular matrices: the prefilter is a "fat" matrix, while the postfilter is a "tall" one. In this way, the prefiltered image is smaller than the original input image, which leads to improved compression performance and reduced computational complexity at low bit rates. The design and VLSI-friendly implementation of the undersampled pre-/postfilters are derived, and their relations to lapped transforms and filter banks are presented. Two design examples are included to demonstrate the validity of the theory. Furthermore, image coding results indicate that the proposed undersampled pre-/postfiltering systems yield excellent and stable performance in low bit-rate image coding.


Subject(s)
Algorithms , Data Compression/methods , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Signal Processing, Computer-Assisted , Numerical Analysis, Computer-Assisted , Sample Size
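
A toy linear-algebra sketch of the undersampled arrangement described above: a "fat" k x n prefilter shrinks each boundary block before coding, and a "tall" n x k postfilter (here simply the pseudo-inverse) expands it back. Real designs optimize both matrices jointly; this only illustrates the dimensions and the inherent loss:

import numpy as np

rng = np.random.default_rng(6)
n, k = 8, 6                                        # prefiltered block is smaller
P = rng.standard_normal((k, n))                    # fat prefilter
T = np.linalg.pinv(P)                              # tall postfilter (n x k)

x = rng.standard_normal(n)                         # one boundary block
x_hat = T @ (P @ x)                                # reconstruct after "coding"
print(P.shape, T.shape, np.linalg.norm(x - x_hat)) # lossy: reconstruction error is nonzero
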
19.
IEEE Trans Image Process ; 16(2): 491-502, 2007 Feb.
Article in English | MEDLINE | ID: mdl-17269641

ABSTRACT

In this paper, the design of the error resilient time-domain lapped transform is formulated as a linear minimal mean-squared error problem. The optimal Wiener solution and several simplifications with different tradeoffs between complexity and performance are developed. We also prove the persymmetric structure of these Wiener filters. The existing mean reconstruction method is proven to be a special case of the proposed framework. Our method also includes as a special case the linear interpolation method used in DCT-based systems when there is no pre/postfiltering and when the quantization noise is ignored. The design criteria in our previous results are scrutinized and improved solutions are obtained. Various design examples and multiple description image coding experiments are reported to demonstrate the performance of the proposed method.


Subject(s)
Algorithms , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Information Storage and Retrieval/methods , Signal Processing, Computer-Assisted , Numerical Analysis, Computer-Assisted
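
The linear minimal mean-squared error estimator at the heart of such designs is the textbook Wiener solution x_hat = R_xy R_yy^{-1} y. The sketch below recovers "lost" coefficients from received ones using covariances estimated from (approximately zero-mean) training data; it illustrates the generic estimator only, not the paper's persymmetric design:

import numpy as np

rng = np.random.default_rng(7)
X = rng.standard_normal((1000, 8)) @ rng.standard_normal((8, 8))  # correlated, zero-mean data
lost, kept = [2, 5], [0, 1, 3, 4, 6, 7]

R = np.cov(X, rowvar=False)
W = R[np.ix_(lost, kept)] @ np.linalg.inv(R[np.ix_(kept, kept)])  # Wiener filter

y = X[0, kept]                                     # received coefficients of one block
x_hat = W @ y                                      # MMSE estimate of the lost ones
print(x_hat, X[0, lost])
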
20.
IEEE Trans Image Process ; 15(6): 1506-16, 2006 Jun.
Article in English | MEDLINE | ID: mdl-16764275

ABSTRACT

A novel switching median filter incorporating a powerful impulse noise detection method, called boundary discriminative noise detection (BDND), is proposed in this paper for effectively denoising extremely corrupted images. To determine whether the current pixel is corrupted, the proposed BDND algorithm first classifies the pixels of a localized window, centered on the current pixel, into three groups: lower-intensity impulse noise, uncorrupted pixels, and higher-intensity impulse noise. The center pixel is then considered "uncorrupted" if it belongs to the "uncorrupted" pixel group, and "corrupted" otherwise. For that, the two boundaries that discriminate these three groups need to be determined accurately to yield a very high noise detection accuracy; in our case, a zero miss-detection rate is achieved while a fairly low false-alarm rate is maintained, even at up to 70% noise corruption. Four noise models are considered for performance evaluation. Extensive simulation results on both monochrome and color images over a wide range of noise corruption (from 10% to 90%) clearly show that the proposed switching median filter substantially outperforms all existing median-based filters in suppressing impulse noise while preserving image details. Moreover, the proposed BDND is algorithmically simple and suitable for real-time implementation and application.


Subject(s)
Algorithms , Artifacts , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Signal Processing, Computer-Assisted , Discriminant Analysis , Filtration/methods , Information Storage and Retrieval/methods , Reproducibility of Results , Sensitivity and Specificity
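
A much-simplified sketch of the switching idea from the abstract above: split each pixel's local window into three intensity groups (here using the largest gaps in the sorted samples as a crude stand-in for the BDND boundary rule) and median-filter only the pixels flagged as impulses. This is an illustration, not the published detector:

import numpy as np

def switching_median(img, win=5):
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = img.copy()
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = np.sort(padded[i:i + win, j:j + win].ravel())
            mid = len(w) // 2
            gaps = np.diff(w)
            b1 = w[np.argmax(gaps[:mid])]              # boundary in the lower half
            b2 = w[mid + np.argmax(gaps[mid:])]        # boundary in the upper half
            center = img[i, j]
            if not (b1 < center <= b2):                # outside the "uncorrupted" group
                good = w[(w > b1) & (w <= b2)]
                out[i, j] = np.median(good) if good.size else np.median(w)
    return out

rng = np.random.default_rng(8)
clean = np.full((32, 32), 0.5)
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.3                   # 30% salt-and-pepper impulses
noisy[mask] = rng.choice([0.0, 1.0], size=mask.sum())
print(np.abs(switching_median(noisy) - clean).mean())  # mean absolute error after filtering (small)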