ABSTRACT
Shearlets are a relatively new directional multi-scale framework for signal analysis, which has been shown to be effective at enhancing signal discontinuities, such as edges and corners, at multiple scales, even in the presence of a large amount of noise. In this paper, we consider blob-like features in the shearlet framework. We derive a measure that is very effective for blob detection and, based on this measure, we propose a blob detector and a keypoint descriptor whose combination outperforms state-of-the-art algorithms on noisy and compressed images. We also demonstrate that the measure satisfies the perfect scale invariance property in the continuous case. We evaluate the robustness of our algorithm to different types of image degradation, including blur, compression artifacts, and Gaussian noise. Furthermore, we carry out a comparative analysis on benchmark data, referring, in particular, to tolerance to noise and image compression.
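The shearlet-based measure itself is not reproduced in this abstract. As a point of reference only, the classical multi-scale blob detector that such methods are typically compared against is the scale-normalized Laplacian of Gaussian; a minimal sketch (function names are ours, and this is not the paper's shearlet measure) could look like:

```python
import numpy as np
from scipy import ndimage


def scale_normalized_log(image, sigmas):
    """Scale-normalized Laplacian-of-Gaussian responses, one slice per sigma."""
    return np.stack([
        (s ** 2) * ndimage.gaussian_laplace(image.astype(float), s)
        for s in sigmas
    ])


def detect_blobs(image, sigmas, threshold=0.1):
    """Blob candidates: local maxima of |response| over space and scale."""
    resp = np.abs(scale_normalized_log(image, sigmas))
    # A point is a candidate if it dominates its 3x3x3 scale-space neighborhood
    # and its response exceeds the threshold.
    maxima = (resp == ndimage.maximum_filter(resp, size=3)) & (resp > threshold)
    return np.argwhere(maxima)  # rows of (scale_index, row, col)


# Synthetic example: one bright Gaussian blob of width ~4 at the image center.
yy, xx = np.mgrid[0:64, 0:64]
img = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 4.0 ** 2))
blobs = detect_blobs(img, sigmas=[2, 4, 6, 8])
```

For a Gaussian blob of width t, the scale-normalized response peaks at sigma = t, which is the scale-selection property that the paper's scale invariance claim refers to in the shearlet setting.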
ABSTRACT
Shearlets are a relatively new and very effective multi-scale framework for signal analysis. Unlike traditional wavelets, shearlets are capable of efficiently capturing the anisotropic information in multivariate problem classes. Shearlets can therefore be seen as a valid choice for the multi-scale analysis and detection of direction-sensitive visual features such as edges and corners. In this paper, we start by reviewing the main properties of shearlets that are important for edge and corner detection. We then study algorithms for multi-scale edge and corner detection based on the shearlet representation. We provide an extensive experimental assessment on benchmark data sets, which empirically confirms the potential of shearlet-based feature detection.
ABSTRACT
In this letter, we investigate the impact of choosing different loss functions from the viewpoint of statistical learning theory. We introduce a convexity assumption, which is met by all loss functions commonly used in the literature, and study how the bound on the estimation error changes with the loss. We also derive a general result on the minimizer of the expected risk for a convex loss function in the case of classification. The main outcome of our analysis is that for classification, the hinge loss appears to be the loss of choice. Other things being equal, the hinge loss leads to a convergence rate practically indistinguishable from the logistic loss rate and much better than the square loss rate. Furthermore, if the hypothesis space is sufficiently rich, the bounds obtained for the hinge loss are not loosened by the thresholding stage.
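The losses compared above have standard closed forms for margin-based labels y in {-1, +1}; a minimal sketch (function names are ours) of the three definitions:

```python
import numpy as np


def hinge_loss(y, f):
    """Hinge loss: max(0, 1 - y*f). Zero once the margin y*f reaches 1."""
    return np.maximum(0.0, 1.0 - y * f)


def logistic_loss(y, f):
    """Logistic loss: log(1 + exp(-y*f)). Smooth, strictly positive."""
    return np.log1p(np.exp(-y * f))


def square_loss(y, f):
    """Square loss: (1 - y*f)^2 for y in {-1, +1}. Penalizes large margins too."""
    return (1.0 - y * f) ** 2


# All three are convex in f and upper-bound the 0-1 loss at y*f = 0:
margin_zero = [hinge_loss(1, 0.0), logistic_loss(1, 0.0), square_loss(1, 0.0)]
```

The sketch also shows the qualitative difference behind the square-loss rate gap noted above: the square loss keeps growing for confidently correct predictions (y*f > 1), whereas the hinge loss is exactly zero there.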