1.
Entropy (Basel); 22(10), 2020 Sep 25.
Article in English | MEDLINE | ID: mdl-33286849

ABSTRACT

Conventional image entropy involves only overall pixel-intensity statistics and therefore cannot respond to intensity patterns in the spatial domain. However, the spatial distribution of pixel intensity is crucial to any biological or computer vision system, which is why gestalt grouping rules use features of both kinds. Recently, the increasing integration of knowledge from gestalt research into visualization-related techniques has fundamentally altered both fields, offering not only new research questions but also new ways of solving existing problems. This paper presents a Bayesian edge detector called GestEdge, which is effective in detecting gestalt edges, particularly those forming object boundaries as perceived by human eyes. GestEdge employs a directivity-aware sampling window, or mask, that iteratively deforms to probe for the principal direction of the sampled pixels; at convergence, the window covers the pixels that best represent this directivity, in compliance with the similarity and proximity laws of gestalt theory. During this iterative process, based on the unsupervised Expectation-Maximization (EM) algorithm, the shape of the sampling window is optimally adjusted. Such a deformable window allows us to exploit the similarity and proximity among the sampled pixels. Comparisons between GestEdge and other edge detectors demonstrate the effectiveness of GestEdge in extracting gestalt edges.
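As an illustrative sketch of the directivity-aware window idea (this is not the authors' GestEdge code; the weighting scheme, parameter values, and the function principal_direction are assumptions), the routine below repeatedly re-weights the pixels of a local window by intensity similarity and spatial proximity, estimates the principal direction of the weighted pixel coordinates, and stops when that direction converges, loosely mirroring the EM-style iteration described above.

# Illustrative sketch only, not the GestEdge implementation: estimate the
# principal direction of a local sampling window by alternating between an
# E-like step (re-weight pixels by intensity similarity and spatial
# proximity) and an M-like step (take the dominant eigenvector of the
# weighted coordinate covariance).
import numpy as np

def principal_direction(patch, sigma_i=0.1, sigma_s=3.0, iters=20, tol=1e-4):
    """patch: 2-D array of intensities in [0, 1] with odd side lengths."""
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    coords = np.stack([ys.ravel() - cy, xs.ravel() - cx], axis=1)
    center_val = patch[int(cy), int(cx)]
    # similarity law: favour pixels whose intensity is close to the centre pixel
    w_sim = np.exp(-((patch.ravel() - center_val) ** 2) / (2 * sigma_i ** 2))

    direction = np.array([1.0, 0.0])
    weights = w_sim
    for _ in range(iters):
        # proximity law: an elongated Gaussian window along the current direction
        along = coords @ direction
        across = coords @ np.array([-direction[1], direction[0]])
        w_prox = np.exp(-(along ** 2) / (2 * (2.0 * sigma_s) ** 2)
                        - (across ** 2) / (2 * sigma_s ** 2))
        weights = w_sim * w_prox
        # dominant eigenvector of the weighted coordinate covariance
        cov = (coords * weights[:, None]).T @ coords / weights.sum()
        eigvals, eigvecs = np.linalg.eigh(cov)
        new_dir = eigvecs[:, np.argmax(eigvals)]
        if 1.0 - abs(new_dir @ direction) < tol:  # sign-invariant convergence test
            direction = new_dir
            break
        direction = new_dir
    return direction, weights.reshape(h, w)

In a full detector, the converged window and its weights would then feed a Bayesian edge decision at each pixel; that stage is omitted here.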

2.
Sensors (Basel); 18(8), 2018 Jul 24.
Article in English | MEDLINE | ID: mdl-30042339

ABSTRACT

Recently, the upsurge of deep learning has provided a new direction for the fields of computer vision and visual tracking. However, expensive offline training time and the large number of images required by deep learning have greatly hindered progress. This paper aims to further improve the computational performance of CNT, which is reported to run at about 5 fps in visual tracking. We propose a method called Fast-CNT, which differs from CNT in three aspects: first, an adaptive k value (rather than a constant 100) is determined for each input video; second, the background filters used in CNT are omitted to save computation time without affecting performance; third, SURF feature points are used in conjunction with the particle filter to address the drift problem in CNT. Extensive experiments on land and undersea video sequences show that Fast-CNT outperforms CNT by 2-10 times in terms of computational efficiency.
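As a rough illustration of the third modification, the sketch below (hypothetical; the function surf_drift_correction, its parameters, and the ratio-test threshold are assumptions, not the Fast-CNT code) matches SURF keypoints between the previous target patch and the current frame and shifts the position estimate by their median displacement. SURF requires opencv-contrib-python built with the non-free modules; cv2.ORB_create() with a Hamming-norm matcher can stand in if it is unavailable.

# Hypothetical sketch, not the Fast-CNT implementation: correct tracker
# drift by matching SURF keypoints between the previous target patch and
# the current frame, then shifting the position estimate by the median
# displacement of the matched points.
import cv2
import numpy as np

def surf_drift_correction(prev_patch_gray, patch_origin, frame_gray, prev_center):
    """patch_origin: (x0, y0) of the patch's top-left corner in the previous frame."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # needs the non-free build
    kp1, des1 = surf.detectAndCompute(prev_patch_gray, None)
    kp2, des2 = surf.detectAndCompute(frame_gray, None)
    if des1 is None or des2 is None:
        return prev_center                      # no features: keep the old estimate
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    # Lowe's ratio test to keep only distinctive matches
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
    if len(good) < 4:
        return prev_center
    # median displacement of matched keypoints approximates the target motion
    shifts = np.array([np.subtract(kp2[m.trainIdx].pt,
                                   np.add(kp1[m.queryIdx].pt, patch_origin))
                       for m in good])
    dx, dy = np.median(shifts, axis=0)
    return (prev_center[0] + dx, prev_center[1] + dy)

In a particle-filter tracker, the corrected centre would simply re-seed or re-weight the particles before the next observation step.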
