Results 1 - 3 of 3
1.
PLoS One ; 19(7): e0305199, 2024.
Article in English | MEDLINE | ID: mdl-39024253

ABSTRACT

Feature description is a critical task in Augmented Reality tracking. This article introduces a Convex Based Feature Descriptor (CBFD) system designed to withstand rotation, lighting, and blur variations while remaining computationally efficient. We have developed two filters capable of computing pixel intensity variations, followed by the covariance matrix of the polynomial to describe the features. The superiority of CBFD is validated through precision, recall, computation time, and feature location distance. Additionally, we provide a solution to determine the optimal block size for describing nonlinear regions, thereby enhancing resolution. The results demonstrate that CBFD achieves an average precision of 0.97 for the test image, outperforming Superpoint, Directional Intensified Tertiary Filtering (DITF), Binary Robust Independent Elementary Features (BRIEF), Binary Robust Invariant Scalable Keypoints (BRISK), Speeded Up Robust Features (SURF), and Scale Invariant Feature Transform (SIFT), which achieve scores of 0.95, 0.92, 0.72, 0.66, 0.63, and 0.50, respectively. CBFD's recall value of 0.87 is noteworthy, representing up to a 13.6% improvement over Superpoint, DITF, BRIEF, BRISK, SURF, and SIFT. Furthermore, the matching score for the test image is 0.975. The computation time for CBFD is 2.8 ms, which is at least 6.7% lower than that of the other algorithms. Finally, the plot of feature location distance illustrates that CBFD exhibits minimal distance compared to DITF and Histogram of Oriented Gradients (HOG). These results highlight the speed and robustness of CBFD across various transformations.


Subject(s)
Algorithms , Augmented Reality , Humans , Image Processing, Computer-Assisted/methods
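The descriptor pipeline this abstract outlines, filter responses capturing pixel-intensity variation followed by a covariance matrix over those responses, can be sketched roughly as below. The two 3x3 kernels, the 5x5 block, and the 2x2 covariance output are illustrative assumptions for the sketch; they are not the published CBFD filters.

```python
# Rough sketch of a covariance-based feature descriptor: two
# filters measure pixel-intensity variation, and the covariance
# matrix of their responses describes the block. The kernels and
# block size are illustrative assumptions, not the CBFD filters.

# Hypothetical 3x3 kernels for horizontal / vertical variation.
KX = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]
KY = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]

def filter_response(block, kernel):
    """Valid-mode 2-D correlation of a block with a 3x3 kernel."""
    h, w = len(block), len(block[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            s = sum(kernel[a][b] * block[i + a][j + b]
                    for a in range(3) for b in range(3))
            row.append(s)
        out.append(row)
    return out

def covariance_descriptor(block):
    """2x2 covariance matrix of the two filter responses."""
    rx = [v for row in filter_response(block, KX) for v in row]
    ry = [v for row in filter_response(block, KY) for v in row]
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cxx = sum((a - mx) ** 2 for a in rx) / n
    cyy = sum((b - my) ** 2 for b in ry) / n
    cxy = sum((a - mx) * (b - my) for a, b in zip(rx, ry)) / n
    return [[cxx, cxy], [cxy, cyy]]

# Toy 5x5 intensity block with a horizontal brightness ramp.
block = [[c * 10 for c in range(5)] for _ in range(5)]
desc = covariance_descriptor(block)
```

The covariance matrix is symmetric by construction, so a real descriptor would keep only its upper triangle as the feature vector.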
2.
Sci Rep ; 13(1): 20311, 2023 Nov 20.
Article in English | MEDLINE | ID: mdl-37985678

ABSTRACT

Augmented Reality (AR) is applied in almost every field, including, but not limited to, engineering, medicine, gaming, and the Internet of Things. Image tracking is used across all of these fields: AR uses it to localize and register the position of the user/AR device so that a virtual image can be superimposed onto the real world. In general, tracking the image enhances the user's experience. However, in image-tracking applications, establishing the interface between the virtual realm and the physical world has many shortcomings. Many tracking systems are available, but they lack robustness and efficiency, and making the tracking algorithm robust is the challenging part of the implementation. This study aims to enhance the user's experience in AR by describing an image using Directional Intensified Features with Tertiary Filtering. Describing features this way improves the robustness that image tracking requires: a feature descriptor is robust in the sense that it is not compromised when the image undergoes various transformations. This article describes features based on Directional Intensification using Tertiary Filtering (DITF). The algorithm's robustness is improved by the inherent design of the Tri-ocular, Bi-ocular, and Dia-ocular filters, which can intensify the features in all required directions. The algorithm's robustness is verified with respect to various image transformations. The Oxford dataset is used for performance analysis and validation. The DITF model achieves repeatability scores for illumination variation, blur changes, and viewpoint variation of 100%, 100%, and 99%, respectively. A comparative analysis has been performed in terms of precision and recall: DITF outperforms the state-of-the-art descriptors BEBLID, BOOST, HOG, LBP, BRISK, and AKAZE.
An implementation of the DITF source code is available in the following GitHub repository: github.com/Johnchristopherclement/Directional-Intensified-Feature-Descriptor.
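The repeatability scores quoted in this abstract measure how many keypoints are re-detected after an image transformation. A minimal sketch of that metric follows; the keypoint coordinates, the translation transform, and the pixel tolerance are illustrative assumptions, not part of the DITF implementation.

```python
# Sketch of a repeatability score: the fraction of reference
# keypoints re-detected within a tolerance after the image is
# transformed. Coordinates and tolerance are illustrative
# assumptions, not DITF code.

def repeatability(ref_pts, det_pts, transform, tol=1.5):
    """Share of transformed reference keypoints that have a
    detected keypoint within `tol` pixels."""
    hits = 0
    for p in ref_pts:
        tp = transform(p)
        if any((tp[0] - q[0]) ** 2 + (tp[1] - q[1]) ** 2 <= tol ** 2
               for q in det_pts):
            hits += 1
    return hits / len(ref_pts)

# Toy example: a pure translation by (2, 3); the detector
# re-finds three of the four shifted keypoints.
shift = lambda p: (p[0] + 2, p[1] + 3)
ref = [(0, 0), (5, 5), (10, 0), (0, 10)]
det = [(2, 3), (7, 8), (12, 3)]
score = repeatability(ref, det, shift)   # -> 0.75
```

A 100% score, as reported for illumination and blur, means every reference keypoint survives the transformation under this kind of test.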

3.
Sensors (Basel) ; 23(3)2023 Jan 24.
Article in English | MEDLINE | ID: mdl-36772366

ABSTRACT

Cognitive radio networks (CRNs) are vulnerable to numerous threats during spectrum sensing, and different approaches can be used to lessen these attacks, since malicious users degrade the performance of the network. The cutting-edge technologies of machine learning and deep learning have entered cognitive radio networks to detect network problems, and several studies have been conducted utilising various deep learning and machine learning methods. However, only a small number of analyses have used gated recurrent units (GRUs), and then mostly in software-defined networks; they are seldom used in CRNs. In this paper, we use a GRU in a CRN to train and test on a dataset of spectrum-sensing results. The GRU, the lightest variant of the LSTM, is a deep learning model with low complexity and good effectiveness on small datasets. A support vector machine (SVM) classifier is employed in this study's output layer to distinguish between authorised users and malicious users in the cognitive radio network. The novelty of this paper is the application of a combined GRU and SVM model in cognitive radio networks. The proposed work achieves a testing accuracy of 82.45%, a training accuracy of 80.99%, and a detection probability of 1 at 65 epochs.
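The combined pipeline this abstract describes, a GRU summarising a spectrum-sensing sequence with an SVM making the final authorised/malicious decision, can be sketched as below. The scalar hidden state, the fixed weights, and the plain linear decision rule standing in for a trained SVM are all illustrative assumptions, not the paper's trained model.

```python
import math

# Sketch of the abstract's GRU -> classifier pipeline: a single
# scalar-state GRU cell summarises a spectrum-sensing sequence,
# and a linear decision rule stands in for the trained SVM.
# All weights here are illustrative assumptions.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, w):
    """One GRU update: update gate z, reset gate r, candidate state."""
    z = sigmoid(w["wz"] * x + w["uz"] * h + w["bz"])
    r = sigmoid(w["wr"] * x + w["ur"] * h + w["br"])
    h_cand = math.tanh(w["wh"] * x + w["uh"] * (r * h) + w["bh"])
    return (1.0 - z) * h + z * h_cand

def gru_encode(seq, w):
    """Run the cell over a sequence; the final state is the feature."""
    h = 0.0
    for x in seq:
        h = gru_step(x, h, w)
    return h

def classify(h, weight=1.0, bias=-0.5):
    """Linear stand-in for the SVM: 1 = authorised, 0 = malicious."""
    return 1 if weight * h + bias > 0.0 else 0

# Illustrative weights; a real model would learn these from the
# spectrum-sensing dataset.
W = {"wz": 1.0, "uz": 0.5, "bz": 0.0,
     "wr": 1.0, "ur": 0.5, "br": 0.0,
     "wh": 1.0, "uh": 0.5, "bh": 0.0}

strong_signal = gru_encode([1.0, 1.0, 1.0, 1.0], W)
weak_signal = gru_encode([0.0, 0.0, 0.0, 0.0], W)
```

In the paper's setup the GRU's output layer feeds an actual SVM; the threshold rule here only illustrates where that decision boundary sits in the pipeline.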
