1.
Sensors (Basel) ; 23(15)2023 Aug 05.
Article in English | MEDLINE | ID: mdl-37571748

ABSTRACT

The characteristics of the measurement and process noise directly determine the optimal performance of the cubature Kalman filter. High model uncertainty for maneuvering targets and non-Gaussian noise are typical issues that radar tracking systems must deal with, preventing accurate state estimation. Striking a balance between robustness and estimation accuracy has always been a challenge in filter design. The H-infinity filter is a widely used robust algorithm. Building on the H-infinity cubature Kalman filter (HCKF), this paper proposes a novel adaptive robust cubature Kalman filter (ARCKF). The algorithm has two adaptive components. First, an adaptive fading factor addresses the model uncertainty caused by the target's maneuvering turns. Second, an improved Sage-Husa estimator based on the Mahalanobis distance (MD) adaptively estimates the measurement noise covariance matrix. The new approach significantly improves the robustness and estimation precision of the HCKF. Simulation results show that the proposed algorithm handles system model errors and abnormal observations more effectively than the conventional HCKF.
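The Mahalanobis-distance test underlying this kind of adaptive noise estimation can be illustrated with a minimal sketch: a single Kalman measurement update that inflates the measurement noise covariance R when the innovation's squared Mahalanobis distance exceeds a chi-square threshold, so abnormal observations are down-weighted. The function name, the simple multiplicative inflation rule, and the threshold value are illustrative assumptions, not the paper's actual ARCKF/Sage-Husa implementation.

```python
import numpy as np

def robust_update(x, P, z, H, R, chi2_thresh=9.21):
    """One Kalman measurement update with Mahalanobis-distance gating.

    If the squared Mahalanobis distance of the innovation exceeds the
    chi-square threshold, R is inflated so the abnormal observation is
    down-weighted (a crude stand-in for Sage-Husa-style adaptation).
    """
    v = z - H @ x                           # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    d2 = float(v.T @ np.linalg.solve(S, v)) # squared Mahalanobis distance
    if d2 > chi2_thresh:                    # abnormal observation: inflate R
        R = R * (d2 / chi2_thresh)
        S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x + K @ v
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new, d2

# one update step: 2-D state (position, velocity), scalar position measurement
x0 = np.zeros((2, 1))
P0 = np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[1.0]])
x1, P1, d2 = robust_update(x0, P0, np.array([[1.0]]), H, R)
```

A nominal measurement (d2 well below the threshold) leaves R untouched, while a gross outlier triggers the inflation branch and yields a smaller gain.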

2.
Sensors (Basel) ; 24(1)2023 Dec 19.
Article in English | MEDLINE | ID: mdl-38202882

ABSTRACT

In the field of image fusion, the integration of infrared and visible images aims to combine complementary features into a unified representation. However, not all regions within an image bear equal importance. Target objects, often pivotal in subsequent decision-making processes, warrant particular attention. Conventional deep-learning approaches for image fusion primarily focus on optimizing textural detail across the entire image at a pixel level, neglecting the pivotal role of target objects and their relevance to downstream visual tasks. In response to these limitations, TDDFusion, a Target-Driven Dual-Branch Fusion Network, has been introduced. It is explicitly designed to enhance the prominence of target objects within the fused image, thereby bridging the existing performance disparity between pixel-level fusion and downstream object detection tasks. The architecture consists of a parallel, dual-branch feature extraction network, incorporating a Global Semantic Transformer (GST) and a Local Texture Encoder (LTE). During the training phase, a dedicated object detection submodule is integrated to backpropagate semantic loss into the fusion network, enabling task-oriented optimization of the fusion process. A novel loss function is devised, leveraging target positional information to amplify visual contrast and detail specific to target objects. Extensive experimental evaluation on three public datasets demonstrates the model's superiority in preserving global environmental information and local detail, outperforming state-of-the-art alternatives in balancing pixel intensity and maintaining the texture of target objects. Most importantly, it exhibits significant advantages in downstream object detection tasks.
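The idea of a loss that uses target positional information to emphasize target regions can be sketched as a pixel loss whose error is amplified inside a binary target mask. The function name, the element-wise-max intensity reference (a common choice in IR/visible fusion), and the weighting scheme are illustrative assumptions, not TDDFusion's actual loss function.

```python
import numpy as np

def target_weighted_loss(fused, ir, vis, target_mask, w_target=5.0):
    """Illustrative pixel loss that amplifies error inside target regions.

    `target_mask` is a binary map (1 on detected objects). The intensity
    reference is the element-wise max of the two source images; errors on
    target pixels are weighted `w_target` times more than background.
    """
    ref = np.maximum(ir, vis)                      # pixel-intensity reference
    err = (fused - ref) ** 2                       # per-pixel squared error
    weights = 1.0 + (w_target - 1.0) * target_mask # background 1, target w_target
    return float(np.mean(weights * err))
```

In a training loop, the mask would come from the detection submodule's target boxes, steering the fusion network toward preserving contrast and detail on the objects that downstream detection depends on.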
