Results 1 - 6 of 6
1.
Sensors (Basel) ; 23(5)2023 Mar 06.
Article in English | MEDLINE | ID: mdl-36905069

ABSTRACT

Multi-modal 3D object detection based on data from cameras and LiDAR has become a subject of research interest. PointPainting proposes a method for improving point-cloud-based 3D object detectors using semantic information from RGB images. However, this method still suffers from two complications: first, faulty regions in the image semantic segmentation results lead to false detections; second, the commonly used anchor assigner considers only the intersection over union (IoU) between anchors and ground-truth boxes, so some anchors containing few target LiDAR points are assigned as positive anchors. In this paper, three improvements are proposed to address these complications. Specifically, a novel weighting strategy is proposed for each anchor in the classification loss, enabling the detector to pay more attention to anchors containing inaccurate semantic information. Then, SegIoU, which incorporates semantic information, is proposed to replace IoU in anchor assignment. SegIoU measures the similarity of the semantic information between each anchor and ground-truth box, avoiding the defective anchor assignments mentioned above. In addition, a dual-attention module is introduced to enhance the voxelized point cloud. Experiments demonstrate that the proposed modules achieve significant improvements across various methods on the KITTI dataset, including the single-stage PointPillars, the two-stage SECOND-IoU, the anchor-based SECOND, and the anchor-free CenterPoint.
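The IoU criterion that the standard anchor assigner relies on can be sketched as follows. This is a minimal 2D axis-aligned version with illustrative thresholds; the paper's SegIoU additionally compares semantic labels between anchor and ground-truth box, which is not reproduced here.

```python
def iou_2d(a, b):
    """Axis-aligned IoU between boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def assign_anchor(anchor, gt_boxes, pos_thresh=0.6, neg_thresh=0.45):
    """Classic IoU-only assignment: positive / negative / ignore.

    This rule looks only at geometric overlap, which is exactly why an
    anchor covering few LiDAR points can still be marked positive.
    """
    best = max((iou_2d(anchor, gt) for gt in gt_boxes), default=0.0)
    if best >= pos_thresh:
        return "positive"
    if best < neg_thresh:
        return "negative"
    return "ignore"
```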

2.
Sensors (Basel) ; 22(19)2022 Oct 02.
Article in English | MEDLINE | ID: mdl-36236572

ABSTRACT

Object detection from continuous frames of point clouds is a new research direction. Currently, most studies fuse multi-frame point clouds using concatenation-based methods, which align different frames using GPS, IMU, and similar information. However, this fusion approach can only align static objects, not moving ones. In this paper, we propose a non-local-based multi-scale feature fusion method that handles both moving and static objects without GPS- or IMU-based registration. Considering that non-local methods are resource-consuming, we propose a novel simplified non-local block that exploits the sparsity of the point cloud: by filtering out empty units, memory consumption is decreased by 99.93%. In addition, triple attention is adopted to enhance the key information on the object and suppress background noise, further benefiting the non-local feature fusion. Finally, we verify the method on PointPillars and CenterPoint. Experimental results show that the proposed method improves mAP by 3.9% and 4.1% compared with the concatenation-based fusion modules PointPillars-2 and CenterPoint-2, respectively. In addition, the proposed network outperforms the powerful 3D-VID by 1.2% in mAP.
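The memory saving of the simplified non-local block comes from running attention only over non-empty units. A minimal sketch of that idea, assuming a flattened voxel feature map of shape (N, C) in which empty voxels are all-zero rows (shapes and the residual form are illustrative, not the paper's exact architecture):

```python
import numpy as np

def sparse_non_local(features, eps=1e-8):
    """Self-attention restricted to non-empty voxels of a sparse feature map.

    `features` has shape (N, C); empty voxels are all-zero rows and are
    skipped entirely, so the M x M attention matrix is built over the
    occupied units only (M << N for sparse point clouds).
    """
    occupied = np.abs(features).sum(axis=1) > eps
    x = features[occupied]                      # (M, C)
    attn = x @ x.T                              # pairwise similarity (M, M)
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)     # row-wise softmax
    out = features.copy()
    out[occupied] = x + attn @ x                # residual connection
    return out
```

A dense non-local block would allocate an N x N attention matrix; restricting it to the M occupied units shrinks that quadratically, which is consistent with the large memory reduction the abstract reports.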

3.
Sensors (Basel) ; 21(24)2021 Dec 19.
Article in English | MEDLINE | ID: mdl-34960572

ABSTRACT

With the development of imaging and space-borne satellite technology, a growing number of multi-polarized SAR images have been employed for object detection. However, most existing public SAR ship datasets are grayscale images acquired in single-polarization mode. To make full use of the polarization characteristics of multi-polarized SAR, a dual-polarimetric SAR dataset specifically intended for ship detection (DSSDD) is presented in this paper. For its construction, 50 dual-polarimetric Sentinel-1 SAR images were cropped into 1236 image slices of 256 × 256 pixels. The variances and covariance of the VV and VH polarizations were fused into the R, G, and B channels of a pseudo-color image. Each ship was labeled with both a rotatable bounding box (RBox) and a horizontal bounding box (BBox). Apart from the 8-bit pseudo-color images, DSSDD also provides 16-bit complex data. Two prevalent object detectors, R3Det and YOLOv4, were implemented on DSSDD to establish detector baselines with the RBox and BBox, respectively. Furthermore, we propose a weakly supervised ship detection method based on anomaly detection via an advanced memory-augmented autoencoder (MemAE), which can significantly remove the false alarms generated by the two-parameter CFAR algorithm applied to our dual-polarimetric dataset. The proposed advanced MemAE method offers a lower annotation workload, high efficiency, and good performance even compared with supervised methods, making it a promising direction for ship detection in dual-polarimetric SAR images. The dataset is available on GitHub.
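The channel-fusion step described above can be sketched as follows. This is an illustrative reading of the abstract: per-pixel VV and VH intensities stand in for the "variances" and the magnitude of their cross term for the "covariance"; the exact scaling used to build DSSDD may differ.

```python
import numpy as np

def dual_pol_pseudocolor(vv, vh):
    """Fuse dual-pol complex SAR channels into an 8-bit pseudo-color image.

    R and G carry the VV and VH intensities; B carries the magnitude of
    the VV-VH cross term. Each channel is min-max normalized to 0..255.
    """
    r = np.abs(vv) ** 2                  # VV intensity
    g = np.abs(vh) ** 2                  # VH intensity
    b = np.abs(vv * np.conj(vh))         # cross-polarization term
    rgb = np.stack([r, g, b], axis=-1)
    mn = rgb.min(axis=(0, 1), keepdims=True)
    mx = rgb.max(axis=(0, 1), keepdims=True)
    norm = (rgb - mn) / (mx - mn + 1e-12)
    return np.round(norm * 255).astype(np.uint8)
```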


Subject(s)
Algorithms , Ships , Refraction, Ocular , Technology
4.
Sensors (Basel) ; 21(13)2021 Jun 24.
Article in English | MEDLINE | ID: mdl-34202766

ABSTRACT

At present, synthetic aperture radar (SAR) automatic target recognition (ATR) has been deeply researched and is widely used in military and civilian fields. SAR images are very sensitive to the azimuth aspect of the imaging geometry; the same target differs greatly across aspects. Thus, a multi-aspect SAR image sequence contains more information for classification and recognition, which calls for a reliable and robust multi-aspect target recognition method. Nowadays, SAR target recognition methods are mostly based on deep learning. However, SAR datasets are usually expensive to obtain, especially for a specific target, and it is difficult to collect enough samples for deep learning model training. This paper proposes a multi-aspect SAR target recognition method based on a prototypical network. Furthermore, techniques such as multi-task learning and multi-level feature fusion are introduced to enhance recognition accuracy when only a small number of training samples is available. Experiments on the MSTAR dataset show that the recognition accuracy of our method approaches the level achieved with all samples, and that our method can be applied to other feature extraction models to deal with small-sample learning problems.
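The core few-shot rule of a prototypical network is simple enough to sketch: each class is represented by the mean of its support embeddings, and a query is assigned to the nearest prototype. The sketch below assumes precomputed feature vectors (the paper's backbone, multi-task losses, and multi-level fusion are omitted):

```python
import numpy as np

def prototypes(support_feats, support_labels):
    """Class prototype = mean embedding of that class's support samples."""
    classes = np.unique(support_labels)
    protos = np.stack(
        [support_feats[support_labels == c].mean(axis=0) for c in classes])
    return classes, protos

def classify(query_feats, classes, protos):
    """Assign each query to its nearest prototype (squared Euclidean)."""
    d = ((query_feats[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return classes[d.argmin(axis=1)]
```

Because the decision rule needs only one mean per class, it remains usable when each target class has very few labeled samples, which is the small-sample regime the abstract targets.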


Subject(s)
Pattern Recognition, Automated , Radar , Algorithms
5.
Sensors (Basel) ; 18(2)2018 Jan 24.
Article in English | MEDLINE | ID: mdl-29364194

ABSTRACT

Target detection is one of the important applications in the field of remote sensing. The Gaofen-3 (GF-3) Synthetic Aperture Radar (SAR) satellite launched by China is a powerful tool for maritime monitoring. This work aims at detecting ships in GF-3 SAR images using a new land-masking strategy, an appropriate model for sea clutter, and a neural network as the discrimination scheme. Firstly, a fully convolutional network (FCN) is applied to separate the sea from the land. Then, by analyzing the sea clutter distribution in GF-3 SAR images, we choose the probability distribution model of the Constant False Alarm Rate (CFAR) detector from among the K-distribution, Gamma distribution, and Rayleigh distribution, based on a tradeoff between sea clutter modeling accuracy and computational complexity. Furthermore, to better implement CFAR detection, we also use truncated statistics (TS) as a preprocessing scheme and an iterative censoring scheme (ICS) to boost the detector's performance. Finally, we employ a neural network to re-examine the results as the discrimination stage. Experimental results on three GF-3 SAR images verify the effectiveness and efficiency of this approach.
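The role of truncated statistics in CFAR detection can be sketched as follows: discard the brightest pixels before estimating the clutter parameters, so that strong targets do not contaminate the clutter model and inflate the threshold. This is a global Gaussian-threshold sketch with illustrative parameters, not the paper's local sliding-window detector with its K/Gamma/Rayleigh clutter models or the iterative censoring scheme:

```python
import numpy as np

def cfar_detect(img, t_quantile=0.95, k=3.0):
    """Global two-parameter CFAR with truncated-statistics preprocessing.

    Pixels above the `t_quantile` quantile are excluded (truncated) from
    the clutter estimate; the detection threshold is then mean + k * std
    of the remaining clutter samples.
    """
    flat = img.ravel().astype(float)
    trunc = np.quantile(flat, t_quantile)   # truncation threshold
    clutter = flat[flat <= trunc]           # truncated statistics
    mu, sigma = clutter.mean(), clutter.std()
    return img > mu + k * sigma             # boolean detection map
```

Without the truncation step, a large bright ship would raise both the mean and the standard deviation of the estimate, potentially masking nearby smaller targets.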

6.
Sensors (Basel) ; 17(7)2017 Jul 05.
Article in English | MEDLINE | ID: mdl-28678197

ABSTRACT

This study aims to detect vessels with lengths ranging from about 70 to 300 m in Gaofen-3 (GF-3) SAR images acquired in ultrafine strip-map (UFS) mode, as fast as possible. Based on an analysis of the characteristics of vessels in GF-3 SAR imagery, an effective vessel detection method is proposed in this paper. Firstly, the iterative constant false alarm rate (CFAR) method is employed to detect potential ship pixels. Secondly, a mean-shift operation is applied to each potential ship pixel to identify the candidate target region. During the mean-shift process, we maintain a selection matrix recording which pixels can be taken; these pixels are called the valid points of the candidate target. An l1-norm regression is used to extract the principal axis and detect the valid points. Finally, two kinds of false alarms, bright lines and azimuth ambiguities, are removed by comparing the valid area of the candidate target with a pre-defined value and by computing the displacement between the true target and its corresponding replicas, respectively. Experimental results on three GF-3 SAR images with UFS mode demonstrate the effectiveness and efficiency of the proposed method.
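The mean-shift step that grows a candidate region from a CFAR-detected seed pixel can be sketched as a single mean-shift trajectory with a flat kernel. This is an illustrative sketch with made-up parameters; the paper additionally maintains a selection matrix over the visited pixels and refines the valid points with an l1-norm principal-axis regression, neither of which is reproduced here:

```python
import numpy as np

def mean_shift_point(start, points, bandwidth=2.0, iters=20, tol=1e-3):
    """Run one mean-shift trajectory with a flat kernel.

    Repeatedly move the current estimate to the centroid of the points
    within `bandwidth`; the points gathered along the way play the role
    of the candidate target's valid points.
    """
    x = np.asarray(start, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(points - x, axis=1)
        nearby = points[d <= bandwidth]      # points inside the kernel
        if len(nearby) == 0:
            break
        new_x = nearby.mean(axis=0)
        if np.linalg.norm(new_x - x) < tol:  # converged to a mode
            break
        x = new_x
    return x
```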
