1.
Sensors (Basel); 23(15), 2023 Jul 25.
Article in English | MEDLINE | ID: mdl-37571439

ABSTRACT

Event cameras, also known as dynamic vision sensors, are emerging bio-mimetic sensors with microsecond-level responsiveness. Because event-camera hardware is inherently sensitive to light sources and subject to interference from various external factors, various types of noise are inevitably present in the camera's output. This noise degrades the camera's perception of events and the performance of algorithms that process event streams. Moreover, since event cameras output data in the address-event representation, efficient denoising methods designed for traditional frame images are not applicable. Most existing denoising methods for event cameras target background activity noise and sometimes remove real events as noise. Furthermore, these methods are ineffective against noise generated by high-frequency flickering light sources and changes in diffuse reflection. To address these issues, this paper proposes an event-stream denoising method based on salient region recognition. The method effectively removes conventional background activity noise as well as irregular noise caused by diffuse reflection and flickering light sources, without significantly losing real events. Additionally, we introduce an evaluation metric for assessing both the noise-removal efficacy and the preservation of real events across different denoising methods.
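
The abstract contrasts the proposed salient-region method with conventional background-activity filtering. As a rough, generic illustration of that baseline only (not the authors' method), the sketch below keeps an address-event only if a neighboring pixel fired recently; the event fields (x, y, t, p) and the correlation window are assumptions.

```python
import numpy as np

def background_activity_filter(events, width, height, dt_us=5000):
    """Generic background-activity filter: keep an event only if a pixel in its
    3x3 neighborhood fired within the last dt_us microseconds. Events are assumed
    to be (x, y, t, p) records sorted by timestamp t."""
    last_ts = np.full((height + 2, width + 2), -np.inf)  # padded timestamp map
    keep = np.zeros(len(events), dtype=bool)
    for i, (x, y, t, p) in enumerate(events):
        # Most recent timestamp among the 3x3 neighborhood (padded by 1 pixel).
        if t - last_ts[y:y + 3, x:x + 3].max() <= dt_us:
            keep[i] = True
        last_ts[y + 1, x + 1] = t
    return events[keep]

# Example: only the event supported by a recent neighboring event survives.
ev = np.array([(10, 10, 100, 1), (11, 10, 600, 1), (50, 50, 700, 0)],
              dtype=[('x', int), ('y', int), ('t', int), ('p', int)])
print(background_activity_filter(ev, width=64, height=64, dt_us=1000))
```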

2.
Sensors (Basel); 23(4), 2023 Feb 14.
Article in English | MEDLINE | ID: mdl-36850751

ABSTRACT

The event camera efficiently detects scene radiance changes and produces an asynchronous event stream with low latency, high dynamic range (HDR), high temporal resolution, and low power consumption. However, the large volume of output data produced by the asynchronous imaging mechanism limits increases in the spatial resolution of event cameras. In this paper, we propose a novel event-camera super-resolution (SR) network (EFSR-Net), based on a deep learning approach, to address the low spatial resolution and poor visualization of event cameras. The network reconstructs high-resolution (HR) intensity images from event streams and active pixel sensor (APS) frame information. We design coupled response blocks (CRB) that fuse the feature information of both data sources to recover detailed textures in the shadows of real images. We demonstrate that our method reconstructs high-resolution intensity images with more detail and less blurring on both synthetic and real datasets. The proposed EFSR-Net improves the peak signal-to-noise ratio (PSNR) by 1-2 dB compared with state-of-the-art methods.
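
The coupled response blocks of EFSR-Net are not specified in the abstract. As a minimal sketch of the general idea only (fusing an event representation with an APS frame for super-resolution), the toy PyTorch module below uses two convolutional branches, concatenation-based fusion, and pixel-shuffle upsampling; the channel sizes, the 5-bin voxel grid, and the 2x upscaling factor are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    """Toy two-branch fusion: one branch for an event voxel grid, one for the APS
    frame, fused by concatenation and upsampled 2x. Illustrative only; this is
    not the CRB architecture described in the abstract."""
    def __init__(self, event_bins=5, feat=32):
        super().__init__()
        self.event_branch = nn.Sequential(nn.Conv2d(event_bins, feat, 3, padding=1), nn.ReLU())
        self.aps_branch = nn.Sequential(nn.Conv2d(1, feat, 3, padding=1), nn.ReLU())
        self.fuse = nn.Conv2d(2 * feat, feat, 1)
        # PixelShuffle performs the 2x spatial upsampling toward the HR intensity image.
        self.upsample = nn.Sequential(nn.Conv2d(feat, 4, 3, padding=1), nn.PixelShuffle(2))

    def forward(self, event_voxels, aps_frame):
        fused = self.fuse(torch.cat([self.event_branch(event_voxels),
                                     self.aps_branch(aps_frame)], dim=1))
        return self.upsample(fused)

# Example: 5-bin event voxel grid and a single-channel APS frame at 64x64.
sr = FusionBlock()(torch.rand(1, 5, 64, 64), torch.rand(1, 1, 64, 64))
print(sr.shape)  # torch.Size([1, 1, 128, 128])
```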

3.
Micromachines (Basel); 14(1), 2023 Jan 13.
Article in English | MEDLINE | ID: mdl-36677264

ABSTRACT

The advantages of an event camera, such as low power consumption, large dynamic range, and low data redundancy, allow it to excel in extreme environments where traditional image sensors fall short, especially when capturing high-speed moving targets and under extreme lighting conditions. Optical flow reflects a target's motion, so a target's detailed movement can be recovered from the event camera's optical flow. However, existing neural-network methods for event-camera optical flow prediction suffer from extensive computation and high energy consumption when implemented in hardware. Spiking neural networks have spatiotemporal coding characteristics, making them naturally compatible with the spatiotemporal data of an event camera; moreover, their sparse coding allows them to run with ultra-low power consumption on neuromorphic hardware. Because of their algorithmic and training complexity, however, spiking neural networks have not previously been applied to optical flow prediction for event cameras. This paper therefore proposes an end-to-end spiking neural network that predicts optical flow from the discrete spatiotemporal data stream of an event camera. The network is trained with the spatio-temporal backpropagation method in a self-supervised way, which fully exploits the spatiotemporal characteristics of the event camera while improving network performance. Experimental results on a public dataset show that the proposed method matches the best existing methods in optical flow prediction accuracy while reducing power consumption by 99%, laying the groundwork for future low-power hardware implementations of optical flow prediction for event cameras.
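
The abstract describes processing event data with a spiking neural network. As a generic illustration only (not the authors' flow-prediction architecture or their spatio-temporal backpropagation training), the sketch below runs a single leaky integrate-and-fire (LIF) layer over a binary spike train; the layer sizes, leak factor, and threshold are assumptions.

```python
import numpy as np

def lif_layer(spike_train, weights, tau=0.9, v_thresh=1.0):
    """Leaky integrate-and-fire layer over a binary spike train of shape (T, N_in):
    leak, integrate weighted input spikes, fire on threshold crossing, reset."""
    T, _ = spike_train.shape
    n_out = weights.shape[1]
    v = np.zeros(n_out)                               # membrane potentials
    out = np.zeros((T, n_out))
    for t in range(T):
        v = tau * v + spike_train[t] @ weights        # leak + integrate input current
        out[t] = (v >= v_thresh).astype(float)        # emit output spikes
        v = np.where(out[t] > 0, 0.0, v)              # reset fired neurons
    return out

# Example: 10 time steps, 4 input channels (e.g. event counts per region), 2 outputs.
rng = np.random.default_rng(0)
spikes = (rng.random((10, 4)) < 0.3).astype(float)
print(lif_layer(spikes, rng.normal(scale=0.5, size=(4, 2))).sum(axis=0))
```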

4.
Sensors (Basel); 23(1), 2022 Dec 30.
Article in English | MEDLINE | ID: mdl-36617024

ABSTRACT

To address no-reference image quality assessment (NR-IQA) for authentically and synthetically distorted images, we propose a novel network, the Combining Convolution and Self-Attention for Image Quality Assessment network (Conv-Former). Our model uses a multi-stage transformer architecture, similar in structure to ResNet-50, to capture the perceptual mechanisms relevant to image quality assessment (IQA) and build an accurate IQA model. We employ adaptive learnable position embedding to handle images of arbitrary resolution. We propose a new transformer block (TB) that exploits self-attention to capture long-range dependencies and local information perception (LIP) to model local features for enhanced representation learning; this module improves the model's understanding of image content. Dual path pooling (DPP) is used to retain more contextual image-quality information during feature downsampling. Experimental results verify that Conv-Former not only outperforms state-of-the-art methods on authentic image databases but also achieves competitive performance on synthetic image databases, demonstrating the strong fitting performance and generalization capability of the proposed model.
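
The abstract pairs convolutional local-feature modelling with self-attention for long-range dependencies. As a minimal sketch of that combination only (the exact layout of Conv-Former's TB, LIP, and DPP modules is not given in the abstract, so the dimensions and structure here are assumptions), the PyTorch block below adds a depthwise-convolution path to multi-head self-attention over flattened feature-map tokens.

```python
import torch
import torch.nn as nn

class ConvAttentionBlock(nn.Module):
    """Illustrative block mixing a depthwise-convolution 'local information' path
    with multi-head self-attention over flattened feature-map tokens."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.local = nn.Conv2d(dim, dim, 3, padding=1, groups=dim)  # local perception
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                       # x: (B, C, H, W)
        b, c, h, w = x.shape
        x = x + self.local(x)                   # local feature modelling
        tokens = self.norm(x.flatten(2).transpose(1, 2))   # (B, H*W, C) tokens
        attn_out, _ = self.attn(tokens, tokens, tokens)    # long-range dependencies
        return x + attn_out.transpose(1, 2).reshape(b, c, h, w)

# Example: a 64-channel 16x16 feature map.
print(ConvAttentionBlock()(torch.rand(2, 64, 16, 16)).shape)
```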


Subject(s)
Electric Power Supplies; Learning; Databases, Factual
5.
Appl Opt; 59(33): 10441-10450, 2020 Nov 20.
Article in English | MEDLINE | ID: mdl-33361977

ABSTRACT

Real-time processing of synthetic aperture radar (SAR) data places high demands on the processor and remains a difficult problem. With the rapid development of optoelectronic devices, traditional electrical SAR data processing can be converted into optoelectronic processing to improve processing speed. In this paper, a new type of optical device is proposed to speed up SAR data processing. With the help of a spatial light modulator (SLM), the raw SAR signal and the matched filter function are loaded onto the input plane and the spectrum plane of a 4f system, respectively. Using lenses that perform the Fourier transform optically, the Fourier transform and inverse Fourier transform of the SAR signal are carried out to realize fast SAR imaging; in theory, the data are processed at the speed of light. Compared with traditional methods such as the range-Doppler (RD) algorithm, one-dimensional Fourier transforms are no longer required; instead, matched filtering is performed simultaneously in azimuth and range on the spectrum plane of the 4f system. Consequently, no cylindrical lens is needed; a spherical lens alone realizes Fourier-transform SAR imaging. Finally, a two-dimensional optical SAR processing system is built to obtain SAR images in real time.
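
The 4f system described here performs matched filtering on the spectrum plane: a forward Fourier transform, multiplication by the matched filter, and an inverse transform. A digital stand-in for that operation, assuming a simple quadratic-phase reference (the paper performs the operation optically with an SLM and a spherical lens), might look like the following sketch.

```python
import numpy as np

def matched_filter_2d(sar_signal, reference):
    """Digital analogue of 4f matched filtering: forward FT (first lens), multiply
    by the conjugate reference spectrum on the spectrum plane, inverse FT (second lens)."""
    spectrum = np.fft.fft2(sar_signal)
    filtered = spectrum * np.conj(np.fft.fft2(reference))
    return np.fft.ifft2(filtered)

# Example: a point target smeared by a quadratic-phase reference focuses back to a peak.
n = 64
y, x = np.mgrid[:n, :n]
reference = np.exp(1j * 0.05 * ((x - n // 2) ** 2 + (y - n // 2) ** 2))
scene = np.zeros((n, n), complex)
scene[20, 30] = 1.0
raw = np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(reference))  # simulated raw data
image = matched_filter_2d(raw, reference)
print(np.unravel_index(np.abs(image).argmax(), image.shape))     # peak at (20, 30)
```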

6.
Sensors (Basel); 20(16), 2020 Aug 06.
Article in English | MEDLINE | ID: mdl-32781628

ABSTRACT

To meet the miniaturization requirements that mounted platforms place on spectral imaging systems, and to overcome the low luminous flux of current spectroscopic technology, we propose a method for the multichannel measurement of spectra using a broadband filter. The broadband filter is placed in front of a lens, and its spectral absorption characteristics are used to modulate the incident spectrum of the target and to establish a mathematical model for target detection. The spectral and spatial information of the target is obtained by acquiring data with a push-broom method and reconstructing the spectrum with a GCV-based Tikhonov regularization algorithm. We compare the accuracy of spectra reconstructed with the least-squares method and with the Tikhonov algorithm based on the L-curve. The effect of errors in the spectral modulation function on reconstruction accuracy is analyzed, as are the effect of the number of overdetermined equations and the influence of detector noise on spectral recovery. A comparison between known data cubes and our simulation results shows that the spectral image quality obtained by broadband-filter-based reconstruction is better, validating the feasibility of the method. The proposed approach, which combines broadband-filter spectroscopy with a panchromatic imaging process and uses measurement modulation rather than spectroscopic modulation, provides a new route to spectral imaging.
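
The spectral reconstruction step uses Tikhonov regularization. The sketch below shows the basic Tikhonov solve for a multichannel broadband-filter measurement model; the modulation matrix, noise level, and fixed regularization parameter are illustrative assumptions, and the GCV/L-curve selection of the parameter described in the abstract is omitted.

```python
import numpy as np

def tikhonov_reconstruct(A, b, lam=1e-2):
    """Solve min ||A x - b||^2 + lam * ||x||^2, where A is the (assumed) spectral
    modulation matrix of the broadband-filter channels and b the measurements."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Example: recover a smooth 50-point spectrum from 20 noisy broadband measurements.
rng = np.random.default_rng(1)
wavelengths = np.linspace(0, 1, 50)
true_spectrum = np.exp(-((wavelengths - 0.4) / 0.1) ** 2)
A = rng.random((20, 50))                      # assumed filter transmission curves
b = A @ true_spectrum + rng.normal(scale=0.01, size=20)
estimate = tikhonov_reconstruct(A, b, lam=1e-1)
print(float(np.linalg.norm(estimate - true_spectrum) / np.linalg.norm(true_spectrum)))
```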

7.
Sensors (Basel); 20(11), 2020 May 31.
Article in English | MEDLINE | ID: mdl-32486498

ABSTRACT

Imaging beyond the diffraction limit has always been an essential subject in optical systems research. One effective way to achieve it is Fourier ptychography (FP), which has been widely used in microscopic imaging. However, microscopic measurement techniques cannot be directly extended to imaging macroscopic objects at long distances. In this paper, a reconstruction algorithm is proposed that removes the need to oversample low-resolution images, and it is successfully applied to macroscopic imaging. Compared with traditional FP, the proposed sub-sampling method significantly reduces the number of iterations required for reconstruction. Experiments show that the proposed method can reconstruct the low-resolution images captured by the camera into high-resolution images of long-range macroscopic objects.
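
For context, the sketch below implements the textbook Fourier-ptychography alternating-projection update, i.e., the conventional oversampled scheme that the abstract's sub-sampling method improves upon, not the proposed algorithm itself; the synthetic aperture positions, aperture radius, and initialization are assumptions.

```python
import numpy as np

def fp_reconstruct(lr_images, offsets, aperture_radius, hr_shape, n_iter=20):
    """Standard FP update: for each aperture position, enforce the measured
    low-resolution magnitude and write the result back into the corresponding
    region of the high-resolution spectrum."""
    spectrum = np.fft.fftshift(np.fft.fft2(np.ones(hr_shape, dtype=complex)))
    lh, lw = lr_images[0].shape
    yy, xx = np.mgrid[:lh, :lw]
    pupil = ((yy - lh // 2) ** 2 + (xx - lw // 2) ** 2) <= aperture_radius ** 2
    for _ in range(n_iter):
        for img, (oy, ox) in zip(lr_images, offsets):
            region = spectrum[oy:oy + lh, ox:ox + lw]               # view into HR spectrum
            lr_field = np.fft.ifft2(np.fft.ifftshift(region * pupil))
            lr_field = img * np.exp(1j * np.angle(lr_field))        # enforce measured magnitude
            updated = np.fft.fftshift(np.fft.fft2(lr_field))
            region[pupil] = updated[pupil]                          # write back through the view
    return np.abs(np.fft.ifft2(np.fft.ifftshift(spectrum)))

# Synthetic example: four overlapping 32x32 spectrum patches of a 64x64 object.
rng = np.random.default_rng(2)
obj = rng.random((64, 64))
full = np.fft.fftshift(np.fft.fft2(obj))
offsets = [(8, 8), (8, 24), (24, 8), (24, 24)]
lrs = [np.abs(np.fft.ifft2(np.fft.ifftshift(full[oy:oy + 32, ox:ox + 32])))
       for (oy, ox) in offsets]
print(fp_reconstruct(lrs, offsets, aperture_radius=16, hr_shape=(64, 64)).shape)
```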
