Results 1 - 2 of 2
1.
Sci Rep ; 14(1): 14375, 2024 Jun 22.
Article in English | MEDLINE | ID: mdl-38909068

ABSTRACT

Nighttime road-scene images often suffer from contrast distortion, loss of detail, and heavy noise, which degrades the accuracy of segmentation and object detection in such scenes. To address this, a cycle-consistent generative adversarial network is proposed to improve the quality of nighttime road-scene images. The network consists of two generative networks with identical structures and two adversarial networks with identical structures. Each generative network comprises an encoder and a corresponding decoder. A context feature extraction module is designed as the basic building block of the encoder-decoder network to capture richer contextual semantic information across different receptive fields, and a receptive field residual module is designed to enlarge the receptive field of the encoder. An illumination attention module inserted between the encoder and decoder transfers the critical features extracted by the encoder to the decoder. A multiscale discriminative network is employed to better distinguish real high-quality images from generated ones. In addition, an improved loss function is proposed to strengthen the enhancement. Compared with state-of-the-art methods, the proposed approach achieves the best performance in enhancing nighttime images, producing clearer and more natural results.
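
The abstract only names the modules, so the following is a minimal, hypothetical sketch (not the authors' code) of how a "context feature extraction" block with several receptive fields is commonly realized: parallel 3x3 convolutions with different dilation rates, concatenated, fused, and added back residually. All layer choices (InstanceNorm, LeakyReLU, the dilation set) are assumptions.

import torch
import torch.nn as nn

class ContextFeatureExtraction(nn.Module):
    """Parallel dilated convolutions gathering context at several receptive fields."""
    def __init__(self, channels: int, dilations=(1, 2, 4, 8)):
        super().__init__()
        # One 3x3 branch per dilation rate; padding = dilation keeps the spatial size fixed.
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                nn.InstanceNorm2d(channels),
                nn.LeakyReLU(0.2, inplace=True),
            )
            for d in dilations
        )
        # Fuse the concatenated branches back to the input channel width.
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x):
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return x + self.fuse(multi_scale)  # residual connection around the block

# Example: a 64-channel feature map such as an encoder might produce.
feats = torch.randn(1, 64, 128, 128)
print(ContextFeatureExtraction(64)(feats).shape)  # torch.Size([1, 64, 128, 128])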

2.
Entropy (Basel) ; 25(6)2023 Jun 13.
Article in English | MEDLINE | ID: mdl-37372276

ABSTRACT

Low-light image enhancement aims to improve the perceptual quality of images captured under low-light conditions. This paper proposes a novel generative adversarial network for this task. First, the generator is built from residual modules with hybrid attention modules and parallel dilated convolution modules: the residual module prevents gradient explosion during training and avoids losing feature information, the hybrid attention module makes the network focus on useful features, and the parallel dilated convolution module enlarges the receptive field and captures multi-scale information. A skip connection fuses shallow and deep features to extract more effective representations. Second, a discriminator is designed to improve discrimination ability. Finally, an improved loss function incorporating a pixel loss is proposed to recover detailed information effectively; a hedged sketch of such a combined loss follows below. The proposed method outperforms seven other methods in enhancing low-light images.
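
The abstract does not give the loss formulation, so this is a hypothetical sketch of one common way to combine an adversarial term with a pixel-wise term, reflecting the stated idea of adding a pixel loss to recover detail. The LSGAN-style adversarial term, the L1 choice for the pixel loss, and the weight lambda_pix are all assumptions.

import torch
import torch.nn.functional as F

def generator_loss(disc_fake_logits, enhanced, reference, lambda_pix=100.0):
    """Least-squares adversarial loss plus a weighted L1 pixel loss (assumed form)."""
    # Push the discriminator's score on generated images toward "real" (1.0).
    adv = F.mse_loss(disc_fake_logits, torch.ones_like(disc_fake_logits))
    # Pixel-wise L1 distance between the enhanced image and the reference.
    pixel = F.l1_loss(enhanced, reference)
    return adv + lambda_pix * pixel

# Example with dummy tensors standing in for discriminator output and image pairs.
logits = torch.randn(4, 1, 30, 30)
enhanced = torch.rand(4, 3, 256, 256)
reference = torch.rand(4, 3, 256, 256)
print(generator_loss(logits, enhanced, reference))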
