Results 1 - 2 of 2
1.
IEEE Trans Image Process ; 33: 4404-4418, 2024.
Article in English | MEDLINE | ID: mdl-39078763

ABSTRACT

Prior methods have disregarded the differences among distinct degradation types during image reconstruction, employing a single uniform network to handle multiple degradations. We observe, however, that prevalent degradation modalities, including downsampling, blurring, and noise, can be roughly divided into two classes. The first class, which we call spatial-agnostic dominant degradations, is largely insensitive to regional changes in image space; downsampling and noise fall into this class. The second class is closely tied to spatial position in the image, blurring being a typical case, and we identify these as spatial-specific dominant degradations. To address both types, we introduce a dynamic filter network that integrates a global branch and a local branch, which substantially alleviates the degradations encountered in practice. Specifically, the global dynamic filtering layer perceives spatial-agnostic dominant degradations across images by applying attention-generated weights to multiple parallel standard convolution kernels, enhancing the network's representation ability. Meanwhile, the local dynamic filtering layer converts the image's feature maps into a spatially specific dynamic filtering operator that performs position-dependent convolution on the image features, handling spatial-specific dominant degradations. By effectively integrating the global and local dynamic filtering operators, the proposed method outperforms state-of-the-art blind super-resolution algorithms on both synthetic and real image datasets.
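
As a rough illustration of the two filtering styles this abstract describes, the following PyTorch sketch pairs a global dynamic convolution (attention weights mixing parallel kernels, in the spirit of dynamic-convolution designs) with a per-pixel local dynamic filter. All module names, shapes, and hyperparameters here are illustrative assumptions, not the paper's actual implementation.

# Minimal sketch of global (spatial-agnostic) vs. local (spatial-specific)
# dynamic filtering. GlobalDynamicConv and LocalDynamicConv are
# hypothetical names; sizes are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalDynamicConv(nn.Module):
    """Mixes K parallel 3x3 kernels with attention weights derived
    from globally pooled features: one aggregated kernel per image."""
    def __init__(self, channels, num_kernels=4):
        super().__init__()
        # K candidate kernels: (K, C_out, C_in, 3, 3)
        self.weight = nn.Parameter(
            torch.randn(num_kernels, channels, channels, 3, 3) * 0.02)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, num_kernels))

    def forward(self, x):
        B, C, H, W = x.shape
        a = F.softmax(self.attn(x), dim=1)               # (B, K)
        # Attention-weighted sum of the parallel kernels: (B, C, C, 3, 3)
        w = torch.einsum('bk,koihw->boihw', a, self.weight)
        # Grouped-conv trick to apply a different kernel to each sample.
        x = x.reshape(1, B * C, H, W)
        w = w.reshape(B * C, C, 3, 3)
        out = F.conv2d(x, w, padding=1, groups=B)
        return out.reshape(B, C, H, W)

class LocalDynamicConv(nn.Module):
    """Predicts a k x k filter at every spatial position and applies it
    to that pixel's neighborhood: a position-dependent convolution."""
    def __init__(self, channels, k=3):
        super().__init__()
        self.k = k
        self.pred = nn.Conv2d(channels, k * k, 3, padding=1)

    def forward(self, x):
        B, C, H, W = x.shape
        filt = F.softmax(self.pred(x), dim=1)            # (B, k*k, H, W)
        # Gather each pixel's k x k neighborhood, then weight and sum it.
        patches = F.unfold(x, self.k, padding=self.k // 2)
        patches = patches.view(B, C, self.k * self.k, H, W)
        return (patches * filt.unsqueeze(1)).sum(dim=2)

The key contrast: the global branch produces one kernel per image (shared across all positions), while the local branch produces one kernel per pixel, which is what lets it track spatially varying degradations such as blur.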

2.
IEEE Trans Image Process ; 31: 1761-1773, 2022.
Article in English | MEDLINE | ID: mdl-35104218

ABSTRACT

Deep convolutional neural network based video super-resolution (SR) models have achieved significant progress in recent years. Existing deep video SR methods usually rely on optical flow to warp neighboring frames for temporal alignment. However, accurate estimation of optical flow is difficult, and estimation errors tend to produce artifacts in the super-resolved results. To address this problem, we propose a novel end-to-end deep convolutional network that dynamically generates spatially adaptive filters for alignment, constructed from the local spatio-temporal channels of each pixel. Our method avoids explicit motion compensation and instead uses spatio-temporal adaptive filters to perform alignment, which effectively fuses multi-frame information and improves the temporal consistency of the video. Building on the proposed adaptive filter, we develop a reconstruction network that takes the aligned frames as input to restore high-resolution frames. In addition, we employ residual modules embedded with channel attention as the basic unit to extract more informative features for video SR. Both quantitative and qualitative evaluations on three public video datasets demonstrate that the proposed method performs favorably against state-of-the-art super-resolution methods in terms of sharpness and texture detail.
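
The sketch below illustrates the two components this abstract names: alignment by per-pixel spatially adaptive filters predicted from a frame pair (in place of flow-based warping), and a residual block with channel attention as the feature-extraction unit. Module names (FilterAlign, CAResBlock) and all sizes are assumptions for illustration, not the paper's implementation.

# Hedged sketch: filter-based frame alignment plus a channel-attention
# residual block, assuming features are already at a common resolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FilterAlign(nn.Module):
    """Aligns a neighboring frame's features to the reference frame by
    predicting a k x k adaptive filter at every pixel from the pair."""
    def __init__(self, channels, k=5):
        super().__init__()
        self.k = k
        self.pred = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, k * k, 3, padding=1))

    def forward(self, ref, nbr):
        B, C, H, W = nbr.shape
        filt = F.softmax(self.pred(torch.cat([ref, nbr], 1)), dim=1)
        # Weighted sum over each pixel's k x k neighborhood: motion is
        # absorbed by the filter rather than by explicit flow warping.
        patches = F.unfold(nbr, self.k, padding=self.k // 2)
        patches = patches.view(B, C, self.k * self.k, H, W)
        return (patches * filt.unsqueeze(1)).sum(dim=2)

class CAResBlock(nn.Module):
    """Residual block with squeeze-and-excitation style channel
    attention, a common choice for SR feature extraction."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.ca = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid())

    def forward(self, x):
        res = self.body(x)
        return x + res * self.ca(res)

Note that the filter's support (k = 5 here) bounds how much apparent motion the alignment can absorb, which is one reason per-pixel filter alignment is typically applied between temporally adjacent frames.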
