1.
IEEE Trans Pattern Anal Mach Intell ; 46(5): 3321-3333, 2024 May.
Article in English | MEDLINE | ID: mdl-38096092

ABSTRACT

Uncertainty quantification for inverse problems in imaging has drawn much attention lately. Existing approaches to this task define uncertainty regions based on probable values per pixel while ignoring spatial correlations within the image, resulting in an exaggerated volume of uncertainty. In this paper, we propose PUQ (Principal Uncertainty Quantification), a novel definition and corresponding analysis of uncertainty regions that takes into account spatial relationships within the image, thus providing reduced-volume regions. Using recent advancements in generative models, we derive uncertainty intervals around principal components of the empirical posterior distribution, forming an ambiguity region that guarantees the inclusion of true unseen values with a user-defined confidence probability. To improve computational efficiency and interpretability, we also guarantee the recovery of true unseen values using only a few principal directions, resulting in more informative uncertainty regions. Our approach is verified through experiments on image colorization, super-resolution, and inpainting; its effectiveness is shown through comparison to baseline methods, demonstrating significantly tighter uncertainty regions.
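A minimal sketch of the core idea described above, assuming NumPy arrays: posterior samples (here random vectors standing in for outputs of a conditional generative model) are reduced to their top principal directions, and an interval is formed along each direction. The function name and the plain empirical-quantile step are illustrative simplifications; the paper's conformal-style calibration is not reproduced here.

```python
import numpy as np

def principal_uncertainty_intervals(samples, k=3, alpha=0.1):
    """Intervals along the top-k principal directions of posterior samples.

    samples: (n_samples, n_pixels) array, e.g. images drawn from a conditional
    generative model given the degraded measurement. The paper's calibration
    is simplified to plain empirical quantiles of the projection coefficients."""
    mean = samples.mean(axis=0)
    centered = samples - mean
    # Principal directions of the empirical posterior via SVD.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    directions = vt[:k]                            # (k, n_pixels)
    coeffs = centered @ directions.T               # project samples onto them
    lo = np.quantile(coeffs, alpha / 2, axis=0)    # lower bound per direction
    hi = np.quantile(coeffs, 1 - alpha / 2, axis=0)
    return mean, directions, lo, hi

# Toy usage: random vectors stand in for samples from a generative model.
rng = np.random.default_rng(0)
samples = rng.normal(size=(200, 64))               # 200 samples of an 8x8 image
mean, dirs, lo, hi = principal_uncertainty_intervals(samples)
print(dirs.shape, lo.shape)                        # (3, 64) (3,)
```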

2.
Front Neurosci ; 17: 1184990, 2023.
Article in English | MEDLINE | ID: mdl-37790590

ABSTRACT

Introduction: Epilepsy is a neurological disease characterized by sudden, unprovoked seizures. The unexpected nature of epileptic seizures is a major component of the disease burden. Predicting seizure onset and alerting patients may allow timely intervention, which would improve clinical outcomes and patient quality of life. Currently, algorithms aiming to predict seizures suffer from high false alarm rates, rendering them unsuitable for clinical use. Methods: We adopt a risk-controlling prediction calibration method called Learn then Test to reduce the false alarm rate of seizure prediction. This method calibrates the output of a "black-box" model to meet a specified false alarm rate requirement. The method was initially validated on synthetic data and subsequently tested on publicly available electroencephalogram (EEG) records from 15 patients with epilepsy by calibrating the outputs of a deep learning model. Results and discussion: Validation showed that, after our adaptation, the calibration method rigorously controlled the false alarm rate at a user-specified level. Testing on real data showed an average 92% reduction in the false alarm rate, at the cost of missing four of the nine seizures recorded across six patients. Better-performing prediction models combined with the proposed method may facilitate the clinical use of real-time seizure prediction systems.
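A minimal sketch of a Learn-then-Test-style threshold calibration, under stated assumptions: the calibration set consists of model scores on windows with no upcoming seizure (so any alarm is a false alarm), and validity is certified with a simple Hoeffding bound inside fixed-sequence testing. Function and variable names are illustrative and not taken from the paper's code.

```python
import numpy as np

def calibrate_alarm_threshold(cal_scores, alpha=0.05, delta=0.1):
    """Learn-then-Test-style calibration of an alarm threshold.

    cal_scores: model outputs on calibration windows with no upcoming seizure.
    alpha:      target false-alarm rate; delta: tolerated error probability.
    Thresholds are tested from strictest to most lenient (fixed-sequence
    testing); returns the most lenient certified threshold, or None if no
    threshold can be certified."""
    n = len(cal_scores)
    candidates = np.sort(np.unique(cal_scores))[::-1]        # strict -> lenient
    chosen = None
    for t in candidates:
        fa_rate = np.mean(cal_scores >= t)                   # empirical false-alarm rate
        if fa_rate > alpha:
            break                                            # cannot be certified
        p_value = np.exp(-2.0 * n * (alpha - fa_rate) ** 2)  # Hoeffding p-value for H0: risk > alpha
        if p_value > delta:
            break                                            # stop at first failure
        chosen = t                                           # certified; try a more lenient one
    return chosen

# Toy usage with synthetic "seizure risk" scores on seizure-free windows.
rng = np.random.default_rng(1)
scores = rng.beta(2, 8, size=500)
print(calibrate_alarm_threshold(scores))
```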

3.
IEEE Trans Image Process ; 28(12): 6063-6076, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31251189

ABSTRACT

Single image super-resolution (SISR) aims to recover a high-resolution image from a given low-resolution version of it. Video super-resolution (VSR) targets a series of given images, aiming to fuse them into a higher-resolution outcome. Although SISR and VSR seem to have a lot in common, most SISR algorithms do not have a simple and direct extension to VSR. VSR is considered a more challenging inverse problem, mainly due to its reliance on sub-pixel-accurate motion estimation, which has no parallel in SISR. Another complication is the dynamics of the video, often addressed by simply generating a single frame instead of a complete output sequence. In this paper, we suggest a simple and robust super-resolution framework that can be applied to single images and easily extended to video. Our work relies on the observation that image and video denoising is well understood and treated very effectively by a variety of methods. We exploit the Plug-and-Play Priors framework and the Regularization by Denoising (RED) approach that extends it, and show how to use such denoisers to handle the SISR and VSR problems within a unified formulation and framework. This way, we benefit from the effectiveness and efficiency of existing image/video denoising algorithms while solving much more challenging problems. More specifically, harnessing the VBM3D video denoiser, we obtain a strongly competitive, motion-estimation-free VSR algorithm that tends to produce high-quality output with fast processing.
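A minimal sketch of a RED-style gradient iteration for super-resolution, under stated assumptions: a toy blur-and-subsample forward operator, a crude zero-fill adjoint, and a Gaussian-smoothing denoiser standing in for the far stronger denoisers (e.g., VBM3D) used in the paper. Function names and parameter values are illustrative, not the paper's pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def degrade(x, s=2):
    """Forward operator H: blur, then subsample by factor s."""
    return gaussian_filter(x, sigma=1.0)[::s, ::s]

def degrade_T(y, s=2, shape=None):
    """Rough adjoint: zero-fill upsample, then blur."""
    up = np.zeros(shape)
    up[::s, ::s] = y
    return gaussian_filter(up, sigma=1.0)

def red_super_resolution(y, s=2, lam=0.1, mu=0.5, iters=50):
    """RED-style iteration:  x <- x - mu * ( H^T(Hx - y) + lam * (x - D(x)) ),
    with a Gaussian-smoothing denoiser D as a placeholder prior."""
    shape = (y.shape[0] * s, y.shape[1] * s)
    x = zoom(y, s, order=1)                              # simple initialization
    for _ in range(iters):
        data_grad = degrade_T(degrade(x, s) - y, s, shape)
        prior_grad = x - gaussian_filter(x, sigma=1.0)   # RED term x - D(x)
        x = x - mu * (data_grad + lam * prior_grad)
    return x

# Toy usage: degrade a random "high-resolution" image and recover its size.
rng = np.random.default_rng(2)
hi_res = rng.random((64, 64))
lo_res = degrade(hi_res)
print(red_super_resolution(lo_res).shape)                # (64, 64)
```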

4.
IEEE Trans Image Process ; 27(1): 220-235, 2018 Jan.
Article in English | MEDLINE | ID: mdl-28910768

ABSTRACT

Image and texture synthesis is a challenging task that has long drawn attention in the fields of image processing, graphics, and machine learning. This problem consists of modeling the desired type of images, either through training examples or via parametric modeling, and then generating images that belong to the same statistical origin. This paper addresses the image synthesis task, focusing on two specific families of images: handwritten digits and face images. The paper offers two main contributions. First, we suggest a simple and intuitive algorithm capable of generating such images in a unified way. The proposed approach is pyramidal, consisting of upscaling and refining the estimated image several times. At each upscaling stage, the algorithm randomly draws small patches from a patch database and merges them to form a coherent and novel image of high visual quality. The second contribution is a general framework for evaluating generation performance, which combines three aspects: the likelihood, the originality, and the spread of the synthesized images. We assess the proposed synthesis scheme and show that the results are similar in nature to, yet different from, the ones found in the training set, suggesting that a true synthesis effect has been obtained.
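A minimal sketch of the upscale-and-refine loop described above, assuming the patch database is simply a NumPy array of flattened patches. The random-nearest-neighbor replacement and overlap averaging below are illustrative stand-ins for the paper's merging scheme; all names are hypothetical.

```python
import numpy as np
from scipy.ndimage import zoom

def refine_with_patches(img, patch_db, p=5, n_candidates=8, rng=None):
    """One refinement pass: every overlapping (p x p) patch is replaced by one
    of its n_candidates nearest database patches, chosen at random, and the
    overlaps are averaged back into an image."""
    rng = rng or np.random.default_rng()
    out = np.zeros_like(img)
    weight = np.zeros_like(img)
    for i in range(img.shape[0] - p + 1):
        for j in range(img.shape[1] - p + 1):
            patch = img[i:i+p, j:j+p].ravel()
            d = np.linalg.norm(patch_db - patch, axis=1)     # distances to the DB
            pick = rng.choice(np.argsort(d)[:n_candidates])  # random near match
            out[i:i+p, j:j+p] += patch_db[pick].reshape(p, p)
            weight[i:i+p, j:j+p] += 1.0
    return out / weight

def pyramidal_synthesis(seed, patch_db, levels=3, p=5):
    """Coarse-to-fine synthesis: upscale, then refine with database patches."""
    img = seed
    for _ in range(levels):
        img = zoom(img, 2, order=1)                          # upscale
        img = refine_with_patches(img, patch_db, p)          # refine
    return img

# Toy usage with random patches standing in for a trained patch database.
rng = np.random.default_rng(3)
patch_db = rng.random((500, 25))                             # 500 flattened 5x5 patches
print(pyramidal_synthesis(rng.random((8, 8)), patch_db).shape)  # (64, 64)
```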

5.
IEEE Trans Image Process ; 25(9): 3967-78, 2016 09.
Article in English | MEDLINE | ID: mdl-27295669

ABSTRACT

Measuring the similarity between patches in images is a fundamental building block in various tasks. Naturally, the patch size has a major impact on the matching quality and on the consequent application performance. Under the assumption that the patch database is sufficiently sampled, using large patches (e.g., 21 × 21) should be preferred over small ones (e.g., 7 × 7). However, this dense-sampling assumption is rarely true; in most cases, large patches cannot find relevant nearby examples. This phenomenon is a consequence of the curse of dimensionality, which states that the database size should grow exponentially with the patch size to ensure proper matches. This explains the favored choice of a small patch size in most applications. Is there a way to keep the simplicity of working with small patches while getting some of the benefits that large patches provide? In this paper, we offer such an approach. We propose to concatenate the regular content of a conventional (small) patch with a compact representation of its (large) surroundings, i.e., its context. Therefore, with a minor increase in dimension (e.g., ten additional values appended to the patch representation), we implicitly and softly describe the information of a large patch. The additional descriptors are computed based on the self-similarity behavior of the patch's surroundings. We show that this approach achieves better matches than conventional-size patches, without any need to increase the database size. The effectiveness of the proposed method is demonstrated on three distinct problems: 1) external natural image denoising; 2) depth image super-resolution; and 3) motion-compensated frame-rate up-conversion.
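A minimal sketch of the augmented descriptor, under stated assumptions: the small patch is concatenated with a handful of self-similarity scores computed against patches sampled from its larger surroundings. This is a crude stand-in for the paper's compact context representation; the sampling strategy and similarity measure here are illustrative only.

```python
import numpy as np

def context_descriptor(img, i, j, p=7, big=21, n_ctx=10, rng=None):
    """Augment the (p x p) patch at (i, j) with n_ctx self-similarity scores
    computed against patches drawn from its (big x big) surroundings."""
    rng = rng or np.random.default_rng(0)
    center = img[i:i+p, j:j+p].ravel()
    half = (big - p) // 2
    scores = []
    for _ in range(n_ctx):
        di = rng.integers(-half, half + 1)                   # random offset within
        dj = rng.integers(-half, half + 1)                   # the large window
        ii = int(np.clip(i + di, 0, img.shape[0] - p))
        jj = int(np.clip(j + dj, 0, img.shape[1] - p))
        neighbor = img[ii:ii+p, jj:jj+p].ravel()
        scores.append(np.exp(-np.sum((center - neighbor) ** 2)))  # similarity score
    return np.concatenate([center, np.asarray(scores)])      # p*p + n_ctx values

# Toy usage: 49 patch values plus 10 context values.
img = np.random.default_rng(4).random((64, 64))
print(context_descriptor(img, 20, 20).shape)                 # (59,)
```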

6.
IEEE Trans Image Process ; 23(7): 3085-98, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24860029

ABSTRACT

Single image interpolation is a central and extensively studied problem in image processing. A common approach to this problem in recent years has been to divide the given image into overlapping patches and process each of them based on a model for natural image patches. Adaptive sparse representation modeling is one such promising image prior, which has been shown to be powerful for filling in missing pixels in an image. Another force that such algorithms may use is the self-similarity that exists within natural images. Processing groups of related patches together exploits their correspondence, often leading to improved results. In this paper, we propose a novel image interpolation method that combines these two forces: nonlocal self-similarity and sparse representation modeling. The proposed method is contrasted with competitive and related algorithms and is demonstrated to achieve state-of-the-art results.
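A minimal sketch of the sparse-representation half of this idea, under stated assumptions: a fixed overcomplete 2-D DCT dictionary and a greedy (OMP-like) sparse coding of only the known pixels, after which the full patch is reconstructed. The nonlocal grouping of similar patches is omitted for brevity, and the dictionary and function names are illustrative, not the paper's adaptive model.

```python
import numpy as np

def dct_dictionary(p=8, atoms_per_dim=11):
    """Overcomplete 2-D DCT dictionary for (p x p) patches."""
    k = np.arange(p)[:, None]
    freqs = np.arange(atoms_per_dim)[None, :]
    basis = np.cos(np.pi * (k + 0.5) * freqs / p)            # p x atoms_per_dim
    basis /= np.linalg.norm(basis, axis=0)                   # unit-norm columns
    return np.kron(basis, basis)                             # (p*p) x atoms_per_dim^2

def masked_omp(D, y, mask, k=6):
    """Greedy sparse coding over the known pixels (mask==True) only,
    then reconstruction of the full patch from the selected atoms."""
    Dm = D[mask]
    residual = y[mask].astype(float)
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        corr = np.abs(Dm.T @ residual)
        corr[support] = 0                                    # do not re-pick atoms
        support.append(int(np.argmax(corr)))
        sub = Dm[:, support]
        coef, *_ = np.linalg.lstsq(sub, y[mask], rcond=None)
        residual = y[mask] - sub @ coef
    return D[:, support] @ coef                              # full-patch estimate

# Toy usage: interpolate a synthetic sparse patch with ~50% missing pixels.
rng = np.random.default_rng(5)
D = dct_dictionary()
clean = D[:, [3, 17, 40]] @ np.array([1.0, 0.5, -0.8])       # sparse ground truth
mask = rng.random(64) > 0.5                                  # known-pixel mask
estimate = masked_omp(D, clean, mask)
print(float(np.abs(estimate - clean).max()))                 # reconstruction error
```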
