1.
Signal Image Video Process ; 17(4): 981-989, 2023.
Article in English | MEDLINE | ID: mdl-35910403

ABSTRACT

Deep learning-based image segmentation models rely strongly on capturing sufficient spatial context without requiring complex models that are hard to train with limited labeled data. For COVID-19 infection segmentation on CT images, training data are currently scarce. Attention models, in particular the most recent self-attention methods, have been shown to help gather contextual information within deep networks and to benefit semantic segmentation tasks. The recent attention-augmented convolution model aims to capture long-range interactions by concatenating self-attention and convolution feature maps. This work proposes a novel attention-augmented convolution U-Net (AA-U-Net) that enables more accurate spatial aggregation of contextual information by integrating attention-augmented convolution in the bottleneck of an encoder-decoder segmentation architecture. A deep segmentation network (U-Net) with this attention mechanism significantly improves performance on the challenging task of COVID-19 lesion segmentation. Validation experiments show that the performance gain of the attention-augmented U-Net comes from its ability to capture a dynamic and precise (wider) attention context. The AA-U-Net achieves Dice scores of 72.3% for ground-glass opacity and 61.4% for consolidation lesions in COVID-19 segmentation, improving accuracy by 4.2 percentage points over a baseline U-Net and by 3.09 percentage points over a parameter-matched baseline U-Net. Supplementary Information: The online version contains supplementary material available at 10.1007/s11760-022-02302-3.
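The building block this abstract names, attention-augmented convolution, concatenates a standard convolution branch with a multi-head self-attention branch along the channel dimension. Below is a minimal PyTorch sketch of such a block; the class name, channel split, and head count are illustrative assumptions rather than the authors' exact configuration, and the relative position encodings of the full method are omitted for brevity.

```python
import torch
import torch.nn as nn

class AAConv2d(nn.Module):
    """Sketch of attention-augmented convolution: concatenates
    convolutional and self-attention feature maps channel-wise."""

    def __init__(self, in_ch, out_ch, attn_ch=32, heads=4, kernel_size=3):
        super().__init__()
        assert attn_ch % heads == 0 and attn_ch < out_ch
        self.heads, self.attn_ch = heads, attn_ch
        # Convolutional branch supplies the remaining output channels.
        self.conv = nn.Conv2d(in_ch, out_ch - attn_ch,
                              kernel_size, padding=kernel_size // 2)
        # 1x1 projections producing queries, keys, and values.
        self.qkv = nn.Conv2d(in_ch, 3 * attn_ch, kernel_size=1)
        self.proj = nn.Conv2d(attn_ch, attn_ch, kernel_size=1)

    def forward(self, x):
        b, _, h, w = x.shape
        conv_out = self.conv(x)
        q, k, v = self.qkv(x).chunk(3, dim=1)

        # (b, c, h, w) -> (b, heads, h*w, c/heads) for per-head attention.
        def heads_first(t):
            return (t.view(b, self.heads, self.attn_ch // self.heads, h * w)
                     .transpose(2, 3))

        q, k, v = heads_first(q), heads_first(k), heads_first(v)
        scale = (self.attn_ch // self.heads) ** -0.5
        attn = torch.softmax((q @ k.transpose(-2, -1)) * scale, dim=-1)
        out = (attn @ v).transpose(2, 3).reshape(b, self.attn_ch, h, w)

        # Concatenate attention and convolution feature maps channel-wise.
        return torch.cat([conv_out, self.proj(out)], dim=1)
```

In an encoder-decoder segmentation network, a block like this would replace the plain convolutions at the bottleneck, where the feature maps are small enough for full self-attention over all spatial positions to remain affordable, which matches where the abstract says the attention-augmented convolution is integrated.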

2.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 2045-2048, 2022 07.
Article in English | MEDLINE | ID: mdl-36085933

ABSTRACT

Enormous progress has been made in determining image quality. However, even the recently proposed deep learning-based perceptual quality metrics and the classical structural similarity metric (SSIM) are not designed to operate in the absence of a good-quality reference image. Many image acquisition processes, especially in medical imaging, would benefit immensely from a metric that can indicate whether the quality of an image is improving or worsening as the acquisition parameters are adapted. In this work, we propose a novel multi-dimensional no-reference perceptual similarity metric that can compute the quality of a given image without a pristine-quality reference image by combining a no-reference image quality metric (PIQUE) with perceptual similarity. The dimensions of quality currently explored are the axes of noise, blur, and contrast. Our experiments demonstrate that the proposed no-reference perceptual similarity metric correlates well with image quality in a multi-dimensional sense.


Subject(s)
Algorithms
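As a rough illustration of scoring an image along the noise, blur, and contrast axes named in the abstract above, the sketch below computes one crude classical estimate per axis and returns them as a vector. The heuristics (Laplacian variance for blur, smoothing-residual energy for noise, RMS contrast) are assumptions for illustration only; the paper's metric instead combines PIQUE with a perceptual similarity term, which is not reproduced here.

```python
import numpy as np
from scipy import ndimage

def blur_score(img):
    # Variance of the Laplacian; low values suggest a blurrier image.
    return ndimage.laplace(img.astype(np.float64)).var()

def noise_score(img):
    # Energy of the residual after light Gaussian smoothing,
    # a crude stand-in for a noise estimate.
    x = img.astype(np.float64)
    return np.mean((x - ndimage.gaussian_filter(x, sigma=1.0)) ** 2)

def contrast_score(img):
    # RMS contrast: standard deviation of intensity values.
    return img.astype(np.float64).std()

def quality_axes(img):
    """Per-axis estimates (noise, blur, contrast) for a 2-D grayscale
    image; illustrative heuristics, not the paper's metric."""
    return np.array([noise_score(img), blur_score(img), contrast_score(img)])
```

Tracking such a vector across acquisitions would indicate which axis is improving or worsening as acquisition parameters change, which is the use case the abstract describes; the actual metric replaces these hand-crafted heuristics with PIQUE and perceptual similarity.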