Results 1 - 6 of 6
1.
Article in English | MEDLINE | ID: mdl-38954569

ABSTRACT

Accurate segmentation of colorectal polyps in colonoscopy images is crucial for effective diagnosis and management of colorectal cancer (CRC). However, current deep learning-based methods rely primarily on fusing RGB information across multiple scales, which limits polyp identification in two ways: the RGB domain carries restricted information, and multi-scale aggregation suffers from feature misalignment. To address these limitations, we propose the Polyp Segmentation Network with Shunted Transformer (PSTNet), a novel approach that integrates both RGB and frequency domain cues present in the images. PSTNet comprises three key modules: the Frequency Characterization Attention Module (FCAM), which extracts frequency cues and captures polyp characteristics; the Feature Supplementary Alignment Module (FSAM), which aligns semantic information and reduces misalignment noise; and the Cross Perception Localization Module (CPM), which synergizes frequency cues with high-level semantics for efficient polyp segmentation. Extensive experiments on challenging datasets demonstrate that PSTNet significantly improves polyp segmentation accuracy across various metrics, consistently outperforming state-of-the-art methods. The integration of frequency domain cues and the novel architectural design of PSTNet advance computer-assisted polyp segmentation, facilitating more accurate diagnosis and management of CRC. Our source code is available for reference at https://github.com/clearxu/PSTNet.
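The abstract describes frequency-domain cues as a complement to RGB features. Below is a minimal, hypothetical sketch of how such cues might be extracted with a 2D FFT and used to gate spatial features; the module name, gating scheme, and channel sizes are illustrative assumptions, not the authors' FCAM implementation.

```python
# A minimal sketch in the spirit of PSTNet's FCAM: extract frequency cues
# with a 2D FFT and use them to gate RGB-domain features. Names and sizes
# are assumptions, not the authors' code.
import torch
import torch.nn as nn

class FrequencyAttention(nn.Module):
    """Modulate spatial features with an attention map derived from their
    Fourier magnitude spectrum."""
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 conv projects the per-channel spectrum into a gating map
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) features from the RGB stream
        freq = torch.fft.fft2(x, norm="ortho")   # complex spectrum
        mag = torch.abs(freq)                    # frequency-magnitude cues
        gate = torch.sigmoid(self.proj(mag))     # attention in (0, 1)
        return x + x * gate                      # residual modulation

feats = torch.randn(2, 64, 88, 88)
print(FrequencyAttention(64)(feats).shape)  # torch.Size([2, 64, 88, 88])
```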

2.
Article in English | MEDLINE | ID: mdl-38913520

ABSTRACT

Accurate skin lesion segmentation from dermoscopic images is of great importance for skin cancer diagnosis. However, automatic segmentation of melanoma remains a challenging task because it is difficult to incorporate useful texture representations into the learning process. Texture representations are not only related to the local structural information learned by CNNs, but also include the global statistical texture information of the input image. In this paper, we propose a transformer network (SkinFormer) that efficiently extracts and fuses statistical texture representations for skin lesion segmentation. Specifically, to quantify the statistical texture of input features, we design a Kurtosis-guided Statistical Counting Operator. Building on this operator and the transformer's global attention mechanism, we propose a Statistical Texture Fusion Transformer, which fuses structural and statistical texture information, and a Statistical Texture Enhance Transformer, which enhances the statistical texture of multi-scale features. Extensive experiments on three publicly available skin lesion datasets validate that SkinFormer outperforms other state-of-the-art methods, achieving a 93.2% Dice score on ISIC 2018. SkinFormer can readily be extended to segment 3D images in future work. Our code is available at https://github.com/Rongtao-Xu/SkinFormer.
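Kurtosis itself is a well-defined statistic even though the abstract does not spell out the paper's counting operator. The sketch below shows one plausible way to quantify per-channel statistical texture with excess kurtosis; the function name and epsilon are assumptions.

```python
# A hedged sketch of quantifying statistical texture with excess kurtosis,
# in the spirit of the Kurtosis-guided Statistical Counting Operator.
import torch

def channel_kurtosis(x: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Per-channel excess kurtosis of a (B, C, H, W) feature map. High
    kurtosis signals heavy-tailed, strongly textured statistics."""
    b, c, _, _ = x.shape
    flat = x.reshape(b, c, -1)
    mean = flat.mean(dim=-1, keepdim=True)
    var = flat.var(dim=-1, unbiased=False)        # (B, C)
    m4 = ((flat - mean) ** 4).mean(dim=-1)        # fourth central moment
    return m4 / (var ** 2 + eps) - 3.0            # ~0 for Gaussian features

feats = torch.randn(2, 32, 64, 64)
print(channel_kurtosis(feats).shape)  # torch.Size([2, 32])
```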

3.
IEEE Trans Image Process; 33: 1045-1058, 2024.
Article in English | MEDLINE | ID: mdl-38271174

ABSTRACT

Weakly supervised object localization (WSOL) is a challenging and promising task that aims to localize objects based solely on the supervision of image category labels. In the absence of annotated bounding boxes, WSOL methods must exploit the intrinsic properties of the image classification pipeline to generate object localizations. In this work, we propose a WSOL method that explores the Intrinsic Discrimination and Consistency in the image classification pipeline, which we call IDC. First, we develop a Triplet Metrics Based Foreground Modeling (TMFM) framework to directly predict object foreground regions using intrinsic discrimination. Unlike Class Activation Map (CAM) based methods, which also rely on intrinsic discrimination, our TMFM framework alleviates the problem of focusing only on the most discriminative parts by optimizing foreground and background regions synergistically. Second, we design a Dual Geometric Transformation Consistency Constraints (DGTC2) training strategy that introduces additional supervision and regularization for WSOL by leveraging intrinsic geometric transformation consistency. The proposed pixel-wise and object-wise consistency constraint losses cost-effectively provide spontaneous supervision for WSOL. Extensive experiments show that our IDC method achieves significant and consistent performance gains over existing state-of-the-art WSOL approaches. Code is available at: https://github.com/vignywang/IDC.
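The core idea behind geometric-transformation consistency is that prediction should commute with the transform. Below is an illustrative sketch using horizontal flip; the paper's DGTC2 strategy combines pixel-wise and object-wise constraints, so this hypothetical example shows only a pixel-wise term.

```python
# An illustrative pixel-wise consistency loss for WSOL under horizontal flip.
# Not the paper's DGTC2 implementation; names and the stand-in model are
# assumptions.
import torch
import torch.nn.functional as F

def flip_consistency_loss(model, images: torch.Tensor) -> torch.Tensor:
    """Penalize disagreement between the foreground map of a flipped image
    and the flipped foreground map of the original image."""
    fg = model(images)                                 # (B, 1, H, W) foreground logits
    fg_of_flip = model(torch.flip(images, dims=[-1]))
    # Prediction should commute with the geometric transform.
    return F.mse_loss(fg_of_flip, torch.flip(fg, dims=[-1]))

net = torch.nn.Conv2d(3, 1, kernel_size=3, padding=1)  # stand-in foreground head
loss = flip_consistency_loss(net, torch.randn(2, 3, 96, 96))
print(loss.item())
```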

4.
Article in English | MEDLINE | ID: mdl-37824321

ABSTRACT

Accurate lung lesion segmentation from computed tomography (CT) images is crucial to the analysis and diagnosis of lung diseases such as COVID-19 and lung cancer. However, the small size and variety of lung nodules and the lack of high-quality labeling make accurate lung nodule segmentation difficult. To address these issues, we first introduce a novel segmentation mask named the "soft mask", which provides a richer and more accurate description of edge details and better visualization, and develop a universal automatic soft mask annotation pipeline that adapts to different datasets. Then, we propose a novel network with detailed representation transfer and soft mask supervision (DSNet) that processes low-resolution input images of lung nodules into high-quality segmentation results. Our DSNet contains a dedicated detailed representation transfer module (DRTM) that reconstructs the detailed representation to compensate for the small size of lung nodule images, and an adversarial training framework with soft mask supervision to further improve segmentation accuracy. Extensive experiments validate that our DSNet outperforms other state-of-the-art methods for accurate lung nodule segmentation and generalizes well to other accurate medical segmentation tasks with competitive results. In addition, we provide a new challenging lung nodule segmentation dataset for further studies (https://drive.google.com/file/d/15NNkvDTb_0Ku0IoPsNMHezJRTH1Oi1wm/view?usp=sharing).
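A "soft mask" carries graded edge confidence instead of a hard 0/1 transition. Below is a hypothetical sketch of deriving one from a binary annotation by smoothing its boundary; the paper's automatic annotation pipeline is more elaborate, and the Gaussian sigma here is an illustrative assumption.

```python
# A hypothetical soft-mask derivation: blur a binary nodule mask so its
# edges carry confidences in [0, 1]. Not the paper's pipeline.
import numpy as np
from scipy.ndimage import gaussian_filter

def soft_mask(binary_mask: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Blur a {0,1} nodule mask into [0, 1] edge confidences."""
    soft = gaussian_filter(binary_mask.astype(np.float32), sigma=sigma)
    return np.clip(soft, 0.0, 1.0)

mask = np.zeros((64, 64), dtype=np.uint8)
mask[24:40, 24:40] = 1                 # toy 16x16 "nodule"
sm = soft_mask(mask)
print(sm[32, 32], sm[24, 24])          # interior ~1.0, boundary corner < 1.0
```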

5.
Article in English | MEDLINE | ID: mdl-37022079

ABSTRACT

High spatial resolution (HSR) remote sensing images contain complex foreground-background relationships, which makes remote sensing land cover segmentation a distinctive semantic segmentation task. The main challenges are large-scale variation, complex background samples, and an imbalanced foreground-background distribution. These issues render recent context modeling methods sub-optimal because they lack foreground saliency modeling. To handle these problems, we propose a Remote Sensing Segmentation framework (RSSFormer), comprising an Adaptive Transformer Fusion Module, a Detail-aware Attention Layer, and a Foreground Saliency Guided Loss. Specifically, from the perspective of relation-based foreground saliency modeling, our Adaptive Transformer Fusion Module adaptively suppresses background noise and enhances object saliency when fusing multi-scale features. Our Detail-aware Attention Layer then extracts detail and foreground-related information via the interplay of spatial attention and channel attention, further enhancing foreground saliency. From the perspective of optimization-based foreground saliency modeling, our Foreground Saliency Guided Loss guides the network to focus on hard samples with low foreground saliency responses, achieving balanced optimization. Experimental results on the LoveDA, Vaihingen, Potsdam, and iSAID datasets validate that our method outperforms existing general semantic segmentation methods and remote sensing segmentation methods, and achieves a good compromise between computational overhead and accuracy. Our code is available at https://github.com/Rongtao-Xu/RepresentationLearning/tree/main/RSSFormer-TIP2023.
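One common way to make a loss focus on hard, low-saliency pixels is focal-style weighting. The sketch below is an illustrative assumption in that spirit, not the paper's exact Foreground Saliency Guided Loss formulation.

```python
# A focal-style sketch of a saliency-guided loss: pixels whose predicted
# probability for the true class is low (hard samples) get larger weights.
# Illustrative only; not RSSFormer's exact loss.
import torch
import torch.nn.functional as F

def saliency_guided_ce(logits: torch.Tensor, target: torch.Tensor,
                       gamma: float = 2.0) -> torch.Tensor:
    """Cross-entropy with each pixel up-weighted by (1 - p_true)^gamma."""
    log_p = F.log_softmax(logits, dim=1)                  # (B, K, H, W)
    p_true = log_p.gather(1, target.unsqueeze(1)).exp()   # (B, 1, H, W)
    weight = (1.0 - p_true.squeeze(1)) ** gamma           # hard pixels -> big weight
    ce = F.nll_loss(log_p, target, reduction="none")      # (B, H, W)
    return (weight * ce).mean()

logits = torch.randn(2, 6, 128, 128)          # e.g., 6 land-cover classes
target = torch.randint(0, 6, (2, 128, 128))
print(saliency_guided_ce(logits, target).item())
```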

6.
IEEE Trans Pattern Anal Mach Intell; 45(9): 10632-10649, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37074894

ABSTRACT

Local feature detection and description are widely used in many vision applications with high industrial and commercial demand. At large scale, these tasks place high expectations on both the accuracy and the speed of local features. Most existing studies on local feature learning focus on the local descriptions of individual keypoints, neglecting the relationships established through global spatial awareness. In this paper, we present AWDesc with a consistent attention mechanism (CoAM) that enables local descriptors to embrace image-level spatial awareness in both the training and matching stages. For detection, we adopt a feature-pyramid scheme to obtain more stable and accurate keypoint localization. For description, we provide two versions of AWDesc to cope with different accuracy and speed requirements. On the one hand, we introduce Context Augmentation to address the inherent locality of convolutional neural networks by injecting non-local context information, so that local descriptors can "look wider to describe better". Specifically, a well-designed Adaptive Global Context Augmented Module (AGCA) and a Diverse Surrounding Context Augmented Module (DSCA) are proposed to construct robust local descriptors with context information ranging from global to surrounding. On the other hand, we design an extremely lightweight backbone network coupled with a special knowledge distillation strategy to achieve the best trade-off between accuracy and speed. Moreover, we perform thorough experiments on image matching, homography estimation, visual localization, and 3D reconstruction tasks, and the results demonstrate that our method surpasses current state-of-the-art local descriptors. Code is available at: https://github.com/vignywang/AWDesc.
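For the lightweight variant, the basic mechanics of descriptor distillation are simple: train the student to reproduce the teacher's normalized descriptors. The sketch below shows only that basic matching loss; the paper's "special" distillation strategy likely adds further terms, and all names here are assumptions.

```python
# A minimal descriptor-distillation sketch: the lightweight student mimics
# the frozen teacher's L2-normalized descriptors. Not AWDesc's full strategy.
import torch
import torch.nn.functional as F

def descriptor_distill_loss(student_desc: torch.Tensor,
                            teacher_desc: torch.Tensor) -> torch.Tensor:
    """Cosine loss between (N, D) student and frozen teacher descriptors."""
    s = F.normalize(student_desc, dim=-1)
    t = F.normalize(teacher_desc.detach(), dim=-1)  # no gradient to the teacher
    return (1.0 - (s * t).sum(dim=-1)).mean()       # 0 when directions agree

student = torch.randn(512, 128, requires_grad=True)  # lightweight network output
teacher = torch.randn(512, 128)                      # heavy network output
print(descriptor_distill_loss(student, teacher).item())
```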
