Results 1 - 16 of 16
1.
Article in English | MEDLINE | ID: mdl-38598386

ABSTRACT

Deep learning based semantic segmentation solutions have yielded compelling results over the preceding decade. They encompass diverse network architectures (FCN based or attention based), along with various mask decoding schemes (parametric softmax based or pixel-query based). Despite the divergence, they can be grouped within a unified framework by interpreting the softmax weights or query vectors as learnable class prototypes. In light of this prototype view, we reveal inherent limitations within the parametric segmentation regime, and accordingly develop a nonparametric alternative based on non-learnable prototypes. In contrast to previous approaches that entail the learning of a single weight/query vector per class in a fully parametric manner, our approach represents each class as a set of non-learnable prototypes, relying solely upon the mean features of training pixels within that class. Pixel-wise prediction is thus achieved by nonparametric nearest-prototype retrieval. This allows our model to directly shape the pixel embedding space by optimizing the arrangement between embedded pixels and anchored prototypes. It is able to accommodate an arbitrary number of classes with a constant number of learnable parameters. Through empirical evaluation with FCN based and Transformer based segmentation models (i.e., HRNet, Swin, SegFormer, Mask2Former) and backbones (i.e., ResNet, HRNet, Swin, MiT), our nonparametric framework shows superior performance on standard segmentation datasets (i.e., ADE20K, Cityscapes, COCO-Stuff), as well as in large-vocabulary semantic segmentation scenarios. We expect that this study will provoke a rethink of the current de facto semantic segmentation model design.
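As a rough illustration of the nearest-prototype prediction described above, the following PyTorch sketch builds one prototype per class (the mean of that class's pixel embeddings) and classifies pixels by cosine similarity. It is a simplified, hypothetical reading of the idea: the paper represents each class with a set of prototypes, whereas this sketch uses a single one, and all tensor names and shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def build_prototypes(embeddings, labels, num_classes):
    """One non-learnable prototype per class: the mean feature of its training pixels.

    embeddings: (N, D) pixel embeddings; labels: (N,) class ids.
    """
    protos = torch.zeros(num_classes, embeddings.size(1), device=embeddings.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = embeddings[mask].mean(dim=0)
    return F.normalize(protos, dim=1)

def nearest_prototype_predict(pixel_embeddings, prototypes):
    """Nonparametric prediction: cosine similarity to every prototype, then argmax.

    pixel_embeddings: (B, D, H, W) -> (B, H, W) predicted class map.
    """
    feats = F.normalize(pixel_embeddings, dim=1)
    sim = torch.einsum('bdhw,cd->bchw', feats, prototypes)  # (B, C, H, W) similarities
    return sim.argmax(dim=1)
```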

2.
Article in English | MEDLINE | ID: mdl-38386572

ABSTRACT

This work studies the problem of image semantic segmentation. Current approaches focus mainly on mining "local" context, i.e., dependencies between pixels within individual images, by specifically-designed, context aggregation modules (e.g., dilated convolution, neural attention) or structure-aware optimization objectives (e.g., IoU-like loss). However, they ignore "global" context of the training data, i.e., rich semantic relations between pixels across different images. Inspired by recent advances in unsupervised contrastive representation learning, we propose a pixel-wise contrastive algorithm, dubbed PiCo, for semantic segmentation in the fully supervised learning setting. The core idea is to enforce pixel embeddings belonging to the same semantic class to be more similar than embeddings from different classes. This establishes a pixel-wise metric learning paradigm for semantic segmentation by explicitly exploring the structures of labeled pixels, which were rarely studied before. Our training algorithm is compatible with modern segmentation solutions without extra overhead during testing. We experimentally show that, with famous segmentation models (i.e., DeepLabV3, HRNet, OCRNet, SegFormer, Segmenter, MaskFormer) and backbones (i.e., MobileNet, ResNet, HRNet, MiT, ViT), our algorithm brings consistent performance improvements across diverse datasets (i.e., Cityscapes, ADE20K, PASCAL-Context, COCO-Stuff, CamVid). We expect that this work will encourage our community to rethink the current de facto training paradigm in semantic segmentation. Our code is available at https://github.com/tfzhou/ContrastiveSeg.
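A minimal sketch of a supervised, pixel-wise contrastive (InfoNCE-style) loss in the spirit of the abstract above, assuming pixel features have already been sampled from labeled images; the sampling strategy and the temperature value are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def pixel_contrastive_loss(embeddings, labels, temperature=0.1):
    """Pull same-class pixel embeddings together, push different-class ones apart.

    embeddings: (N, D) sampled pixel features; labels: (N,) class ids.
    """
    feats = F.normalize(embeddings, dim=1)
    sim = feats @ feats.t() / temperature                        # (N, N) pairwise similarities
    eye = torch.eye(len(labels), dtype=torch.bool, device=feats.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    logits = sim.masked_fill(eye, float('-inf'))                 # exclude self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_log_prob = torch.where(pos_mask, log_prob, torch.zeros_like(log_prob))
    loss = -pos_log_prob.sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)
    return loss[pos_mask.any(dim=1)].mean()                      # only anchors with >=1 positive
```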

3.
Genome Biol ; 24(1): 235, 2023 10 19.
Article in English | MEDLINE | ID: mdl-37858204

ABSTRACT

When analyzing data from in situ RNA detection technologies, cell segmentation is an essential step in identifying cell boundaries, assigning RNA reads to cells, and studying the gene expression and morphological features of cells. We developed a deep-learning-based method, GeneSegNet, that integrates both gene expression and imaging information to perform cell segmentation. GeneSegNet also employs a recursive training strategy to deal with noisy training labels. We show that GeneSegNet significantly improves cell segmentation performances over existing methods that either ignore gene expression information or underutilize imaging information.


Subject(s)
Deep Learning , Tomography, X-Ray Computed , RNA , Gene Expression , Image Processing, Computer-Assisted/methods
4.
IEEE Trans Pattern Anal Mach Intell ; 45(8): 10055-10069, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37819831

ABSTRACT

We explore the task of language-guided video segmentation (LVS). Previous algorithms mostly adopt 3D CNNs to learn video representation, struggling to capture long-term context and easily suffering from visual-linguistic misalignment. In light of this, we present Locater (local-global context aware Transformer), which augments the Transformer architecture with a finite memory so as to query the entire video with the language expression in an efficient manner. The memory is designed to involve two components - one for persistently preserving global video content, and one for dynamically gathering local temporal context and segmentation history. Based on the memorized local-global context and the particular content of each frame, Locater holistically and flexibly comprehends the expression as an adaptive query vector for each frame. The vector is used to query the corresponding frame for mask generation. The memory also allows Locater to process videos with linear time complexity and constant size memory, while Transformer-style self-attention computation scales quadratically with sequence length. To thoroughly examine the visual grounding capability of LVS models, we contribute a new LVS dataset, A2D-S+, which is built upon the A2D-S dataset but poses increased challenges in disambiguating among similar objects. Experiments on three LVS datasets and our A2D-S+ show that Locater outperforms previous state-of-the-art methods. Further, we won 1st place in the Referring Video Object Segmentation Track of the 3rd Large-scale Video Object Segmentation Challenge, where Locater served as the foundation for the winning solution.
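The sketch below illustrates, under simplifying assumptions, how a fixed-size memory can yield a per-frame adaptive query that is then matched against frame features to produce mask logits. Single-head dot-product attention and the tensor shapes are assumptions for illustration; the actual Locater memory read/write design is richer.

```python
import torch
import torch.nn.functional as F

def adaptive_query(lang_emb, memory):
    """Attend a language embedding over K memory slots to form a frame-specific query.

    lang_emb: (D,); memory: (K, D) constant-size memory.
    """
    scores = memory @ lang_emb / memory.size(1) ** 0.5   # (K,) scaled dot-product scores
    attn = F.softmax(scores, dim=0)
    return (attn.unsqueeze(1) * memory).sum(dim=0)       # (D,) adaptive query vector

def frame_mask_logits(query, frame_feats):
    """Score every pixel of the frame against the adaptive query.

    frame_feats: (D, H, W) -> (H, W) mask logits.
    """
    return torch.einsum('d,dhw->hw', query, frame_feats)
```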

5.
IEEE Trans Image Process ; 32: 2678-2692, 2023.
Article in English | MEDLINE | ID: mdl-37155388

ABSTRACT

Learning pyramidal feature representations is important for many dense prediction tasks (e.g., object detection, semantic segmentation) that demand multi-scale visual understanding. Feature Pyramid Network (FPN) is a well-known architecture for multi-scale feature learning; however, intrinsic weaknesses in feature extraction and fusion impede the production of informative features. This work addresses the weaknesses of FPN through a novel tripartite feature enhanced pyramid network (TFPN), with three distinct and effective designs. First, we develop a feature reference module with lateral connections to adaptively extract bottom-up features with richer details for feature pyramid construction. Second, we design a feature calibration module between adjacent layers that calibrates the upsampled features to be spatially aligned, allowing for feature fusion with accurate correspondences. Third, we introduce a feature feedback module in FPN, which creates a communication channel from the feature pyramid back to the bottom-up backbone and doubles the encoding capacity, enabling the entire architecture to generate incrementally more powerful representations. The TFPN is extensively evaluated over four popular dense prediction tasks, i.e., object detection, instance segmentation, panoptic segmentation, and semantic segmentation. The results demonstrate that TFPN consistently and significantly outperforms the vanilla FPN. Our code is available at https://github.com/jamesliang819.

6.
IEEE Trans Pattern Anal Mach Intell ; 45(7): 8296-8310, 2023 07.
Article in English | MEDLINE | ID: mdl-37022259

ABSTRACT

In this work, we study the challenging problem of instance-aware human body part parsing. We introduce a new bottom-up regime which achieves the task through learning category-level human semantic segmentation as well as multi-person pose estimation in a joint and end-to-end manner. The output is a compact, efficient and powerful framework that exploits structural information over different human granularities and eases the difficulty of person partitioning. Specifically, a dense-to-sparse projection field, which allows explicitly associating dense human semantics with sparse keypoints, is learnt and progressively improved over the network feature pyramid for robustness. Then, the difficult pixel grouping problem is cast as an easier, multi-person joint assembling task. By formulating joint association as maximum-weight bipartite matching, we develop two novel algorithms based on projected gradient descent and unbalanced optimal transport, respectively, to solve the matching problem differentiably. These algorithms make our method end-to-end trainable and allow back-propagating the grouping error to directly supervise multi-granularity human representation learning. This significantly distinguishes our method from current bottom-up human parsers or pose estimators, which require sophisticated post-processing or heuristic greedy algorithms. Extensive experiments on three instance-aware human parsing datasets (i.e., MHP-v2, DensePose-COCO, PASCAL-Person-Part) demonstrate that our approach outperforms most existing human parsers with much more efficient inference. Our code is available at https://github.com/tfzhou/MG-HumanParsing.


Subject(s)
Algorithms , Learning , Humans , Semantics , Software
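To make the matching idea in this entry concrete, here is a log-domain Sinkhorn sketch that produces a differentiable soft assignment between two sets of keypoints. Note this is the standard balanced, entropic-regularization variant shown purely for illustration; the paper's actual solvers use projected gradient descent and unbalanced optimal transport, and the epsilon and iteration count below are arbitrary.

```python
import torch

def sinkhorn_soft_matching(cost, num_iters=50, epsilon=0.1):
    """Differentiable soft bipartite matching from an (M, N) association cost matrix.

    Alternating row/column normalization in log space yields an approximately
    doubly-normalized assignment that gradients can flow through, so grouping
    errors can supervise the underlying features.
    """
    log_p = -cost / epsilon
    for _ in range(num_iters):
        log_p = log_p - torch.logsumexp(log_p, dim=1, keepdim=True)  # normalize rows
        log_p = log_p - torch.logsumexp(log_p, dim=0, keepdim=True)  # normalize columns
    return log_p.exp()
```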
7.
IEEE Trans Pattern Anal Mach Intell ; 45(6): 7099-7122, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36449595

ABSTRACT

Video segmentation, i.e., partitioning video frames into multiple segments or objects, plays a critical role in a broad range of practical applications, from enhancing visual effects in movies, to understanding scenes in autonomous driving, to creating virtual backgrounds in video conferencing. Recently, with the renaissance of connectionism in computer vision, there has been an influx of deep learning based approaches for video segmentation that have delivered compelling performance. In this survey, we comprehensively review two basic lines of research - generic object segmentation (of unknown categories) in videos, and video semantic segmentation - by introducing their respective task settings, background concepts, perceived need, development history, and main challenges. We also offer a detailed overview of representative literature on both methods and datasets. We further benchmark the reviewed methods on several well-known datasets. Finally, we point out open issues in this field, and suggest opportunities for further research. We also provide a public website to continuously track developments in this fast advancing field: https://github.com/tfzhou/VS-Survey.

8.
Med Image Anal ; 83: 102599, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36327652

ABSTRACT

Despite recent progress of automatic medical image segmentation techniques, fully automatic results usually fail to meet clinically acceptable accuracy, thus typically require further refinement. To this end, we propose a novel Volumetric Memory Network, dubbed VMN, to enable segmentation of 3D medical images in an interactive manner. Given user hints on an arbitrary slice, a 2D interaction network is first employed to produce an initial 2D segmentation for the chosen slice. Then, the VMN propagates the initial segmentation mask bidirectionally to all slices of the entire volume. Subsequent refinement based on additional user guidance on other slices can be incorporated in the same manner. To facilitate smooth human-in-the-loop segmentation, a quality assessment module is introduced to suggest the next slice for interaction based on the segmentation quality of each slice produced in the previous round. Our VMN demonstrates two distinctive features: First, the memory-augmented network design offers our model the ability to quickly encode past segmentation information, which will be retrieved later for the segmentation of other slices; Second, the quality assessment module enables the model to directly estimate the quality of each segmentation prediction, which allows for an active learning paradigm where users preferentially label the lowest-quality slice for multi-round refinement. The proposed network leads to a robust interactive segmentation engine, which can generalize well to various types of user annotations (e.g., scribble, bounding box, extreme clicking). Extensive experiments have been conducted on three public medical image segmentation datasets (i.e., MSD, KiTS19, CVC-ClinicDB), and the results clearly confirm the superiority of our approach in comparison with state-of-the-art segmentation models. The code is made publicly available at https://github.com/0liliulei/Mem3D.
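The human-in-the-loop logic described above can be summarized by a small helper that picks the next slice to annotate as the one with the lowest estimated segmentation quality. The function and its inputs are hypothetical placeholders; in the paper the quality scores come from the dedicated quality assessment module.

```python
def suggest_next_slice(quality_scores, already_annotated):
    """Return the index of the lowest-quality slice not yet refined by the user.

    quality_scores: per-slice quality estimates (floats);
    already_annotated: set of slice indices the user has already interacted with.
    """
    candidates = [(score, idx) for idx, score in enumerate(quality_scores)
                  if idx not in already_annotated]
    return min(candidates)[1] if candidates else None
```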

9.
PLoS One ; 17(9): e0275107, 2022.
Article in English | MEDLINE | ID: mdl-36155657

ABSTRACT

Low contrast, poor color saturation, and turbidity are common phenomena of underwater sensing scene images obtained in highly turbid oceans. To address these problems, we propose an underwater image enhancement method that combines Retinex processing with a transmittance-optimized multi-scale fusion framework. First, the gray levels of the R, G, and B channels are quantized to enhance image contrast. Second, we utilize Retinex color constancy to eliminate the negative effects of scene illumination and color distortion. Next, a dual transmittance underwater imaging model is built to estimate the background light, backscattering, and direct component transmittance, resulting in defogged images through an inverse solution. Finally, the three input images and corresponding weight maps are fused in a multi-scale framework to achieve high-quality, sharpened results. According to the experimental results and image quality evaluation indices, the method combines multiple complementary algorithms and efficiently improves the visual quality of underwater images.


Subject(s)
Algorithms , Image Enhancement , Image Enhancement/methods , Lighting/methods
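As a rough sketch of the final fusion step in this entry, the snippet below blends several enhanced versions of an underwater image with normalized per-pixel weight maps. The multi-scale (pyramid) blending of the paper is collapsed here to a single-scale weighted sum, and the epsilon guard is an added assumption.

```python
import numpy as np

def weighted_fusion(inputs, weights, eps=1e-6):
    """Fuse enhanced images with per-pixel weight maps.

    inputs: list of HxWx3 float images in [0, 1]; weights: list of HxW weight maps.
    """
    total = sum(weights) + eps                                   # per-pixel normalizer
    fused = sum((w / total)[..., None] * img for w, img in zip(weights, inputs))
    return np.clip(fused, 0.0, 1.0)
```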
10.
IEEE Trans Image Process ; 31: 3111-3124, 2022.
Article in English | MEDLINE | ID: mdl-35380961

ABSTRACT

The success of current deep saliency models heavily depends on large amounts of annotated human fixation data to fit the highly non-linear mapping between the stimuli and visual saliency. Such fully supervised data-driven approaches are annotation-intensive and often fail to consider the underlying mechanisms of visual attention. In contrast, in this paper, we introduce a model based on various cognitive theories of visual saliency, which learns visual attention patterns in a weakly supervised manner. Our approach incorporates insights from cognitive science as differentiable submodules, resulting in a unified, end-to-end trainable framework. Specifically, our model encapsulates the following important components motivated by biological vision. (a) As scene semantics are closely related to visually attentive regions, our model encodes discriminative spatial information for scene understanding through spatial visual semantics embedding. (b) To model the objectness factors in visual attention deployment, we incorporate object-level semantics embedding and object relation information. (c) Considering the "winner-take-all" mechanism in visual stimuli processing, we model the competition mechanism among objects with softmax based neural attention. (d) Lastly, a conditional center prior is learned to mimic the spatial distribution bias of visual attention. Furthermore, we propose novel loss functions to utilize supervision cues from image-level semantics, saliency prior knowledge, and self-information compression. Experiments show that our method achieves promising results, and even outperforms many of its fully supervised counterparts. Overall, our weakly supervised saliency method takes an essential step toward reducing the annotation budget of current approaches, as well as providing a more comprehensive understanding of the visual attention mechanism. Our code is available at: https://github.com/ashleylqx/WeakFixation.git.


Subject(s)
Data Compression , Semantics , Humans
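One common way to realize the spatial bias in component (d) of this entry is a 2D Gaussian center-prior map; the sketch below uses a fixed mean and standard deviation purely for illustration, whereas the paper learns a conditional prior.

```python
import torch

def gaussian_center_prior(height, width, mu=(0.5, 0.5), sigma=(0.25, 0.25)):
    """An (H, W) prior map peaking near the image center, in normalized (x, y) coordinates."""
    ys = torch.linspace(0, 1, height).view(-1, 1).expand(height, width)
    xs = torch.linspace(0, 1, width).view(1, -1).expand(height, width)
    prior = torch.exp(-((xs - mu[0]) ** 2) / (2 * sigma[0] ** 2)
                      - ((ys - mu[1]) ** 2) / (2 * sigma[1] ** 2))
    return prior / prior.max()
```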
11.
IEEE Trans Pattern Anal Mach Intell ; 44(6): 2827-2840, 2022 06.
Article in English | MEDLINE | ID: mdl-33400648

ABSTRACT

This paper addresses the task of detecting and recognizing human-object interactions (HOI) in images. Considering the intrinsic complexity and structural nature of the task, we introduce a cascaded parsing network (CP-HOI) for a multi-stage, structured HOI understanding. At each cascade stage, an instance detection module progressively refines HOI proposals and feeds them into a structured interaction reasoning module. Each of the two modules is also connected to its predecessor in the previous stage, enabling efficient cross-stage information propagation. The structured interaction reasoning module is built upon a graph parsing neural network (GPNN), which efficiently models potential HOI structures as graphs and mines rich context for comprehensive relation understanding. In particular, GPNN infers a parse graph that i) interprets meaningful HOI structures by a learnable adjacency matrix, and ii) predicts action (edge) labels. Within an end-to-end, message-passing framework, GPNN blends learning and inference, iteratively parsing HOI structures and reasoning HOI representations (i.e., instance and relation features). Beyond relation detection at the bounding-box level, we further make our framework flexible enough to perform fine-grained pixel-wise relation segmentation; this provides a new glimpse into better relation modeling. A preliminary version of our CP-HOI model reached 1st place in the ICCV2019 Person in Context Challenge, on both relation detection and segmentation. In addition, our CP-HOI shows promising results on two popular HOI recognition benchmarks, i.e., V-COCO and HICO-DET.


Subject(s)
Algorithms , Neural Networks, Computer , Humans , Learning , Visual Perception
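The snippet below is one plausible, simplified reading of the learnable adjacency mentioned in this entry: a bilinear affinity between every pair of instance features, squashed to [0, 1]. The affinity function and the absence of edge features are assumptions for illustration, not the paper's actual design.

```python
import torch
import torch.nn as nn

class SoftAdjacency(nn.Module):
    """Predict a soft (N, N) adjacency matrix from N instance feature vectors."""

    def __init__(self, dim):
        super().__init__()
        self.pair = nn.Bilinear(dim, dim, 1)

    def forward(self, nodes):
        n = nodes.size(0)
        left = nodes.unsqueeze(1).expand(n, n, -1).reshape(n * n, -1)
        right = nodes.unsqueeze(0).expand(n, n, -1).reshape(n * n, -1)
        return torch.sigmoid(self.pair(left, right)).view(n, n)
```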
12.
IEEE Trans Pattern Anal Mach Intell ; 44(7): 3508-3522, 2022 07.
Article in English | MEDLINE | ID: mdl-33513100

ABSTRACT

Modeling the human structure is central for human parsing that extracts pixel-wise semantic information from images. We start with analyzing three types of inference processes over the hierarchical structure of human bodies: direct inference (directly predicting human semantic parts using image information), bottom-up inference (assembling knowledge from constituent parts), and top-down inference (leveraging context from parent nodes). We then formulate the problem as a compositional neural information fusion (CNIF) framework, which assembles the information from the three inference processes in a conditional manner, i.e., considering the confidence of the sources. Based on CNIF, we further present a part-relation-aware human parser (PRHP), which precisely describes three kinds of human part relations, i.e., decomposition, composition, and dependency, by three distinct relation networks. Expressive relation information can be captured by constraining the parameters of the relation networks to satisfy specific geometric characteristics of different relations. By assimilating generic message-passing networks with their edge-typed, convolutional counterparts, PRHP performs iterative reasoning over the human body hierarchy. With these efforts, PRHP provides a more general and powerful form of CNIF, and lays the foundation for more sophisticated and flexible human relation patterns of reasoning. Experiments on five datasets demonstrate that our two human parsers outperform the state of the art in all cases.


Subject(s)
Algorithms , Semantics , Humans , Software
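A minimal sketch of the conditional fusion idea in this entry: three per-node estimates (direct, bottom-up, top-down) are combined with confidences predicted from the estimates themselves. The single-linear gating head is an illustrative stand-in for whatever gating the paper actually uses.

```python
import torch
import torch.nn as nn

class ConditionalFusion(nn.Module):
    """Confidence-weighted fusion of three information sources per node."""

    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(dim, 1)

    def forward(self, direct, bottom_up, top_down):
        sources = torch.stack([direct, bottom_up, top_down], dim=1)   # (N, 3, D)
        conf = torch.softmax(self.gate(sources).squeeze(-1), dim=1)   # (N, 3) confidences
        return (conf.unsqueeze(-1) * sources).sum(dim=1)              # (N, D) fused features
```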
13.
IEEE Trans Pattern Anal Mach Intell ; 44(8): 4454-4468, 2022 08.
Article in English | MEDLINE | ID: mdl-33656990

ABSTRACT

It is quite laborious and costly to manually label LiDAR point cloud data for training high-quality 3D object detectors. This work proposes a weakly supervised framework which allows learning 3D detection from a few weakly annotated examples. This is achieved by a two-stage architecture design. Stage-1 learns to generate cylindrical object proposals under inaccurate and inexact supervision, obtained by our proposed BEV center-click annotation strategy, where only the horizontal object centers are click-annotated in bird's view scenes. Stage-2 learns to predict cuboids and confidence scores in a coarse-to-fine, cascade manner, under incomplete supervision, i.e., only a small portion of object cuboids are precisely annotated. With the KITTI dataset, using only 500 weakly annotated scenes and 534 precisely labeled vehicle instances, our method achieves 86-97 percent of the performance of current top-leading, fully supervised detectors (which require 3,712 exhaustively annotated scenes with 15,654 instances). More importantly, with our elaborately designed network architecture, our trained model can be applied as a 3D object annotator, supporting both automatic and active (human-in-the-loop) working modes. The annotations generated by our model can be used to train 3D object detectors, achieving over 95 percent of their original performance (with manually labeled training data). Our experiments also show our model's potential in boosting performance when given more training data. The above designs make our approach highly practical and open up opportunities for learning 3D detection at reduced annotation cost.


Subject(s)
Algorithms , Learning , Humans
14.
IEEE Trans Image Process ; 31: 799-811, 2022.
Article in English | MEDLINE | ID: mdl-34910633

ABSTRACT

Acquiring sufficient ground-truth supervision to train deep visual models has been a bottleneck over the years due to the data-hungry nature of deep learning. This is exacerbated in some structured prediction tasks, such as semantic segmentation, which require pixel-level annotations. This work addresses weakly supervised semantic segmentation (WSSS), with the goal of bridging the gap between image-level annotations and pixel-level segmentation. To achieve this, we propose, for the first time, a novel group-wise learning framework for WSSS. The framework explicitly encodes semantic dependencies in a group of images to discover rich semantic context for estimating more reliable pseudo ground-truths, which are subsequently employed to train more effective segmentation models. In particular, we solve the group-wise learning within a graph neural network (GNN), wherein input images are represented as graph nodes, and the underlying relations between a pair of images are characterized by graph edges. We then formulate semantic mining as an iterative reasoning process which propagates the common semantics shared by a group of images to enrich node representations. Moreover, in order to prevent the model from paying excessive attention to common semantics, we further propose a graph dropout layer to encourage the graph model to capture more accurate and complete object responses. With the above efforts, our model lays the foundation for more sophisticated and flexible group-wise semantic mining. We conduct comprehensive experiments on the popular PASCAL VOC 2012 and COCO benchmarks, and our model yields state-of-the-art performance. In addition, our model shows promising performance in weakly supervised object localization (WSOL) on the CUB-200-2011 dataset, demonstrating strong generalizability. Our code is available at: https://github.com/Lixy1997/Group-WSSS.


Subject(s)
Image Processing, Computer-Assisted , Semantics , Neural Networks, Computer
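The sketch below gives a simplified picture of the group-wise message passing in this entry, operating on image-level node features, with random edge dropout standing in for the graph dropout layer; the actual GNN layers, edge features, and dropout design in the paper may differ.

```python
import torch
import torch.nn.functional as F

def group_message_passing(node_feats, drop_prob=0.3, training=True):
    """Propagate shared semantics between the images of a group.

    node_feats: (N, D), one feature vector per image in the group.
    """
    normed = F.normalize(node_feats, dim=1)
    adj = (normed @ normed.t()).softmax(dim=1)                 # soft edges between images
    if training:                                               # edge ("graph") dropout
        keep = (torch.rand_like(adj) > drop_prob).float()
        adj = adj * keep
        adj = adj / adj.sum(dim=1, keepdim=True).clamp(min=1e-6)
    return adj @ node_feats                                    # updated node representations
```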
15.
IEEE Trans Med Imaging ; 40(4): 1196-1206, 2021 04.
Article in English | MEDLINE | ID: mdl-33406037

ABSTRACT

Automatic thoracic disease diagnosis is a rising research topic in the medical imaging community, with many potential applications. However, the inconsistent appearances and high complexities of various lesions in chest X-rays currently hinder the development of a reliable and robust intelligent diagnosis system. Attending to high-probability abnormal regions and exploiting the prior knowledge encoded in a related knowledge graph offers one promising route to addressing these issues. As such, in this paper, we propose two contrastive abnormal attention models and a dual-weighting graph convolution to improve the performance of thoracic multi-disease recognition. First, a left-right lung contrastive network is designed to learn intra-attentive abnormal features to better identify the most common thoracic diseases, whose lesions rarely appear on both sides symmetrically. Moreover, an inter-contrastive abnormal attention model aims to compare the query scan with multiple anchor scans without lesions to compute the abnormal attention map. Once the intra- and inter-contrastive attentions are weighted over the features, in addition to the basic visual spatial convolution, a chest radiology graph is constructed for dual-weighting graph reasoning. Extensive experiments on the public NIH ChestX-ray and CheXpert datasets show that our model achieves consistent improvements over the state-of-the-art methods both on thoracic disease identification and localization.


Subject(s)
Neural Networks, Computer , Thoracic Diseases , Attention , Humans , Lung/diagnostic imaging , Radiography , Thoracic Diseases/diagnostic imaging
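The snippet below sketches the intra-image left-right contrast of this entry in a very reduced form: features pooled over the two lung fields are compared, and the channel-wise asymmetry is projected back into a spatial abnormality attention map. The lung-field masks, pooling, and projection are all illustrative assumptions rather than the paper's architecture.

```python
import torch

def left_right_abnormal_attention(feats, left_mask, right_mask):
    """feats: (C, H, W); left_mask/right_mask: (H, W) binary float lung-field masks."""
    left = (feats * left_mask).sum(dim=(1, 2)) / left_mask.sum().clamp(min=1)
    right = (feats * right_mask).sum(dim=(1, 2)) / right_mask.sum().clamp(min=1)
    asymmetry = (left - right).abs()                           # (C,) channel-wise asymmetry
    attn = torch.einsum('c,chw->hw', asymmetry, feats)         # project back to space
    return torch.sigmoid(attn)                                 # (H, W) abnormal attention map
```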
16.
Article in English | MEDLINE | ID: mdl-32784135

ABSTRACT

In this paper, we present a novel end-to-end learning neural network, i.e., MATNet, for zero-shot video object segmentation (ZVOS). Motivated by the human visual attention behavior, MATNet leverages motion cues as a bottom-up signal to guide the perception of object appearance. To achieve this, an asymmetric attention block, named Motion-Attentive Transition (MAT), is proposed within a two-stream encoder network to first identify moving regions and then attend to appearance learning to capture the full extent of objects. By placing MATs at different convolutional layers, our encoder becomes deeply interleaved, allowing for close hierarchical interactions between object appearance and motion. Such a biologically-inspired design proves superior to conventional two-stream structures, which treat motion and appearance independently in separate streams and often suffer severe overfitting to object appearance. Moreover, we introduce a bridge network to modulate multi-scale spatiotemporal features into more compact, discriminative and scale-sensitive representations, which are subsequently fed into a boundary-aware decoder network to produce accurate segmentation with crisp boundaries. We perform extensive quantitative and qualitative experiments on four challenging public benchmarks, i.e., DAVIS16, DAVIS17, FBMS and YouTube-Objects. Results show that our method achieves compelling performance against current state-of-the-art ZVOS methods. To further demonstrate the generalization ability of our spatiotemporal learning framework, we extend MATNet to another relevant task: dynamic visual attention prediction (DVAP). The experiments on two popular datasets (i.e., Hollywood-2 and UCF-Sports) further verify the superiority of our model. Our implementations have been made publicly available at https://github.com/tfzhou/MATNet.
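As a loose illustration of motion guiding appearance, the module below derives a soft spatial gate from motion features and uses it to re-weight the appearance stream. It is a hedged sketch of the asymmetric, motion-to-appearance direction of the MAT block, not the block's actual architecture.

```python
import torch
import torch.nn as nn

class MotionGate(nn.Module):
    """Re-weight appearance features with a gate computed from motion features."""

    def __init__(self, channels):
        super().__init__()
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, appearance, motion):
        """appearance, motion: (B, C, H, W) features from the two encoder streams."""
        gate = torch.sigmoid(self.attn(motion))     # (B, 1, H, W) motion-derived gate
        return appearance * (1.0 + gate)            # residual motion-guided re-weighting
```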
