Results 1 - 12 of 12
1.
Article in English | MEDLINE | ID: mdl-38781063

ABSTRACT

Embedding visual representations within original hierarchical tables can mitigate the additional cognitive load stemming from the division of users' attention. The resulting hierarchical table visualizations can help users understand and explore complex data with multi-level attributes. However, because of the many options available for transforming hierarchical tables and selecting subsets for embedding, the design space of hierarchical table visualizations becomes vast and the construction process tedious, hindering users from efficiently constructing hierarchical table visualizations that reveal many data insights. We propose InsigHTable, a mixed-initiative and insight-driven hierarchical table transformation and visualization system. We first define data insights within hierarchical tables, taking into account the hierarchical structure of the table headers. Since hierarchical table visualization construction is a sequential decision-making process, InsigHTable integrates a deep reinforcement learning framework with an auxiliary reward mechanism. This mechanism addresses the challenge of sparse rewards in constructing hierarchical table visualizations. Within the deep reinforcement learning framework, the agent continuously refines its decision-making to create hierarchical table visualizations that uncover more insights in collaboration with analysts. We demonstrate the usability and effectiveness of InsigHTable through two case studies and a series of experiments. The results validate the effectiveness of the deep reinforcement learning framework and show that InsigHTable helps users construct hierarchical table visualizations and understand the underlying data insights.
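The abstract does not spell out how the sparse insight reward is combined with the auxiliary signal; the sketch below is a minimal, hypothetical illustration of that general idea (a sparse reward for newly detected insights plus a small dense shaping term), not InsigHTable's actual reward design. All names and the weight `beta` are assumptions.

```python
# Hypothetical sketch: a sparse "new insight" reward supplemented by a
# dense auxiliary shaping term. Illustrative only; not the paper's design.
from dataclasses import dataclass, field

@dataclass
class TableVisEnv:
    insights_found: set = field(default_factory=set)
    beta: float = 0.1  # assumed weight of the auxiliary reward

    def auxiliary_reward(self, action) -> float:
        # Dense heuristic signal, e.g. rewarding transformations that expose
        # a previously hidden subtable even before an insight is detected.
        return 0.05 if action.get("exposes_new_subtable", False) else 0.0

    def sparse_reward(self, detected_insights) -> float:
        # Reward only previously unseen insights.
        unseen = detected_insights - self.insights_found
        self.insights_found |= unseen
        return float(len(unseen))

    def step(self, action, detected_insights):
        return (self.sparse_reward(detected_insights)
                + self.beta * self.auxiliary_reward(action))

env = TableVisEnv()
print(env.step({"exposes_new_subtable": True}, {"trend:Sales@Q1"}))  # 1.005
```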

2.
Article in English | MEDLINE | ID: mdl-38619943

ABSTRACT

Extracting data insights and generating visual data stories from tabular data are critical parts of data analysis. However, most existing studies primarily focus on tabular data stored as flat tables, typically without leveraging the relations between cells in the headers of hierarchical tables. When properly used, rich table headers can enable the extraction of many additional data stories. To assist analysts in visual data storytelling, an approach is needed to organize these data insights efficiently. In this work, we propose CoInsight, a system to facilitate visual storytelling for hierarchical tables by connecting insights. CoInsight extracts data insights from hierarchical tables and builds insight relations according to the structure of table headers. It further visualizes related data insights using a nested graph with edge bundling. We evaluate the CoInsight system through a usage scenario and a user experiment. The results demonstrate the utility and usability of CoInsight for converting data insights in hierarchical tables into visual data stories.
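CoInsight's exact relation-building rules are not given in the abstract; as a loose illustration of connecting insights through shared header structure, the hypothetical sketch below links insights whose hierarchical header paths share a prefix. The `Insight` type and the prefix rule are assumptions.

```python
# Hypothetical sketch: connect insights whose header paths share a prefix.
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Insight:
    description: str
    header_path: tuple  # e.g. ("2023", "Q1", "Sales")

def shared_prefix_len(a, b):
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def build_insight_edges(insights, min_shared=1):
    return [(i, j) for i, j in combinations(insights, 2)
            if shared_prefix_len(i.header_path, j.header_path) >= min_shared]

insights = [
    Insight("Q1 sales rose 12%", ("2023", "Q1", "Sales")),
    Insight("Q1 costs fell 3%", ("2023", "Q1", "Costs")),
    Insight("2022 revenue was flat", ("2022", "Total", "Revenue")),
]
print(len(build_insight_edges(insights, min_shared=2)))  # -> 1
```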

3.
IEEE Trans Pattern Anal Mach Intell ; 45(7): 9004-9021, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37819799

ABSTRACT

Domain adaptive semantic segmentation attempts to make satisfactory dense predictions on an unlabeled target domain by utilizing a supervised model trained on a labeled source domain. One popular solution is self-training, which retrains the model with pseudo labels on target instances. Many approaches attempt to alleviate noisy pseudo labels; however, they ignore the intrinsic connections within the training data, i.e., intra-class compactness and inter-class dispersion between pixel representations across and within domains. In consequence, they struggle to handle cross-domain semantic variations and fail to build a well-structured embedding space, leading to less discrimination and poor generalization. In this work, we propose Semantic-Guided Pixel Contrast (SePiCo), a novel one-stage adaptation framework that highlights the semantic concepts of individual pixels to promote learning of class-discriminative and class-balanced pixel representations across domains, eventually boosting the performance of self-training methods. Specifically, to explore proper semantic concepts, we first investigate a centroid-aware pixel contrast that employs the category centroids of the entire source domain or a single source image to guide the learning of discriminative features. Considering the possible lack of category diversity in semantic concepts, we then take a distributional perspective to involve a sufficient number of instances, namely distribution-aware pixel contrast, in which we approximate the true distribution of each semantic category from the statistics of labeled source data. Moreover, such an optimization objective can derive a closed-form upper bound by implicitly involving an infinite number of (dis)similar pairs, making it computationally efficient. Extensive experiments show that SePiCo not only helps stabilize training but also yields discriminative representations, making significant progress on both synthetic-to-real and daytime-to-nighttime adaptation scenarios. The code and models are available at https://github.com/BIT-DA/SePiCo.
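To make the centroid-aware pixel contrast idea concrete, the following is a minimal sketch of one plausible formulation: an InfoNCE-style loss that pulls each pixel embedding toward its class centroid and pushes it away from the other centroids. The temperature, shapes, and loss form are assumptions, not necessarily SePiCo's exact objective.

```python
# Hypothetical sketch of a centroid-aware pixel contrast loss.
import torch
import torch.nn.functional as F

def centroid_pixel_contrast(pixel_feats, pixel_labels, centroids, tau=0.1):
    """
    pixel_feats:  (N, D) pixel embeddings
    pixel_labels: (N,)   class index per pixel
    centroids:    (C, D) per-class centroids, e.g. from the source domain
    """
    feats = F.normalize(pixel_feats, dim=1)
    cents = F.normalize(centroids, dim=1)
    logits = feats @ cents.t() / tau          # (N, C) cosine similarities
    return F.cross_entropy(logits, pixel_labels)

feats = torch.randn(8, 16)
labels = torch.randint(0, 4, (8,))
centroids = torch.randn(4, 16)
print(centroid_pixel_contrast(feats, labels, centroids).item())
```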

4.
Micromachines (Basel) ; 14(3)2023 Mar 13.
Article in English | MEDLINE | ID: mdl-36985058

ABSTRACT

In recent years, Kubernetes (K8s) has become a dominant resource management and scheduling system in the cloud. In practical scenarios, short-running cloud workloads are usually scheduled through different scheduling algorithms provided by Kubernetes. For example, artificial intelligence (AI) workloads are scheduled through different Volcano scheduling algorithms, such as GANG_MRP, GANG_LRP, and GANG_BRA. One key challenge is that the choice of scheduling algorithm has a considerable impact on job performance. However, it takes a prohibitively long time to select the optimal algorithm, because evaluating a single algorithm on a single job may take several minutes. This creates an urgent need for a simulator that can quickly evaluate the performance impact of different algorithms while also considering scheduling-related factors such as cluster resources, job structures, and scheduler configurations. In this paper, we design and implement a Kubernetes simulator called K8sSim, which incorporates typical Kubernetes and Volcano scheduling algorithms for both generic and AI workloads and provides an accurate simulation of their scheduling process in real clusters. We use real cluster traces from Alibaba to evaluate the effectiveness of K8sSim, and the evaluation results show that (i) compared to the real cluster, K8sSim can accurately evaluate the performance of different scheduling algorithms with a similar CloseRate (a novel metric we define to intuitively show the simulation accuracy), and (ii) it can also quickly obtain the scheduling results of different scheduling algorithms, accelerating the scheduling time by an average of 38.56×.
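K8sSim itself is not shown here; purely to illustrate why a fast simulator is useful for comparing placement policies offline, the toy sketch below simulates two simple placement rules on a shared job list and reports their makespans. It does not model Kubernetes, Volcano, or the CloseRate metric.

```python
# Toy scheduling simulation, illustrative only (not K8sSim).
from itertools import count

def simulate(job_durations, n_nodes, pick_node):
    free_at = [0.0] * n_nodes                  # time at which each node frees up
    for duration in job_durations:
        i = pick_node(free_at)
        free_at[i] += duration
    return max(free_at)                        # makespan

def least_loaded(free_at):
    return min(range(len(free_at)), key=lambda i: free_at[i])

def round_robin_picker(n_nodes):
    c = count()
    return lambda free_at: next(c) % n_nodes

jobs = [5, 3, 8, 2, 7, 4, 6, 1]
print("least-loaded makespan:", simulate(jobs, 3, least_loaded))
print("round-robin  makespan:", simulate(jobs, 3, round_robin_picker(3)))
```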

5.
IEEE Trans Cybern ; 53(9): 5641-5654, 2023 Sep.
Article in English | MEDLINE | ID: mdl-35417373

ABSTRACT

Partial domain adaptation (PDA) attempts to learn transferable models from a large-scale labeled source domain to a small unlabeled target domain with fewer classes, and has attracted a recent surge of interest in transfer learning. Most conventional PDA approaches endeavor to design delicate source weighting schemes that leverage target predictions to align cross-domain distributions in the shared class space. Accordingly, two crucial issues are overlooked in these methods. First, target prediction is a double-edged sword, and inaccurate predictions will inevitably result in negative transfer. Second, not all target samples have equal transferability during adaptation; thus, "ambiguous" target data predicted with high uncertainty should receive more attention. In this article, we propose a critical classes and samples discovering network (CSDN) to identify the most relevant source classes and critical target samples, such that more precise cross-domain alignment in the shared label space can be enforced by co-training two diverse classifiers. Specifically, during the training process, CSDN introduces an adaptive source class weighting scheme to select the most relevant classes dynamically. Meanwhile, based on the designed target ambiguity score, CSDN emphasizes ambiguous target samples with larger inconsistent predictions to enable fine-grained alignment. Taking a step further, the weighting schemes in CSDN can be easily coupled with other PDA and DA methods to further boost their performance, demonstrating its flexibility. Extensive experiments verify that CSDN attains excellent results compared to state-of-the-art methods on four highly competitive benchmark datasets.
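The abstract describes an adaptive class weighting scheme and a target ambiguity score without giving formulas; the sketch below shows one common, hypothetical way to realize such quantities (class weights from averaged target predictions, ambiguity from the disagreement of two classifiers). CSDN's actual definitions may differ.

```python
# Hypothetical sketch: source class weights and a target ambiguity score.
import torch
import torch.nn.functional as F

def source_class_weights(target_logits):
    # Average target softmax per class; large weight -> class likely shared.
    probs = F.softmax(target_logits, dim=1)      # (N, C)
    w = probs.mean(dim=0)                        # (C,)
    return w / w.max()                           # normalize to [0, 1]

def ambiguity_score(logits_c1, logits_c2):
    # Per-sample L1 disagreement between two diverse classifiers.
    p1 = F.softmax(logits_c1, dim=1)
    p2 = F.softmax(logits_c2, dim=1)
    return (p1 - p2).abs().sum(dim=1)            # (N,)

logits1, logits2 = torch.randn(16, 10), torch.randn(16, 10)
print(source_class_weights(logits1).shape, ambiguity_score(logits1, logits2).shape)
```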

6.
IEEE Trans Vis Comput Graph ; 29(1): 139-148, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36155464

ABSTRACT

Tabular visualization techniques integrate visual representations with tabular data to avoid the additional cognitive load caused by splitting users' attention. However, most existing studies focus on simple flat tables instead of hierarchical tables, whose complex structure limits the expressiveness of visualization results and affects users' efficiency in visualization construction. We present HiTailor, a technique for presenting and exploring hierarchical tables. HiTailor constructs an abstract model, which defines row/column headings as biclustering and hierarchical structures. Based on our abstract model, we identify three pairs of operators, Swap/Transpose, ToStacked/ToLinear, and Fold/Unfold, for transforming hierarchical tables to support users' comprehensive explorations. After transformation, users can specify a cell or block of interest in hierarchical tables as a TableUnit for visualization, and HiTailor recommends other related TableUnits according to the abstract model using different mechanisms. We demonstrate the usability of the HiTailor system through a comparative study and a case study with domain experts, showing that HiTailor can present and explore hierarchical tables from different viewpoints. HiTailor is available at https://github.com/bitvis2021/HiTailor.
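As a loose illustration of operating on hierarchical headers (not HiTailor's actual data model), the hypothetical sketch below represents a header as a small tree and applies a Swap of two sibling subtrees, one of the operator pairs named above.

```python
# Hypothetical sketch: a hierarchical header tree with a Swap operator.
from dataclasses import dataclass, field

@dataclass
class Header:
    label: str
    children: list = field(default_factory=list)

    def swap(self, i, j):
        # Swap two sibling subtrees under this heading.
        self.children[i], self.children[j] = self.children[j], self.children[i]

    def leaves(self):
        if not self.children:
            return [self.label]
        return [leaf for child in self.children for leaf in child.leaves()]

year = Header("2023", [Header("Q1", [Header("Sales"), Header("Costs")]),
                       Header("Q2", [Header("Sales"), Header("Costs")])])
year.swap(0, 1)                 # reorder Q2 before Q1
print(year.leaves())            # leaf column order after the swap
```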

7.
IEEE Trans Image Process ; 31: 6733-6746, 2022.
Article in English | MEDLINE | ID: mdl-36282824

ABSTRACT

Few-shot segmentation aims at learning to segment query images guided by only a few annotated images from the support set. Previous methods rely on mining the feature embedding similarity between the query and the support images to achieve successful segmentation. However, these models tend to perform poorly in cases where the query instances have a large variance from the support ones. To enhance model robustness against such intra-class variance, we propose a Double Recalibration Network (DRNet) with two recalibration modules, i.e., the Self-adapted Recalibration (SR) module and the Cross-attended Recalibration (CR) module. In particular, beyond learning robust feature embeddings for pixel-wise comparison between support and query as in conventional methods, DRNet further exploits the semantic-aware knowledge embedded in the query image to help segment itself, which we call 'self-adapted recalibration'. More specifically, DRNet first employs guidance from the support set to roughly predict an incomplete but correct initial object region for the query image, and then reversely uses the feature embedding extracted from this incomplete object region to segment the query image. We also devise a CR module to refine the feature representation of the query image by propagating the underlying knowledge embedded in the support image's foreground to the query. Instead of foreground global pooling, we refine the response at each pixel in the query feature map by attending to all foreground pixels in the support feature map and taking the weighted average by their similarity; meanwhile, the query feature maps are added back to the weighted feature maps as a residual connection. With these two recalibration modules, DRNet can effectively address intra-class variance under the few-shot setting and mine more accurate target regions for query images. We conduct extensive experiments on the popular benchmarks PASCAL-5i and COCO-20i. With its best configuration, DRNet achieves mIoU of 63.6% and 64.9% on PASCAL-5i and 44.7% and 49.6% on COCO-20i for the 1-shot and 5-shot settings, respectively, significantly outperforming the state of the art without any bells and whistles. Code is available at: https://github.com/fangzy97/drnet.
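The CR module's core step, as described above, is pixel-wise attention over support foreground features with a residual connection. The sketch below is a minimal, hypothetical version of that step; shapes, normalization, and the temperature are assumptions rather than the authors' exact implementation.

```python
# Hypothetical sketch of cross-attended recalibration with a residual add.
import torch
import torch.nn.functional as F

def cross_attended_recalibration(query_feats, support_fg_feats, tau=1.0):
    """
    query_feats:      (Nq, D) query-pixel features
    support_fg_feats: (Ns, D) support foreground pixel features
    """
    q = F.normalize(query_feats, dim=1)
    s = F.normalize(support_fg_feats, dim=1)
    attn = F.softmax(q @ s.t() / tau, dim=1)    # (Nq, Ns) similarity weights
    refined = attn @ support_fg_feats           # weighted average of support fg
    return query_feats + refined                # residual connection

q = torch.randn(32, 64)
s = torch.randn(50, 64)
print(cross_attended_recalibration(q, s).shape)   # torch.Size([32, 64])
```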

8.
Article in English | MEDLINE | ID: mdl-36006880

ABSTRACT

Heterogeneous domain adaptation (HDA) is expected to achieve effective knowledge transfer from a label-rich source domain to a heterogeneous target domain with scarce labeled data. Most prior HDA methods strive to align the cross-domain feature distributions by learning domain invariant representations without considering the intrinsic semantic correlations among categories, which inevitably results in suboptimal adaptation performance across domains. To address this issue, we propose a novel semantic correlation transfer (SCT) method for HDA, which not only matches the marginal and conditional distributions between domains to mitigate the large domain discrepancy, but also transfers the category correlation knowledge underlying the source domain to the target by maximizing the pairwise class similarity across source and target. Technically, the domainwise and classwise centroids (prototypes) are first computed and aligned according to the feature embeddings. Then, based on the derived classwise prototypes, we leverage the cosine similarity of each pair of classes in both domains to effectively transfer the supervised source semantic correlation knowledge among different categories to the target. As a result, feature transferability and category discriminability can be simultaneously improved during the adaptation process. Comprehensive experiments and ablation studies on standard HDA tasks, such as text-to-image, image-to-image, and text-to-text, demonstrate the superiority of our proposed SCT against several state-of-the-art HDA methods.
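The abstract describes computing classwise prototypes and comparing pairwise class cosine similarities across domains; the sketch below shows one hypothetical way to turn that into a loss (matching the two class-similarity matrices with MSE). The exact SCT objective may differ, and all names here are illustrative.

```python
# Hypothetical sketch: classwise prototypes and a semantic correlation loss.
import torch
import torch.nn.functional as F

def class_prototypes(feats, labels, num_classes):
    protos = torch.stack([feats[labels == c].mean(dim=0)
                          for c in range(num_classes)])
    return F.normalize(protos, dim=1)            # (C, D), unit-norm prototypes

def semantic_correlation_loss(src_feats, src_y, tgt_feats, tgt_y, num_classes):
    src_p = class_prototypes(src_feats, src_y, num_classes)
    tgt_p = class_prototypes(tgt_feats, tgt_y, num_classes)
    # Pairwise class cosine-similarity matrices in each domain.
    return F.mse_loss(tgt_p @ tgt_p.t(), src_p @ src_p.t())

src_f, src_y = torch.randn(40, 32), torch.arange(40) % 4
tgt_f, tgt_y = torch.randn(40, 32), torch.arange(40) % 4
print(semantic_correlation_loss(src_f, src_y, tgt_f, tgt_y, 4).item())
```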

9.
IEEE Trans Pattern Anal Mach Intell ; 44(8): 4093-4109, 2022 Aug.
Article in English | MEDLINE | ID: mdl-33646945

ABSTRACT

Domain adaptation (DA) attempts to transfer knowledge learned in a labeled source domain to an unlabeled but related target domain without requiring large amounts of target supervision. Recent advances in DA mainly proceed by aligning the source and target distributions. Despite this significant success, adaptation performance still degrades when the source and target domains exhibit a large distribution discrepancy. We attribute this limitation to the insufficient exploration of domain-specialized features, because most studies concentrate solely on domain-general feature learning in task-specific layers and use fully shared convolutional networks (convnets) to generate common features for both domains. In this paper, we relax the completely-shared convnets assumption adopted by previous DA methods and propose the Domain Conditioned Adaptation Network (DCAN), which introduces a domain conditioned channel attention module with a multi-path structure to separately excite channel activations for each domain. Such a partially-shared convnets module allows low-level domain-specialized features to be explored appropriately. Further, given that knowledge transferability varies across convolutional layers, we develop the Generalized Domain Conditioned Adaptation Network (GDCAN) to automatically determine whether domain channel activations should be modeled separately in each attention module. The critical domain-specialized knowledge can then be adaptively extracted according to the domain statistic gaps. To the best of our knowledge, this is the first work to explore domain-wise convolutional channel activations separately for deep DA networks. Additionally, to effectively match high-level feature distributions across domains, we deploy feature adaptation blocks after the task-specific layers, which can explicitly mitigate the domain discrepancy. Extensive experiments on four cross-domain benchmarks, including DomainNet, Office-Home, Office-31, and ImageCLEF, demonstrate that the proposed approaches outperform existing methods by a large margin, especially on the large-scale challenging dataset. The code and models are available at https://github.com/BIT-DA/GDCAN.
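As a rough, hypothetical illustration of a domain conditioned channel attention module (not the authors' implementation), the sketch below uses a shared spatial squeeze followed by a separate excitation path per domain; layer widths and the reduction ratio are assumptions.

```python
# Hypothetical sketch: per-domain excitation paths for channel attention.
import torch
import torch.nn as nn

class DomainConditionedChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        def excitation():
            return nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid())
        self.paths = nn.ModuleDict({"source": excitation(), "target": excitation()})

    def forward(self, x, domain):
        # x: (B, C, H, W). Squeeze spatially, excite with the domain's own path.
        weights = self.paths[domain](x.mean(dim=(2, 3)))   # (B, C)
        return x * weights.unsqueeze(-1).unsqueeze(-1)

attn = DomainConditionedChannelAttention(channels=64)
x = torch.randn(2, 64, 8, 8)
print(attn(x, "target").shape)    # torch.Size([2, 64, 8, 8])
```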

10.
IEEE Trans Pattern Anal Mach Intell ; 43(7): 2329-2344, 2021 Jul.
Article in English | MEDLINE | ID: mdl-31944945

ABSTRACT

Deep domain adaptation methods have achieved appealing performance by learning transferable representations from a well-labeled source domain to a different but related unlabeled target domain. Most existing works assume that the source and target data share an identical label space, which is often difficult to satisfy in many real-world applications. With the emergence of big data, there is a more practical scenario called partial domain adaptation, where we typically have access to a large-scale source domain while working on a relatively small-scale target domain. In this case, the conventional domain adaptation assumption should be relaxed, and the target label space tends to be a subset of the source label space. Intuitively, reinforcing the positive effects of the most relevant source subclasses and reducing the negative impacts of irrelevant source subclasses are of vital importance for addressing the partial domain adaptation challenge. This paper proposes an efficiently implemented Deep Residual Correction Network (DRCN) that plugs one residual block into the source network alongside the task-specific feature layer, which effectively enhances the adaptation from source to target and explicitly weakens the influence of the irrelevant source classes. Specifically, the plugged residual block, which consists of several fully connected layers, deepens the basic network and correspondingly boosts its feature representation capability. Moreover, we design a weighted class-wise domain alignment loss to couple the two domains by matching the feature distributions of shared classes between source and target. Comprehensive experiments on partial, traditional, and fine-grained cross-domain visual recognition demonstrate that DRCN is superior to competitive deep domain adaptation approaches.
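To make the plugged-in residual correction idea concrete, the sketch below shows a hypothetical residual block of fully connected layers whose output is added back to the task-specific features; the layer widths and activation are assumptions, not DRCN's exact architecture.

```python
# Hypothetical sketch: a residual correction block over task-specific features.
import torch
import torch.nn as nn

class ResidualCorrectionBlock(nn.Module):
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.correction = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, dim))

    def forward(self, feats):
        # The learned correction is added back to the original features,
        # letting the block absorb source-specific shifts.
        return feats + self.correction(feats)

block = ResidualCorrectionBlock(dim=512)
print(block(torch.randn(4, 512)).shape)   # torch.Size([4, 512])
```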

11.
IEEE Trans Neural Netw Learn Syst ; 31(11): 4842-4856, 2020 11.
Article in English | MEDLINE | ID: mdl-31940560

ABSTRACT

Visual domain adaptation aims to learn an effective transferable model for unlabeled target images by benefiting from well-labeled source images that follow a different distribution. Many recent efforts focus on extracting domain-invariant image representations by exploring target pseudo labels, predicted by the source classifier, to further mitigate the conditional distribution shift across domains. However, two essential factors are overlooked by most existing methods: 1) the learned transferable features should be not only domain invariant but also category discriminative; and 2) the target pseudo label is a double-edged sword for cross-domain alignment; in other words, wrongly predicted target labels may hinder class-wise domain matching. In this article, to address these two issues simultaneously, we propose a discriminative transfer feature and label consistency (DTLC) approach for visual domain adaptation, which naturally unifies cross-domain alignment with discriminative information preserved and label consistency of source and target data in one framework. To be specific, DTLC first incorporates class-discriminative information into the distribution alignment of both domains by penalizing, for each sample, the maximum distance to same-class data pairs and the minimum distance to differently labeled data pairs. The target pseudo labels are then refined based on label consistency within the domains. Thus, transfer feature learning and coarse-to-fine target labeling are coupled and benefit each other iteratively. Comprehensive experiments on several visual cross-domain benchmarks verify that DTLC gains remarkable margins over state-of-the-art (SOTA) non-deep visual domain adaptation methods and is even comparable to competitive deep domain adaptation ones.
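The per-sample pair penalties described above are not given as a formula; the sketch below shows one hypothetical margin-style combination (largest same-class distance minus smallest different-class distance), which may differ from DTLC's exact formulation.

```python
# Hypothetical sketch: per-sample max intra-class / min inter-class distances.
import torch

def discriminative_pair_loss(feats, labels):
    d = torch.cdist(feats, feats)                       # (N, N) pairwise distances
    same = labels.unsqueeze(0) == labels.unsqueeze(1)   # same-class mask
    self_mask = torch.eye(len(feats), dtype=torch.bool)
    max_intra = d.masked_fill(~same | self_mask, float("-inf")).amax(dim=1)
    min_inter = d.masked_fill(same, float("inf")).amin(dim=1)
    # Penalize samples whose farthest same-class neighbor is farther away
    # than their nearest differently labeled neighbor.
    return (max_intra - min_inter).clamp(min=0).mean()

feats = torch.randn(12, 8)
labels = torch.arange(12) % 3
print(discriminative_pair_loss(feats, labels).item())
```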

12.
Sensors (Basel) ; 15(6): 12323-41, 2015 May 26.
Article in English | MEDLINE | ID: mdl-26016916

ABSTRACT

Energy consumption is a major concern in context-aware smartphone sensing. This paper first studies mobile-device battery modeling based on the kinetic battery model (KiBaM), focusing on battery non-linearities under varying loads. Second, it models the energy consumption behavior of accelerometers analytically and provides extensive simulation results and a smartphone application to examine the proposed sensor model. Third, a Markov reward process is integrated to create energy consumption profiles, linking sensory operations to their effects on battery non-linearity. Energy consumption profiles consist of different pairs of duty cycles and sampling frequencies during sensory operations, and the total energy cost of each profile is represented by an accumulated reward in this process. Finally, three different methods are proposed for the evolution of the reward process to relate different accelerometer usage patterns, exercised through a smartphone application, to battery behavior. In doing so, this paper aims to achieve high efficiency in the power consumed by sensory operations while maintaining the accuracy of sensor-based smartphone applications. More importantly, this study suggests that modeling battery non-linearities, together with investigating how different sensory usage patterns affect power consumption and battery discharge, may lead to optimal energy-reduction strategies that extend battery lifetime and support continual improvement of context-aware mobile services.
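For readers unfamiliar with KiBaM, the sketch below is a minimal forward-Euler simulation of its two charge wells (available charge y1, a fraction c of capacity, and bound charge y2), with the battery considered empty when the available well is exhausted. Parameter values and the load profiles are illustrative assumptions, not the paper's experimental settings.

```python
# Hypothetical sketch: forward-Euler integration of the kinetic battery model.
def kibam_lifetime(load, capacity=1000.0, c=0.625, k=4.5e-4, dt=1.0):
    """load: function t -> current draw; returns time until y1 is exhausted."""
    y1, y2, t = c * capacity, (1 - c) * capacity, 0.0
    while y1 > 0:
        h1, h2 = y1 / c, y2 / (1 - c)      # "heights" of the two wells
        flow = k * (h2 - h1)               # charge migrating bound -> available
        y1 += dt * (flow - load(t))
        y2 -= dt * flow
        t += dt
    return t

# Same average current, different duty cycle: under KiBaM the two profiles
# generally yield different lifetimes, which is the load-dependent
# non-linearity the paper models.
steady = kibam_lifetime(lambda t: 0.5)
bursty = kibam_lifetime(lambda t: 1.0 if int(t) % 20 < 10 else 0.0)
print(round(steady), round(bursty))
```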
