Results 1 - 10 of 10
1.
IEEE J Biomed Health Inform ; 28(7): 4062-4071, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38662561

ABSTRACT

In clinical settings, the acquisition of certain medical imaging modalities is often unavailable due to considerations such as cost and radiation exposure. Unpaired cross-modality translation techniques, which train on unpaired data and synthesize the target modality under the guidance of the acquired source modality, are therefore of great interest. Previous methods for synthesizing target medical images establish a one-shot mapping through generative adversarial networks (GANs). As promising alternatives to GANs, diffusion models have recently received wide interest in generative tasks. In this paper, we propose a target-guided diffusion model (TGDM) for unpaired cross-modality medical image translation. For training, to encourage the diffusion model to learn more visual concepts, we apply a perception-prioritized weighting scheme (P2W) to the training objective. For sampling, a pre-trained classifier is adopted in the reverse process to suppress modality-specific remnants of the source data. Experiments on both brain MRI-CT and prostate MRI-US datasets demonstrate that the proposed method achieves visually realistic results that mimic vivid anatomical sections of the target organ. In addition, we conducted a subjective assessment based on the synthesized samples to further validate the clinical value of TGDM.
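As a rough illustration of the two ingredients described in this abstract, the sketch below combines a perception-prioritized (P2-style) weighting of the denoising loss with classifier guidance in the reverse step. The SNR-based weighting formula and the `denoiser`/`classifier` interfaces are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def p2_weighted_loss(denoiser, x0, t, alphas_cumprod, gamma=1.0, k=1.0):
    """Denoising loss with a perception-prioritized-style weight w(t) = 1 / (k + SNR(t))^gamma.
    `denoiser` is assumed to predict the added noise; interfaces are illustrative."""
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(-1, 1, 1, 1)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise      # forward diffusion sample
    snr = a_bar / (1 - a_bar)                                  # signal-to-noise ratio at step t
    weight = 1.0 / (k + snr) ** gamma                          # de-emphasizes low-noise steps
    return (weight * (denoiser(x_t, t) - noise) ** 2).mean()

def classifier_guided_mean(mean, variance, x_t, t, classifier, target_label, scale=1.0):
    """Shift the reverse-process mean toward the target modality using a pre-trained classifier."""
    x_in = x_t.detach().requires_grad_(True)
    log_prob = F.log_softmax(classifier(x_in, t), dim=-1)[:, target_label].sum()
    grad = torch.autograd.grad(log_prob, x_in)[0]
    return mean + scale * variance * grad                      # guided mean for sampling x_{t-1}
```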


Subjects
Brain, Prostate, Humans, Brain/diagnostic imaging, Prostate/diagnostic imaging, Male, Magnetic Resonance Imaging/methods, Image Processing, Computer-Assisted/methods, Tomography, X-Ray Computed/methods, Algorithms, Multimodal Imaging/methods
2.
Article in English | MEDLINE | ID: mdl-37022400

ABSTRACT

In many practical applications, massive data are observed from multiple sources, each of which contains multiple cohesive views; such data are called hierarchical multiview (HMV) data, for example image-text objects with different types of visual and textual features. Naturally, including both source and view relationships offers a comprehensive view of the input HMV data and yields an informative and correct clustering result. However, most existing multiview clustering (MVC) methods can only process single-source data with multiple views or multisource data with a single type of feature, failing to consider all the views across multiple sources. Observing the rich, closely related multivariate (i.e., source and view) information and the potential dynamic information flow interacting among them, in this article a general hierarchical information propagation model is first built to address this challenging problem. It describes the process from optimal feature subspace learning (OFSL) of each source to final clustering structure learning (CSL). Then, a novel self-guided method named propagating information bottleneck (PIB) is proposed to realize the model. It works in a circulating propagation fashion, so that the clustering structure obtained from the last iteration can "self-guide" the OFSL of each source, and the learned subspaces are in turn used to conduct the subsequent CSL. We theoretically analyze the relationship between the cluster structures learned in the CSL phase and the preservation of relevant information propagated from the OFSL phase. Finally, a two-step alternating optimization method is carefully designed for optimization. Experimental results on various datasets show the superiority of the proposed PIB method over several state-of-the-art methods.
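The circulating propagation described above can be pictured as an alternating loop: per-source subspace learning guided by the current clustering, followed by clustering on the learned subspaces. The sketch below is only a structural skeleton under that reading; `learn_subspace`-style guidance via one-hot labels and PCA is a placeholder, not the paper's information-bottleneck objective.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def pib_skeleton(sources, n_clusters=3, n_iters=5, dim=10):
    """Structural skeleton of circulating propagation: the clustering from the previous
    round 'self-guides' per-source subspace learning (OFSL), whose output feeds the next
    clustering structure learning (CSL) step. Guidance here (appending one-hot labels
    before PCA) is a placeholder for the paper's actual objective."""
    n = sources[0][0].shape[0]
    labels = np.zeros(n, dtype=int)                       # initial (trivial) clustering
    for _ in range(n_iters):
        one_hot = np.eye(n_clusters)[labels]              # previous structure as guidance
        subspaces = []
        for views in sources:                             # OFSL for each source
            stacked = np.hstack(list(views) + [one_hot])
            subspaces.append(PCA(n_components=dim).fit_transform(stacked))
        fused = np.hstack(subspaces)                      # CSL on all learned subspaces
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(fused)
    return labels
```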

3.
IEEE Trans Cybern ; 52(6): 4260-4274, 2022 Jun.
Article in English | MEDLINE | ID: mdl-33085626

ABSTRACT

Multiview clustering (MVC) has recently been the focus of much attention due to its ability to partition data from multiple views via view correlations. However, most MVC methods only learn either interfeature correlations or intercluster correlations, which may lead to unsatisfactory clustering performance. To address this issue, we propose a novel dual-correlated multivariate information bottleneck (DMIB) method for MVC. DMIB is able to explore both interfeature correlations (the relationship among multiple distinct feature representations from different views) and intercluster correlations (the close agreement among clustering results obtained from individual views). For the former, we integrate both view-shared feature correlations discovered by learning a shared discriminative feature subspace and view-specific feature information to fully explore the interfeature correlation. This allows us to attain multiple reliable local clustering results of different views. Following this, we explore the intercluster correlations by learning the shared mutual information over different local clusterings for an improved global partition. By integrating both correlations, we formulate the problem as a unified information maximization function and further design a two-step method for optimization. Moreover, we theoretically prove the convergence of the proposed algorithm, and discuss the relationships between our method and several existing clustering paradigms. The experimental results on multiple datasets demonstrate the superiority of DMIB compared to several state-of-the-art clustering methods.
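The intercluster correlation step described above rests on the mutual information shared by local clusterings obtained from different views. As a point of reference, the agreement between two local label assignments can be quantified as below; DMIB itself optimizes such quantities inside a unified objective rather than merely measuring them, and the label vectors here are hypothetical.

```python
from sklearn.metrics import mutual_info_score, normalized_mutual_info_score

# Hypothetical local clustering results for the same samples seen from two views.
labels_view1 = [0, 0, 1, 1, 2, 2, 2, 0]
labels_view2 = [1, 1, 0, 0, 2, 2, 0, 1]

print(mutual_info_score(labels_view1, labels_view2))             # shared information (nats)
print(normalized_mutual_info_score(labels_view1, labels_view2))  # scaled to [0, 1]
```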


Subjects
Algorithms, Learning, Cluster Analysis
4.
IEEE Trans Image Process ; 30: 7472-7485, 2021.
Article in English | MEDLINE | ID: mdl-34449363

ABSTRACT

The goal of this paper is guided image filtering, which emphasizes the importance of structure transfer during filtering by means of an additional guidance image. Where classical guided filters transfer structures using hand-designed functions, recent guided filters have been considerably advanced through parametric learning of deep networks. The state-of-the-art leverages deep networks to estimate the two core coefficients of the guided filter. In this work, we posit that simultaneously estimating both coefficients is suboptimal, resulting in halo artifacts and structure inconsistencies. Inspired by unsharp masking, a classical technique for edge enhancement that requires only a single coefficient, we propose a new and simplified formulation of the guided filter. Our formulation enjoys a filtering prior from a low-pass filter and enables explicit structure transfer by estimating a single coefficient. Based on our proposed formulation, we introduce a successive guided filtering network, which provides multiple filtering results from a single network, allowing for a trade-off between accuracy and efficiency. Extensive ablations, comparisons and analysis show the effectiveness and efficiency of our formulation and network, resulting in state-of-the-art results across filtering tasks like upsampling, denoising, and cross-modality filtering. Code is available at https://github.com/shizenglin/Unsharp-Mask-Guided-Filtering.
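In the spirit of unsharp masking, the single-coefficient formulation referred to above can be read as: output = low-pass(target) + coefficient * (guidance - low-pass(guidance)). A minimal non-learned sketch follows, with a box filter standing in for the low-pass prior; the fixed scalar `coeff` is an assumption, whereas the paper estimates the coefficient with a deep network.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def unsharp_style_guided_filter(target, guidance, radius=4, coeff=0.8):
    """Unsharp-masking-style guided filtering: smooth the target with a low-pass prior,
    then transfer the guidance's high-frequency structure scaled by a single coefficient.
    `coeff` is a fixed scalar here purely for illustration."""
    size = 2 * radius + 1
    target = target.astype(np.float64)
    guidance = guidance.astype(np.float64)
    target_lp = uniform_filter(target, size)        # low-pass filtered target
    guidance_lp = uniform_filter(guidance, size)    # low-pass filtered guidance
    detail = guidance - guidance_lp                 # structure to transfer
    return target_lp + coeff * detail
```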

6.
IEEE Trans Pattern Anal Mach Intell ; 43(4): 1460-1466, 2021 04.
Article in English | MEDLINE | ID: mdl-32142419

ABSTRACT

Is a recurrent network really necessary for learning a good visual representation for video-based person re-identification (VPRe-id)? In this paper, we first show that the common practice of employing recurrent neural networks (RNNs) to aggregate temporal-spatial features may not be optimal. Specifically, with a diagnostic analysis, we show that the recurrent structure may not be as effective at learning temporal dependencies as expected and implicitly yields an orderless representation. Based on this observation, we then present a simple yet surprisingly powerful approach for VPRe-id, where we treat VPRe-id as an efficient orderless ensemble of image-based person re-identification problems. More specifically, we divide videos into individual images and re-identify persons with an ensemble of image-based rankers. Under the i.i.d. assumption, we provide an error bound that sheds light upon how VPRe-id could be improved. Our work also presents a promising way to bridge the gap between video- and image-based person re-identification. Comprehensive experimental evaluations demonstrate that the proposed solution achieves state-of-the-art performance on multiple widely used datasets (iLIDS-VID, PRID 2011, and MARS).
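Treating VPRe-id as an orderless ensemble of image-based rankers reduces, in its simplest form, to scoring each gallery track by aggregating frame-to-frame distances. A minimal sketch with plain Euclidean distances follows; the image-based embedding model and the mean-distance aggregation rule are assumptions, not the paper's exact ranker.

```python
import numpy as np

def rank_gallery(query_feats, gallery_tracks):
    """Orderless ensemble re-id: each query/gallery frame is embedded independently
    (embeddings assumed precomputed), and a track's score is the mean pairwise
    distance between query frames and the track's frames."""
    scores = []
    for track in gallery_tracks:                                   # track: (n_frames, d)
        d = np.linalg.norm(query_feats[:, None, :] - track[None, :, :], axis=-1)
        scores.append(d.mean())                                    # average over all frame pairs
    return np.argsort(scores)                                      # best-matching tracks first
```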

7.
IEEE Trans Pattern Anal Mach Intell ; 43(3): 982-998, 2021 Mar.
Article in English | MEDLINE | ID: mdl-31562072

ABSTRACT

Nonlinear regression has been extensively employed in many computer vision problems (e.g., crowd counting, age estimation, affective computing). Under the umbrella of deep learning, two common solutions exist: i) transforming nonlinear regression into a robust loss function that is jointly optimizable with the deep convolutional network, and ii) utilizing an ensemble of deep networks. Although some improved performance is achieved, the former may be limited by the intrinsic constraint of choosing a single hypothesis, and the latter may suffer from much larger computational complexity. To cope with these issues, we propose to regress in an efficient "divide and conquer" manner. The core of our approach is a generalization of negative correlation learning, which has been shown, both theoretically and empirically, to work well for non-deep regression problems. Without extra parameters, the proposed method controls the bias-variance-covariance trade-off systematically and usually yields a deep regression ensemble in which each base model is both "accurate" and "diversified." Moreover, we show that each sub-problem in the proposed method has lower Rademacher complexity and is thus easier to optimize. Extensive experiments on several diverse and challenging tasks, including crowd counting, personality analysis, age estimation, and image super-resolution, demonstrate the superiority over challenging baselines as well as the versatility of the proposed method. The source code and trained models are available on our project page: https://mmcheng.net/dncl/.
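A negative-correlation-learning objective of the kind generalized above penalizes each base model's own error while rewarding disagreement with the ensemble mean, which is what exposes the bias-variance-covariance trade-off to explicit control. A hedged PyTorch sketch of such a loss follows; the exact weighting used in the paper may differ.

```python
import torch

def ncl_ensemble_loss(preds, target, lam=0.5):
    """Negative-correlation-learning-style loss for a regression ensemble.
    preds: (M, B) predictions of M base models on a batch of size B; target: (B,).
    Each model is penalized for its own error and rewarded (lam > 0) for deviating
    from the ensemble mean, encouraging accurate yet diverse members."""
    ensemble = preds.mean(dim=0, keepdim=True)                 # (1, B) ensemble prediction
    accuracy = ((preds - target.unsqueeze(0)) ** 2).mean()     # average individual error
    diversity = ((preds - ensemble) ** 2).mean()               # spread around the ensemble
    return accuracy - lam * diversity
```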

8.
Med Image Anal ; 58: 101537, 2019 12.
Article in English | MEDLINE | ID: mdl-31446280

ABSTRACT

Knowledge of whole heart anatomy is a prerequisite for many clinical applications. Whole heart segmentation (WHS), which delineates the substructures of the heart, can be very valuable for modeling and analyzing cardiac anatomy and function. However, automating this segmentation is challenging due to the large variation in heart shape and the differing image quality of clinical data. To achieve this goal, an initial set of training data is generally needed for constructing priors or for training. Furthermore, it is difficult to compare different methods, largely due to differences in the datasets and evaluation metrics used. This manuscript presents the methodologies and evaluation results for the WHS algorithms selected from the submissions to the Multi-Modality Whole Heart Segmentation (MM-WHS) challenge, held in conjunction with MICCAI 2017. The challenge provided 120 three-dimensional cardiac images covering the whole heart, including 60 CT and 60 MRI volumes, all acquired in clinical environments and manually delineated. Ten algorithms for CT data and eleven algorithms for MRI data, submitted by twelve groups, were evaluated. The results showed that the performance of CT WHS was generally better than that of MRI WHS. The segmentation of the substructures for different categories of patients could present different levels of challenge due to differences in imaging and variations in heart shape. The deep learning (DL)-based methods demonstrated great potential, though several of them reported poor results in the blinded evaluation; their performance could vary greatly across different network structures and training strategies. The conventional algorithms, mainly based on multi-atlas segmentation, demonstrated good performance, though their accuracy and computational efficiency could be limited. The challenge, including provision of the annotated training data and blinded evaluation of submitted algorithms on the test data, continues as an ongoing benchmarking resource via its homepage (www.sdspeople.fudan.edu.cn/zhuangxiahai/0/mmwhs/).
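Benchmark evaluations such as MM-WHS typically report overlap metrics per cardiac substructure; the Dice coefficient sketched below is the standard one, while surface-distance measures that challenges also use are not shown here. The label map layout is an assumption for illustration.

```python
import numpy as np

def dice_per_structure(pred, gt, labels):
    """Dice overlap per labeled substructure between a predicted and a ground-truth
    segmentation volume (integer label maps of identical shape)."""
    scores = {}
    for label in labels:
        p, g = pred == label, gt == label
        denom = p.sum() + g.sum()
        scores[label] = 2.0 * np.logical_and(p, g).sum() / denom if denom else float("nan")
    return scores
```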


Subjects
Algorithms, Heart/anatomy & histology, Magnetic Resonance Imaging, Tomography, X-Ray Computed, Datasets as Topic, Humans, Image Processing, Computer-Assisted/methods
9.
IEEE Trans Neural Netw Learn Syst ; 30(6): 1867-1880, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30387747

ABSTRACT

The balance of the neighborhood space around a central point is an important concept in cluster analysis. It can be used to effectively detect cluster boundary objects. Existing neighborhood analysis methods focus on the distribution of data, i.e., they analyze the characteristics of the neighborhood space from a single perspective and cannot capture rich data characteristics. In this paper, we analyze the high-dimensional neighborhood space from multiple perspectives. By modeling each dimension of a data point's k-nearest-neighbor space (kNN space) as a lever, we apply the lever principle to compute the balance fulcrum of each dimension after proving its existence and uniqueness. Then, we model the distance between the projected coordinate of the data point and the balance fulcrum on each dimension and construct the DHBlan coefficient to measure the balance of the neighborhood space. Based on this theoretical model, we propose a simple yet effective cluster boundary detection algorithm called Lever. Experiments on both low- and high-dimensional datasets validate the effectiveness and efficiency of the proposed algorithm.
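Read informally, the per-dimension "lever" view asks how unevenly a point's k nearest neighbors are distributed around a balance point along each coordinate. The sketch below uses the neighbors' per-dimension mean as a stand-in for the balance fulcrum and averages the point-to-fulcrum offsets; the paper's actual fulcrum construction and DHBlan coefficient are more involved, so this is only an illustrative simplification.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def neighborhood_imbalance(X, k=10):
    """Simplified neighborhood-balance score per point: for each dimension, take the
    mean coordinate of the k nearest neighbors as a surrogate balance fulcrum and
    measure the point's offset from it. Large average offsets suggest boundary points."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                       # idx[:, 0] is the point itself
    fulcrums = X[idx[:, 1:]].mean(axis=1)           # (n, d) per-dimension balance points
    return np.abs(X - fulcrums).mean(axis=1)        # average offset across dimensions
```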

10.
Neural Netw ; 83: 21-31, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27543927

ABSTRACT

Pooling is a key mechanism in deep convolutional neural networks (CNNs) that helps to achieve translation invariance. Numerous studies, both empirical and theoretical, show that pooling consistently boosts the performance of CNNs. Conventional pooling methods operate on activation values. In this work, we instead propose rank-based pooling. It is derived from the observation that the ranking list is invariant under changes of activation values in a pooling region, so rank-based pooling may achieve more robust performance. In addition, a reasonable use of ranks avoids the scale problems encountered by value-based methods. The novel pooling mechanism can be regarded as an instance of weighted pooling, in which a weighted sum of activations generates the pooling output. Depending on the weighting strategy, it can be realized as rank-based average pooling (RAP), rank-based weighted pooling (RWP), or rank-based stochastic pooling (RSP). As another major contribution, we present a novel criterion to analyze the discriminative ability of various pooling methods, an issue that is heavily under-researched in the machine learning and computer vision communities. Experimental results on several image benchmarks show that rank-based pooling outperforms existing pooling methods in classification performance. We further demonstrate better performance on the CIFAR datasets by integrating RSP into Network-in-Network.
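Rank-based average pooling, for instance, averages only the top-t activations by rank within each window, so the output depends on the ordering rather than the raw magnitudes alone. A small NumPy sketch of RAP over non-overlapping windows follows; the window size and t are illustrative choices, not the paper's fixed settings.

```python
import numpy as np

def rank_based_average_pooling(x, window=2, t=2):
    """Rank-based average pooling (RAP): within each non-overlapping window, rank the
    activations and average the top-t ones. Giving the top-t ranks equal weight (and
    the rest zero) is one member of the rank-based weighted-pooling family."""
    h, w = x.shape
    out = np.empty((h // window, w // window))
    for i in range(0, h - h % window, window):
        for j in range(0, w - w % window, window):
            patch = x[i:i + window, j:j + window].ravel()
            top_t = np.sort(patch)[::-1][:t]            # highest-ranked activations
            out[i // window, j // window] = top_t.mean()
    return out
```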


Subjects
Neural Networks, Computer, Machine Learning