Results 1 - 12 of 12
1.
IEEE J Biomed Health Inform; 28(3): 1516-1527, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38206781

ABSTRACT

Breast lesion segmentation in ultrasound images is essential for computer-aided breast-cancer diagnosis. To improve segmentation performance, most approaches design sophisticated deep-learning models that mine the patterns of foreground lesions and normal backgrounds simultaneously, or that unilaterally enhance foreground lesions via various focal losses. However, the potential of normal backgrounds is underutilized: compacting the feature representation of all normal backgrounds could reduce false positives. From the novel viewpoint of bilateral enhancement, we propose a negative-positive cross-attention network whose two paths concentrate on normal backgrounds and foreground lesions, respectively. Inspired by the complementary opposites of bipolarity in TaiChi, the network is named TaiChiNet; it consists of a negative normal-background path and a positive foreground-lesion path. To transmit information across the two paths, a cross-attention module, a complementary MLP head, and a complementary loss are built for deep-layer features, shallow-layer features, and mutual-learning supervision, respectively. To the best of our knowledge, this is the first work to formulate breast lesion segmentation as a mutual supervision task from the foreground-lesion and normal-background views. Experimental results demonstrate the effectiveness of TaiChiNet, with a lightweight architecture, on two breast lesion segmentation datasets. Furthermore, extensive experiments on thyroid nodule segmentation and retinal optic cup/disc segmentation datasets indicate the broader application potential of TaiChiNet.
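To make the cross-attention idea concrete, below is a minimal, generic sketch of a cross-attention module that lets one feature path attend to the other; the class name, channel sizes, and use of standard multi-head attention are assumptions for illustration and do not reproduce TaiChiNet's actual module.

```python
# Hypothetical sketch of cross-attention between two feature paths
# (negative/background and positive/lesion); layer sizes are illustrative.
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, query_feat, context_feat):
        # query_feat, context_feat: (B, C, H, W) deep-layer features from the two paths.
        b, c, h, w = query_feat.shape
        q = query_feat.flatten(2).transpose(1, 2)      # (B, H*W, C)
        kv = context_feat.flatten(2).transpose(1, 2)   # (B, H*W, C)
        out, _ = self.attn(q, kv, kv)                  # attend to the other path
        out = self.norm(out + q)                       # residual connection + norm
        return out.transpose(1, 2).reshape(b, c, h, w)

# Example: the lesion path queries the background path
# (the reverse direction would use a second module).
lesion_feat = torch.randn(2, 64, 32, 32)
background_feat = torch.randn(2, 64, 32, 32)
cross = CrossAttention(64)
lesion_enhanced = cross(lesion_feat, background_feat)
```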


Subjects
Breast Neoplasms; Optic Disk; Humans; Female; Ultrasonography; Breast Neoplasms/diagnostic imaging; Diagnosis, Computer-Assisted; Knowledge; Image Processing, Computer-Assisted
2.
J Stat Plan Inference; 228: 34-45, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38264292

ABSTRACT

Expression quantitative trait locus (eQTL) analysis is a useful tool for identifying genetic loci that are associated with gene expression levels. Large collaborative efforts such as the Genotype-Tissue Expression (GTEx) project provide valuable resources for eQTL analysis in different tissues. Most existing methods, however, either focus on one tissue at a time or analyze multiple tissues to identify eQTLs jointly present in multiple tissues. There is a lack of powerful methods for identifying eQTLs in a target tissue while effectively borrowing strength from auxiliary tissues. In this paper, we propose a novel statistical framework to improve eQTL detection in the tissue of interest with auxiliary information from other tissues. The framework enhances the power of the hypothesis test for eQTL effects by incorporating shared and tissue-specific effects from multiple tissues into the test statistics. We also devise data-driven and distributed-computing approaches for efficient implementation of eQTL detection when the number of tissues is large. Simulation studies demonstrate the efficacy of the proposed method, and a real-data analysis of the GTEx example provides novel insights into eQTL findings in different tissues.
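As a rough illustration of borrowing strength across tissues, the sketch below combines a target-tissue association z-score with the average of auxiliary-tissue z-scores into a single test statistic; the weighting scheme, the independence assumption, and the function name are hypothetical and are not the paper's actual statistic.

```python
# Hypothetical illustration of borrowing strength across tissues:
# combine the target-tissue z-score with the mean of auxiliary-tissue
# z-scores, then compare against a standard normal reference.
import numpy as np
from scipy.stats import norm

def combined_eqtl_stat(z_target: float, z_aux: np.ndarray, w: float = 0.5) -> tuple:
    """Weighted combination of target and auxiliary evidence (illustrative only)."""
    z_shared = z_aux.mean()
    # Variance of the combination under a simplifying independence assumption.
    var = (1 - w) ** 2 + w ** 2 / len(z_aux)
    z_comb = ((1 - w) * z_target + w * z_shared) / np.sqrt(var)
    p_value = 2 * norm.sf(abs(z_comb))
    return z_comb, p_value

z_comb, p = combined_eqtl_stat(z_target=2.1, z_aux=np.array([1.8, 2.4, 0.9]))
print(f"combined z = {z_comb:.2f}, p = {p:.3g}")
```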

3.
IEEE Trans Med Imaging; 43(5): 1664-1676, 2024 May.
Article in English | MEDLINE | ID: mdl-38109240

ABSTRACT

Structural magnetic resonance imaging (sMRI) has been widely applied in computer-aided Alzheimer's disease (AD) diagnosis, owing to its ability to provide detailed brain morphometric patterns and anatomical features in vivo. Although previous works have validated the effectiveness of incorporating metadata (e.g., age, gender, and years of education) into sMRI-based AD diagnosis, existing methods have attended only to metadata correlations with AD (e.g., gender bias in AD prevalence) or to confounding effects (e.g., normal aging and metadata-related heterogeneity). Hence, it is difficult to fully exploit the influence of metadata on AD diagnosis. To address these issues, we constructed a novel Multi-template Meta-information Regularized Network (MMRN) for AD diagnosis. Specifically, considering the diagnostic variation resulting from spatial transformations onto different brain templates, we first regarded the different transformations as data augmentation for self-supervised learning after template selection. Since confounding effects may arise from excessive attention to meta-information owing to its correlation with AD, we then designed weakly supervised meta-information learning and mutual-information-minimization modules to learn and disentangle meta-information from the learned class-related representations, which provides meta-information regularization for disease diagnosis. We evaluated the proposed MMRN on two public multi-center cohorts: the Alzheimer's Disease Neuroimaging Initiative (ADNI) with 1,950 subjects and the National Alzheimer's Coordinating Center (NACC) with 1,163 subjects. The experimental results show that the proposed method outperforms state-of-the-art approaches on AD diagnosis, mild cognitive impairment (MCI) conversion prediction, and normal control (NC) vs. MCI vs. AD classification.
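As a loose illustration of disentangling meta-information from class-related representations, the snippet below penalizes the cross-correlation between the two embeddings; this is a simple stand-in for the paper's mutual-information-minimization module, and all names and tensor shapes are hypothetical.

```python
# Hypothetical stand-in for mutual-information minimization: penalize the
# cross-correlation between meta-information and class-related embeddings.
import torch

def cross_correlation_penalty(class_feat: torch.Tensor, meta_feat: torch.Tensor) -> torch.Tensor:
    # class_feat: (B, D1), meta_feat: (B, D2); both standardized per batch.
    c = (class_feat - class_feat.mean(0)) / (class_feat.std(0) + 1e-6)
    m = (meta_feat - meta_feat.mean(0)) / (meta_feat.std(0) + 1e-6)
    corr = c.T @ m / c.shape[0]          # (D1, D2) cross-correlation matrix
    return corr.pow(2).mean()            # drive cross-correlations toward zero

penalty = cross_correlation_penalty(torch.randn(32, 128), torch.randn(32, 16))
```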


Subjects
Alzheimer Disease; Brain; Magnetic Resonance Imaging; Alzheimer Disease/diagnostic imaging; Humans; Magnetic Resonance Imaging/methods; Brain/diagnostic imaging; Aged; Female; Male; Image Interpretation, Computer-Assisted/methods; Aged, 80 and over; Algorithms
4.
RSC Adv; 13(22): 14797-14807, 2023 May 15.
Article in English | MEDLINE | ID: mdl-37197186

ABSTRACT

Fluorinated hard carbon materials are considered good candidates for the cathodes of Li/CFx batteries. However, the effect of the precursor structure of the hard carbon on the structure and electrochemical performance of fluorinated carbon cathode materials has yet to be fully studied. In this paper, a series of fluorinated hard carbon (FHC) materials are prepared by gas-phase fluorination, using saccharides with different degrees of polymerization as the carbon source, and their structures and electrochemical properties are studied. The experimental results show that the specific surface area, pore structure, and degree of defects of the hard carbon (HC) are enhanced as the degree of polymerization (i.e., molecular weight) of the starting saccharide increases. At the same time, the F/C ratio after fluorination at the same temperature increases, and the contents of electrochemically inactive -CF2 and -CF3 groups also become higher. At a fluorination temperature of 500 °C, the obtained fluorinated glucose pyrolytic carbon shows good electrochemical properties, with a specific capacity of 876 mA h g-1, an energy density of 1872 W h kg-1, and a power density of 3740 W kg-1. This study provides valuable insights and references for selecting suitable hard carbon precursors to develop high-performance fluorinated carbon cathode materials.
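A quick sanity check on the reported figures: dividing the energy density by the specific capacity gives the implied average discharge voltage, about 2.1 V, which is in the typical range for CFx cathodes. The short script below simply reproduces that arithmetic with the values quoted above.

```python
# Sanity check: implied average discharge voltage from the reported
# specific capacity and energy density (values taken from the abstract).
specific_capacity_mAh_per_g = 876    # mA h g^-1
energy_density_Wh_per_kg = 1872      # W h kg^-1 (equivalently mW h g^-1)

avg_voltage = energy_density_Wh_per_kg / specific_capacity_mAh_per_g
print(f"implied average discharge voltage ~ {avg_voltage:.2f} V")  # ~2.14 V
```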

5.
Article in English | MEDLINE | ID: mdl-37022080

ABSTRACT

Medical image segmentation is a vital stage of medical image analysis. Owing to the fast growth of convolutional neural networks, numerous deep-learning methods have been proposed to improve the performance of 2-D medical image segmentation. Generally, the manually defined ground truth is used directly to supervise models in the training phase. However, direct supervision with the ground truth often results in ambiguity and distractors, as complex challenges appear simultaneously. To alleviate this issue, we propose a gradually recurrent network with curriculum learning, supervised by gradually revealed information from the ground truth. The whole model is composed of two independent networks. One is the segmentation network, denoted GREnet, which formulates 2-D medical image segmentation as a temporal task supervised by pixel-level gradual curricula in the training phase. The other is a curriculum-mining network, which provides curricula of increasing difficulty from the ground truth of the training set by progressively uncovering hard-to-segment pixels in a data-driven manner. Given that segmentation is a pixel-level dense-prediction challenge, to the best of our knowledge, this is the first work to formulate 2-D medical image segmentation as a temporal task with pixel-level curriculum learning. In GREnet, a plain UNet is adopted as the backbone, while a ConvLSTM establishes the temporal links between gradual curricula. In the curriculum-mining network, a UNet++ supplemented with a transformer delivers curricula through the outputs of the modified UNet++ at different layers. Experimental results demonstrate the effectiveness of GREnet on seven datasets: three lesion segmentation datasets of dermoscopic images, an optic disc and cup segmentation dataset and a blood vessel segmentation dataset of retinal images, a breast lesion segmentation dataset of ultrasound images, and a lung segmentation dataset of computed tomography (CT) images.
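As a generic illustration of a pixel-level gradual curriculum (not the paper's curriculum-mining network), the sketch below supervises only the pixels whose difficulty falls below a threshold that grows with the training stage, so harder pixels enter the loss later; the difficulty map, function name, and loss choice are assumptions.

```python
# Hypothetical pixel-level gradual curriculum: at each stage, supervise only
# pixels whose difficulty is below a growing quantile threshold.
import torch
import torch.nn.functional as F

def curriculum_loss(pred, target, difficulty, stage: int, num_stages: int) -> torch.Tensor:
    # pred, target: (B, 1, H, W); difficulty: (B, 1, H, W) in [0, 1], higher = harder.
    quantile = (stage + 1) / num_stages                    # fraction of pixels revealed
    threshold = torch.quantile(difficulty.flatten(), quantile)
    mask = (difficulty <= threshold).float()               # easy pixels first
    loss = F.binary_cross_entropy_with_logits(pred, target, reduction="none")
    return (loss * mask).sum() / mask.sum().clamp(min=1)

loss = curriculum_loss(torch.randn(2, 1, 64, 64),
                       torch.randint(0, 2, (2, 1, 64, 64)).float(),
                       torch.rand(2, 1, 64, 64), stage=0, num_stages=4)
```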

6.
J Appl Stat; 49(16): 4122-4136, 2022.
Article in English | MEDLINE | ID: mdl-36353303

ABSTRACT

With the rapid development of modern sensor technology, high-dimensional data streams now appear frequently, creating an urgent need for effective statistical process control (SPC) tools. In this context, the online monitoring of high-dimensional and correlated binary data streams is becoming very important. Conventional SPC methods for monitoring multivariate binary processes may fail in high-dimensional applications because of high computational complexity and a lack of efficiency. In this paper, motivated by an application in extreme weather surveillance, we propose a novel pairwise approach that considers the most informative pairwise correlation between any two data streams. The information is then integrated into an exponentially weighted moving average (EWMA) charting scheme to monitor abnormal mean changes in high-dimensional binary data streams. An extensive simulation study together with a real-data analysis demonstrates the efficiency and applicability of the proposed control chart.
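The charting component follows the standard EWMA recursion: each incoming statistic is exponentially smoothed and an alarm is raised when the smoothed value leaves its control limits. The sketch below shows that generic update and signaling rule; the smoothing constant, the control-limit multiplier, and the assumption of standardized N(0,1) statistics are illustrative rather than the paper's calibrated choices.

```python
# Generic EWMA monitoring sketch: smooth each incoming statistic and signal
# when the smoothed value exceeds a control limit (parameters illustrative).
import numpy as np

def ewma_monitor(stream: np.ndarray, lam: float = 0.1, limit: float = 3.0):
    """stream: 1-D sequence of standardized monitoring statistics."""
    z = 0.0
    for t, x in enumerate(stream):
        z = lam * x + (1 - lam) * z                        # EWMA recursion
        # Steady-state standard deviation of the EWMA of N(0,1) observations.
        sigma_z = np.sqrt(lam / (2 - lam))
        if abs(z) > limit * sigma_z:
            return t                                       # first alarm time
    return None

alarm = ewma_monitor(np.concatenate([np.random.randn(100), np.random.randn(100) + 1.5]))
print("alarm at observation:", alarm)
```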

7.
IEEE Trans Cybern; 52(7): 7136-7150, 2022 Jul.
Article in English | MEDLINE | ID: mdl-33382666

ABSTRACT

The core prerequisite of most modern trackers is a motion assumption: the current location is predicted within a limited search region centered at the previous prediction. For clarity, the central subregion of a search region is denoted as the tracking anchor (e.g., the location of the previous prediction in the current frame). However, providing accurate predictions in every frame is very challenging in complex natural scenes. In addition, target locations in consecutive frames often change abruptly under fast motion. Both facts are likely to turn the previous prediction into an unreliable tracking anchor, which invalidates the aforementioned prerequisite and causes tracking drift. To enhance the reliability of tracking anchors, we propose a real-time multianchor visual tracking mechanism, called multianchor tracking (MAT). Instead of relying directly on the tracking anchor inherited from the previous prediction, MAT selects the best anchor from an anchor ensemble, which includes several objectness-based anchor proposals and the anchor inherited from the previous prediction. The objectness-based anchors provide several complementary selective search regions, and an entropy-minimization-based selection method is introduced to find the best anchor. Our approach offers two benefits: 1) selective search regions increase the chance of tracking success with affordable computational load, and 2) anchor selection introduces the best anchor for each frame, which breaks the limitation of depending solely on the previous prediction. Extensive experiments with nine base trackers upgraded by MAT on four challenging datasets demonstrate the effectiveness of MAT.
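The entropy-minimization selection step can be pictured as scoring each candidate anchor by the entropy of its tracker response map and keeping the most peaked (lowest-entropy) one. The sketch below illustrates that idea under the assumption that per-anchor response maps are already available; it is not the authors' implementation.

```python
# Hypothetical sketch of entropy-minimization anchor selection: pick the
# candidate anchor whose tracker response map is most peaked (lowest entropy).
import numpy as np

def response_entropy(response: np.ndarray) -> float:
    p = response - response.min()
    p = p / (p.sum() + 1e-12)                 # normalize the map to a distribution
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def select_anchor(response_maps: list) -> int:
    """Return the index of the anchor with the most confident response."""
    return int(np.argmin([response_entropy(r) for r in response_maps]))

candidates = [np.random.rand(31, 31) for _ in range(5)]   # one map per candidate anchor
best = select_anchor(candidates)
```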


Subjects
Image Interpretation, Computer-Assisted; Motion; Reproducibility of Results
8.
IEEE Trans Neural Netw Learn Syst; 33(3): 1079-1092, 2022 Mar.
Article in English | MEDLINE | ID: mdl-33296312

ABSTRACT

The core component of most anomaly detectors is a self-supervised model, tasked with modeling the patterns present in training samples and detecting unexpected patterns in testing samples as anomalies. To capture normal patterns, this model is typically trained with reconstruction constraints. However, the model risks overfitting to the training samples and being sensitive to hard normal patterns in the inference phase, which results in irregular responses at normal frames. To address this problem, we formulate anomaly detection as a mutual supervision problem: through collaborative training, the complementary information of mutual learning can alleviate the aforementioned issue. Based on this motivation, a SIamese generative network (SIGnet), comprising two subnetworks with the same architecture, is proposed to simultaneously model the patterns of the forward and backward frames. During training, in addition to traditional constraints that improve reconstruction performance, a bidirectional consistency loss based on the forward and backward views is designed as a regularization term to improve the generalization ability of the model. Moreover, we introduce a consistency-based evaluation criterion to achieve stable scores at normal frames, which benefits detecting anomalies whose scores fluctuate in the inference phase. Results on several challenging benchmark datasets demonstrate the effectiveness of the proposed method.
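A generic form of such a training objective combines per-branch reconstruction terms with a consistency term tying the two branches' outputs together. The sketch below illustrates that structure; the specific losses, the consistency target, and the weight are assumptions, not SIGnet's exact formulation.

```python
# Hypothetical sketch of a Siamese training objective: two reconstruction
# terms plus a bidirectional consistency term between the two branches.
import torch
import torch.nn.functional as F

def siamese_loss(recon_fwd, target_fwd, recon_bwd, target_bwd, weight: float = 0.1):
    loss_fwd = F.mse_loss(recon_fwd, target_fwd)          # forward-branch reconstruction
    loss_bwd = F.mse_loss(recon_bwd, target_bwd)          # backward-branch reconstruction
    consistency = F.mse_loss(recon_fwd, recon_bwd)        # bidirectional consistency
    return loss_fwd + loss_bwd + weight * consistency

loss = siamese_loss(torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64),
                    torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64))
```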

9.
IEEE Trans Pattern Anal Mach Intell; 44(7): 3602-3613, 2022 Jul.
Article in English | MEDLINE | ID: mdl-33534703

ABSTRACT

Imbalanced data distributions in crowd counting datasets lead to severe under-estimation and over-estimation problems, which have been little investigated in existing works. In this paper, we tackle this challenging problem by proposing a simple but effective locality-based learning paradigm that produces generalizable features by alleviating sample bias. Our proposed method is locality-aware in two aspects. First, we introduce a locality-aware data partition (LADP) approach that groups the training data into different bins via locality-sensitive hashing; a more balanced data batch is then constructed by LADP. Second, to further reduce the training bias and enhance the collaboration with LADP, a new data augmentation method called locality-aware data augmentation (LADA) is proposed, in which image patches are adaptively augmented based on the loss. The proposed method is independent of the backbone network architecture and can therefore be smoothly integrated with most existing deep crowd counting approaches in an end-to-end paradigm to boost their performance. We also demonstrate the versatility of the proposed method by applying it to adversarial defense. Extensive experiments verify the superiority of the proposed method over the state of the art.
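The locality-aware data partition can be pictured with random-projection locality-sensitive hashing: hash each sample's descriptor into a bucket by the signs of a few random projections, then draw each batch evenly across buckets. The sketch below illustrates that idea; the descriptor, the number of hash bits, and the sampling rule are assumptions rather than the paper's exact procedure.

```python
# Hypothetical sketch of locality-aware data partition: hash image-level
# descriptors into buckets with random sign projections, then sample batches
# evenly across buckets to reduce sample bias.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

def lsh_buckets(descriptors: np.ndarray, num_bits: int = 4) -> dict:
    planes = rng.standard_normal((descriptors.shape[1], num_bits))
    codes = (descriptors @ planes > 0).astype(int)        # sign of each projection
    buckets = defaultdict(list)
    for idx, code in enumerate(codes):
        buckets[tuple(code)].append(idx)
    return buckets

def balanced_batch(buckets: dict, batch_size: int) -> list:
    keys = list(buckets.keys())
    per_bucket = max(1, batch_size // len(keys))
    batch = []
    for k in keys:
        picks = rng.choice(buckets[k], size=min(per_bucket, len(buckets[k])), replace=False)
        batch.extend(picks.tolist())
    return batch[:batch_size]

buckets = lsh_buckets(rng.standard_normal((1000, 32)))    # 1000 samples, 32-dim descriptors
batch = balanced_batch(buckets, batch_size=16)
```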

10.
IEEE Trans Cybern; 51(2): 829-838, 2021 Feb.
Article in English | MEDLINE | ID: mdl-31902791

ABSTRACT

Single-image dehazing has been an important topic, given the common image degradation caused by adverse atmospheric aerosols. The key to haze removal is an accurate estimation of the global air-light and the transmission map. Most existing methods estimate these two parameters with separate pipelines, which reduces efficiency and accumulates errors, leading to a suboptimal approximation, hurting model interpretability, and degrading performance. To address these issues, this article introduces a novel generative adversarial network (GAN) for single-image dehazing. The network consists of a novel compositional generator and a novel deeply supervised discriminator. The compositional generator is a densely connected network that combines fine-scale and coarse-scale information. Benefiting from the new generator, our method can directly learn the physical parameters from data and recover clean images from hazy ones in an end-to-end manner. The proposed discriminator is deeply supervised, which enforces that the output of the generator looks similar to clean images from low-level details to high-level structures. To the best of our knowledge, this is the first end-to-end generative adversarial model for image dehazing that simultaneously outputs clean images, transmission maps, and air-lights. Extensive experiments show that our method remarkably outperforms state-of-the-art methods. Furthermore, to facilitate future research, we create the HazeCOCO dataset, currently the largest dataset for single-image dehazing.
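The two estimated quantities, the transmission map and the global air-light, enter through the standard atmospheric scattering model I(x) = J(x)·t(x) + A·(1 − t(x)); once t and A are estimated, the clean image J is recovered by inverting that equation. The sketch below shows this generic inversion step (the clipping thresholds are illustrative); it is not the paper's network.

```python
# Inverting the standard atmospheric scattering model
#   I(x) = J(x) * t(x) + A * (1 - t(x))
# to recover the scene radiance J from estimates of transmission t and air-light A.
import numpy as np

def recover_clean(hazy: np.ndarray, transmission: np.ndarray, airlight: np.ndarray,
                  t_min: float = 0.1) -> np.ndarray:
    t = np.clip(transmission, t_min, 1.0)[..., None]      # avoid division blow-up
    clean = (hazy - airlight) / t + airlight
    return np.clip(clean, 0.0, 1.0)

hazy = np.random.rand(256, 256, 3)       # hazy image in [0, 1]
t_map = np.random.rand(256, 256)         # estimated transmission map
A = np.array([0.9, 0.9, 0.9])            # estimated global air-light
clean = recover_clean(hazy, t_map, A)
```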

11.
RSC Adv; 11(29): 17558-17573, 2021 May 13.
Article in English | MEDLINE | ID: mdl-35480204

ABSTRACT

This study enhances the corrosion resistance of epoxy resin (EP) by embedding fluorinated graphene (FG) into the epoxy matrix. FG with different fluorine contents was obtained by reacting nitrogen trifluoride (NF3) gas with graphene oxide (GO) and was then incorporated into the EP matrix to fabricate the different composites. Through a series of characterization methods, the chemical composition and microstructure of the FG were systematically analyzed, and its corrosion resistance was also studied. The results revealed that F atoms were bonded to the GO surface to form C-F covalent bonds and that the FG lamellae were less than 2 nm thick. The contact angle of the coatings increased with the incorporation of FG, and the coating resistance of the FG2/EP coating was three orders of magnitude higher than that of the EP coating after immersion for 4080 h. Thus, incorporating FG into the epoxy matrix significantly enhanced its hydrophobic properties and barrier performance, which is beneficial for improving the long-term corrosion resistance of the coating.

12.
IEEE Trans Image Process; 25(9): 4116-4128, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27305680

ABSTRACT

Despite previous efforts on object proposals, the detection rates of existing approaches are still not satisfactory. To address this, we propose Adobe Boxes to efficiently locate potential objects with fewer proposals by searching for object adobes, the salient object parts that are easy to perceive. Because of the visual difference between an object and its surroundings, an object adobe obtained from a local region has a high probability of being part of an object and is thus capable of depicting the locative information of the proto-object. Our approach comprises three main procedures. First, coarse object proposals are acquired by employing randomly sampled windows. Then, based on local-contrast analysis, the object adobes are identified within the enlarged bounding boxes that correspond to the coarse proposals. The final object proposals are obtained by converging the bounding boxes to tightly surround the object adobes. Meanwhile, our object adobes can also improve the detection rate of most state-of-the-art methods when used as a refinement step. Extensive experiments on four challenging datasets (PASCAL VOC2007, VOC2010, VOC2012, and ILSVRC2014) demonstrate that the detection rate of our approach generally outperforms the state-of-the-art methods, especially with a relatively small number of proposals. The average time consumed on one image is about 48 ms, which nearly meets the real-time requirement.
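The local-contrast analysis can be pictured as comparing the intensity histogram inside a candidate box with that of its surrounding ring: a large histogram distance suggests a salient object part. The snippet below is a hypothetical illustration of that scoring idea using a chi-square distance; the margin, bin count, and function name are assumptions, not the paper's algorithm.

```python
# Hypothetical local-contrast score for a candidate region: compare the gray
# histogram inside the box with that of a surrounding ring (chi-square distance).
import numpy as np

def local_contrast(image: np.ndarray, box: tuple, margin: int = 10, bins: int = 32) -> float:
    # image: 2-D float array of gray values in [0, 255]; box: (x0, y0, x1, y1).
    x0, y0, x1, y1 = box
    inner = image[y0:y1, x0:x1]
    X0, Y0 = max(0, x0 - margin), max(0, y0 - margin)
    X1, Y1 = min(image.shape[1], x1 + margin), min(image.shape[0], y1 + margin)
    outer = image[Y0:Y1, X0:X1].copy()
    outer[y0 - Y0:y1 - Y0, x0 - X0:x1 - X0] = np.nan       # mask out the inner box
    h_in, _ = np.histogram(inner, bins=bins, range=(0, 255), density=True)
    h_out, _ = np.histogram(outer[~np.isnan(outer)], bins=bins, range=(0, 255), density=True)
    return float(0.5 * np.sum((h_in - h_out) ** 2 / (h_in + h_out + 1e-12)))

score = local_contrast(np.random.randint(0, 256, (240, 320)).astype(float), (100, 80, 160, 140))
```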
