Results 1 - 13 of 13
1.
J Cancer Res Clin Oncol ; 149(17): 15511-15524, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37646827

ABSTRACT

PURPOSE: Skin disease is a prevalent type of physical ailment that can manifest in a multitude of forms. Many internal diseases are directly reflected on the skin, and if left unattended, skin diseases can develop into skin cancer. Accurate and effective segmentation of skin lesions, especially melanoma, is critical for the early detection and diagnosis of skin cancer. However, the complex color variations, boundary ambiguity, and scale variations in skin lesion regions present significant challenges for precise segmentation. METHODS: We propose a novel approach for melanoma segmentation using a dual-branch interactive U-Net architecture. Two distinct sampling strategies are simultaneously integrated into the network, creating a vertical dual-branch structure. Meanwhile, we introduce a novel dual-channel symmetrical convolution block (DCS-Conv), whose symmetrical design gives the network a horizontal dual-branch structure. The combination of the vertical and horizontal dual-branch structures increases both the depth and width of the network, providing greater diversity and rich multiscale cascade features. Additionally, this paper introduces a novel residual fuse-and-select module (RFS module), which leverages self-attention mechanisms to focus on specific skin cancer features and reduce irrelevant artifacts, further improving segmentation accuracy. RESULTS: We evaluated our approach on two public skin cancer datasets, ISIC2016 and PH2, and achieved state-of-the-art results, surpassing previous work in terms of segmentation accuracy and overall performance. CONCLUSION: Our proposed approach holds tremendous potential to aid dermatologists in clinical decision-making.
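Segmentation accuracy in studies like this is commonly reported with overlap metrics such as the Dice coefficient. A minimal, dependency-light sketch; the masks, values, and helper name are illustrative, not taken from the paper:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two 4x4 masks overlapping on 2 of their 3 foreground pixels each
pred = np.zeros((4, 4)); pred[0, 0] = pred[0, 1] = pred[1, 1] = 1
target = np.zeros((4, 4)); target[0, 1] = target[1, 1] = target[2, 2] = 1
score = dice_score(pred, target)  # 2*2 / (3+3) ≈ 0.667
```

A Dice of 1.0 means perfect overlap; 0 means no overlap at all.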


Subjects
Melanoma , Skin Neoplasms , Humans , Skin Neoplasms/diagnostic imaging , Skin , Melanoma/diagnostic imaging , Clinical Decision-Making
2.
IEEE Trans Neural Netw Learn Syst ; 34(9): 5427-5439, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37459266

ABSTRACT

With the development of image style transfer technologies, portrait style transfer has attracted growing attention in the research community. In this article, we present an asymmetric double-stream generative adversarial network (ADS-GAN) to solve the problems caused by cartoonization and other style transfer techniques when they are applied to portrait photos, such as facial deformation, missing contours, and stiff lines. By observing the characteristics of source and target images, we propose an edge contour retention (ECR) regularized loss that constrains the local and global contours of generated portrait images to avoid portrait deformation. In addition, a content-style feature fusion module is introduced for further learning of the target image style; it uses a style attention mechanism to integrate features and embeds style features into the content features of portrait photos according to the attention weights. Finally, a guided filter is introduced into the content encoder to smooth the textures and specific details of the source image, thereby eliminating their negative impact on style transfer. We conducted unified optimization training on all components and obtained an ADS-GAN for unpaired artistic portrait style transfer. Qualitative comparisons and quantitative analyses demonstrate that the proposed method generates superior results to benchmark works in preserving the overall structure and contours of the portrait; ablation and parameter studies demonstrate the effectiveness of each component in our framework.
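The edge contour retention idea can be illustrated with a toy edge-map loss: compute gradient magnitudes of the source and generated images and penalize their difference. The sketch below uses a plain Sobel operator and is only a rough stand-in for the paper's ECR regularized loss; all names and values are illustrative:

```python
import numpy as np

def sobel_edges(img: np.ndarray) -> np.ndarray:
    """Gradient magnitude via 3x3 Sobel filters (valid convolution)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def edge_retention_loss(src: np.ndarray, gen: np.ndarray) -> float:
    """Mean absolute difference between the edge maps of source and generated images."""
    return float(np.abs(sobel_edges(src) - sobel_edges(gen)).mean())

src = np.eye(8)                                   # image with a diagonal edge
loss_same = edge_retention_loss(src, src)         # identical contours -> 0
loss_diff = edge_retention_loss(src, np.zeros((8, 8)))  # contours lost -> positive
```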

3.
IEEE J Biomed Health Inform ; 27(7): 3489-3500, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37023161

ABSTRACT

Medical image fusion technology is an essential component of computer-aided diagnosis, which aims to extract useful cross-modality cues from raw signals to generate high-quality fused images. Many advanced methods focus on designing fusion rules, but there is still room for improvement in cross-modal information extraction. To this end, we propose a novel encoder-decoder architecture with three technical novelties. First, we divide the medical images into two attributes, namely pixel intensity distribution attributes and texture attributes, and accordingly design two self-reconstruction tasks to mine as many specific features as possible. Second, we propose a hybrid network combining a CNN and a transformer module to model both long-range and short-range dependencies. Third, we construct a self-adaptive weight fusion rule that automatically measures salient features. Extensive experiments on a public medical image dataset and other multimodal datasets show that the proposed method achieves satisfactory performance.


Subjects
Diagnosis, Computer-Assisted , Electric Power Supplies , Humans , Information Storage and Retrieval , Image Processing, Computer-Assisted
4.
Appl Soft Comput ; 125: 109111, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35693545

ABSTRACT

COVID-19 spreads rapidly among people, so diagnosing the disease accurately and promptly is essential for quarantine and medical treatment. RT-PCR plays a crucial role in diagnosing COVID-19, whereas computed tomography (CT) delivers a faster result when combined with artificial intelligence assistance. Developing a deep learning classification model for detecting COVID-19 from CT images is therefore conducive to assisting doctors in consultation. We propose a feature complement fusion network (FCF) for detecting COVID-19 from lung CT scan images. This framework extracts both local features and global features, using a CNN extractor and a ViT extractor respectively, each of which compensates for the limited receptive field of the other. Owing to the attention mechanism in our designed feature complement Transformer (FCT), the extracted local and global feature embeddings achieve a better representation. We combine a supervised with a weakly supervised strategy to train our model, which promotes the CNN to guide the ViT to converge faster. Finally, we achieved 99.34% accuracy on our test set, surpassing current state-of-the-art popular classification models. Moreover, the proposed structure can easily be extended to other classification tasks by substituting other suitable extractors.
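The attention mechanism underlying modules like the FCT is scaled dot-product attention. A minimal NumPy sketch of that core operation; shapes and names are illustrative and this is not the paper's actual FCT:

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q: np.ndarray, k: np.ndarray, v: np.ndarray):
    """Scaled dot-product attention: weight the values by query-key similarity."""
    scores = q @ k.T / np.sqrt(q.shape[-1])   # (n_q, n_k) similarity matrix
    weights = softmax(scores, axis=-1)        # each row is a distribution over keys
    return weights @ v, weights

rng = np.random.default_rng(1)
q = rng.random((2, 4))   # e.g. 2 "local" query embeddings of dim 4
k = rng.random((3, 4))   # 3 "global" key embeddings
v = rng.random((3, 4))   # values attached to the keys
out, w = attention(q, k, v)
```

Each output row is a convex combination of the value rows, so local embeddings can selectively absorb global context (and vice versa).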

5.
Big Data ; 10(6): 515-527, 2022 12.
Article in English | MEDLINE | ID: mdl-34981961

ABSTRACT

Employing feature vectors extracted from a target detector has been shown to be effective in improving the performance of image captioning. However, existing frameworks suffer from insufficient information extraction, notably of positional relationships, and it is very important to judge the relationships between objects. To fill this gap, we present a dual position relationship transformer (DPR) for image captioning; the architecture improves the image information extraction and description coding steps: it first calculates the relative position (RP) and absolute position (AP) between objects, and then integrates the dual position relationship information into self-attention. Specifically, a convolutional neural network (CNN) and Faster R-CNN are applied to extract image features and perform target detection, and then the RP and AP of the generated object boxes are calculated. The former is expressed in coordinate form, and the latter is computed by sinusoidal encoding. In addition, to better model the sequence and time relationships in the description, DPR adopts long short-term memory to encode the text vector. We conducted extensive experiments on the Microsoft COCO: Common Objects in Context (MSCOCO) image captioning data set, which show that our method achieves superior performance: Consensus-based Image Description Evaluation (CIDEr) increased to 114.6 after training for 30 epochs, and the model runs 2 times faster than other competitive methods. An ablation study verifies the effectiveness of our proposed module.
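The sinusoidal encoding used for the absolute position can be sketched as the standard Transformer-style encoding: sine on even dimensions, cosine on odd ones, with geometrically increasing wavelengths. The dimension sizes here are illustrative, not the paper's:

```python
import numpy as np

def sinusoidal_encoding(position: int, d_model: int) -> np.ndarray:
    """Standard sinusoidal position encoding of a scalar position."""
    enc = np.zeros(d_model)
    for i in range(0, d_model, 2):
        angle = position / (10000 ** (i / d_model))  # wavelength grows with dim index
        enc[i] = np.sin(angle)
        if i + 1 < d_model:
            enc[i + 1] = np.cos(angle)
    return enc

pe0 = sinusoidal_encoding(0, 8)  # position 0: sin terms are 0, cos terms are 1
pe3 = sinusoidal_encoding(3, 8)
```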


Subjects
Information Storage and Retrieval , Neural Networks, Computer
6.
IEEE Trans Neural Netw Learn Syst ; 33(10): 5200-5214, 2022 Oct.
Article in English | MEDLINE | ID: mdl-33852392

ABSTRACT

With the boom of deep learning, massive attention has been paid to developing neural models for multilabel text categorization (MLTC). Most works concentrate on disclosing the word-label relationship, while less attention is paid to exploiting global clues, particularly the document-label relationship. To address this limitation, we propose an effective collaborative representation learning (CRL) model in this article. CRL consists of a factorization component for generating shallow representations of documents and a neural component for deep text encoding and classification. We have developed strategies for jointly training these two components, including an alternating-least-squares-based approach for factorizing the pointwise mutual information (PMI) matrix of label-document pairs and a multitask learning (MTL) strategy for the neural component. According to the experimental results on six data sets, CRL can explicitly take advantage of the document-label relationship and achieve competitive classification performance in comparison with state-of-the-art deep methods.
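The label-document PMI matrix that the factorization component decomposes can be computed from a co-occurrence count matrix. A toy sketch; clipping negatives to zero (positive PMI) is a common convention assumed here, not necessarily the paper's exact formulation:

```python
import numpy as np

def ppmi_matrix(counts: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Positive PMI from a label-document co-occurrence count matrix."""
    total = counts.sum()
    p_xy = counts / total                      # joint probabilities
    p_x = p_xy.sum(axis=1, keepdims=True)      # label marginals (rows)
    p_y = p_xy.sum(axis=0, keepdims=True)      # document marginals (columns)
    pmi = np.log((p_xy + eps) / (p_x * p_y + eps))
    return np.maximum(pmi, 0.0)                # clip negatives: positive PMI

counts = np.array([[4.0, 0.0], [0.0, 4.0]])    # perfectly correlated toy counts
ppmi = ppmi_matrix(counts)
```

For this toy matrix, co-occurring pairs get PMI log 2 and never-co-occurring pairs are clipped to 0, which is the structure a low-rank factorization would then approximate.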

7.
Biomed Res Int ; 2020: 6265708, 2020.
Article in English | MEDLINE | ID: mdl-32352003

ABSTRACT

Computed tomography (CT) images show structural features, while magnetic resonance imaging (MRI) images represent brain tissue anatomy but do not contain any functional information. How to effectively combine images of the two modalities has become a research challenge. In this paper, a new framework for medical image fusion is proposed that combines convolutional neural networks (CNNs) and the non-subsampled shearlet transform (NSST) to simultaneously cover the advantages of both. This method effectively retains the functional information of the CT image and reduces the loss of brain structure information and spatial distortion of the MRI image. In our fusion framework, the initial weights, which integrate the pixel activity information of the two source images, are generated by a dual-branch convolutional network and decomposed by NSST. First, the NSST is performed on the source images and the initial weights to obtain their low-frequency and high-frequency coefficients. Then, the first component of the low-frequency coefficients is fused by a novel fusion strategy, which simultaneously copes with two key issues in the fusion process, namely energy conservation and detail extraction. The second component of the low-frequency coefficients is fused by a strategy designed according to the spatial frequency of the weight map. Moreover, the high-frequency coefficients are fused using the high-frequency components of the initial weights. Finally, the fused image is reconstructed by the inverse NSST. The effectiveness of the proposed method is verified on pairs of multimodality images, and extensive experiments indicate that our method performs especially well for medical image fusion.
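The spatial frequency used to drive the second low-frequency fusion rule is a standard sharpness measure: the root-mean-square energy of row and column first differences. A minimal sketch on toy arrays; this is not the paper's implementation:

```python
import numpy as np

def spatial_frequency(img: np.ndarray) -> float:
    """SF = sqrt(RF^2 + CF^2), from row/column first-difference energy."""
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return float(np.hypot(rf, cf))

flat = np.ones((8, 8))                          # constant region: no detail
checker = np.indices((8, 8)).sum(axis=0) % 2    # alternating 0/1 pattern: max detail
```

A fusion rule can then prefer, at each region, whichever source has the higher spatial frequency.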


Subjects
Brain/diagnostic imaging , Magnetic Resonance Imaging , Models, Theoretical , Neural Networks, Computer , Tomography, X-Ray Computed , Humans
8.
Biomed Res Int ; 2020: 4071508, 2020.
Article in English | MEDLINE | ID: mdl-32420339

ABSTRACT

Apoptosis proteins are strongly related to many diseases and play an indispensable role in maintaining the dynamic balance between cell death and division in vivo. Obtaining localization information on apoptosis proteins is necessary for understanding their function. To date, few researchers have focused on the problem of apoptosis data imbalance before classification, although this imbalance is prone to causing misclassification. Therefore, in this work, we introduce a method to resolve this problem and enhance prediction accuracy. Firstly, the features of the protein sequence are captured by combining the Improving Pseudo-Position-Specific Scoring Matrix (IM-Psepssm) with the Bidirectional Correlation Coefficient (Bid-CC) algorithm on the position-specific scoring matrix. Secondly, different feature fusion and resampling strategies are used to reduce the impact of imbalance on the apoptosis protein datasets. Finally, the feature vectors are fed to a Support Vector Machine (SVM) to train the classification model, and the prediction accuracy is evaluated by jackknife cross-validation tests. The experimental results indicate that, with the same feature vector, adopting resampling methods remarkably boosts many significant indicators over the unsampled method for predicting the localization of apoptosis proteins in the ZD98, ZW225, and CL317 databases. Additionally, we also present new user-friendly local software for readers to apply; the code and software can be freely accessed at https://github.com/ruanxiaoli/Im-Psepssm.
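Jackknife (leave-one-out) cross-validation, the evaluation protocol named above, retrains on all samples but one and tests on the held-out sample, cycling through the whole dataset. The sketch below substitutes a nearest-centroid classifier for the SVM to stay dependency-free, and the data are synthetic:

```python
import numpy as np

def jackknife_accuracy(X: np.ndarray, y: np.ndarray) -> float:
    """Leave-one-out accuracy with a nearest-centroid classifier."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i                  # train on everything but sample i
        centroids = {c: X[mask & (y == c)].mean(axis=0)
                     for c in np.unique(y[mask])}
        pred = min(centroids, key=lambda c: np.linalg.norm(X[i] - centroids[c]))
        correct += int(pred == y[i])
    return correct / len(y)

# Two well-separated synthetic classes: every held-out sample is classified correctly
X = np.array([[0, 0], [0, 1], [1, 0], [10, 10], [10, 11], [11, 10]], dtype=float)
y = np.array([0, 0, 0, 1, 1, 1])
acc = jackknife_accuracy(X, y)
```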


Subjects
Apoptosis Regulatory Proteins , Computational Biology/methods , Position-Specific Scoring Matrices , Sequence Analysis, Protein/methods , Algorithms , Animals , Apoptosis , Apoptosis Regulatory Proteins/chemistry , Apoptosis Regulatory Proteins/genetics , Databases, Protein , Support Vector Machine
9.
Neural Netw ; 124: 308-318, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32036228

ABSTRACT

In recommender systems, matrix factorization and its variants have become dominant in collaborative filtering due to their simplicity and effectiveness. In matrix-factorization-based methods, the dot product, which is used in effect as a measure of the distance between users and items, does not satisfy the triangle inequality, and thus may fail to capture fine-grained preference information, which further limits the performance of recommendations. Metric learning produces distance functions that capture the essential relationships among rating data and has been successfully explored in collaborative recommendations. However, without the global statistical information of user-user and item-item pairs, the model can easily converge to a suboptimal metric. To address this, we present a co-occurrence embedding regularized metric learning model (CRML) for collaborative recommendations. We treat the optimization problem as a multi-task learning problem that includes a primary task of metric learning and two auxiliary tasks of representation learning. In particular, we develop an effective approach for learning the embedding representations of both users and items, and then exploit a soft parameter sharing strategy to optimize the model parameters. Empirical experiments on four datasets demonstrate that the CRML model enhances the naive metric learning model and significantly outperforms state-of-the-art methods in terms of the accuracy of collaborative recommendations.
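The inequality property in question is the triangle inequality, which a dot-product "distance" can violate while a true metric such as the Euclidean distance always satisfies. A small numeric check with vectors chosen purely for illustration:

```python
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])
w = np.array([1.0, 1.0])

def dot_dist(a: np.ndarray, b: np.ndarray) -> float:
    """Negated dot-product similarity used as a pseudo-distance (not a true metric)."""
    return -float(a @ b)

def euc(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance, a true metric."""
    return float(np.linalg.norm(a - b))

# Triangle inequality d(u, v) <= d(u, w) + d(w, v) holds for the Euclidean metric...
holds = euc(u, v) <= euc(u, w) + euc(w, v)
# ...but fails for the negated dot product: 0 > (-1) + (-1)
violates = dot_dist(u, v) > dot_dist(u, w) + dot_dist(w, v)
```

This is exactly why learning a genuine metric can capture preference structure that dot-product factorization cannot.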


Subjects
Machine Learning
10.
Med Biol Eng Comput ; 57(12): 2553-2565, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31621050

ABSTRACT

Apoptosis proteins are related to many diseases. Obtaining the subcellular localization information of apoptosis proteins is helpful for understanding the mechanisms of diseases and developing new drugs. At present, researchers mainly focus on primary protein sequences, so there is still room for improvement in the prediction accuracy of the subcellular localization of apoptosis proteins. In this paper, a new method named ERT-ECT-PSSM-IS is proposed to predict apoptosis proteins based on the position-specific scoring matrix (PSSM). First, local and global features in different directions are extracted by evolutionary row transformation (ERT) and cross-covariance of evolutionary column transformation (ECT) based on the PSSM (ERT-ECT-PSSM). Second, an improved isometric mapping algorithm (I-SMA) is used to eliminate redundant features. Finally, we adopt a support vector machine (SVM) to classify the results, and the prediction accuracy is evaluated by jackknife cross-validation tests. The experimental results show that the proposed method not only extracts a richer feature representation but also achieves better predictive performance and robustness for the subcellular localization of apoptosis proteins on the ZD98, ZW225, and CL317 databases. Graphical abstract: Framework of the proposed prediction model.


Subjects
Apoptosis Regulatory Proteins/metabolism , Apoptosis/physiology , Algorithms , Computational Biology/methods , Position-Specific Scoring Matrices , Support Vector Machine
11.
Med Biol Eng Comput ; 57(4): 887-900, 2019 Apr.
Article in English | MEDLINE | ID: mdl-30471068

ABSTRACT

The aim of medical image fusion is to improve clinical diagnosis accuracy, so the fused image is generated by preserving the salient features and details of the source images. This paper designs a novel fusion scheme for CT and MRI medical images based on convolutional neural networks (CNNs) and a dual-channel spiking cortical model (DCSCM). Firstly, the non-subsampled shearlet transform (NSST) is utilized to decompose the source image into a low-frequency coefficient and a series of high-frequency coefficients. Secondly, the low-frequency coefficient is fused by the CNN framework, where the weight map is generated from a series of feature maps and an adaptive selection rule, and the high-frequency coefficients are fused by the DCSCM, where the modified average gradient of the high-frequency coefficients is adopted as the input stimulus of the DCSCM. Finally, the fused image is reconstructed by the inverse NSST. Experimental results indicate that the proposed scheme performs well in both subjective visual performance and objective evaluation, and is superior in detail retention and visual effect to other current typical methods. Graphical abstract: A schematic diagram of the CT and MRI medical image fusion framework using a convolutional neural network and a dual-channel spiking cortical model.


Subjects
Brain/diagnostic imaging , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Models, Neurological , Neural Networks, Computer , Tomography, X-Ray Computed , Humans , Neurons/physiology
12.
Neural Comput ; 30(7): 1775-1800, 2018 07.
Article in English | MEDLINE | ID: mdl-29894654

ABSTRACT

As the optical lenses of cameras always have a limited depth of field, images captured of the same scene are not all in focus. Multifocus image fusion is an efficient technology that can synthesize an all-in-focus image from several partially focused images. Previous methods have accomplished the fusion task in spatial or transform domains. However, fusion rules remain a problem in most methods. In this letter, from the perspective of focus region detection, we propose a novel multifocus image fusion method based on a fully convolutional network (FCN) learned from synthesized multifocus images. The primary novelty of this method is that the pixel-wise focus regions are detected by a learned FCN, and the entire image, not just image patches, is exploited to train the FCN. First, we synthesize 4500 pairs of multifocus images, by repeatedly applying a Gaussian filter to each image from PASCAL VOC 2012, to train the FCN. After that, a pair of source images is fed into the trained FCN, and two score maps indicating the focus property are generated. Next, an inverted score map is averaged with the other score map to produce an aggregative score map, which takes full advantage of the focus probabilities in the two score maps. We apply a fully connected conditional random field (CRF) to the aggregative score map to obtain and refine a binary decision map for the fusion task. Finally, we exploit a weighted strategy based on the refined decision map to produce the fused image. To demonstrate the performance of the proposed method, we compare its fused results with several state-of-the-art methods on both a gray data set and a color data set. Experimental results show that the proposed method achieves superior fusion performance in both human visual quality and objective assessment.
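The training-data synthesis step, blurring complementary regions of one sharp image to fabricate a partially focused pair, can be sketched as follows. A 3x3 mean filter stands in here for the paper's Gaussian filter, and all names are illustrative:

```python
import numpy as np

def blur3(img: np.ndarray) -> np.ndarray:
    """3x3 mean filter (dependency-free stand-in for a Gaussian blur)."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / 9.0

def make_multifocus_pair(img: np.ndarray, mask: np.ndarray):
    """Blur complementary regions so each image is sharp where the other is not."""
    blurred = blur3(img)
    a = np.where(mask, img, blurred)   # in focus inside the mask
    b = np.where(mask, blurred, img)   # in focus outside the mask
    return a, b

rng = np.random.default_rng(0)
img = rng.random((8, 8))                     # stand-in for a sharp source image
mask = np.zeros((8, 8), dtype=bool)
mask[:, :4] = True                           # left half "in focus" in image a
a, b = make_multifocus_pair(img, mask)
```

The mask itself then serves as the pixel-wise ground-truth focus map for supervising the FCN.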


Subjects
Image Processing, Computer-Assisted/methods , Machine Learning , Humans , Neural Networks, Computer
13.
J Mol Graph Model ; 76: 342-355, 2017 09.
Article in English | MEDLINE | ID: mdl-28763687

ABSTRACT

DNA sequence similarity/dissimilarity analysis is a fundamental task in computational biology, used to analyze the similarity of different DNA sequences in order to learn their evolutionary relationships. In past decades, a large number of similarity analysis methods for DNA sequences have been proposed to meet ever-growing demands. To chart the advances in DNA sequence similarity analysis and promote the development of this field, we present a survey. In this paper, we first introduce the background of DNA similarity analysis, including the data sets, similarity distances, and output data. Then, we review recent algorithmic developments for DNA similarity analysis, presenting a survey of the state of the art in this field. Finally, we summarize the corresponding trends and challenges in this research field. The survey concludes that although various DNA similarity analysis methods have been proposed, several further improvements and potential research directions remain.
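A simple member of the alignment-free family that such surveys cover is k-mer frequency comparison: represent each sequence by its normalized k-mer profile and compare profiles with a vector distance. A minimal sketch; the choices of k and distance are illustrative:

```python
from collections import Counter
import math

def kmer_profile(seq: str, k: int = 2) -> Counter:
    """Normalized k-mer frequency profile of a DNA sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return Counter({kmer: c / total for kmer, c in counts.items()})

def euclidean_distance(p: Counter, q: Counter) -> float:
    """Euclidean distance between two k-mer profiles (missing k-mers count as 0)."""
    kmers = set(p) | set(q)
    return math.sqrt(sum((p[k] - q[k]) ** 2 for k in kmers))

d_same = euclidean_distance(kmer_profile("ACGTACGT"), kmer_profile("ACGTACGT"))
d_diff = euclidean_distance(kmer_profile("AAAAAAAA"), kmer_profile("CCCCCCCC"))
```

Identical sequences yield distance 0, while sequences sharing no k-mers are maximally far apart; such distances feed directly into clustering or phylogeny reconstruction.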


Subjects
Base Sequence , Computational Biology , DNA/chemistry , Sequence Homology, Nucleic Acid , Algorithms , Animals , Base Composition , Computational Biology/methods , Humans , Phylogeny , Reproducibility of Results