Results 1 - 7 of 7
1.
Molecules ; 29(11)2024 May 24.
Article in English | MEDLINE | ID: mdl-38893359

ABSTRACT

Combination therapy with multiple drugs may lead to unexpected drug-drug interactions (DDIs) and result in adverse reactions in patients. Predicting DDI events can mitigate the potential risks of combination therapy and enhance drug safety. In recent years, deep models based on heterogeneous graph representation learning have attracted widespread interest in DDI event prediction and have yielded satisfactory results, but there is still room for improvement in prediction performance. In this study, we propose a meta-path-based heterogeneous graph contrastive learning model, MPHGCL-DDI, for DDI event prediction. The model constructs two contrastive views based on meta-paths: an average graph view and an augmented graph view. The former captures whether drugs are connected, while the latter reveals how they are connected. We define three levels of data augmentation in the augmented graph view and adopt a combination of three losses in the model training phase: a multi-relation prediction loss, an unsupervised contrastive loss, and a supervised contrastive loss. Furthermore, the model incorporates indirect drug information, namely protein-protein interactions (PPIs), to reveal latent relations between drugs. We evaluated MPHGCL-DDI on three different tasks over two datasets. Experimental results demonstrate that MPHGCL-DDI surpasses several state-of-the-art methods in performance.
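As a reading aid, the following is a minimal, hypothetical PyTorch sketch of how the three losses named above could be combined; the weights, function names, and the InfoNCE formulation are assumptions for illustration, not the MPHGCL-DDI implementation:

# Illustrative sketch only: a hypothetical combination of the three losses
# named in the abstract (multi-relation prediction, unsupervised contrastive,
# supervised contrastive). Weights and names are assumptions, not the authors' code.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """Unsupervised contrastive (InfoNCE) loss between two graph views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                    # pairwise similarities
    labels = torch.arange(z1.size(0), device=z1.device)   # matching rows are positives
    return F.cross_entropy(logits, labels)

def total_loss(event_logits, event_labels, z_avg, z_aug, sup_con_loss,
               alpha=1.0, beta=0.1, gamma=0.1):
    pred = F.cross_entropy(event_logits, event_labels)    # 1) multi-relation (DDI event) prediction
    uns = info_nce(z_avg, z_aug)                          # 2) unsupervised contrastive, view vs. view
    sup = sup_con_loss                                    # 3) supervised contrastive term, precomputed
    return alpha * pred + beta * uns + gamma * sup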


Subject(s)
Drug Interactions , Humans , Algorithms , Deep Learning , Machine Learning
2.
Materials (Basel) ; 16(15)2023 Jul 31.
Article in English | MEDLINE | ID: mdl-37570097

ABSTRACT

Graphene-based laminar membranes exhibit remarkable ion-sieving properties, but their monovalent ion selectivity is still low, far below that of natural ion channels. Inspired by the elementary structure/function relationships of biological ion channels embedded in biomembranes, a new strategy is proposed herein to mimic biological K+ channels by using a graphene laminar membrane (GLM), composed of two-dimensional (2D) angstrom (Å)-scale channels, to support a simple model of a semi-biomembrane, namely the oil/water (O/W) interface. It is found that K+ is strongly preferred over Na+ and Li+ for transfer across the GLM-supported water/1,2-dichloroethane (W/DCE) interface within the same potential window (-0.1 to 0.6 V), although the monovalent ion selectivity of the GLM in aqueous solution is still low (K+/Na+ ~ 1.11 and K+/Li+ ~ 1.35). Moreover, voltammetric responses corresponding to NH4+ transfer are also observed at the GLM-supported W/DCE interface, mirroring the fact that NH4+ can often pass through biological K+ channels owing to its comparable hydration free energy and cation-π interactions. The underlying mechanism of the observed K+-selective voltammetric responses is discussed and found to be consistent with the energy balance between cationic partial dehydration (energetic cost) and cation-π interaction (energetic gain) involved in biological K+ channels.
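As a schematic illustration of the energy balance invoked above (a hedged sketch, not a quantity reported in the article), the free-energy barrier for a cation entering an angstrom-scale channel can be written in LaTeX as

\Delta G^{\ddagger}_{\mathrm{entry}} \;\approx\; \Delta G_{\mathrm{partial\ dehydration}} \;-\; \left|\Delta G_{\mathrm{cation}\text{-}\pi}\right|,

so that K+ (and NH4+), with relatively low hydration free energies, can offset the dehydration cost with the cation-π gain, whereas the more strongly hydrated Na+ and Li+ cannot.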

3.
IEEE Trans Cybern ; 53(11): 6776-6787, 2023 Nov.
Article in English | MEDLINE | ID: mdl-36044511

ABSTRACT

Automatic tumor or lesion segmentation is a crucial step in medical image analysis for computer-aided diagnosis. Although existing methods based on convolutional neural networks (CNNs) have achieved state-of-the-art performance, many challenges remain in medical tumor segmentation. Whereas the human visual system can effectively detect symmetries in 2-D images, regular CNNs exploit only translation invariance and overlook further symmetries inherent in medical images, such as rotations and reflections. To solve this problem, we propose a novel group equivariant segmentation framework that encodes these inherent symmetries to learn more precise representations. First, kernel-based equivariant operations are devised for each orientation, which allows the framework to close the gap left by existing approaches in learning symmetries. Then, to keep the segmentation network globally equivariant, we design distinctive group layers with layer-wise symmetry constraints. Finally, extensive experiments on real-world clinical data demonstrate that a group equivariant Res-UNet (called GER-UNet) built on our framework outperforms its regular CNN-based counterpart and state-of-the-art segmentation methods on hepatic tumor segmentation, COVID-19 lung infection segmentation, and retinal vessel detection. More importantly, GER-UNet also shows potential for reducing sample complexity and filter redundancy, upgrading current segmentation CNNs, and delineating organs in other medical imaging modalities.
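For illustration only, the sketch below shows the general idea of a group equivariant layer: one kernel shared across the four 90-degree rotations (the p4 group), so the output transforms predictably when the input is rotated. It is a minimal PyTorch example under that assumption, not the authors' GER-UNet code:

# Illustrative sketch only: a minimal rotation-equivariant 2D convolution built
# by sharing one kernel across the four 90-degree rotations (the p4 group).
import torch
import torch.nn as nn
import torch.nn.functional as F

class P4Conv2d(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k) * 0.01)

    def forward(self, x):
        # Apply the same kernel at 0, 90, 180, and 270 degrees; stacking the
        # responses over a new orientation axis makes the output equivariant
        # to 90-degree rotations of the input.
        outs = []
        for r in range(4):
            w = torch.rot90(self.weight, r, dims=(2, 3))
            outs.append(F.conv2d(x, w, padding=self.weight.shape[-1] // 2))
        return torch.stack(outs, dim=2)  # (B, out_ch, 4, H, W)

# Usage: P4Conv2d(1, 8)(torch.randn(2, 1, 64, 64)) -> shape (2, 8, 4, 64, 64)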


Subject(s)
COVID-19 , Neoplasms , Humans , COVID-19/diagnostic imaging , Neural Networks, Computer , Diagnosis, Computer-Assisted , Image Processing, Computer-Assisted/methods
4.
Methods ; 202: 40-53, 2022 06.
Article in English | MEDLINE | ID: mdl-34029714

ABSTRACT

Automatic medical image segmentation plays an important role as a diagnostic aid in the identification of diseases and their treatment in clinical settings. Recently proposed methods based on Convolutional Neural Networks (CNNs) have demonstrated their potential in image processing tasks, including some medical image analysis tasks. These methods can learn various feature representations with numerous weight-shared convolutional kernels; however, the rate at which regions of interest (ROIs) are missed remains high in medical image segmentation. Two crucial but overlooked factors behind this shortcoming are the small ROIs in medical images and the limited contextual information captured by existing network models. To reduce the rate of missed ROIs in medical images, we propose a new segmentation framework that enhances the representative capability of small ROIs (particularly in deep layers) and explicitly learns global contextual dependencies in multi-scale feature spaces. In particular, the local features and their global dependencies in each feature space are adaptively aggregated along both the spatial and the channel dimensions. Moreover, visual comparisons of the features learned by our framework further improve the interpretability of the network. Experimental results show that, in comparison with popular medical image segmentation and general image segmentation methods, the proposed framework achieves state-of-the-art performance on the liver tumor segmentation task (91.18% Sensitivity), the COVID-19 lung infection segmentation task (75.73% Sensitivity), and the retinal vessel detection task (82.68% Sensitivity). Moreover, (parts of) the proposed framework can be integrated into most recently proposed fully CNN-based models to improve their effectiveness in medical image segmentation tasks.
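A minimal, hypothetical PyTorch sketch of the idea of adaptively aggregating local features with their global spatial and channel dependencies follows; the module name, gating scheme, and learned fusion weight are assumptions for illustration, not the paper's implementation:

# Illustrative sketch only: fuse local features with global spatial
# (non-local attention) and channel (squeeze-excite style) dependencies.
import torch
import torch.nn as nn

class GlobalContextFusion(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.q = nn.Conv2d(ch, ch // 8, 1)
        self.k = nn.Conv2d(ch, ch // 8, 1)
        self.v = nn.Conv2d(ch, ch, 1)
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1), nn.Sigmoid())
        self.alpha = nn.Parameter(torch.zeros(1))  # learned fusion weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)           # (B, HW, C/8)
        k = self.k(x).flatten(2)                            # (B, C/8, HW)
        attn = torch.softmax(q @ k, dim=-1)                 # global spatial dependencies
        v = self.v(x).flatten(2).transpose(1, 2)            # (B, HW, C)
        spatial = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        channel = x * self.channel_gate(x)                  # global channel dependencies
        return x + self.alpha * (spatial + channel)         # adaptive aggregation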


Subject(s)
COVID-19 , Liver Neoplasms , Algorithms , COVID-19/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Neural Networks, Computer
5.
Neural Netw ; 140: 203-222, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33780873

ABSTRACT

Compared with traditional analysis of computed tomography scans, automatic liver tumor segmentation can supply precise tumor volumes and reduce inter-observer variability in estimating tumor size and burden, which can further assist physicians in making better therapeutic choices for hepatic diseases and in monitoring treatment. Among current mainstream segmentation approaches, multi-layer and multi-kernel convolutional neural networks (CNNs) have attracted much attention in diverse biomedical/medical image segmentation tasks with remarkable performance. However, an arbitrary stacking of feature maps makes CNNs inconsistent in imitating human cognition and visual attention for a specific visual task. To mitigate the lack of a reasonable feature selection mechanism in CNNs, we develop a novel and effective network architecture, called Tumor Attention Networks (TA-Net), which mines adaptive features by embedding tumor attention layers with multi-functional modules to assist the liver tumor segmentation task. In particular, each tumor attention layer can adaptively highlight valuable tumor features and suppress unrelated ones among feature maps from both 3D and 2D perspectives. Moreover, an analysis of visualization results illustrates the effectiveness of our tumor attention modules and the interpretability of CNNs for liver tumor segmentation. Furthermore, we explore different arrangements of skip connections for information fusion, and a detailed ablation study illustrates the effects of different attention strategies for hepatic tumors. Extensive experiments demonstrate that the proposed TA-Net improves liver tumor segmentation performance over state-of-the-art methods, with lower computational cost and a small parameter overhead, under various evaluation metrics on clinical benchmark data. In addition, two further medical image datasets are used to evaluate the generalization capability of TA-Net, including a comparison with general semantic segmentation methods and a non-tumor segmentation task. All program code has been released at https://github.com/shuchao1212/TA-Net.
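For illustration, a hypothetical PyTorch sketch of a layer that gates feature maps from a 3D (channel-wise, whole-volume) and a 2D (slice-wise, spatial) perspective is given below; names and structure are assumptions, and the authors' released code at the URL above is the authoritative reference:

# Illustrative sketch only: gate features from a 3D and a 2D perspective.
import torch
import torch.nn as nn

class TumorAttention3D(nn.Module):
    def __init__(self, ch):
        super().__init__()
        # 3D perspective: which channels matter for the whole volume
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Conv3d(ch, ch, 1), nn.Sigmoid())
        # 2D perspective: which in-plane locations matter within each slice
        self.spatial_gate = nn.Sequential(
            nn.Conv3d(ch, 1, kernel_size=(1, 7, 7), padding=(0, 3, 3)),
            nn.Sigmoid())

    def forward(self, x):                      # x: (B, C, D, H, W)
        x = x * self.channel_gate(x)           # suppress unrelated channels
        x = x * self.spatial_gate(x)           # highlight tumor-like regions
        return x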


Subject(s)
Image Processing, Computer-Assisted/methods , Liver Neoplasms/diagnostic imaging , Neural Networks, Computer , Tomography, X-Ray Computed/methods , Humans , Image Processing, Computer-Assisted/standards , Tomography, X-Ray Computed/standards
6.
Eur J Nucl Med Mol Imaging ; 47(10): 2248-2268, 2020 09.
Article in English | MEDLINE | ID: mdl-32222809

ABSTRACT

PURPOSE: Unlike normal-organ segmentation, automatic tumor segmentation is more challenging because tumors and their surroundings often share similar visual characteristics, especially on computed tomography (CT) images with severely low contrast resolution, and because data acquisition procedures and devices vary widely. Consequently, most recently proposed methods are difficult to apply to a different tumor dataset with good results, and some tumor segmentation models fail to generalize beyond the datasets and modalities used in their original evaluation experiments. METHODS: To alleviate some of these problems, we propose a novel, unified, end-to-end adversarial learning framework for automatic segmentation of any kind of tumor from CT scans, called CTumorGAN, consisting of a Generator network and a Discriminator network. Specifically, the Generator attempts to generate segmentation results that are close to their corresponding gold standards, while the Discriminator aims to distinguish between generated samples and real tumor ground truths. More importantly, we deliberately design different modules to account for well-known obstacles, e.g., severe class imbalance, small tumor localization, and label noise from poor expert annotation quality, and then use these modules to guide the CTumorGAN training process by exploiting multi-level supervision more effectively. RESULTS: We conduct a comprehensive evaluation of diverse loss functions for tumor segmentation and find that the mean squared error is more suitable for the CT tumor segmentation task. Furthermore, extensive experiments with multiple evaluation criteria on three well-established datasets, covering lung, kidney, and liver tumors, demonstrate that CTumorGAN achieves stable and competitive performance compared with state-of-the-art approaches for CT tumor segmentation. CONCLUSION: To overcome the key challenges arising from CT datasets and to address some of the main problems in current deep learning-based methods, we propose a novel unified CTumorGAN framework, which generalizes effectively to any kind of tumor dataset with superior performance.
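The following is a minimal, hypothetical PyTorch sketch of one adversarial training step in which both the discriminator objective and the segmentation fidelity term use mean squared error, as the abstract reports MSE to be suitable; the network definitions and the loss weight are illustrative assumptions, not the CTumorGAN implementation:

# Illustrative sketch only: one least-squares adversarial step for a
# segmentation generator (gen) and a discriminator (disc).
import torch
import torch.nn.functional as F

def train_step(gen, disc, opt_g, opt_d, ct, mask, lam=10.0):
    # --- discriminator: real masks -> 1, generated masks -> 0 ---
    pred = gen(ct)
    d_real = disc(torch.cat([ct, mask], dim=1))
    d_fake = disc(torch.cat([ct, pred.detach()], dim=1))
    loss_d = F.mse_loss(d_real, torch.ones_like(d_real)) + \
             F.mse_loss(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # --- generator: fool the discriminator and stay close to the ground truth ---
    d_fake = disc(torch.cat([ct, pred], dim=1))
    loss_g = F.mse_loss(d_fake, torch.ones_like(d_fake)) + \
             lam * F.mse_loss(pred, mask)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()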


Subject(s)
Liver Neoplasms , Lung Neoplasms , Databases, Factual , Humans , Image Processing, Computer-Assisted , Tomography, X-Ray Computed
7.
Molecules ; 24(20)2019 Oct 11.
Article in English | MEDLINE | ID: mdl-31614686

ABSTRACT

Drug side-effects have become a major public health concern, as they are the underlying cause of over a million serious injuries and deaths each year. It is therefore critically important to detect side-effects as early as possible. Existing computational methods mainly use the drug chemical profile and the drug biological profile to predict the side-effects of a drug. Within the drug biological profile, however, they focus only on drug-target interactions and neglect the modes of action of drugs on target proteins. In this paper, we develop a new method for predicting potential side-effects of drugs based on more comprehensive drug information in which the modes of action of drugs on target proteins are integrated. Drug information of multiple types is modeled as a signed heterogeneous information network. We propose a signed heterogeneous information network embedding framework for learning drug embeddings and predicting drug side-effects. We use two biased random walk procedures to obtain drug sequences and train a Skip-gram model to learn drug embeddings. We experimentally demonstrate the performance of the proposed method by comparison with state-of-the-art methods. Furthermore, the results of a case study support our hypothesis that the modes of action of drugs on target proteins are meaningful for side-effect prediction.
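A minimal sketch of the Skip-gram-on-random-walks idea follows, using Python and gensim; the walk generator below is an unbiased placeholder (the paper uses two biased walk procedures over a signed network), and all node names are toy examples:

# Illustrative sketch only: learn node embeddings from random-walk sequences.
import random
from gensim.models import Word2Vec

def random_walks(adj, num_walks=10, walk_len=40):
    """adj: dict mapping node -> list of neighbor nodes (unbiased placeholder)."""
    walks = []
    for _ in range(num_walks):
        for start in adj:
            walk = [start]
            while len(walk) < walk_len and adj[walk[-1]]:
                walk.append(random.choice(adj[walk[-1]]))
            walks.append([str(n) for n in walk])
    return walks

# toy drug/target neighborhood (node ids are arbitrary)
adj = {"drugA": ["targetX", "drugB"], "drugB": ["targetX"],
       "targetX": ["drugA", "drugB"]}
model = Word2Vec(random_walks(adj), vector_size=64, window=5, sg=1, min_count=1)
drug_vec = model.wv["drugA"]   # learned drug embedding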


Subject(s)
Computational Biology , Drug-Related Side Effects and Adverse Reactions/prevention & control , Molecular Targeted Therapy , Algorithms , Drug Discovery , Drug Interactions , Humans , Proteins/antagonists & inhibitors