Results 1 - 8 of 8
1.
Sci Rep ; 14(1): 1878, 2024 01 22.
Article in English | MEDLINE | ID: mdl-38253642

ABSTRACT

Mass spectrometry-coupled cellular thermal shift assay (MS-CETSA), a biophysical technique that measures the thermal stability of proteins at the proteome level inside the cell, has contributed significantly to the understanding of drug mechanisms of action and the dissection of protein interaction dynamics in different cellular states. One barrier to the wide application of MS-CETSA is that experiments must be performed on the specific cell lines of interest, which is typically time-consuming and costly in terms of labeling reagents and mass spectrometry time. In this study, we aim to predict CETSA features in various cell lines by introducing a deep-neural-network-based computational framework called CycleDNN. For a given set of n cell lines, CycleDNN comprises n auto-encoders. Each auto-encoder includes an encoder that converts CETSA features from one cell line into latent features in a latent space [Formula: see text], and a decoder that transforms the latent features back into CETSA features for another cell line. In this way, CycleDNN creates a cyclic prediction of CETSA features across different cell lines. The prediction loss, cycle-consistency loss, and latent space regularization loss are used to guide model training. Experimental results on a public CETSA dataset demonstrate the effectiveness of our proposed approach. Furthermore, we confirm the validity of the MS-CETSA data predicted by CycleDNN by applying them to protein-protein interaction prediction.
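The three training losses named in this abstract can be sketched on toy data. The linear encoders/decoders and all function names below are illustrative assumptions, not the authors' implementation; they are chosen to be mutually invertible so the cycle closes exactly.

```python
# Toy sketch of the CycleDNN training losses on scalar "CETSA features".
# encode_a/decode_b/encode_b/decode_a are stand-ins for the auto-encoders.

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def encode_a(x):   # encoder: cell line A features -> latent space
    return [2.0 * v for v in x]

def decode_b(z):   # decoder: latent space -> cell line B features
    return [0.5 * v + 1.0 for v in z]

def encode_b(x):   # encoder: cell line B features -> latent space
    return [2.0 * (v - 1.0) for v in x]

def decode_a(z):   # decoder: latent space -> cell line A features
    return [0.5 * v for v in z]

def cycle_losses(feat_a, feat_b):
    z_a = encode_a(feat_a)
    pred_b = decode_b(z_a)              # A -> B prediction
    z_b = encode_b(pred_b)
    cycled_a = decode_a(z_b)            # B -> A, closing the cycle
    prediction_loss = mse(pred_b, feat_b)
    cycle_loss = mse(cycled_a, feat_a)  # cycle-consistency term
    latent_reg = sum(v * v for v in z_a) / len(z_a)  # keep latent codes small
    return prediction_loss, cycle_loss, latent_reg
```

With these toy maps, translating A to B and back reproduces the input, so the cycle-consistency loss is zero; in training, minimizing all three terms jointly pushes a learned model toward the same property.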


Subject(s)
Deep Learning , Biophysics , Cell Line , Dissection , Mass Spectrometry
2.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 1659-1662, 2022 07.
Article in English | MEDLINE | ID: mdl-36085889

ABSTRACT

The Cellular Thermal Shift Assay (CETSA) is a biophysical assay based on the principle of ligand-induced thermal stabilization of target proteins. This technology has revolutionized cell-based target engagement studies and has been used to guide drug design. Although many applications of CETSA data have been explored, the correlations between CETSA data and protein-protein interactions (PPI) have barely been touched. In this study, we conduct the first exploratory study applying CETSA data to PPI prediction. We use a machine learning method, the Decision Tree, to predict PPI scores from proteins' CETSA features. The results are promising: the predicted PPI scores closely match the ground-truth PPI scores. Furthermore, for the small number of protein pairs whose PPI score predictions mismatch the ground truth, we use an iterative clustering strategy to gradually reduce the number of these pairs. The protein pairs remaining after iterative clustering may have unusual properties and are of scientific value for further biological investigation. Our study demonstrates that PPI prediction is a brand-new application of CETSA data, and that CETSA data can be used as a new data source for PPI exploration.
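A minimal stand-in for the decision-tree regression step described here is a one-level decision stump mapping a scalar CETSA-derived feature to a PPI score. The feature values, split search, and names are illustrative assumptions, not the authors' model.

```python
# One-level decision stump regressor: finds the split threshold that
# minimizes squared error, then predicts the mean of each side.
def fit_stump(xs, ys):
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue  # degenerate split, skip
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((y - lm) ** 2 for y in left)
               + sum((y - rm) ** 2 for y in right))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm
```

A full decision tree applies this split search recursively; the stump is enough to show how a threshold on a CETSA feature can separate low-scoring from high-scoring protein pairs.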


Subject(s)
Biological Assay , Research Design , Biophysics , Cluster Analysis , Protein Domains
3.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 1647-1650, 2022 07.
Article in English | MEDLINE | ID: mdl-36085941

ABSTRACT

Cellular Thermal Shift Assay (CETSA) has been widely used in drug discovery, cancer cell biology, immunology, etc. One of the barriers to CETSA applications is that CETSA experiments have to be conducted on various cell lines, which is extremely time-consuming and costly. In this study, we explore the translation of CETSA features across cell lines: given the known CETSA feature of a protein in one cell line, can we automatically predict the CETSA feature of this protein in another cell line, and vice versa? Inspired by pix2pix and CycleGAN, which perform well on image-to-image translation across various domains in computer vision, we propose a novel deep neural network model called CycleDNN for CETSA feature translation across cell lines. Given cell lines A and B, the proposed CycleDNN consists of two auto-encoders: the first encodes the CETSA feature from cell line A into Z in the latent space [Formula: see text], then decodes Z into the CETSA feature in cell line B. Similarly, the second translates the CETSA feature from cell line B to cell line A through the latent space [Formula: see text]. In this way, the two auto-encoders form a cyclic feature translation between cell lines. The reconstruction loss, cycle-consistency loss, and latent vector regularization loss are used to guide the training of the model. The experimental results on a public CETSA dataset demonstrate the effectiveness of the proposed approach.


Subject(s)
Drug Discovery , Neural Networks, Computer , Cell Line , Drug Discovery/methods , Proteins , Research Design
4.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 2132-2135, 2022 07.
Article in English | MEDLINE | ID: mdl-36086010

ABSTRACT

A glioma is a malignant brain tumor that seriously affects cognitive function and lowers patients' quality of life. Segmentation of brain glioma is challenging because of inter-class ambiguities in tumor regions. Recently, deep learning approaches have achieved outstanding performance in the automatic segmentation of brain glioma. However, existing algorithms fail to exploit channel-wise feature interdependence to select semantic attributes for glioma segmentation. In this study, we implement a novel deep neural network that integrates residual channel attention modules to calibrate intermediate features for glioma segmentation. The proposed channel attention mechanism adaptively weights features channel-wise to optimize the latent representation of gliomas. We evaluate our method on the established BraTS2017 dataset, and the experimental results indicate the superiority of our method. Clinical relevance - Whereas existing glioma segmentation approaches do not leverage channel-wise feature dependence for feature selection, our method can generate segmentation masks with higher accuracy and provide more insight into graphic patterns in brain MRI images for further clinical reference.
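The channel-wise recalibration idea in this abstract can be sketched in squeeze-and-excitation style: pool each channel to a scalar, gate it through a sigmoid, and rescale the channel. The pooling and gating here are illustrative assumptions; the authors' residual attention module is a learned version of this pattern.

```python
import math

# Sketch of channel attention: each feature channel is globally pooled,
# passed through a sigmoid gate, and used to rescale its own activations.
def channel_attention(feature_maps):
    # feature_maps: list of channels, each a flat list of activations
    squeezed = [sum(ch) / len(ch) for ch in feature_maps]      # global average pool
    weights = [1.0 / (1.0 + math.exp(-s)) for s in squeezed]   # sigmoid gate per channel
    return [[w * v for v in ch] for w, ch in zip(weights, feature_maps)]
```

Channels with strong average activation keep most of their magnitude, while weak channels are suppressed, which is the "adaptive channel-wise weighting" the abstract describes.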


Subject(s)
Brain Neoplasms , Glioma , Brain , Brain Neoplasms/diagnostic imaging , Disease Progression , Glioma/diagnostic imaging , Humans , Magnetic Resonance Imaging/methods , Neural Networks, Computer
5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 467-470, 2022 07.
Article in English | MEDLINE | ID: mdl-36086340

ABSTRACT

Intracranial arteries are critical blood vessels that supply the brain with oxygenated blood. Intracranial artery labels provide valuable guidance and navigation for numerous clinical applications and disease diagnoses. Various machine learning algorithms have been applied to automate the anatomical labeling of cerebral arteries. However, the task remains challenging because of the high complexity and variation of intracranial arteries. This study investigates a novel graph convolutional neural network with deep feature fusion for cerebral artery labeling. We introduce stacked graph convolutions in an encoder-core-decoder architecture, extracting high-level representations from graph nodes and their neighbors. Furthermore, we efficiently aggregate intermediate features from different hierarchies to enhance the proposed model's representation capability and labeling performance. We perform extensive experiments on public datasets, and the results show that our approach outperforms baselines by a clear margin. Clinical relevance - The graph convolutions and feature fusion in our approach effectively extract graph information, which provides more accurate intracranial artery label predictions than existing methods and better facilitates medical research and disease diagnosis.
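The "graph nodes and their neighbors" aggregation in this abstract reduces, in its simplest form, to each node averaging its own feature with its neighbors' features per convolution step. The scalar features and adjacency encoding below are illustrative assumptions; the paper's model stacks learned versions of this operation.

```python
# One simplified graph-convolution step over an artery graph:
# each node's new feature is the mean of its own feature and its neighbors'.
def graph_conv(features, adjacency):
    # features: per-node scalar features; adjacency: node -> neighbor indices
    out = []
    for i, f in enumerate(features):
        vals = [f] + [features[j] for j in adjacency[i]]
        out.append(sum(vals) / len(vals))
    return out
```

Stacking several such steps lets information from multi-hop neighborhoods reach each node, and fusing the intermediate outputs of different steps is the "deep feature fusion" the abstract refers to.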


Subject(s)
Algorithms , Neural Networks, Computer , Arteries , Brain , Machine Learning
6.
IEEE Trans Cybern ; 52(12): 13458-13471, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34919527

ABSTRACT

Traffic prediction based on massive speed data collected from traffic sensors plays an important role in traffic management. However, it is still challenging to obtain satisfactory performance due to the complex and dynamic spatial-temporal correlations among the data. Recently, many research works have demonstrated the effectiveness of graph neural networks (GNNs) for spatial-temporal modeling. However, such models are restricted by the conditional distribution during training, and may not perform well when the target is outside the primary region of interest in the distribution. In this article, we address this problem with a stagewise learning mechanism, in which we redefine speed prediction as conditional distribution learning followed by speed regression. We first perform conditional distribution learning for each observed speed class, and then obtain speed predictions by optimizing regression learning based on the learned conditional distribution. To effectively learn the conditional distribution, we introduce a mean-residue loss consisting of two parts: 1) a mean loss, which penalizes the difference between the mean of the estimated conditional distribution and the ground truth, and 2) a residue loss, which penalizes residue errors of the long tails in the distribution. To optimize the subsequent regression based on distribution information, we add the mean absolute error (MAE) as another part of the loss function. We also incorporate a GNN-based architecture with our proposed learning mechanism. The mean-residue loss is employed to supervise the hidden speed representation in the network at each time interval, followed by a shared layer that recalibrates the hidden temporal dependencies in the conditional distribution. The experimental results on three public traffic datasets demonstrate that the proposed method outperforms state-of-the-art methods.
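The two parts of the mean-residue loss can be sketched on a discretized speed distribution: a squared error between the distribution's mean and the ground truth, plus a penalty on probability mass stranded in the long tails. The tail threshold, the tolerated tail mass, and all names below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a mean-residue loss over a discretized speed distribution.
# probs: predicted probability per speed bin; bin_centers: bin speeds (km/h).
def mean_residue_loss(probs, bin_centers, target, tail_band=20.0, tail_mass=0.1):
    mean = sum(p * c for p, c in zip(probs, bin_centers))
    mean_loss = (mean - target) ** 2            # mean term: distribution mean vs truth
    tail = sum(p for p, c in zip(probs, bin_centers)
               if abs(c - target) > tail_band)  # mass in the long tails
    residue_loss = max(0.0, tail - tail_mass)   # residue term: penalize excess tail mass
    return mean_loss + residue_loss
```

A distribution centered on the target with little tail mass incurs near-zero loss, while a distribution whose mean is right but whose mass sits far from the target is still penalized through the residue term.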


Subject(s)
Neural Networks, Computer
7.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 2115-2118, 2021 11.
Article in English | MEDLINE | ID: mdl-34891706

ABSTRACT

Diabetic retinopathy (DR) is one of the most common eye conditions among diabetic patients. However, vision loss occurs primarily in the late stages of DR, and the symptoms of visual impairment, ranging from mild to severe, can vary greatly, adding to the burden of diagnosis and treatment in clinical practice. Deep learning methods based on retinal images have achieved remarkable success in automatic DR grading, but most of them neglect that the presence of diabetes usually affects both eyes, and ophthalmologists usually compare both eyes concurrently for DR diagnosis, leaving correlations between the left and right eyes unexploited. In this study, simulating the diagnostic process, we propose a two-stream binocular network to capture the subtle correlations between the left and right eyes, in which paired images of the two eyes are fed into two identical subnetworks separately during training. We design a contrastive grading loss to learn binocular correlation for five-class DR detection, which maximizes inter-class dissimilarity while minimizing the intra-class difference. Experimental results on the EyePACS dataset show the superiority of the proposed binocular model, outperforming monocular methods by a large margin. Clinical relevance - Compared to conventional DR grading methods based on monocular images, our approach can provide more accurate predictions and extract graphical patterns from the retinal images of both eyes for clinical reference.
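The "maximize inter-class dissimilarity, minimize intra-class difference" objective in this abstract follows the classic contrastive-loss pattern, sketched below on the distance between the two eyes' embeddings. The margin value and function names are illustrative assumptions, not the authors' exact loss.

```python
# Contrastive-style grading loss on the embedding distance d between a
# left-eye and right-eye representation.
def contrastive_grading_loss(d, same_grade, margin=1.0):
    if same_grade:
        return d ** 2                    # pull same-grade pairs together
    return max(0.0, margin - d) ** 2     # push different grades past the margin
```

Same-grade pairs are penalized for any separation, while different-grade pairs are penalized only when they fall inside the margin, so already well-separated pairs contribute no gradient.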


Subject(s)
Diabetes Mellitus , Diabetic Retinopathy , Diabetic Retinopathy/diagnosis , Fundus Oculi , Humans
8.
Annu Int Conf IEEE Eng Med Biol Soc ; 2020: 1954-1957, 2020 07.
Article in English | MEDLINE | ID: mdl-33018385

ABSTRACT

Water quality has a direct impact on industry, agriculture, and public health. Algae species are common indicators of water quality, because algal communities are sensitive to changes in their habitats and thus provide valuable knowledge about variations in water quality. However, water quality analysis requires professional microscopic inspection for algal detection and classification, which is very time-consuming and tedious. In this paper, we propose a novel multi-target deep learning framework for algal detection and classification. Extensive experiments were carried out on a large-scale colored microscopic algal dataset. Experimental results demonstrate that the proposed method achieves promising performance on algal detection, class identification, and genus identification.


Subject(s)
Deep Learning , Plants , Agriculture , Microscopy , Water Quality