Results 1 - 9 of 9
1.
Article in English | MEDLINE | ID: mdl-33587705

ABSTRACT

Biomedical interaction networks hold great potential for predicting biologically meaningful interactions, identifying network biomarkers of disease, and discovering putative drug targets. Recently, graph neural networks have been proposed to learn effective representations of biomedical entities and have achieved state-of-the-art results in biomedical interaction prediction. However, these methods consider only information from immediate neighbors and cannot learn a general mixing of features from neighbors at various distances. In this paper, we present a higher-order graph convolutional network (HOGCN) that aggregates information from the higher-order neighborhood for biomedical interaction prediction. Specifically, HOGCN collects feature representations of neighbors at various distances and learns their linear mixing to obtain informative representations of biomedical entities. Experiments on four interaction networks, including protein-protein, drug-drug, drug-target, and gene-disease interactions, show that HOGCN achieves more accurate and better-calibrated predictions. HOGCN performs well on noisy, sparse interaction networks when feature representations of neighbors at various distances are considered. Moreover, a set of novel interaction predictions is validated by literature-based case studies.
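For illustration, the sketch below shows one way a layer of the kind described above could be written: k-hop neighbor features are obtained by repeatedly applying a normalized adjacency matrix, and a learned linear mixing combines them. The class, variable names, and hyperparameters are assumptions for this example, not the authors' implementation.

```python
# Minimal sketch of a higher-order graph convolution in the spirit of HOGCN:
# linear mixing of 0..K-hop neighbor features via powers of a normalized
# adjacency matrix. Names and defaults are hypothetical.
import torch
import torch.nn as nn


class HigherOrderGraphConv(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, max_hops: int = 3):
        super().__init__()
        self.max_hops = max_hops
        # One linear transform per neighborhood order (0-hop = the node itself).
        self.transforms = nn.ModuleList(
            [nn.Linear(in_dim, out_dim) for _ in range(max_hops + 1)]
        )
        # Learnable mixing weights over the different orders.
        self.mixing = nn.Parameter(torch.ones(max_hops + 1) / (max_hops + 1))

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        # x: [num_nodes, in_dim]; adj_norm: symmetrically normalized adjacency.
        hop_feats, h = [], x
        for k in range(self.max_hops + 1):
            hop_feats.append(self.transforms[k](h))
            h = adj_norm @ h  # move one hop further out
        weights = torch.softmax(self.mixing, dim=0)
        # Linear mixing of features aggregated from 0..max_hops neighborhoods.
        return sum(w * f for w, f in zip(weights, hop_feats))


# Toy usage: 5 nodes, 8-dim features, identity adjacency as a placeholder.
x = torch.rand(5, 8)
adj = torch.eye(5)
layer = HigherOrderGraphConv(8, 16, max_hops=2)
print(layer(x, adj).shape)  # torch.Size([5, 16])
```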


Subject(s)
Neural Networks, Computer , Proteins , Case-Control Studies
2.
BMC Med Inform Decis Mak ; 20(1): 162, 2020 07 17.
Article in English | MEDLINE | ID: mdl-32680493

ABSTRACT

BACKGROUND: One of the most challenging tasks in bladder cancer diagnosis is to histologically differentiate two early stages, non-invasive Ta and superficially invasive T1, the latter of which is associated with a significantly higher risk of disease progression. Indeed, in a considerable number of cases, Ta and T1 tumors look very similar under the microscope, making the distinction difficult even for experienced pathologists. Thus, there is an urgent need for a machine learning (ML)-based system to distinguish between the two stages of bladder cancer. METHODS: A total of 1177 images of bladder tumor tissue stained with hematoxylin and eosin were collected by pathologists at the University of Rochester Medical Center, including 460 non-invasive (stage Ta) and 717 invasive (stage T1) tumors. Automated pipelines were developed to extract features for three invasive patterns characteristic of T1-stage bladder cancer (i.e., desmoplastic reaction, retraction artifact, and abundant pinker cytoplasm), using the image processing software ImageJ and CellProfiler. Features extracted from the images were analyzed by a suite of machine learning approaches. RESULTS: We extracted nearly 700 features from the Ta and T1 tumor images. Unsupervised clustering analysis failed to distinguish hematoxylin and eosin images of Ta vs. T1 tumors. With a reduced set of features, we distinguished the 1177 Ta and T1 images with an accuracy of 91-96% using six supervised learning methods. By contrast, convolutional neural network (CNN) models that automatically extract features from images produced an accuracy of 84%, indicating that feature extraction driven by domain knowledge outperforms CNN-based automatic feature extraction. Further analysis revealed that desmoplastic reaction was more important than the other two patterns, and that the number and size of tumor cell nuclei were the most predictive features. CONCLUSIONS: We provide an ML-empowered, feature-centered, and interpretable diagnostic system to facilitate accurate staging of Ta and T1 disease, which could potentially be applied to other types of cancer.
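As a rough illustration of the supervised step described above, the sketch below assumes the extracted per-image features are available as a numeric matrix with a Ta/T1 label per image. Synthetic data stands in for the study's roughly 700 features, and the feature selector and classifiers are placeholders, not the study's exact pipeline.

```python
# Hedged sketch: feature selection plus cross-validated supervised classifiers
# on per-image features (synthetic stand-in data, hypothetical choices).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1177, 700))          # placeholder for extracted image features
y = rng.choice(["Ta", "T1"], size=1177)   # placeholder for pathologist labels

# Reduce the feature set, then compare supervised learners by cross-validated accuracy.
for name, clf in [("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
                  ("SVM (RBF)", SVC(kernel="rbf", C=1.0))]:
    model = make_pipeline(StandardScaler(),
                          SelectKBest(f_classif, k=50),
                          clf)
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean cross-validated accuracy = {scores.mean():.3f}")
```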


Subject(s)
Urinary Bladder Neoplasms , Humans , Image Processing, Computer-Assisted , Machine Learning , Neoplasm Staging , Neural Networks, Computer , Urinary Bladder Neoplasms/pathology
3.
BMC Syst Biol ; 13(Suppl 2): 38, 2019 04 05.
Article in English | MEDLINE | ID: mdl-30953525

ABSTRACT

BACKGROUND: The topological landscape of gene interaction networks provides a rich source of information for inferring functional patterns of genes or proteins. However, it remains a challenging task to aggregate heterogeneous biological information, such as gene expression and gene interactions, to achieve more accurate inference for the prediction and discovery of new gene interactions. In particular, how to generate a unified vector representation that integrates diverse input data is a key challenge addressed here. RESULTS: We propose GNE, a scalable and robust deep learning framework that learns embedded representations unifying known gene interactions and gene expression for gene interaction prediction. These low-dimensional embeddings yield deeper insights into the structure of rapidly accumulating and diverse gene interaction networks and greatly simplify downstream modeling. We compare the predictive power of our deep embeddings to strong baselines. The results suggest that our deep embeddings achieve significantly more accurate predictions. Moreover, a set of novel gene interaction predictions is validated by up-to-date literature-based database entries. CONCLUSION: The proposed model demonstrates the importance of integrating heterogeneous information about genes for gene network inference. GNE is freely available under the GNU General Public License and can be downloaded from GitHub ( https://github.com/kckishan/GNE ).
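To make the idea of a unified representation concrete, here is a hedged sketch of one way to fuse a topology-derived embedding with expression attributes and score a candidate interaction. It is not the GNE repository's API; the class, dimensions, and scoring function are assumptions for illustration.

```python
# Illustrative two-branch gene embedder: a structure (topology) embedding and
# an expression-attribute projection are fused into one vector per gene, and
# a candidate interaction is scored by a sigmoid of the dot product.
import torch
import torch.nn as nn


class GeneEmbedder(nn.Module):
    def __init__(self, num_genes: int, expr_dim: int, emb_dim: int = 128):
        super().__init__()
        self.structure = nn.Embedding(num_genes, emb_dim)   # topology branch
        self.expression = nn.Linear(expr_dim, emb_dim)      # attribute branch
        self.fuse = nn.Linear(2 * emb_dim, emb_dim)         # unified embedding

    def forward(self, gene_ids, expr):
        z = torch.cat([self.structure(gene_ids), self.expression(expr)], dim=-1)
        return self.fuse(z)

    def interaction_score(self, ids_a, expr_a, ids_b, expr_b):
        # Probability-like score for a candidate gene-gene interaction.
        za, zb = self.forward(ids_a, expr_a), self.forward(ids_b, expr_b)
        return torch.sigmoid((za * zb).sum(dim=-1))


# Toy usage with hypothetical sizes.
model = GeneEmbedder(num_genes=5000, expr_dim=32)
ids, expr = torch.tensor([10, 42]), torch.rand(2, 32)
print(model.interaction_score(ids[:1], expr[:1], ids[1:], expr[1:]))
```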


Subject(s)
Computational Biology/methods , Deep Learning , Gene Regulatory Networks , Models, Genetic
4.
JMIR Med Educ ; 2(1): e7, 2016 Jun 01.
Article in English | MEDLINE | ID: mdl-27731861

ABSTRACT

BACKGROUND: Lack of access to health and medical education resources for doctors in the developing world is a serious global health problem. Rwanda, with a population of 11 million, has only one medical school, and hence a shortage of well-trained medical staff. The growth of interactive health technologies has played a role in the improvement of health care in developed countries and has offered alternative ways to provide continuing medical education while improving patient care. However, low- and middle-income countries (LMICs) like Rwanda have struggled to implement medical education technologies adapted to local settings in medical practice and continuing education. Developing a user-centered mobile computing approach for medical and health education programs has the potential to bring continuing medical education to doctors in rural and urban areas of Rwanda and to influence patient care outcomes. OBJECTIVE: The aim of this study is to determine user requirements, currently available resources, and perspectives for potential medical education technologies in Rwanda. METHODS: Baseline information and needs assessment data were collected in all 44 district hospitals (DHs) throughout Rwanda. The research team collected qualitative data through interviews with 16 general practitioners working across Rwanda and 97 self-administered online questionnaires covering rural areas. Data were collected and analyzed to address two key questions: (1) what tools are currently available for the use of mobile-based technology for medical education in Rwanda, and (2) what are users' requirements for the creation of a mobile medical education technology in Rwanda? RESULTS: General practitioners from different hospitals highlighted that none of the available technologies make use of local resources such as the Ministry of Health (MOH) clinical treatment guidelines. Considering the number of patients that doctors see in Rwanda, an average of 32 per day, there is a need for a locally adapted mobile education app that draws on Rwandan medical education resources. Based on our results, we propose a mobile medical education app that could provide many benefits, such as rapid decision making with lower error rates, improved quality and accessibility of data management, and greater practice efficiency and knowledge. In areas where Internet access is limited, the proposed mobile medical education app would need to run on a mobile device without an Internet connection. CONCLUSIONS: A user-centered design approach was adopted, starting with a needs assessment with representative end users, which provided recommendations for the development of a mobile medical education app specific to Rwanda. Specific app features were identified through the needs assessment, and ongoing incorporation of user-centered design methods will further inform software development and improve usability. The results of the user-centered design reported here can inform other medical education technology developments in LMICs to ensure that the technologies developed are usable by all stakeholders.

5.
Comput Vis Image Underst ; 151: 138-152, 2016 Oct.
Article in English | MEDLINE | ID: mdl-36046501

ABSTRACT

Experts have a remarkable capability of locating, perceptually organizing, identifying, and categorizing objects in images specific to their domains of expertise. In this article, we present a hierarchical probabilistic framework to discover the stereotypical and idiosyncratic viewing behaviors exhibited within expertise-specific groups. Through these patterned eye movement behaviors, we elicit the domain-specific knowledge and perceptual skills of subjects whose eye movements are recorded during diagnostic reasoning on medical images. Analyzing experts' eye movement patterns provides insight into the cognitive strategies exploited to solve complex perceptual reasoning tasks. An experiment was conducted to collect both eye movement and verbal narrative data from three groups of subjects with different levels of medical training or none (eleven board-certified dermatologists, four dermatologists in training, and thirteen undergraduates) while they examined and described 50 photographic dermatological images. We use a hidden Markov model to describe each subject's eye movement sequence, combined with hierarchical stochastic processes to capture and differentiate the discovered eye movement patterns shared by multiple subjects within and among the three groups. Independent experts' annotations of diagnostic conceptual units of thought in the transcribed verbal narratives are time-aligned with the discovered eye movement patterns to help interpret the patterns' meanings. By mapping eye movement patterns to thought units, we uncover the relationships between visual and linguistic elements of the subjects' reasoning and perceptual processes, and show how these subjects varied their behaviors while parsing the images. We also show that the inferred eye movement patterns characterize groups with similar temporal and spatial properties, and we identify a subset of distinctive eye movement patterns that are commonly exhibited across multiple images. Based on the combinations of occurrences of these eye movement patterns, we are able to categorize the images from the perspective of experts' viewing strategies in a novel way. In each category, images share similar lesion distributions and configurations. Our results show that modeling with multimodal data, representative of physicians' diagnostic viewing behaviors and thought processes, is feasible and informative for gaining insight into physicians' cognitive strategies as well as medical image understanding.
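A minimal sketch of the per-subject modeling step, assuming a fixation sequence of (x, y, duration) triples: a Gaussian hidden Markov model is fit to the sequence, and its hidden-state labels give a coarse segmentation of viewing behavior. This uses hmmlearn as a stand-in and omits the paper's hierarchical grouping across subjects; all data and parameter choices here are hypothetical.

```python
# Fit an HMM to a (synthetic) fixation sequence and read off hidden states,
# which can then be compared across subjects and aligned with thought units.
import numpy as np
from hmmlearn import hmm

# Hypothetical fixation data: one row per fixation (x, y, duration),
# concatenated over three trials of 100 fixations each.
fixations = np.random.rand(300, 3)
trial_lengths = [100, 100, 100]

model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=200)
model.fit(fixations, lengths=trial_lengths)

# Hidden-state sequence: a coarse segmentation of the viewing behavior.
states = model.predict(fixations, lengths=trial_lengths)
print(states[:20])
```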

6.
Int J Data Sci Anal ; 2(3-4): 95-105, 2016 Dec.
Article in English | MEDLINE | ID: mdl-36908375

ABSTRACT

Image grouping in knowledge-rich domains is challenging, since domain knowledge and human expertise are key to transforming image pixels into meaningful content. Manually marking and annotating images is not only labor-intensive but also ineffective, and most traditional machine learning approaches cannot bridge this gap in the absence of experts' input. We therefore present an interactive machine learning paradigm that allows experts to become an integral part of the learning process. This paradigm is designed for automatically computing and quantifying interpretable groupings of dermatological images. In this way, the computational evolution of an image grouping model, its visualization, and expert interactions form a loop that improves image grouping. In our paradigm, dermatologists encode their domain knowledge about the medical images by grouping a small subset of images via a carefully designed interface. Our learning algorithm automatically incorporates these manually specified connections as constraints for reorganizing the whole image dataset. Performance evaluation shows that this paradigm effectively improves image grouping based on expert knowledge.
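The sketch below illustrates, under simplifying assumptions, how expert-specified "these images belong together" pairs can be propagated to the whole dataset: the pairs seed groups, and every remaining image joins the nearest seed group in feature space. It is a simplification for illustration, not the paper's algorithm; the features and pairs are synthetic.

```python
# Seed groups from expert must-link pairs (union-find), then assign all
# images to the nearest seeded group by centroid distance.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)
features = rng.random((200, 64))            # hypothetical image feature vectors
must_link = [(0, 3), (3, 7), (12, 40)]      # expert-specified connections

# Union-find over the expert pairs to form seed groups.
parent = list(range(len(features)))

def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

for a, b in must_link:
    parent[find(a)] = find(b)

# Collect seed members per group and compute a centroid for each group.
seeds = defaultdict(list)
for i in {idx for pair in must_link for idx in pair}:
    seeds[find(i)].append(i)
centroids = np.stack([features[members].mean(axis=0) for members in seeds.values()])

# Propagate the experts' grouping: every image joins the nearest seeded group.
distances = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=-1)
assignments = distances.argmin(axis=1)
print(np.bincount(assignments))
```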

7.
Artif Intell Med ; 62(2): 79-90, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25174882

ABSTRACT

OBJECTIVES: Extracting the useful visual cues from medical images that allow accurate diagnoses requires physicians' domain knowledge, acquired through years of systematic study and clinical training. This is especially true in the dermatology domain, a medical specialty that requires physicians to have image inspection experience. Automating, or at least aiding, such efforts requires understanding physicians' reasoning processes and their use of domain knowledge. Mining physicians' references to medical concepts in narratives during image-based diagnosis of a disease is an interesting research topic that can help reveal experts' reasoning processes. It can also be a useful resource to assist with the design of information technologies for image use and for image case-based medical education systems. METHODS AND MATERIALS: We collected data for analyzing physicians' diagnostic reasoning processes by conducting an experiment that recorded their spoken descriptions during inspection of dermatology images. In this paper we focus on the benefit of physicians' spoken descriptions and provide a general workflow for mining medical domain knowledge based on linguistic data from these narratives. The challenge of a medical image case can influence the accuracy of the diagnosis as well as how physicians pursue the diagnostic process. Accordingly, we define two lexical metrics for physicians' narratives, the lexical consensus score and the top-N relatedness score, and evaluate their usefulness by assessing the diagnostic challenge levels of the corresponding medical images. We also report on clustering medical images based on anchor concepts obtained from physicians' medical term usage. These analyses are based on physicians' spoken narratives that have been preprocessed with the Unified Medical Language System to detect medical concepts. RESULTS: The image rankings based on the lexical consensus score and on the top-1 relatedness score correlate well with those based on challenge levels (Spearman correlation > 0.5, Kendall correlation > 0.4). Clustering results are substantially improved by our anchor-concept method (accuracy > 70%, mutual information > 80%). CONCLUSIONS: Physicians' spoken narratives are valuable for mining the domain knowledge that physicians use in medical image inspections. We also show that the semantic metrics introduced in the paper can be successfully applied to medical image understanding, and we discuss additional uses of these metrics.
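The abstract does not give the metric formulas, so the sketch below implements one plausible reading of a lexical consensus score, the mean pairwise Jaccard overlap of the medical-concept sets different physicians mention for the same image, and correlates the image scores with challenge ratings. The data, names, and the specific formula are assumptions for illustration, not the paper's definitions.

```python
# Consensus as mean pairwise Jaccard overlap of concept sets per image,
# correlated with (hypothetical) challenge ratings via Spearman/Kendall.
from itertools import combinations
from scipy.stats import spearmanr, kendalltau

# Hypothetical UMLS concept sets mentioned by three physicians per image.
image_concepts = {
    "img1": [{"plaque", "erythema"}, {"plaque", "scale"}, {"plaque", "erythema"}],
    "img2": [{"nodule"}, {"cyst", "nodule"}, {"papule"}],
    "img3": [{"macule", "pigment"}, {"macule"}, {"macule", "pigment"}],
}
challenge = {"img1": 1, "img2": 3, "img3": 2}   # hypothetical challenge levels

def consensus(concept_sets):
    pairs = list(combinations(concept_sets, 2))
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

images = list(image_concepts)
scores = [consensus(image_concepts[i]) for i in images]
levels = [challenge[i] for i in images]
# Higher consensus would be expected to track lower diagnostic challenge.
print(spearmanr(scores, levels), kendalltau(scores, levels))
```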


Subject(s)
Data Mining , Diagnostic Imaging , Linguistics , Humans
8.
AMIA Annu Symp Proc ; : 962, 2008 Nov 06.
Article in English | MEDLINE | ID: mdl-18999126

ABSTRACT

Clinical decision support systems (CDSS) assist physicians and other medical professionals in tasks such as differential diagnosis. End users may use different decision-making strategies depending on their medical training. The study of eye movements reveals information processing strategies that are executed at a level below consciousness. Eye tracking of student physician assistants and medical residents while they used a visual diagnostic CDSS in diagnostic tasks showed that the groups adopted distinct strategies, and the findings informed recommendations for effective user interface design.


Subject(s)
Decision Support Systems, Clinical , Decision Support Techniques , Eye Movements , Software Design , Software , Task Performance and Analysis , User-Computer Interface , Humans , New York
9.
Photodermatol Photoimmunol Photomed ; 19(6): 272-80, 2003 Dec.
Article in English | MEDLINE | ID: mdl-14617101

ABSTRACT

BACKGROUND: Thalidomide is an anti-inflammatory pharmacologic agent that has been used as a therapy for a number of dermatologic diseases. Its anti-inflammatory properties have been attributed to its ability to antagonize tumor necrosis factor-alpha (TNF-alpha) production by monocytes. However, its mechanism of action in the skin is not known. PURPOSE: To test our hypothesis that thalidomide may antagonize TNF-alpha production in the skin, we used a mouse model of acute ultraviolet-B (UVB) exposure, a known stimulus for inducing this cytokine. RESULTS: A single bolus dose of thalidomide (either 100 or 400 mg/kg) given immediately before UVB exposure (40-120 mJ/cm2) inhibited, in a dose-dependent manner, sunburn cell formation (i.e., keratinocyte (KC) apoptosis as defined by histologic appearance and confirmed by terminal transferase-mediated biotinylated dUTP nick end labeling (TUNEL) staining) in mouse skin biopsy specimens. However, this agent did not affect the formation of cyclobutane pyrimidine dimers, a measure of UVB-induced DNA damage and an early event associated with apoptosis. RNase protection assays confirmed that the high (400 mg/kg), but not the low (100 mg/kg), dose of thalidomide inhibited the UVB-induced increase in steady-state TNF-alpha mRNA. Additionally, our in vitro data using neonatal mouse KCs showed that thalidomide prevented UVB-induced cell death (JAM assay). The antiapoptotic effect of thalidomide could be reversed by adding exogenous recombinant mouse TNF-alpha, thereby reconstituting UVB-induced programmed cell death. The inhibition of sunburn cell formation by low-dose thalidomide in the absence of TNF-alpha inhibition suggests that other, unidentified mechanisms of apoptosis inhibition are active. CONCLUSIONS: These data suggest that the anti-inflammatory effects of thalidomide can modulate UVB injury and may, in part, explain its action in photosensitivity diseases such as cutaneous lupus erythematosus.


Subject(s)
Anti-Inflammatory Agents, Non-Steroidal/pharmacology , Erythema/immunology , Keratinocytes/drug effects , Thalidomide/pharmacology , Tumor Necrosis Factor-alpha/drug effects , Animals , Anti-Inflammatory Agents, Non-Steroidal/administration & dosage , Apoptosis/drug effects , Disease Models, Animal , Dose-Response Relationship, Drug , Erythema/pathology , Female , Injections, Intravenous , Mice , Mice, Inbred BALB C , RNA, Messenger , Thalidomide/administration & dosage , Tumor Necrosis Factor-alpha/antagonists & inhibitors , Tumor Necrosis Factor-alpha/genetics , Ultraviolet Rays