ABSTRACT
We consider the problem of active feature elicitation: given some examples with all the features (say, the full Electronic Health Record) and many examples with only some of the features (say, demographics), the goal is to identify the set of examples on which more information (say, lab tests) needs to be collected. The motivating observation is that some features may be more expensive, personal, or cumbersome to collect. We propose a classifier-independent, similarity-metric-independent, general active learning approach that identifies examples dissimilar to the ones with the full set of data and acquires the complete set of features for those examples. Motivated by four real clinical tasks, our extensive evaluation demonstrates the effectiveness of this approach. To demonstrate its generalization capabilities, we consider different divergence metrics and classifiers and present consistent results across the domains.
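The selection step described above can be sketched as follows. This is an illustrative analogue, not the paper's exact procedure: plain Euclidean distance stands in for the divergence metric (the approach is metric-independent), the data are synthetic, and all function names are hypothetical.

```python
import numpy as np

def elicitation_scores(full_X, partial_X):
    # Score each partially-observed example by its average distance to the
    # fully-observed pool, computed over the shared (cheap) features only.
    # Euclidean distance is a stand-in; any divergence metric can be swapped in.
    diffs = partial_X[:, None, :] - full_X[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=2))
    return dists.mean(axis=1)

def select_for_elicitation(full_X, partial_X, budget):
    # Return indices of the `budget` most dissimilar partial examples,
    # i.e. those for which acquiring the expensive features is most informative.
    scores = elicitation_scores(full_X, partial_X)
    return np.argsort(scores)[::-1][:budget]

rng = np.random.default_rng(0)
full_X = rng.normal(0.0, 1.0, size=(50, 4))   # fully-observed examples (shared features)
partial_X = np.vstack([
    rng.normal(0.0, 1.0, size=(20, 4)),       # similar to the fully-observed pool
    rng.normal(5.0, 1.0, size=(5, 4)),        # far from the pool: should be selected
])
chosen = select_for_elicitation(full_X, partial_X, budget=5)
```

Under this setup the five shifted examples (indices 20-24) dominate the ranking, since their observed features diverge most from the fully-observed pool.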
ABSTRACT
Accurate depth assessment of burn wounds is critical for providing the right treatment and care. Currently, laser Doppler imaging provides better accuracy than standard clinical evaluation, but its clinical applicability is limited by factors such as scanning distance, time, and cost. Precise diagnosis of burns requires adequate structural and functional detail. In this work, we evaluated the combined potential of two non-invasive optical modalities, optical coherence tomography (OCT) and Raman spectroscopy (RS), to identify degrees of burn wounds: superficial partial-thickness (SPT), deep partial-thickness (DPT), and full-thickness (FT). OCT provides morphological information, whereas RS provides biochemical information. OCT images and Raman spectra were obtained from burns created on ex vivo porcine skin. Algorithms were developed to segment the skin region and extract textural features from the OCT images, and to derive spectral features from the Raman spectra. These computed features were fed into machine learning classifiers to categorize the burns. Histological results obtained from trichrome staining were used as ground truth. The combined RS-OCT model achieved an overall average accuracy of 85% and an ROC-AUC of 0.94 in distinguishing the burn wounds. This performance on ex vivo skin motivates assessing the feasibility of combined RS-OCT in in vivo models.
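The multimodal pipeline described above amounts to early feature fusion: per-sample OCT texture features and RS spectral features are concatenated and passed to a classifier. A minimal sketch, assuming synthetic stand-in features (the real features come from OCT segmentation/texture extraction and Raman spectral processing, which are not reproduced here) and a random forest as one possible classifier choice:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 90
labels = np.repeat([0, 1, 2], n // 3)   # 0=SPT, 1=DPT, 2=FT burn depths

# Synthetic stand-ins for the two modalities' per-sample feature vectors:
# OCT textural features and RS spectral features (dimensions are arbitrary).
oct_texture = rng.normal(labels[:, None] * 1.5, 1.0, size=(n, 8))
rs_spectra = rng.normal(labels[:, None] * 1.5, 1.0, size=(n, 12))

# Early fusion: concatenate both modalities into one feature vector per sample.
fused = np.hstack([oct_texture, rs_spectra])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, fused, labels, cv=5)  # 5-fold accuracy per fold
```

The fused representation lets the classifier weigh morphological (OCT) and biochemical (RS) evidence jointly, which is the intuition behind the combined RS-OCT result.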
Subjects
Burns/diagnostic imaging , Skin/diagnostic imaging , Spectrum Analysis, Raman/methods , Tomography, Optical Coherence/methods , Animals , Burns/classification , Burns/pathology , Skin/pathology , Swine
ABSTRACT
We develop a pipeline to mine complex drug interactions by combining different similarities and interaction types (molecular, structural, phenotypic, genomic, etc.). Our goal is to learn an optimal kernel from these heterogeneous similarities in a supervised manner. We formulate an extensible framework that can easily integrate new interaction types into a rich model. The core of our pipeline is a novel kernel-learning approach that tunes the weights of the heterogeneous similarities and fuses them into a Similarity-based Kernel for Identifying Drug-Drug interactions and Discovery, or SKID3. Experimental evaluation on the DrugBank database shows that SKID3 effectively combines similarities generated from chemical reaction pathways (which generally improve precision) and molecular and structural fingerprints (which generally improve recall) into a single kernel that gets the best of both worlds, and consequently achieves the best performance.
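The core idea, learning weights that fuse heterogeneous similarity matrices into one kernel, can be sketched as a convex combination of base kernels tuned by cross-validated accuracy. This is a simplified illustration, not SKID3 itself: the data are synthetic, the two "similarity sources" are placeholder RBF kernels, and a grid search over one weight replaces the paper's kernel-learning procedure.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 80
y = np.repeat([0, 1], n // 2)  # interacting vs. non-interacting drug pairs

def rbf_kernel_from(X, gamma=0.5):
    # Build a valid (PSD) similarity matrix from raw features.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Synthetic stand-ins for two heterogeneous similarity sources,
# e.g. structural fingerprints vs. chemical reaction pathways.
K_struct = rbf_kernel_from(rng.normal(y[:, None], 1.0, size=(n, 5)))
K_pathway = rbf_kernel_from(rng.normal(y[:, None], 2.0, size=(n, 5)))

# Supervised weight tuning: pick the convex combination that
# maximizes cross-validated accuracy of a kernel SVM.
best_w, best_acc = 0.0, -1.0
for w in np.linspace(0.0, 1.0, 11):
    K = w * K_struct + (1.0 - w) * K_pathway   # fused kernel
    acc = cross_val_score(SVC(kernel="precomputed"), K, y, cv=5).mean()
    if acc > best_acc:
        best_w, best_acc = w, acc
```

A convex combination of PSD matrices is itself PSD, so the fused matrix remains a valid kernel; richer schemes optimize one weight per similarity source rather than a single mixing parameter.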
RESUMO
We investigate the viability of statistical relational machine learning algorithms for identifying malignancy of renal masses using radiomics-based imaging features. Features characterizing the texture, signal intensity, and other relevant properties of the renal mass were extracted from multiphase contrast-enhanced computed tomography images. The recently developed formalism of relational functional gradient boosting (RFGB) was used to learn human-interpretable models for classification. Experimental results demonstrate that RFGB outperforms many standard machine learning approaches as well as the current diagnostic gold standard of qualitative visual assessment by radiologists.
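RFGB itself learns relational regression trees over first-order representations, which is beyond a short sketch; a propositional analogue of the same functional-gradient-boosting idea on flat radiomics feature vectors can be illustrated with a standard boosted-tree classifier. All data below are synthetic stand-ins for the extracted CT radiomics features.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 120
y = rng.integers(0, 2, size=n)                  # 0 = benign, 1 = malignant (synthetic)
# Stand-ins for radiomics features (texture, signal intensity, ...).
X = rng.normal(y[:, None] * 1.2, 1.0, size=(n, 10))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Functional gradient boosting in the propositional setting: each stage fits a
# shallow regression tree to the pointwise gradient of the loss, and the model
# is the sum of these stage-wise corrections.
clf = GradientBoostingClassifier(n_estimators=50, max_depth=2, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

RFGB replaces the shallow regression trees with relational regression trees whose nodes are first-order clauses, which is what makes the learned models human-interpretable in the relational setting.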