Results 1 - 20 of 28
1.
Article in English | MEDLINE | ID: mdl-38753475

ABSTRACT

In volume rendering, transfer functions are used to classify structures of interest and to assign optical properties such as color and opacity. They are commonly defined as 1D or 2D functions that map simple features to these optical properties. As the process of designing a transfer function is typically tedious and unintuitive, several approaches have been proposed for their interactive specification. In this paper, we present a novel method to define transfer functions for volume rendering by leveraging the feature extraction capabilities of self-supervised pre-trained vision transformers. To design a transfer function, users simply select the structures of interest in a slice viewer, and our method automatically selects similar structures based on the high-level features extracted by the neural network. Contrary to previous learning-based transfer function approaches, our method does not require training of models and allows for quick inference, enabling an interactive exploration of the volume data. Our approach reduces the number of necessary annotations by interactively informing the user about the current classification, so they can focus on the structures of interest that still require annotation. In practice, this allows users to design transfer functions within seconds instead of minutes. We compare our method to existing learning-based approaches in terms of annotation and compute time, as well as with respect to segmentation accuracy. Our accompanying video showcases the interactivity and effectiveness of our method.
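A minimal sketch of the core classification step, assuming per-voxel (or per-patch) feature vectors have already been extracted by a self-supervised pre-trained vision transformer; the function and array names below are hypothetical and not taken from the paper:

```python
import numpy as np

def classify_by_similarity(voxel_feats, annotated_feats, threshold=0.6):
    """Mark voxels whose pre-extracted ViT features are close to any
    user-annotated feature vector (cosine similarity above a threshold).

    voxel_feats:     (N, D) array, one feature vector per voxel/patch
    annotated_feats: (M, D) array, features of the user-selected structures
    """
    v = voxel_feats / np.linalg.norm(voxel_feats, axis=1, keepdims=True)
    a = annotated_feats / np.linalg.norm(annotated_feats, axis=1, keepdims=True)
    similarity = v @ a.T                   # (N, M) cosine similarities
    return similarity.max(axis=1) >= threshold

# The resulting mask can then drive the transfer function,
# e.g. opacity = np.where(mask, 0.8, 0.0)
```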

2.
Sci Rep ; 13(1): 20260, 2023 11 20.
Article in English | MEDLINE | ID: mdl-37985685

ABSTRACT

Deep learning in medical imaging has the potential to minimize the risk of diagnostic errors, reduce radiologist workload, and accelerate diagnosis. Training such deep learning models requires large and accurate datasets, with annotations for all training samples. However, in the medical imaging domain, annotated datasets for specific tasks are often small due to the high complexity of annotations, limited access, or the rarity of diseases. To address this challenge, deep learning models can be pre-trained on large image datasets without annotations using methods from the field of self-supervised learning. After pre-training, small annotated datasets are sufficient to fine-tune the models for a specific task. The most popular self-supervised pre-training approaches in medical imaging are based on contrastive learning. However, recent studies in natural image processing indicate a strong potential for masked autoencoder approaches. Our work compares state-of-the-art contrastive learning methods with the recently introduced masked autoencoder approach "SparK" for convolutional neural networks (CNNs) on medical images. To this end, we pre-train on a large unannotated CT image dataset and fine-tune on several CT classification tasks. Due to the challenge of obtaining sufficient annotated training data in medical imaging, it is of particular interest to evaluate how the self-supervised pre-training methods perform when fine-tuning on small datasets. By experimenting with gradually reducing the training dataset size for fine-tuning, we find that the reduction has different effects depending on the type of pre-training chosen. The SparK pre-training method is more robust to the training dataset size than the contrastive methods. Based on our results, we recommend SparK pre-training for medical imaging tasks with only small annotated datasets.
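A minimal sketch of the fine-tuning stage, assuming a CNN backbone that was pre-trained in a self-supervised manner; the checkpoint path and class count are placeholders, not artifacts of the study:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3          # e.g. a small CT classification task

# Backbone pre-trained with a self-supervised method (checkpoint path is hypothetical)
model = models.resnet50(weights=None)
state = torch.load("ssl_pretrained_backbone.pth", map_location="cpu")
model.load_state_dict(state, strict=False)   # ignore missing classifier weights

# Replace the classification head and fine-tune on the small labelled set
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def finetune_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```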


Subjects
Deep Learning; Humans; Diagnostic Imaging; Neural Networks, Computer; Image Processing, Computer-Assisted/methods; Radiography; Supervised Machine Learning
3.
J Med Imaging (Bellingham) ; 10(4): 044007, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37600751

ABSTRACT

Purpose: Semantic segmentation is one of the most significant tasks in medical image computing, where deep neural networks have shown great success. Unfortunately, supervised approaches are very data-intensive, and obtaining reliable annotations is time-consuming and expensive. Sparse labeling approaches, such as bounding boxes, have shown some success in reducing the annotation time. However, in 3D volume data, each slice must still be manually labeled. Approach: We evaluate approaches that reduce the annotation effort by reducing the number of slices that need to be labeled in a 3D volume. In a two-step process, a similarity metric is first used to select the slices that should be annotated by a trained radiologist. In the second step, a predictor is used to predict the segmentation mask for the remaining slices. We evaluate different combinations of selectors and predictors on medical CT and MRI volumes. This allows us to determine which combination works best and how far slice annotations can be reduced. Results: Our results show that, for instance for the Medical Segmentation Decathlon heart dataset, some selector and predictor combinations allow for a Dice score of 0.969 when annotating only 20% of the slices per volume. Experiments on other datasets show a similarly positive trend. Conclusions: We evaluate a method that supports experts during the labeling of 3D medical volumes. Our approach makes it possible to drastically reduce the number of slices that need to be manually labeled. We present a recommendation on which selector-predictor combination to use for different tasks and goals.
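To make the two-step idea concrete, here is a deliberately simple baseline sketch (uniform slice selection and a nearest-slice predictor); the paper evaluates more sophisticated similarity-based selectors and predictors, so all names and parameters below are illustrative:

```python
import numpy as np

def select_slices(n_slices, fraction=0.2):
    """Selector: uniformly pick a fraction of slice indices for expert annotation."""
    k = max(1, int(round(n_slices * fraction)))
    return np.unique(np.linspace(0, n_slices - 1, k).round().astype(int))

def predict_remaining(annotated_idx, annotated_masks, n_slices):
    """Predictor: copy the mask of the nearest annotated slice to every
    unannotated slice (a simple stand-in for a learned predictor)."""
    annotated_idx = np.asarray(annotated_idx)
    full = np.empty((n_slices,) + annotated_masks[0].shape, dtype=annotated_masks[0].dtype)
    for z in range(n_slices):
        nearest = int(np.argmin(np.abs(annotated_idx - z)))
        full[z] = annotated_masks[nearest]
    return full
```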

4.
Rofo ; 195(9): 797-803, 2023 09.
Article in English, German | MEDLINE | ID: mdl-37160147

ABSTRACT

BACKGROUND: Artificial intelligence is playing an increasingly important role in radiology. However, it is increasingly often no longer possible to reconstruct how decisions are made, especially in the case of new and powerful methods from the field of deep learning. The resulting models fulfill their function without the users being able to understand the internal processes and are used as so-called black boxes. Especially in sensitive areas such as medicine, the explainability of decisions is of paramount importance in order to verify their correctness and to be able to evaluate alternatives. For this reason, active research is being conducted to elucidate these black boxes. METHOD: This review paper presents different approaches to explainable artificial intelligence along with their advantages and disadvantages. Examples are used to illustrate the introduced methods. This study is intended to enable the reader to better assess the limitations of the corresponding explanations when encountering them in practice and to strengthen the integration of such solutions in new research projects. RESULTS AND CONCLUSION: Besides methods to analyze black-box models for explainability, interpretable models offer an interesting alternative. Here, explainability is part of the process, and the learned model knowledge can be verified with expert knowledge. KEY POINTS: · The use of artificial intelligence in radiology offers many possibilities to provide safer and more efficient medical care. This includes, but is not limited to, support during image acquisition and processing or for diagnosis. · Complex models can achieve high accuracy, but make it difficult to understand how the data is processed. · If explainability is already taken into account during the planning of the model, methods can be developed that are both powerful and interpretable. CITATION FORMAT: · Gallée L, Kniesel H, Ropinski T et al. Artificial intelligence in radiology - beyond the black box. Fortschr Röntgenstr 2023; 195: 797 - 803.
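As one concrete example of the post-hoc explanation methods such surveys cover, a gradient saliency map can be computed for any differentiable classifier; this sketch assumes a PyTorch model and is not taken from the paper itself:

```python
import torch

def gradient_saliency(model, image, target_class):
    """Simplest post-hoc explanation for a black-box classifier: the gradient of
    the target class score with respect to the input highlights influential pixels.
    image: tensor of shape (C, H, W); returns an (H, W) saliency map."""
    image = image.clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    return image.grad.abs().max(dim=0).values
```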


Subjects
Artificial Intelligence; Radiology; Radiography; Knowledge
5.
Article in English | MEDLINE | ID: mdl-37027532

ABSTRACT

Neural networks have shown great success in extracting geometric information from color images. In particular, monocular depth estimation networks are increasingly reliable in real-world scenes. In this work we investigate the applicability of such monocular depth estimation networks to semi-transparent volume rendered images. As depth is notoriously difficult to define in a volumetric scene without clearly defined surfaces, we consider different depth computations that have emerged in practice and compare state-of-the-art monocular depth estimation approaches for each of these interpretations, evaluating renderings with different degrees of opacity. Additionally, we investigate how these networks can be extended to further obtain color and opacity information, in order to create a layered representation of the scene based on a single color image. This layered representation consists of spatially separated semi-transparent intervals that composite to the original input rendering. In our experiments we show that existing approaches to monocular depth estimation can be adapted to perform well on semi-transparent volume renderings, which has several applications in the area of scientific visualization, such as re-composition with additional objects and labels or additional shading.
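One depth definition commonly used for semi-transparent renderings is the opacity-weighted (expected termination) depth along each viewing ray; a small sketch of that computation, assuming front-to-back compositing:

```python
import numpy as np

def opacity_weighted_depth(alphas, depths):
    """Expected termination depth along a ray under front-to-back compositing.

    alphas: per-sample opacities along the ray (front to back)
    depths: per-sample depths along the ray
    """
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    weights = alphas * transmittance          # contribution of each sample
    total = weights.sum()
    return (weights * depths).sum() / total if total > 0 else np.inf
```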

6.
IEEE Trans Vis Comput Graph ; 29(10): 4198-4214, 2023 Oct.
Article in English | MEDLINE | ID: mdl-35749328

ABSTRACT

Cryo-electron tomography (cryo-ET) is a new 3D imaging technique with unprecedented potential for resolving submicron structural details. Existing volume visualization methods, however, are not able to reveal details of interest due to the low signal-to-noise ratio. In order to design more powerful transfer functions, we propose leveraging soft segmentation as an explicit component of visualization for noisy volumes. Our technical realization is based on semi-supervised learning, where we combine the advantages of two segmentation algorithms. First, a weak segmentation algorithm provides good results for propagating sparse user-provided labels to other voxels in the same volume and is used to generate dense pseudo-labels. Second, a powerful deep-learning-based segmentation algorithm learns from these pseudo-labels to generalize the segmentation to other unseen volumes, a task at which the weak segmentation algorithm fails completely. The proposed volume visualization uses the deep-learning-based segmentation as a component for segmentation-aware transfer function design. Appropriate ramp parameters can be suggested automatically through frequency distribution analysis. Furthermore, our visualization uses gradient-free ambient occlusion shading to further suppress the visual presence of noise, and to give structural detail the desired prominence. The cryo-ET data studied in our technical experiments are based on the highest-quality tilt series of intact SARS-CoV-2 virions. Our technique demonstrates high impact for the target sciences, enabling visual data analysis of very noisy volumes that cannot be visualized with existing techniques.
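A sketch of how ramp parameters could be derived from a frequency-distribution analysis of the learned segmentation, as a simplified reading of the step described above (all names and percentile values are illustrative):

```python
import numpy as np

def suggest_ramp(intensities, seg_mask, pct=(5, 95)):
    """Suggest a linear opacity ramp from the intensity distribution of voxels
    the learned segmentation marks as foreground.
    Returns (ramp_start, ramp_end) intensity values."""
    vals = intensities[seg_mask > 0]
    lo, hi = np.percentile(vals, pct)
    return float(lo), float(hi)

def opacity_from_ramp(intensity, ramp_start, ramp_end):
    """Map an intensity to opacity with a simple linear ramp."""
    return np.clip((intensity - ramp_start) / (ramp_end - ramp_start), 0.0, 1.0)
```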

7.
IEEE Trans Vis Comput Graph ; 29(12): 5468-5482, 2023 Dec.
Article in English | MEDLINE | ID: mdl-36288226

ABSTRACT

Exploring high-dimensional data is a common task in many scientific disciplines. To address this task, two-dimensional embeddings, such as tSNE and UMAP, are widely used. While these determine the 2D position of data items, effectively encoding the first two dimensions, suitable visual encodings can be employed to communicate higher-dimensional features. To investigate such encodings, we have evaluated two commonly used glyph types, namely flower glyphs and star glyphs. To evaluate their capabilities for communicating higher-dimensional features in two-dimensional embeddings, we ran a large set of crowd-sourced user studies using real-world data obtained from data.gov. During these studies, participants completed a broad set of relevant tasks derived from related research. This article describes the evaluated glyph designs, details our tasks and the quantitative study setup, and then discusses the results. Finally, we present insights and provide guidance on the choice of glyph encodings when exploring high-dimensional data.

8.
JMIR Mhealth Uhealth ; 10(6): e32910, 2022 06 23.
Article in English | MEDLINE | ID: mdl-35737429

ABSTRACT

BACKGROUND: Smart sensors have been developed as diagnostic tools for rehabilitation to cover an increasing number of geriatric patients. They promise to enable an objective assessment of complex movement patterns. OBJECTIVE: This research aimed to identify and analyze the conflicting ethical values associated with smart sensors in geriatric rehabilitation and provide ethical guidance on the best use of smart sensors to all stakeholders, including technology developers, health professionals, patients, and health authorities. METHODS: On the basis of a systematic literature search of the scientific databases PubMed and ScienceDirect, we conducted a qualitative document analysis to identify evidence-based practical implications of ethical relevance. We included 33 articles in the analysis. The practical implications were extracted inductively. Finally, we carried out an ethical analysis based on the 4 principles of biomedical ethics: autonomy, beneficence, nonmaleficence, and justice. The results are reported in categories based on these 4 principles. RESULTS: We identified 8 conflicting aims for using smart sensors. Gains in autonomy come at the cost of patient privacy. Smart sensors at home increase the independence of patients but may reduce social interactions. Independent measurements performed by patients may result in lower diagnostic accuracy. Although smart sensors could provide cost-effective and high-quality diagnostics for most patients, minorities could end up with suboptimal treatment owing to their underrepresentation in training data and studies. This could lead to algorithmic biases that would not be recognized by medical professionals when treating patients. CONCLUSIONS: The application of smart sensors has the potential to improve the rehabilitation of geriatric patients in several ways. It is important that patients do not have to choose between autonomy and privacy and are well informed about the insights that can be gained from the data. Smart sensors should support and not replace interactions with medical professionals. Patients and medical professionals should be educated about the correct application and the limitations of smart sensors. Smart sensors should include an adequate representation of minorities in their training data and should be covered by health insurance to guarantee fair access.


Subjects
Confidentiality; Privacy; Aged; Ethical Analysis; Humans; Technology
9.
IEEE Trans Vis Comput Graph ; 27(6): 2980-2991, 2021 06.
Article in English | MEDLINE | ID: mdl-33556010

ABSTRACT

To convey neural network architectures in publications, appropriate visualizations are of great importance. While most current deep learning papers contain such visualizations, these are usually handcrafted just before publication, which results in a lack of a common visual grammar, significant time investment, errors, and ambiguities. Current automatic network visualization tools focus on debugging the network itself and are not ideal for generating publication visualizations. Therefore, we present an approach to automate this process by translating network architectures specified in Keras into visualizations that can directly be embedded into any publication. To do so, we propose a visual grammar for convolutional neural networks (CNNs), which has been derived from an analysis of such figures extracted from all ICCV and CVPR papers published between 2013 and 2019. The proposed grammar incorporates visual encoding, network layout, layer aggregation, and legend generation. We have further realized our approach in an online system available to the community, which we have evaluated through expert feedback and a quantitative study. It not only reduces the time needed to generate network visualizations for publications, but also enables a unified and unambiguous visualization design.
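A sketch of the kind of per-layer information such a generator would read from a Keras model before mapping it to a visual grammar; this assumes a built Sequential or functional model and is not the system's actual code:

```python
from tensorflow import keras

def extract_layer_specs(model: keras.Model):
    """Collect per-layer information (type, output shape, parameter count)
    that a figure generator can map to glyph size, color, and legend entries."""
    specs = []
    for layer in model.layers:
        specs.append({
            "name": layer.name,
            "type": type(layer).__name__,
            "output_shape": tuple(layer.output.shape),
            "params": layer.count_params(),
        })
    return specs
```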

10.
IEEE Trans Vis Comput Graph ; 27(10): 3913-3925, 2021 10.
Article in English | MEDLINE | ID: mdl-32406840

ABSTRACT

To enhance depth perception and thus data comprehension, additional depth cues are often used in 3D visualizations of complex vascular structures. There is a variety of different approaches described in the literature, ranging from chromadepth color coding over depth of field to glyph-based encodings. Unfortunately, the majority of existing approaches suffers from the same problem: as these cues are directly applied to the geometry's surface, the display of additional information on the vessel wall, such as other modalities or derived attributes, is impaired. To overcome this limitation, we propose Void Space Surfaces, which utilize the empty space between vessel branches to communicate depth and relative positioning. This allows us to enhance the depth perception of vascular structures without interfering with the spatial data and potentially superimposed parameter information. With this article, we introduce Void Space Surfaces, describe their technical realization, and show their application to various vessel trees. Moreover, we report the outcome of two user studies which we conducted in order to evaluate the perceptual impact of Void Space Surfaces compared to existing vessel visualization techniques, and discuss expert feedback.

11.
Cell Microbiol ; 23(2): e13280, 2021 02.
Article in English | MEDLINE | ID: mdl-33073426

ABSTRACT

Detailed analysis of secondary envelopment of the herpesvirus human cytomegalovirus (HCMV) by transmission electron microscopy (TEM) is crucial for understanding the formation of infectious virions. Here, we present a convolutional neural network (CNN) that automatically recognises cytoplasmic capsids and distinguishes between three HCMV capsid envelopment stages in TEM images. 315 TEM images containing 2,610 expert-labelled capsids of the three classes were available for CNN training. To overcome the limitation of small training datasets and thus poor CNN performance, we used a deep learning method, the generative adversarial network (GAN), to automatically augment our labelled training dataset with 500 synthetic images, increasing the number of labelled capsids to 9,192. The synthetic TEM images were added to the ground truth dataset to train the Faster R-CNN deep-learning-based object detector. Training with the 315 ground truth images yielded an average precision (AP) of 53.81% for detection, whereas the addition of the 500 synthetic training images increased the AP to 76.48%. This shows that generating and additionally using synthetic labelled images for detector training is an inexpensive way to improve detector performance. This work combines the gold standard of secondary envelopment research with state-of-the-art deep learning technology to speed up automatic image analysis even when large labelled training datasets are not available.
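A sketch of how such a combined real-plus-synthetic training set could be fed to a Faster R-CNN detector, assuming both datasets yield images and targets in the torchvision detection format; dataset names and hyperparameters are placeholders, not the study's actual setup:

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader
from torchvision.models.detection import fasterrcnn_resnet50_fpn

def train_with_synthetic(real_dataset, gan_dataset, epochs=10):
    """Train a Faster R-CNN detector on ground-truth plus GAN-generated images.
    Both datasets must yield (image, target) pairs in torchvision's detection format."""
    combined = ConcatDataset([real_dataset, gan_dataset])
    loader = DataLoader(combined, batch_size=2, shuffle=True,
                        collate_fn=lambda batch: tuple(zip(*batch)))

    model = fasterrcnn_resnet50_fpn(weights=None, num_classes=4)  # 3 stages + background
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

    model.train()
    for _ in range(epochs):
        for images, targets in loader:
            loss = sum(model(list(images), list(targets)).values())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model
```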


Subjects
Capsid/ultrastructure; Cytomegalovirus/ultrastructure; Deep Learning; Herpesviridae Infections/diagnostic imaging; Image Processing, Computer-Assisted/methods; Virion/ultrastructure; Algorithms; Cytomegalovirus/metabolism; Herpesviridae Infections/virology; Humans; Machine Learning; Microscopy, Electron, Transmission; Neural Networks, Computer; Virion/metabolism
12.
IEEE Trans Vis Comput Graph ; 27(2): 1268-1278, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33048686

ABSTRACT

We present a novel deep learning based technique for volumetric ambient occlusion in the context of direct volume rendering. Our proposed Deep Volumetric Ambient Occlusion (DVAO) approach can predict per-voxel ambient occlusion in volumetric data sets, while considering global information provided through the transfer function. The proposed neural network only needs to be executed upon change of this global information, and thus supports real-time volume interaction. Accordingly, we demonstrate DVAO's ability to predict volumetric ambient occlusion, such that it can be applied interactively within direct volume rendering. To achieve the best possible results, we propose and analyze a variety of transfer function representations and injection strategies for deep neural networks. Based on the obtained results we also give recommendations applicable in similar volume learning scenarios. Lastly, we show that DVAO generalizes to a variety of modalities, despite being trained on computed tomography data only.
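A toy illustration of one possible injection strategy, concatenating an encoded 1D transfer function to the volume encoder's latent code; this only sketches the general idea and is not the architecture proposed in the paper:

```python
import torch
import torch.nn as nn

class TFConditionedDecoder(nn.Module):
    """Encode the 1D transfer function and fuse it with the volume latent code
    before decoding per-voxel ambient occlusion (simplest 'concatenate at the
    bottleneck' injection; the paper compares several representations and
    injection points)."""
    def __init__(self, latent_dim=256, tf_bins=128, tf_channels=4):
        super().__init__()
        self.tf_encoder = nn.Sequential(
            nn.Linear(tf_bins * tf_channels, 128), nn.ReLU(), nn.Linear(128, 64))
        self.fuse = nn.Linear(latent_dim + 64, latent_dim)

    def forward(self, volume_latent, transfer_function):
        tf_code = self.tf_encoder(transfer_function.flatten(start_dim=1))
        return torch.relu(self.fuse(torch.cat([volume_latent, tf_code], dim=1)))
```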

13.
F1000Res ; 9: 295, 2020.
Article in English | MEDLINE | ID: mdl-33552475

ABSTRACT

Research software has become a central asset in academic research. It optimizes existing and enables new research methods, implements and embeds research knowledge, and constitutes an essential research product in itself. Research software must be sustainable in order to understand, replicate, reproduce, and build upon existing research or conduct new research effectively. In other words, software must be available, discoverable, usable, and adaptable to new needs, both now and in the future. Research software therefore requires an environment that supports sustainability. Hence, a change is needed in the way research software development and maintenance are currently motivated, incentivized, funded, structurally and infrastructurally supported, and legally treated. Failing to do so will threaten the quality and validity of research. In this paper, we identify challenges for research software sustainability in Germany and beyond, in terms of motivation, selection, research software engineering personnel, funding, infrastructure, and legal aspects. Besides researchers, we specifically address political and academic decision-makers to increase awareness of the importance and needs of sustainable research software practices. In particular, we recommend strategies and measures to create an environment for sustainable research software, with the ultimate goal to ensure that software-driven research is valid, reproducible and sustainable, and that software is recognized as a first class citizen in research. This paper is the outcome of two workshops run in Germany in 2019, at deRSE19 - the first International Conference of Research Software Engineers in Germany - and a dedicated DFG-supported follow-up workshop in Berlin.


Subjects
Knowledge; Researchers; Software; Forecasting; Germany; Humans
14.
IEEE Trans Vis Comput Graph ; 26(11): 3241-3254, 2020 Nov.
Article in English | MEDLINE | ID: mdl-31180858

ABSTRACT

The complexity of today's visualization applications demands specific visualization systems tailored for the development of these applications. Frequently, such systems utilize levels of abstraction to improve the application development process, for instance by providing a data flow network editor. Unfortunately, these abstractions result in several issues, which need to be circumvented through an abstraction-centered system design. Often, a high level of abstraction hides low-level details, making it difficult to directly access the underlying computing platform, even though such access is important for achieving optimal performance. Therefore, we propose a layer structure developed for modern and sustainable visualization systems that allows developers to interact with all contained abstraction levels. We refer to these interaction capabilities as usage abstraction levels, since we target application developers with various levels of experience. We formulate the requirements for such a system, derive the desired architecture, and present how the concepts have been realized, as an example, within the Inviwo visualization system. Furthermore, we address several specific challenges that arise during the realization of such a layered architecture, such as communication between different computing platforms, performance-centered encapsulation, as well as layer-independent development supported by cross-layer documentation and debugging capabilities.

15.
J Microsc ; 277(1): 12-22, 2020 01.
Article in English | MEDLINE | ID: mdl-31859366

ABSTRACT

Detecting crossovers in cryo-electron microscopy images of protein fibrils is an important step towards determining the morphological composition of a sample. Currently, the crossover locations are picked by hand, which introduces errors and is a time-consuming procedure. With the rise of deep learning in computer vision tasks, the automation of such problems has become more and more feasible. However, because of the insufficient quality of the raw data and missing labels, neural networks alone cannot be applied successfully to the given problem. Thus, we propose an approach combining conventional computer vision techniques and deep learning to automatically detect fibril crossovers in two-dimensional cryo-electron microscopy image data and apply it to murine amyloid protein A fibrils, where we first use direct image processing methods to simplify the image data such that a convolutional neural network can be applied to the remaining segmentation problem.

LAY DESCRIPTION: The ability of proteins to form fibrillary structures underlies important cellular functions but can also give rise to disease, such as in a group of disorders termed amyloid diseases. These diseases are characterised by the formation of abnormal protein filaments, so-called amyloid fibrils, that deposit inside the tissue. Many amyloid fibrils are helically twisted, which leads to periodic variations in the apparent width of the fibril when observing amyloid fibrils using microscopy techniques like cryogenic electron microscopy (cryo-EM). Due to the two-dimensional projection, parts of the fibril orthogonal to the projection plane appear narrower than parts parallel to the plane. The parts of small width are called crossovers. The distance between two adjacent crossovers is an important characteristic for the analysis of amyloid fibrils, because it is informative about the fibril morphology and because it can be determined from raw data by eye. A given protein can typically form different fibril morphologies. The morphology can vary depending on the chemical and physical conditions of fibril formation, but even when fibrils are formed under identical solution conditions, different morphologies may be present in a sample. As the crossovers make it possible to define fibril morphologies in a heterogeneous sample, detecting them is an important first step in the sample analysis. In the present paper, we introduce a method for the automated detection of fibril crossovers in cryo-EM image data. The data consist of greyscale images, each showing an unknown number of potentially overlapping fibrils. In a first step, techniques from image analysis and pattern detection are employed to detect single fibrils in the raw data. Then, a convolutional neural network is used to find the locations of crossovers on each single fibril. As these predictions may contain errors, further postprocessing steps assess the quality and may slightly alter or reject the predicted crossovers.
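Because crossovers correspond to local minima of the apparent fibril width, a classical width-profile analysis already yields candidate locations; the sketch below is such a stand-in for the CNN-based step described above, with illustrative parameter values:

```python
import numpy as np
from scipy.signal import find_peaks

def crossover_candidates(width_profile, min_distance=50, prominence=2.0):
    """Find crossover candidates as local minima of the apparent fibril width
    sampled along the traced centreline (widths in pixels).
    Returns the centreline indices of the detected minima."""
    minima, _ = find_peaks(-np.asarray(width_profile, dtype=float),
                           distance=min_distance, prominence=prominence)
    return minima
```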


Subjects
Amyloid/ultrastructure; Cryoelectron Microscopy/methods; Image Processing, Computer-Assisted/methods; Machine Learning; Animals; Mice; Neural Networks, Computer; Protein Conformation; Reproducibility of Results
16.
IEEE Trans Vis Comput Graph ; 25(8): 2514-2528, 2019 Aug.
Article in English | MEDLINE | ID: mdl-29994478

ABSTRACT

We discuss the concept of directness in the context of spatial interaction with visualization. In particular, we propose a model that allows practitioners to analyze and describe the spatial directness of interaction techniques, ultimately to be able to better understand interaction issues that may affect usability. To reach these goals, we distinguish between different types of directness. Each type of directness depends on a particular mapping between different spaces, for which we consider the data space, the visualization space, the output space, the user space, the manipulation space, and the interaction space. In addition to the introduction of the model itself, we also show how to apply it to several real-world interaction scenarios in visualization, and thus discuss the resulting types of spatial directness, without recommending either more direct or more indirect interaction techniques. In particular, we will demonstrate descriptive and evaluative usage of the proposed model, and also briefly discuss its generative usage.

17.
Article in English | MEDLINE | ID: mdl-30207955

ABSTRACT

The analysis of protein-ligand interactions is a time-intensive task. Researchers have to analyze multiple physico-chemical properties of the protein at once and combine them to derive conclusions about the protein-ligand interplay. Typically, several charts are inspected, and 3D animations can be played side-by-side to obtain a deeper understanding of the data. With the advances in simulation techniques, larger and larger datasets are available, with up to hundreds of thousands of steps. Unfortunately, such large trajectories are very difficult to investigate with traditional approaches. Therefore, the need for special tools that facilitate inspection of these large trajectories becomes substantial. In this paper, we present a novel system for visual exploration of very large trajectories in an interactive and user-friendly way. Several visualization motifs are automatically derived from the data to give the user information about the interactions between protein and ligand. Our system offers specialized widgets to ease and accelerate data inspection and navigation to interesting parts of the simulation. The system is also suitable for simulations where multiple ligands are involved. We have tested the usefulness of our tool on a set of datasets obtained from protein engineers, and we describe the expert feedback.

18.
IEEE Trans Vis Comput Graph ; 24(1): 873-882, 2018 01.
Article in English | MEDLINE | ID: mdl-28866536

ABSTRACT

High-resolution manometry is an imaging modality which enables the categorization of esophageal motility disorders. Spatio-temporal pressure data along the esophagus is acquired using a tubular device and multiple test swallows are performed by the patient. Current approaches visualize these swallows as individual instances, despite the fact that aggregated metrics are relevant in the diagnostic process. Based on the current Chicago Classification, which serves as the gold standard in this area, we introduce a visualization supporting an efficient and correct diagnosis. To reach this goal, we propose a novel decision graph representing the Chicago Classification with workflow optimization in mind. Based on this graph, we are further able to prioritize the different metrics used during diagnosis and can exploit this prioritization in the actual data visualization. Thus, different disorders and their related parameters are directly represented and intuitively influence the appearance of our visualization. Within this paper, we introduce our novel visualization, justify the design decisions, and provide the results of a user study we performed with medical students as well as a domain expert. On top of the presented visualization, we further discuss how to derive a visual signature for individual patients that allows us for the first time to perform an intuitive comparison between subjects, in the form of small multiples.


Subjects
Computer Graphics; Image Interpretation, Computer-Assisted/methods; Manometry/methods; Adult; Data Visualization; Esophageal Motility Disorders/diagnosis; Esophagus/physiology; Female; Humans; Male; Young Adult
19.
IEEE Trans Vis Comput Graph ; 23(1): 731-740, 2017 01.
Article in English | MEDLINE | ID: mdl-27875187

ABSTRACT

Molecular simulations are used in many areas of biotechnology, such as drug design and enzyme engineering. Despite the development of automatic computational protocols, the analysis of molecular interactions is still a major aspect where human comprehension and intuition are key to accelerate, analyze, and propose modifications to the molecule of interest. Most visualization algorithms help the users by providing an accurate depiction of the spatial arrangement of the atoms involved in inter-molecular contacts. There are few tools that provide visual information on the forces governing molecular docking. However, these tools are commonly restricted to close interactions between atoms, do not consider whole simulation paths or long-range distances and, importantly, do not provide visual cues for a quick and intuitive comprehension of the energy functions (modeling intermolecular interactions) involved. In this paper, we propose visualizations designed to enable the characterization of interaction forces by taking into account several relevant variables such as the molecule-ligand distance and the energy function, which is essential to understand binding affinities. We put emphasis on mapping molecular docking paths obtained from Molecular Dynamics or Monte Carlo simulations, and provide time-dependent visualizations for different energy components and particle resolutions: atoms, groups, or residues. The presented visualizations have the potential to support domain experts in a more efficient drug or enzyme design process.
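As an example of the kind of distance-dependent energy term such visualizations plot, the 12-6 Lennard-Jones potential is a common non-bonded component; the parameter values below are purely illustrative:

```python
import numpy as np

def lennard_jones(r, epsilon=0.2, sigma=3.5):
    """Classic 12-6 Lennard-Jones potential as a function of distance r,
    with illustrative parameters (epsilon in kcal/mol, sigma in Angstrom)."""
    sr6 = (sigma / np.asarray(r, dtype=float)) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# Example: energy curve along a docking path sampled at increasing distances
distances = np.linspace(3.0, 10.0, 50)
energies = lennard_jones(distances)   # one such curve per energy component could be shown
```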

20.
IEEE Trans Vis Comput Graph ; 22(1): 718-27, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26390480

ABSTRACT

Today, molecular simulations produce complex data sets capturing the interactions of molecules in detail. Due to the complexity of this time-varying data, advanced visualization techniques are required to support its visual analysis. Current molecular visualization techniques utilize ambient occlusion as a global illumination approximation to improve spatial comprehension. Besides these shadow-like effects, interreflections are also known to improve the spatial comprehension of complex geometric structures. Unfortunately, the inherent computational complexity of interreflections would prohibit interactive exploration, which is mandatory in many scenarios dealing with static and time-varying data. In this paper, we introduce a novel analytic approach for capturing interreflections of molecular structures in real time. By exploiting knowledge of the underlying space-filling representations, we are able to reduce the required parameters and can thus apply symbolic regression to obtain an analytic expression for interreflections. We show how to obtain the data required for the symbolic regression analysis, and how to exploit our analytic solution to enhance interactive molecular visualizations.
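To illustrate the flavor of the regression step, the sketch below fits one hand-picked candidate expression to sampled interreflection data with least squares; true symbolic regression searches over many expression forms automatically, and the data here is a synthetic placeholder:

```python
import numpy as np
from scipy.optimize import curve_fit

def candidate_expression(x, a, b, c):
    """A hand-picked candidate form for the interreflection contribution as a
    function of a reduced geometric parameter x (e.g. local sphere density)."""
    return a * x / (1.0 + b * x) + c

# x/y samples would come from brute-force interreflection measurements;
# here they are synthetic placeholder data for demonstration only.
x_samples = np.linspace(0.0, 1.0, 200)
y_samples = 0.8 * x_samples / (1.0 + 2.0 * x_samples) + 0.05
params, _ = curve_fit(candidate_expression, x_samples, y_samples)
```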
