Results 1 - 10 of 10
1.
Sci Data ; 11(1): 663, 2024 Jun 22.
Article in English | MEDLINE | ID: mdl-38909050

ABSTRACT

The development of platforms for distributed analytics has been driven by a growing need to comply with various governance-related or legal constraints. Among these platforms, the so-called Personal Health Train (PHT) has emerged in recent years as one representative. However, in projects that require data from sites featuring different PHT infrastructures, institutions face challenges arising from the combination of multiple PHT ecosystems, including data governance, regulatory compliance, and the modification of existing workflows. In these scenarios, interoperability between the platforms is preferable. In this work, we introduce a conceptual framework for the technical interoperability of the PHT covering five essential requirements: data integration, unified station identifiers, mutual metadata, aligned security protocols, and business logic. We evaluated our concept in a feasibility study involving two distinct PHT infrastructures: PHT-meDIC and PADME. We analyzed data on leukodystrophy from patients in the University Hospitals of Tübingen and Leipzig, and patients with differential diagnoses at the University Hospital Aachen. The results of our study demonstrate the technical interoperability between these two PHT infrastructures, allowing researchers to perform analyses across the participating institutions. Our method is more space-efficient than the multi-homing strategy and shows only minimal time overhead.
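The two requirements of unified station identifiers and mutual metadata can be illustrated with a minimal sketch. All field names, identifier values, and the compatibility rule below are illustrative assumptions, not the paper's actual data model:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StationDescriptor:
    """Hypothetical shared station record covering the unified-identifier
    and mutual-metadata requirements; fields are illustrative."""
    station_id: str    # globally unique across PHT ecosystems
    ecosystem: str     # e.g. "PHT-meDIC" or "PADME"
    data_format: str   # agreed data-integration format
    auth_protocol: str # aligned security protocol

def can_interoperate(a: StationDescriptor, b: StationDescriptor) -> bool:
    """Stations from different ecosystems can join one analysis only if
    they agree on data format and security protocol."""
    return a.data_format == b.data_format and a.auth_protocol == b.auth_protocol

medic_station = StationDescriptor("st-001", "PHT-meDIC", "FHIR", "OIDC")
padme_station = StationDescriptor("st-002", "PADME", "FHIR", "OIDC")
print(can_interoperate(medic_station, padme_station))  # True under these assumptions
```

In this sketch the descriptor acts as the mutual metadata record that both ecosystems would exchange before admitting a station to a cross-infrastructure analysis.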


Subject(s)
Health Information Interoperability , Hereditary Central Nervous System Demyelinating Diseases , Humans , Data Analysis
2.
ALTEX ; 41(1): 50-56, 2024 01 09.
Article in English | MEDLINE | ID: mdl-37528748

ABSTRACT

Adverse outcome pathways (AOPs) provide evidence for demonstrating and assessing causality between measurable toxicological mechanisms and human or environmental adverse effects. AOPs have gained increasing attention over the past decade and are believed to provide the necessary stepping stone for more effective risk assessment of chemicals and materials and for moving beyond the need for animal testing. However, as with all types of data and knowledge today, AOPs need to be reusable by machines, i.e., machine-actionable, in order to reach their full impact potential. Machine-actionability is supported by the FAIR principles, which guide findability, accessibility, interoperability, and reusability of data and knowledge. Here, we describe why AOPs need to be FAIR and touch on aspects such as the improved visibility and the increased trust that FAIRification of AOPs provides.


New approach methodologies (NAMs) can detect biological phenomena before they add up to serious problems such as cancer, infertility, or death. NAMs detect key events (KEs) along well-proven and agreed adverse outcome pathways (AOPs). If a substance tests positive in a NAM for an upstream KE, this signals an early warning that actual adversity might follow. However, what if the knowledge about these AOPs is a well-kept secret? And what if decision-makers find AOPs too exotic to apply in risk assessment? This is where FAIR comes in. FAIR stands for making information findable, accessible, interoperable and reusable. It aims to increase the availability, usefulness, and trustworthiness of data. Here, we show that by interpreting the FAIR principles beyond a purely technical level, AOPs can ring in a new era of 3Rs applicability, by increasing their visibility and making their creation process more transparent and reproducible.
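A machine-actionable AOP is, at its simplest, an ordered chain of key events that software can traverse. The sketch below is an illustrative toy structure; the identifiers and event labels are invented, not taken from AOP-Wiki:

```python
# Minimal sketch: an AOP as an ordered chain of key events (KEs), so a
# positive NAM result for an upstream KE can be traced toward the
# adverse outcome. All IDs and labels are hypothetical.
AOP_EXAMPLE = {
    "id": "aop:demo-1",
    "key_events": [
        {"ke": "KE1", "label": "receptor binding"},
        {"ke": "KE2", "label": "cell proliferation"},
        {"ke": "KE3", "label": "tumour formation"},  # adverse outcome
    ],
}

def downstream_of(aop: dict, ke_id: str) -> list:
    """Key events that may follow a positive result for `ke_id`."""
    ids = [e["ke"] for e in aop["key_events"]]
    return ids[ids.index(ke_id) + 1:]

print(downstream_of(AOP_EXAMPLE, "KE1"))  # ['KE2', 'KE3']
```

Once AOPs are encoded this way (and published with FAIR metadata), an early-warning lookup like `downstream_of` becomes possible for any tool, which is the practical payoff the abstract argues for.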


Subject(s)
Adverse Outcome Pathways , Animals , Humans , Risk Assessment
3.
J Biomed Semantics ; 14(1): 21, 2023 Dec 11.
Article in English | MEDLINE | ID: mdl-38082345

ABSTRACT

BACKGROUND: The FAIR principles recommend the use of controlled vocabularies, such as ontologies, to define data and metadata concepts. Ontologies are currently modelled following different approaches, sometimes describing conflicting definitions of the same concepts, which can affect interoperability. To cope with this, prior literature suggests organising ontologies in levels, where domain-specific (low-level) ontologies are grounded in domain-independent high-level ontologies (i.e., foundational ontologies). In this level-based organisation, foundational ontologies work as translators of intended meaning, thus improving interoperability. Despite their considerable acceptance in biomedical research, there are very few studies testing foundational ontologies. This paper describes a systematic literature mapping that was conducted to understand how foundational ontologies are used in biomedical research and to find empirical evidence supporting their claimed (dis)advantages. RESULTS: From a set of 79 selected papers, we identified that foundational ontologies are used for several purposes: ontology construction, repair, mapping, and ontology-based data analysis. Foundational ontologies are claimed to improve interoperability, enhance reasoning, speed up ontology development and facilitate maintainability. The complexity of using foundational ontologies is the most commonly cited downside. Despite being used for several purposes, there were hardly any experiments (1 paper) testing the claims for or against the use of foundational ontologies. In the subset of 49 papers that describe the development of an ontology, we observed low adherence to formal methods for ontology construction (16 papers) and for ontology evaluation (4 papers). CONCLUSION: Our findings have two main implications. First, the lack of empirical evidence about the use of foundational ontologies indicates a need for evaluating the use of such artefacts in biomedical research.
Second, the low adherence to formal methods illustrates how the field could benefit from a more systematic approach when dealing with the development and evaluation of ontologies. The understanding of how foundational ontologies are used in the biomedical field can drive future research towards the improvement of ontologies and, consequently, data FAIRness. The adoption of formal methods can impact the quality and sustainability of ontologies, and reusing these methods from other fields is encouraged.
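The level-based organisation the abstract describes can be sketched as a small subclass hierarchy: domain classes from different ontologies are grounded in shared foundational categories, which then mediate alignment. The class names below loosely imitate top-level-ontology labels but the hierarchy itself is invented for illustration:

```python
# Sketch of level-based organisation: domain classes (from two
# hypothetical domain ontologies) are grounded in a foundational layer
# via subclass links. The hierarchy is illustrative only.
SUBCLASS_OF = {
    "Patient": "MaterialEntity",       # domain ontology A
    "TissueSample": "MaterialEntity",  # domain ontology B
    "Diagnosis": "InformationEntity",  # domain ontology B
    "MaterialEntity": "Entity",        # foundational layer
    "InformationEntity": "Entity",
}

def foundational_category(cls: str) -> str:
    """Follow subclass links upward to the foundational category, which
    acts as the 'translator of intended meaning' between domains."""
    while SUBCLASS_OF.get(cls) not in (None, "Entity"):
        cls = SUBCLASS_OF[cls]
    return cls

# Two classes from different domain ontologies are candidates for
# alignment when they share a foundational category.
print(foundational_category("Patient") == foundational_category("TissueSample"))  # True
print(foundational_category("Patient") == foundational_category("Diagnosis"))     # False
```

This is exactly the interoperability claim the mapped papers make; the abstract's point is that such claims have rarely been tested experimentally.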


Subject(s)
Biological Ontologies , Biomedical Research , Vocabulary, Controlled
4.
SN Comput Sci ; 4(1): 14, 2023.
Article in English | MEDLINE | ID: mdl-36274815

ABSTRACT

Scientific advances, especially in the healthcare domain, can be accelerated by making data available for analysis. However, in traditional data analysis systems, data need to be moved to a central processing unit that performs the analyses, which may be undesirable, e.g. due to privacy regulations when these data contain personal information. This paper discusses the Personal Health Train (PHT) approach, in which data processing is brought to the (personal health) data rather than the other way around, allowing access to (private) data to be controlled and ethical and legal concerns to be observed. It introduces the PHT architecture and discusses a data staging solution that allows processing to be delegated to components spawned in a private cloud environment when the (health) organisation hosting the data has limited resources to execute the required processing. We show the feasibility and suitability of the solution with a relatively simple, yet representative, case study of data analysis of Covid-19 infections, performed by components that are created on demand and run on the Amazon Web Services platform. We also show that the performance of our solution is acceptable and that it scales. Overall, this demonstrates that the PHT approach enables data analysis with controlled access, preserving privacy and complying with regulations such as the GDPR, while the solution is deployed in a private cloud environment.
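The core PHT idea, moving the analysis to the data so only aggregates leave each site, can be shown in a few lines. The station names, records, and the counting task below are made up for illustration; a real deployment would run the task in a container dispatched to each station:

```python
# Minimal sketch of the PHT pattern: the analysis "train" visits each
# station; raw records never leave, only aggregates do.
stations = {
    "hospital_a": [{"covid_positive": True}, {"covid_positive": False}],
    "hospital_b": [{"covid_positive": True}, {"covid_positive": True}],
}

def train_task(records: list) -> int:
    """Runs *inside* a station: sees raw data, returns only an aggregate."""
    return sum(r["covid_positive"] for r in records)

# The orchestrator receives per-station aggregates, never raw records.
per_station = {name: train_task(data) for name, data in stations.items()}
total = sum(per_station.values())
print(per_station)  # {'hospital_a': 1, 'hospital_b': 2}
print(total)        # 3
```

The data staging solution in the paper adds one twist to this loop: when a station lacks compute capacity, the `train_task` step is delegated to an on-demand component in a private cloud instead of running on the station's own hardware.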

5.
Front Big Data ; 5: 883341, 2022.
Article in English | MEDLINE | ID: mdl-35647536

ABSTRACT

Although all the technical components supporting fully orchestrated Digital Twins (DT) currently exist, what remains missing is a conceptual clarification and analysis of a more generalized concept of a DT that is made FAIR, that is, universally machine actionable. This methodological overview is a first step toward this clarification. We present a review of previously developed semantic artifacts and how they may be used to compose a higher-order data model referred to here as a FAIR Digital Twin (FDT). We propose an architectural design to compose, store and reuse FDTs supporting data intensive research, with emphasis on privacy by design and their use in GDPR compliant open science.

6.
Stud Health Technol Inform ; 279: 144-146, 2021 May 07.
Article in English | MEDLINE | ID: mdl-33965931

ABSTRACT

BACKGROUND: Integration of heterogeneous resources is key for Rare Disease (RD) research. Within the EJP RD, common Application Programming Interface (API) specifications are proposed for the discovery of resources and data records. This is not sufficient for automated processing between RD resources or for meeting the FAIR principles. OBJECTIVE: To design a solution that improves FAIR for machines for the EJP RD API specification. METHODS: A FAIR Data Point (FDP) is used to expose machine-actionable metadata of digital resources, and it is configured to store its content in a semantic database so as to be FAIR at the source. RESULTS: A solution was designed based on the grlc server as middleware to implement the EJP RD API specification on top of the FDP. CONCLUSION: grlc reduces the potential API implementation overhead faced by maintainers who adopt FAIR at the source.
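The middleware idea is that each API path resolves to a stored SPARQL query, which is then executed against the FDP's triple store. The sketch below shows only that path-to-query mapping; the path and the query are illustrative and do not reproduce the real EJP RD specification:

```python
# Sketch of grlc-style middleware: API paths map to SPARQL queries that
# would run against the FAIR Data Point's semantic database. The path
# and query here are hypothetical, not the actual EJP RD API.
QUERY_REPO = {
    "/catalogs": """
        SELECT ?catalog
        WHERE { ?catalog a <http://www.w3.org/ns/dcat#Catalog> }
    """,
}

def resolve(path: str) -> str:
    """Return the SPARQL query the middleware would execute for a path."""
    if path not in QUERY_REPO:
        raise ValueError(f"no query registered for {path}")
    return QUERY_REPO[path].strip()

print(resolve("/catalogs").startswith("SELECT"))  # True
```

The maintenance benefit follows from the design: adding an endpoint means adding a query to the repository, not writing new server code.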


Subject(s)
Rare Diseases , Software , Databases, Factual , Humans , Internet , Metadata , Semantics
8.
Sci Data ; 6(1): 174, 2019 09 20.
Article in English | MEDLINE | ID: mdl-31541130

ABSTRACT

Transparent evaluations of FAIRness are increasingly required by a wide range of stakeholders, from scientists to publishers, funding agencies and policy makers. We propose a scalable, automatable framework to evaluate digital resources that encompasses measurable indicators, open source tools, and participation guidelines, which come together to accommodate domain relevant community-defined FAIR assessments. The components of the framework are: (1) Maturity Indicators - community-authored specifications that delimit a specific automatically-measurable FAIR behavior; (2) Compliance Tests - small Web apps that test digital resources against individual Maturity Indicators; and (3) the Evaluator, a Web application that registers, assembles, and applies community-relevant sets of Compliance Tests against a digital resource, and provides a detailed report about what a machine "sees" when it visits that resource. We discuss the technical and social considerations of FAIR assessments, and how this translates to our community-driven infrastructure. We then illustrate how the output of the Evaluator tool can serve as a roadmap to assist data stewards to incrementally and realistically improve the FAIRness of their resources.

10.
F1000Res ; 6, 2017.
Article in English | MEDLINE | ID: mdl-29123641

ABSTRACT

The availability of high-throughput molecular profiling techniques has provided more accurate and informative data for regular clinical studies. Nevertheless, complex computational workflows are required to interpret these data. Over the past years, the data volume has been growing explosively, requiring robust human data management to organise and integrate the data efficiently. For this reason, we set up an ELIXIR implementation study, together with the Translational research IT (TraIT) programme, to design a data ecosystem that is able to link raw and interpreted data. In this project, the data from the TraIT Cell Line Use Case (TraIT-CLUC) are used as a test case for this system. Within this ecosystem, we use the European Genome-phenome Archive (EGA) to store raw molecular profiling data; tranSMART to collect interpreted molecular profiling data and clinical data for corresponding samples; and Galaxy to store, run and manage the computational workflows. We can integrate these data by linking their repositories systematically. To showcase our design, we have structured the TraIT-CLUC data, which contain a variety of molecular profiling data types, for storage in both tranSMART and EGA. The metadata provided allows referencing between tranSMART and EGA, fulfilling the cycle of data submission and discovery; we have also designed a data flow from EGA to Galaxy, enabling reanalysis of the raw data in Galaxy. In this way, users can select patient cohorts in tranSMART, trace them back to the raw data and perform (re)analysis in Galaxy. Our conclusion is that the majority of metadata does not necessarily need to be stored (redundantly) in both databases, but that instead FAIR persistent identifiers should be available for well-defined data ontology levels: study, data access committee, physical sample, data sample and raw data file. This approach will pave the way for the stable linkage and reuse of data.
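The linkage the authors conclude with, persistent identifiers at well-defined levels instead of redundant metadata, can be sketched as a simple identifier index. The identifier values below are invented and only mimic EGA accession formats; the resolution function is an illustrative stand-in for cross-system lookup:

```python
# Sketch of the proposed linkage: each system keeps persistent
# identifiers at the agreed levels (study, data access committee,
# physical sample, data sample, raw data file) rather than duplicating
# full metadata. All IDs below are hypothetical.
links = {
    "study": "EGAS00000000001",
    "dac": "EGAC00000000001",
    "physical_sample": "sample-042",   # known to tranSMART
    "data_sample": "EGAN00000000042",
    "raw_data_file": "EGAF00000000042",  # fetched into Galaxy
}

def trace_to_raw(cohort_sample_id: str, index: dict) -> str:
    """From a tranSMART cohort sample, resolve the raw-data file
    identifier to reanalyse in Galaxy, using only shared identifiers."""
    if index["physical_sample"] != cohort_sample_id:
        raise KeyError(f"unknown sample: {cohort_sample_id}")
    return index["raw_data_file"]

print(trace_to_raw("sample-042", links))  # EGAF00000000042
```

This mirrors the workflow in the abstract: select a cohort in tranSMART, trace each sample back to its raw file in EGA, and hand that file to Galaxy for reanalysis.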
