Results 1 - 20 of 22
1.
Nucleic Acids Res ; 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38967009

ABSTRACT

Knowledge about transcription factor binding and regulation, target genes, cis-regulatory modules and topologically associating domains is not only defined by functional associations like biological processes or diseases but also has a determinative genome location aspect. Here, we exploit these location and functional aspects together to develop new strategies to enable advanced data querying. Many databases have been developed to provide information about enhancers, but a schema that allows the standardized representation of data, securing interoperability between resources, has been lacking. In this work, we use knowledge graphs for the standardized representation of enhancers and topologically associating domains, together with data about their target genes, transcription factors, location on the human genome, and functional data about diseases and gene ontology annotations. We used this schema to integrate twenty-five enhancer datasets and two domain datasets, creating the most powerful integrative resource in this field to date. The knowledge graphs have been implemented using the Resource Description Framework and integrated within the open-access BioGateway knowledge network, generating a resource that contains an interoperable set of knowledge graphs (enhancers, TADs, genes, proteins, diseases, GO terms, and interactions between domains). We show how advanced queries, which combine functional and location restrictions, can be used to develop new hypotheses about functional aspects of gene expression regulation.
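The kind of query that combines location and functional restrictions can be sketched with a toy in-memory triple set. All identifiers, coordinates, and predicate names below are invented for illustration and are not taken from BioGateway, which is an RDF resource queried via SPARQL:

```python
# Toy triple store illustrating a query that combines genome location
# with functional (disease) annotation, in the spirit of the enhancer
# knowledge graphs. All identifiers and coordinates are made up.
TRIPLES = {
    ("enh1", "located_on", "chr1"),
    ("enh1", "start", 1000),
    ("enh1", "end", 2000),
    ("enh1", "targets", "geneA"),
    ("enh2", "located_on", "chr2"),
    ("enh2", "start", 500),
    ("enh2", "end", 900),
    ("enh2", "targets", "geneB"),
    ("geneA", "associated_with", "diseaseX"),
}

def objects(subject, predicate):
    """Return all objects stored for a (subject, predicate) pair."""
    return {o for s, p, o in TRIPLES if s == subject and p == predicate}

def enhancers_in_region(chrom, lo, hi, disease):
    """Enhancers on `chrom` overlapping [lo, hi] whose target gene is
    annotated with `disease` (a location + function combined query)."""
    hits = []
    for s, p, o in TRIPLES:
        if p == "located_on" and o == chrom:
            start = min(objects(s, "start"))
            end = max(objects(s, "end"))
            if start <= hi and end >= lo:
                for gene in objects(s, "targets"):
                    if disease in objects(gene, "associated_with"):
                        hits.append(s)
    return sorted(hits)

print(enhancers_in_region("chr1", 0, 5000, "diseaseX"))  # ['enh1']
```

In the actual resource the same restriction pattern would be expressed as a SPARQL query over the RDF graphs rather than Python filters.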

2.
J Biomed Inform ; 139: 104297, 2023 03.
Article in English | MEDLINE | ID: mdl-36736448

ABSTRACT

SNOMED CT postcoordination is an underused mechanism that can help to implement advanced systems for the automatic extraction and encoding of clinical information from text. It allows non-existing SNOMED CT concepts to be defined by their relationships with existing ones. Manually building postcoordinated expressions is a difficult task: it requires deep knowledge of the terminology and the support of specialized tools that barely exist. To support the building of postcoordinated expressions, we have implemented KGE4SCT, a method that suggests the corresponding SNOMED CT postcoordinated expression for a given clinical term. We leverage the SNOMED CT ontology and its graph-like structure and use knowledge graph embeddings (KGEs). The objective of such embeddings is to represent knowledge graph components (e.g., entities and relations) in a vector space in a way that captures the structure of the graph. We then use vector similarity and analogies to obtain the postcoordinated expression of a given clinical term. We obtained a semantic type accuracy of 98%, a relationship accuracy of 90%, and an analogy accuracy of 60%, with an overall postcoordination completeness of 52% for the Spanish SNOMED CT version. We have also applied the method to the English SNOMED CT version and outperformed state-of-the-art methods in both corpus generation for language model training (a 6% improvement in analogy accuracy) and automatic postcoordination of SNOMED CT expressions (a 17% increase in partial conversion rate).
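The vector-analogy step can be illustrated with a minimal sketch. The tiny three-dimensional vectors and concept names below are invented for illustration and are not real SNOMED CT embeddings:

```python
import math

# Minimal sketch of the vector-analogy step used for postcoordination
# suggestion: given embeddings, solve "a is to b as c is to ?" via the
# vector b - a + c. The 3-d vectors below are illustrative only.
EMB = {
    "fracture":          [1.0, 0.0, 0.0],
    "fracture_of_femur": [1.0, 1.0, 0.0],
    "burn":              [0.0, 0.0, 1.0],
    "burn_of_hand":      [0.1, 1.0, 1.0],
    "femur":             [0.0, 1.0, 0.1],
}

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

def analogy(a, b, c):
    """Return the entity whose vector is most similar to b - a + c."""
    target = [bb - aa + cc for aa, bb, cc in zip(EMB[a], EMB[b], EMB[c])]
    candidates = {k: v for k, v in EMB.items() if k not in (a, b, c)}
    return max(candidates, key=lambda k: cosine(candidates[k], target))

# "fracture is to fracture_of_femur as burn is to ...?"
print(analogy("fracture", "fracture_of_femur", "burn"))  # burn_of_hand
```

Real KGEs are trained on the full graph with hundreds of dimensions; the lookup and ranking step, however, has exactly this shape.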


Subject(s)
Semantics , Systematized Nomenclature of Medicine , Pattern Recognition, Automated , Language , Natural Language Processing
3.
J Biomed Semantics ; 13(1): 19, 2022 07 15.
Article in English | MEDLINE | ID: mdl-35841031

ABSTRACT

BACKGROUND: Ontology matching should contribute to the interoperability aspect of FAIR data (Findable, Accessible, Interoperable, and Reusable). Multiple data sources can use different ontologies for annotating their data, thus creating the need for dynamic ontology matching services. In this experimental study, we assessed the performance of ontology matching systems in the context of a real-life application from the rare disease domain. Additionally, we present a method for analyzing top-level classes to improve precision. RESULTS: We included three ontologies (NCIt, SNOMED CT, ORDO) and three matching systems (AgreementMakerLight 2.0, FCA-Map, LogMap 2.0). We evaluated the performance of the matching systems against reference alignments from BioPortal and the Unified Medical Language System Metathesaurus (UMLS). We then analyzed the top-level ancestors of matched classes to detect incorrect mappings without consulting a reference alignment; to detect such incorrect mappings, we manually matched semantically equivalent top-level classes of ontology pairs. AgreementMakerLight 2.0, FCA-Map, and LogMap 2.0 had F1-scores of 0.55, 0.46, and 0.55 against BioPortal and 0.66, 0.53, and 0.58 against the UMLS, respectively. Using vote-based consensus alignments increased performance across the board. Evaluation with manually created top-level hierarchy mappings revealed that on average 90% of the mappings' classes belonged to top-level classes that matched. CONCLUSIONS: Our findings show that the included ontology matching systems automatically produced mappings that were modestly accurate according to our evaluation. The hierarchical analysis of mappings seems promising when no reference alignments are available. All in all, the systems show potential to be implemented as part of an ontology matching service for querying FAIR data.
Future research should focus on developing methods for the evaluation of mappings used in such mapping services, leading to their implementation in a FAIR data ecosystem.
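The vote-based consensus idea mentioned in the results can be sketched as follows. The candidate mappings and the reference alignment are invented examples, not actual output from the NCIt/SNOMED CT/ORDO experiments:

```python
# Sketch of vote-based consensus over candidate mappings produced by
# several matchers, scored against a reference alignment with F1.
# All mappings below are invented for illustration.
def consensus(alignments, min_votes=2):
    """Keep mappings proposed by at least `min_votes` systems."""
    votes = {}
    for system_maps in alignments:
        for m in system_maps:
            votes[m] = votes.get(m, 0) + 1
    return {m for m, v in votes.items() if v >= min_votes}

def f1(predicted, reference):
    """Harmonic mean of precision and recall over mapping sets."""
    if not predicted or not reference:
        return 0.0
    tp = len(predicted & reference)
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(reference)
    return 2 * precision * recall / (precision + recall)

sys_a = {("ncit:C1", "ordo:1"), ("ncit:C2", "ordo:2"), ("ncit:C3", "ordo:9")}
sys_b = {("ncit:C1", "ordo:1"), ("ncit:C2", "ordo:2")}
sys_c = {("ncit:C1", "ordo:1"), ("ncit:C4", "ordo:4")}
reference = {("ncit:C1", "ordo:1"), ("ncit:C2", "ordo:2"), ("ncit:C4", "ordo:4")}

cons = consensus([sys_a, sys_b, sys_c])
print(sorted(cons))               # mappings with at least 2 votes
print(round(f1(cons, reference), 2))  # 0.8
```

Raising `min_votes` trades recall for precision, which is why consensus tends to help when individual matchers disagree.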


Subject(s)
Biological Ontologies , Ecosystem , Consensus , Information Storage and Retrieval , Systematized Nomenclature of Medicine , Unified Medical Language System
4.
PLoS One ; 13(12): e0209547, 2018.
Article in English | MEDLINE | ID: mdl-30589855

ABSTRACT

SNOMED CT provides about 300,000 codes with fine-grained concept definitions to support interoperability of health data. Coding clinical texts with medical terminologies is not a trivial task and is prone to disagreements between coders. We conducted a qualitative analysis to identify sources of disagreement in an annotation experiment that used a subset of SNOMED CT with some restrictions. A corpus of 20 English clinical text fragments of diverse origins and languages was annotated independently by two medically trained annotators following a specific annotation guideline. Following this guideline, the annotators had to assign sets of SNOMED CT codes to noun phrases, together with concept and term coverage ratings. The annotations were then manually examined against a reference standard to determine sources of disagreement, and five categories were identified. In our results, the most frequent cause of inter-annotator disagreement was related to human issues. In several cases disagreements revealed gaps in the annotation guidelines and a lack of annotator training. The remaining issues can be influenced by certain SNOMED CT features.


Subject(s)
Data Curation , Systematized Nomenclature of Medicine , Evaluation Studies as Topic , Guidelines as Topic , Humans
5.
Stud Health Technol Inform ; 247: 666-670, 2018.
Article in English | MEDLINE | ID: mdl-29678044

ABSTRACT

Organised repositories of published scientific literature represent a rich source for research in knowledge representation. MEDLINE, one of the largest and most popular biomedical literature databases, provides metadata for over 24 million articles, each of which is indexed using the MeSH controlled vocabulary. In order to reuse MeSH annotations for knowledge construction, we processed this data and extracted the most relevant patterns of assigned descriptors over time. The patterns consist of UMLS semantic groups related to the MeSH headings together with their associated MeSH subheadings. We then connected the patterns with the most frequent predicates in their corresponding MEDLINE abstracts. Thereafter, we conducted a time series analysis of the patterns extracted from MEDLINE records and their associated predicates in order to study the evolution of manual MeSH indexing. The results show an increasing diversity of the assigned MeSH terms over time, along with the increase in scientific publications per year. We obtained evidence of the consistency of the relevant predicates associated with the extracted patterns. Moreover, for the most frequent patterns some predicates predominate over others, such as Treats between substances and disorders, Causes between pairs of disorders, or Interacts between pairs of substances.
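The pattern-extraction step can be sketched as counting (semantic group, subheading) pairs per publication year. The records below are invented stand-ins; real input would pair MEDLINE indexing with UMLS semantic groups:

```python
from collections import Counter

# Sketch of the pattern-extraction step: count (semantic group,
# subheading) patterns per publication year from MeSH annotations.
# The records are invented examples.
records = [
    (2001, [("Chemicals & Drugs", "therapeutic use"), ("Disorders", "drug therapy")]),
    (2001, [("Chemicals & Drugs", "therapeutic use")]),
    (2002, [("Chemicals & Drugs", "therapeutic use"), ("Disorders", "drug therapy")]),
    (2002, [("Disorders", "etiology")]),
]

def patterns_by_year(records):
    """Aggregate pattern counts into one Counter per year."""
    out = {}
    for year, annotations in records:
        out.setdefault(year, Counter()).update(annotations)
    return out

by_year = patterns_by_year(records)
top_2001 = by_year[2001].most_common(1)[0]
print(top_2001)  # (('Chemicals & Drugs', 'therapeutic use'), 2)
```

A time series analysis then compares these per-year distributions, e.g. the number of distinct patterns per year, to measure indexing diversity.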


Subject(s)
Data Mining , MEDLINE , Medical Subject Headings , Databases, Factual , Humans , Semantics
6.
Stud Health Technol Inform ; 235: 446-450, 2017.
Article in English | MEDLINE | ID: mdl-28423832

ABSTRACT

SNOMED CT supports post-coordination, a technique for combining clinical concepts to ontologically define more complex concepts. This technique follows the validity restrictions defined in the SNOMED CT Concept Model. Pre-coordinated expressions are compositional expressions already in SNOMED CT, whereas post-coordinated expressions extend its content. In this project we evaluate the suitability of existing pre-coordinated expressions to provide the patterns for composing typical clinical information, based on a defined list of sets of interrelated SNOMED CT concepts. The method achieves 9.3% precision and 95.9% recall. Consequently, further investigation is needed to develop heuristics for selecting the most meaningful matched patterns in order to improve precision.


Subject(s)
Information Storage and Retrieval , Systematized Nomenclature of Medicine , Vocabulary, Controlled
7.
Stud Health Technol Inform ; 228: 582-6, 2016.
Article in English | MEDLINE | ID: mdl-27577450

ABSTRACT

Big data resources are difficult to process without a scaled hardware environment that is specifically adapted to the problem. The emergence of flexible cloud-based virtualization techniques promises solutions to this problem. This paper demonstrates how a billion lines can be processed in a reasonable amount of time in a cloud-based environment. Our use case addresses the accumulation of concept co-occurrence data in MEDLINE annotations as a series of MapReduce jobs, which can be scaled and executed in the cloud. Besides demonstrating an efficient way of solving this problem, we generated an additional resource for the scientific community to be used for advanced text mining approaches.
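A local, single-machine simulation of the co-occurrence MapReduce job might look like this. The descriptor lists are invented, and a real job would run on a distributed framework rather than a generator pipeline:

```python
from collections import Counter
from itertools import combinations

# Local sketch of the co-occurrence MapReduce job: the map step emits
# ((term_a, term_b), 1) for every unordered descriptor pair in a record;
# the reduce step sums the counts per pair. Descriptor lists invented.
records = [
    ["Aspirin", "Myocardial Infarction", "Humans"],
    ["Aspirin", "Myocardial Infarction"],
    ["Aspirin", "Stroke"],
]

def map_phase(record):
    """Emit a count of 1 for each unordered pair of distinct descriptors."""
    for pair in combinations(sorted(set(record)), 2):
        yield pair, 1

def reduce_phase(mapped):
    """Sum the emitted counts per pair key."""
    counts = Counter()
    for pair, n in mapped:
        counts[pair] += n
    return counts

counts = reduce_phase(kv for rec in records for kv in map_phase(rec))
print(counts[("Aspirin", "Myocardial Infarction")])  # 2
```

Because both phases are stateless per key, the same logic partitions cleanly across cloud workers, which is what makes the billion-line scale feasible.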


Subject(s)
Cloud Computing , MEDLINE , Medical Subject Headings , Data Mining , Humans , MEDLINE/statistics & numerical data
8.
Stud Health Technol Inform ; 228: 765-9, 2016.
Article in English | MEDLINE | ID: mdl-27577489

ABSTRACT

The construction and publication of predications from scientific literature databases like MEDLINE is necessary due to the large number of resources available. The main goal is to infer meaningful predicates between relevant co-occurring MeSH concepts manually annotated in MEDLINE records. The resulting predications are formed as subject-predicate-object triples. We exploit the content of the MRCOC file to extract the MeSH indexing terms (main headings and subheadings) of MEDLINE. The predications were inferred by combining the semantic predicates from SemMedDB, the clustering of MeSH terms by their associated MeSH subheadings, and the frequency of relevant terms in the abstracts of MEDLINE records. The inference process also associates a weight with each generated predication. As a result, we published the generated dataset of predications using the Linked Data principles to make it available for future projects.


Subject(s)
MEDLINE , Medical Subject Headings , Cluster Analysis , Semantics
9.
J Biomed Semantics ; 7: 32, 2016 Jun 03.
Article in English | MEDLINE | ID: mdl-27255189

ABSTRACT

BACKGROUND: Biomedical research usually requires combining large volumes of data from multiple heterogeneous sources, which makes the integrated exploitation of such data difficult. The Semantic Web paradigm offers a natural technological space for data integration and exploitation by generating content readable by machines. Linked Open Data is a Semantic Web initiative that promotes the publication and sharing of data in machine-readable semantic formats. METHODS: We present an approach for the transformation and integration of heterogeneous biomedical data with the objective of generating open biomedical datasets in Semantic Web formats. The transformation of the data is based on mappings between the entities of the data schema and the ontological infrastructure that provides meaning to the content. Our approach permits different types of mappings and includes the possibility of defining complex transformation patterns. Once the mappings are defined, they can be automatically applied to datasets to generate logically consistent content, and the mappings can be reused in further transformation processes. RESULTS: The results of our research are (1) a common transformation and integration process for heterogeneous biomedical data; (2) the application of Linked Open Data principles to generate interoperable, open, biomedical datasets; (3) a software tool, called SWIT, that implements the approach. In this paper we also describe how we have applied SWIT in different biomedical scenarios and some lessons learned. CONCLUSIONS: We have presented an approach that is able to generate open biomedical repositories in Semantic Web formats. SWIT is able to apply the Linked Open Data principles in the generation of the datasets, allowing their content to be linked to external repositories and creating linked open datasets. SWIT datasets may contain data from multiple sources and schemas, thus becoming integrated datasets.
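The mapping-driven transformation idea can be sketched as follows. The field names, ontology properties, and the `transform` function are hypothetical illustrations of the general pattern, not SWIT's actual interface:

```python
# Sketch of a mapping-driven transformation: a mapping table binds
# source-schema fields to ontology properties, and each source row
# becomes a set of subject-predicate-object triples. All field and
# property names are invented, not SWIT's real configuration.
MAPPINGS = {
    "gene_symbol": "ex:hasSymbol",
    "disease": "ex:associatedWithDisease",
}

def transform(rows, subject_field="id"):
    """Apply the mapping table to each row, yielding triples."""
    triples = []
    for row in rows:
        subject = "ex:" + str(row[subject_field])
        for field, prop in MAPPINGS.items():
            if row.get(field) is not None:
                triples.append((subject, prop, row[field]))
    return triples

rows = [{"id": "g1", "gene_symbol": "BRCA1", "disease": "breast cancer"}]
for t in transform(rows):
    print(t)
```

Because the mapping table is data rather than code, the same transformation logic can be reused across source schemas, which is the reuse property the abstract emphasizes.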


Subject(s)
Biological Ontologies , Biomedical Research , Databases, Factual , Semantics , Electronic Health Records , Humans , Internet
10.
Stud Health Technol Inform ; 216: 716-20, 2015.
Article in English | MEDLINE | ID: mdl-26262145

ABSTRACT

The massive accumulation of biomedical knowledge is reflected by the growth of the literature database MEDLINE, with over 23 million bibliographic records. All records are manually indexed with MeSH descriptors, many of them refined by MeSH subheadings. We use subheading information to cluster types of MeSH descriptor co-occurrences in MEDLINE by processing co-occurrence information provided by the UMLS. The goal is to infer plausible predicates for each resulting cluster. In an initial experiment this was done by grouping disease-pharmacologic substance co-occurrences into six clusters. A domain expert then manually assigned meaningful predicates to the clusters. The mean accuracy of the best ten generated biomedical facts of each cluster was 85%. This result supports the potential of MeSH subheadings for extracting plausible medical predications from MEDLINE.


Subject(s)
Knowledge Bases , MEDLINE/statistics & numerical data , Medical Subject Headings , Natural Language Processing , Periodicals as Topic/statistics & numerical data , Cluster Analysis , Data Mining/methods , Machine Learning , Terminology as Topic
11.
Stud Health Technol Inform ; 210: 165-9, 2015.
Article in English | MEDLINE | ID: mdl-25991123

ABSTRACT

Biomedical research usually requires combining large volumes of data from multiple heterogeneous sources. Such heterogeneity makes not only the generation of research-oriented datasets difficult but also their exploitation. In recent years, the Open Data paradigm has proposed new ways of making data available so that sharing and integration are facilitated. Open Data approaches may pursue the generation of content readable only by humans or by both humans and machines; the latter are the ones of interest in our work. The Semantic Web provides a natural technological space for data integration and exploitation, and offers a range of technologies for generating not only Open Datasets but also Linked Datasets, that is, open datasets linked to other open datasets. According to Berners-Lee's classification, each open dataset can be given a rating between one and five stars. In recent years, we have developed and applied our SWIT tool, which automates the generation of semantic datasets from heterogeneous data sources. SWIT produces four-star datasets; the fifth star can be obtained when the dataset is linked from external ones. In this paper, we describe how we have applied the tool in two projects related to health care records and orthology data, as well as the major lessons learned from these efforts.


Subject(s)
Biological Ontologies , Biomedical Research/classification , Databases, Factual , Information Storage and Retrieval/methods , Internet , Natural Language Processing , Semantics , Software , Spain , Terminology as Topic
12.
Stud Health Technol Inform ; 210: 597-601, 2015.
Article in English | MEDLINE | ID: mdl-25991218

ABSTRACT

Translating huge medical terminologies like SNOMED CT is costly and time-consuming. We present a methodology that acquires substring substitution rules for single words, based on the known similarity between medical words and their translations due to their common Latin/Greek origin. Character translation rules are automatically acquired from pairs of English words and their automated translations to German. Using a training set of single words extracted from SNOMED CT as input, we obtained a list of 268 translation rules. Applying these rules improved the translation of 60% of the words compared to Google Translate, with 55% of the translated words exactly matching the correct translations. On a subset of words where machine translation had failed, our method improves translation in 56% of cases, with 27% exactly matching the gold standard.
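Applying acquired substring substitution rules can be sketched as follows. The three rules shown are illustrative stand-ins, not among the 268 automatically acquired English-to-German rules:

```python
# Sketch of applying learned substring substitution rules to a medical
# term of Latin/Greek origin. The rules below are invented examples
# standing in for the automatically acquired English->German rules.
RULES = [
    ("itis", "itis"),  # inflammation suffix shared by both languages
    ("c", "k"),        # e.g. "carditis" -> "karditis"
    ("y", "ie"),       # e.g. "therapy" -> "therapie"
]

def translate(word, rules):
    """Apply substitution rules, longest source patterns first, so that
    multi-character rules are not pre-empted by single-character ones."""
    for src, dst in sorted(rules, key=lambda r: -len(r[0])):
        word = word.replace(src, dst)
    return word

print(translate("pericarditis", RULES))  # perikarditis
```

Ordering by pattern length is one simple conflict-resolution choice; the acquisition step would also have to score rules by how often they improve agreement with reference translations.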


Subject(s)
Algorithms , Natural Language Processing , Pattern Recognition, Automated/methods , Semantics , Systematized Nomenclature of Medicine , Translating , Europe , Forms and Records Control/methods , Germany , Machine Learning , Medical Record Linkage/methods , Terminology as Topic
13.
BMC Med Inform Decis Mak ; 15: 12, 2015 Feb 22.
Article in English | MEDLINE | ID: mdl-25880555

ABSTRACT

BACKGROUND: Every year, hundreds of thousands of patients experience treatment failure or adverse drug reactions (ADRs), many of which could be prevented by pharmacogenomic testing. However, the primary knowledge needed for clinical pharmacogenomics is currently dispersed over disparate data structures and captured in unstructured or semi-structured formalizations. This is a source of potential ambiguity and complexity, making it difficult to create reliable information technology systems for enabling clinical pharmacogenomics. METHODS: We developed Web Ontology Language (OWL) ontologies and automated reasoning methodologies to meet the following goals: 1) provide a simple and concise formalism for representing pharmacogenomic knowledge, 2) find errors and insufficient definitions in pharmacogenomic knowledge bases, 3) automatically assign alleles and phenotypes to patients, 4) match patients to clinically appropriate pharmacogenomic guidelines and clinical decision support messages and 5) facilitate the detection of inconsistencies and overlaps between pharmacogenomic treatment guidelines from different sources. We evaluated different reasoning systems and tested our approach with a large collection of publicly available genetic profiles. RESULTS: Our methodology proved to be a novel and useful choice for representing, analyzing and using pharmacogenomic data. The Genomic Clinical Decision Support (Genomic CDS) ontology represents 336 SNPs with 707 variants; 665 haplotypes related to 43 genes; 22 rules related to drug-response phenotypes; and 308 clinical decision support rules. OWL reasoning identified CDS rules with overlapping target populations but differing treatment recommendations. Only a modest number of clinical decision support rules were triggered for a collection of 943 public genetic profiles. We found significant performance differences across available OWL reasoners.
CONCLUSIONS: The ontology-based framework we developed can be used to represent, organize and reason over the growing wealth of pharmacogenomic knowledge, as well as to identify errors, inconsistencies and insufficient definitions in source data sets or individual patient data. Our study highlights both advantages and potential practical issues with such an ontology-based approach.
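The patient-to-rule matching step can be caricatured with plain dictionaries. The real system uses OWL 2 ontologies and automated reasoning rather than lookups, and the gene, diplotype, and drug names below are illustrative:

```python
# Simplified stand-in for the rule-matching step: each CDS rule names a
# gene, a diplotype, and a recommendation, and patients are matched by
# genotype lookup. (The real system uses OWL 2 reasoning; the gene,
# diplotype, and drug names here are invented examples.)
CDS_RULES = [
    {"gene": "CYP2C19", "diplotype": ("*2", "*2"), "drug": "clopidogrel",
     "advice": "consider alternative antiplatelet therapy"},
    {"gene": "CYP2D6", "diplotype": ("*1", "*1"), "drug": "codeine",
     "advice": "standard dosing"},
]

def match_rules(patient_genotype, rules):
    """Return the rules whose (gene, diplotype) appears in the genotype,
    comparing diplotypes order-insensitively."""
    matched = []
    for rule in rules:
        alleles = patient_genotype.get(rule["gene"])
        if alleles is not None and tuple(sorted(alleles)) == tuple(sorted(rule["diplotype"])):
            matched.append(rule)
    return matched

patient = {"CYP2C19": ("*2", "*2")}
for rule in match_rules(patient, CDS_RULES):
    print(rule["drug"], "->", rule["advice"])
```

What a dictionary lookup cannot do, and what motivates the OWL approach, is detect overlapping target populations or contradictions between rules from different guideline sources.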


Subject(s)
Biological Ontologies , Decision Support Systems, Clinical , Drug-Related Side Effects and Adverse Reactions/prevention & control , Pharmacogenetics/methods , Practice Guidelines as Topic , Precision Medicine/methods , Artificial Intelligence , Clinical Decision-Making , Humans
14.
Stud Health Technol Inform ; 205: 261-5, 2014.
Article in English | MEDLINE | ID: mdl-25160186

ABSTRACT

The availability of pharmacogenomic data of individual patients can significantly improve physicians' prescribing behavior, lead to a reduced incidence of adverse drug events, and improve treatment effectiveness. The Medicine Safety Code (MSC) initiative is an effort to improve the ability of clinicians and patients to share pharmacogenomic data and to use it at the point of care. The MSC is a standardized two-dimensional barcode that captures individual pharmacogenomic data. The system is backed by a web service that allows the decoding and interpretation of anonymous MSCs without requiring the installation of dedicated software. The system is based on a curated, ontology-based knowledge base representing pharmacogenomic definitions and clinical guidelines. The MSC system performed well in preliminary tests. To evaluate the system in realistic health care settings and to translate it into practical applications, the future participation of stakeholders in clinical institutions, researchers, pharmaceutical companies, genetic testing providers, health IT companies and health insurance organizations will be essential.
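The compress-then-encode idea behind packing a pharmacogenomic profile into a short, scannable string can be sketched with the standard library. This is NOT the actual MSC format, which is a standardized barcode backed by an ontology-based knowledge base:

```python
import base64
import json
import zlib

# Sketch of packing pharmacogenomic results into a short string suitable
# for embedding in a 2-D barcode. This is an illustration of the
# compress-then-encode idea only, not the real Medicine Safety Code format.
def encode_profile(profile):
    """Serialize, compress, and base64-encode a genotype profile."""
    raw = json.dumps(profile, sort_keys=True).encode()
    return base64.urlsafe_b64encode(zlib.compress(raw)).decode()

def decode_profile(code):
    """Invert encode_profile: base64-decode, decompress, deserialize."""
    return json.loads(zlib.decompress(base64.urlsafe_b64decode(code)))

profile = {"CYP2C19": "*2/*2", "CYP2D6": "*1/*1"}
code = encode_profile(profile)
assert decode_profile(code) == profile
print(len(code), "characters")
```

Keeping the payload this small is what lets the code be decoded on common devices without a centralized repository for genetic patient data.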


Subject(s)
Drug-Related Side Effects and Adverse Reactions/genetics , Drug-Related Side Effects and Adverse Reactions/prevention & control , Electronic Health Records/standards , Health Records, Personal , Information Storage and Retrieval/standards , Pharmacogenetics/standards , Precision Medicine/standards , Adverse Drug Reaction Reporting Systems/standards , Databases, Genetic/standards , Humans , Internationality , Patient Identification Systems/standards
15.
Stud Health Technol Inform ; 205: 584-8, 2014.
Article in English | MEDLINE | ID: mdl-25160253

ABSTRACT

With the rapidly growing amount of biomedical literature it becomes increasingly difficult to find relevant information quickly and reliably. In this study we applied the word2vec deep learning toolkit to medical corpora to test its potential for improving the accessibility of medical knowledge. We evaluated the efficiency of word2vec in identifying properties of pharmaceuticals based on mid-sized, unstructured medical text corpora without any additional background knowledge. Properties included relationships to diseases ('may treat') or physiological processes ('has physiological effect'). We evaluated the relationships identified by word2vec through comparison with the National Drug File - Reference Terminology (NDF-RT) ontology. The results of our first evaluation were mixed, but helped us identify further avenues for employing deep learning technologies in medical information retrieval, as well as using them to complement curated knowledge captured in ontologies and taxonomies.


Subject(s)
Algorithms , Artificial Intelligence , Manuscripts, Medical as Topic , Natural Language Processing , Pattern Recognition, Automated/methods , Software , Vocabulary, Controlled , Semantics
16.
Stud Health Technol Inform ; 205: 1018-22, 2014.
Article in English | MEDLINE | ID: mdl-25160342

ABSTRACT

The semantic interoperability of clinical information requires methods able to transform heterogeneous data sources, from both technological and structural perspectives, into representations that facilitate the sharing of meaning. The SemanticHealthNet (SHN) project proposes using semantic content patterns to represent clinical information based on a model of meaning, sparing users the need for deep knowledge of ontologies and the description logics formalism. In this work we propose a flexible transformation method that uses semantic content patterns to guide the mapping between source data and a target domain ontology. As a use case, we show how one of the semantic content patterns proposed in SHN can be used to transform heterogeneous data about medication administration.


Subject(s)
Biological Ontologies , Information Storage and Retrieval/methods , Medical Informatics/methods , Medication Systems, Hospital/organization & administration , Natural Language Processing , Pattern Recognition, Automated/methods , Semantics , Artificial Intelligence
17.
Stud Health Technol Inform ; 198: 25-31, 2014.
Article in English | MEDLINE | ID: mdl-24825681

ABSTRACT

The availability of pharmacogenomic data of individual patients can significantly improve physicians' prescribing behavior, lead to a reduced incidence of adverse drug events, and improve treatment effectiveness. The Medicine Safety Code (MSC) initiative is an effort to improve the ability of clinicians and patients to share pharmacogenomic data and to use it at the point of care. The MSC is a standardized two-dimensional barcode that captures individual pharmacogenomic data. The system is backed by a web service that allows the decoding and interpretation of anonymous MSCs without requiring the installation of dedicated software. The system is based on a curated, ontology-based knowledge base representing pharmacogenomic definitions and clinical guidelines. The MSC system performed well in preliminary tests. To evaluate the system in realistic health care settings and to translate it into practical applications, the future participation of stakeholders in clinical institutions, medical researchers, pharmaceutical companies, genetic testing providers, health IT companies and health insurance organizations will be essential.


Subject(s)
Biological Ontologies , DNA Barcoding, Taxonomic/methods , Drug-Related Side Effects and Adverse Reactions/genetics , Electronic Health Records/organization & administration , Patient Identification Systems/methods , Pharmacogenetics/organization & administration , Precision Medicine/methods , Adverse Drug Reaction Reporting Systems/organization & administration , Confidentiality , Databases, Genetic , Decision Support Systems, Clinical/organization & administration , Drug-Related Side Effects and Adverse Reactions/prevention & control , Humans , Internationality
18.
PLoS One ; 9(5): e93769, 2014.
Article in English | MEDLINE | ID: mdl-24787444

ABSTRACT

BACKGROUND: The development of genotyping and genetic sequencing techniques and their evolution towards low costs and quick turnaround have encouraged a wide range of applications. One of the most promising applications is pharmacogenomics, where genetic profiles are used to predict the most suitable drugs and drug dosages for the individual patient. This approach aims to ensure appropriate medical treatment and avoid, or properly manage, undesired side effects. RESULTS: We developed the Medicine Safety Code (MSC) service, a novel pharmacogenomics decision support system, to provide physicians and patients with the ability to represent pharmacogenomic data in computable form and to provide pharmacogenomic guidance at the point-of-care. Pharmacogenomic data of individual patients are encoded as Quick Response (QR) codes and can be decoded and interpreted with common mobile devices without requiring a centralized repository for storing genetic patient data. In this paper, we present the first fully functional release of this system and describe its architecture, which utilizes Web Ontology Language 2 (OWL 2) ontologies to formalize pharmacogenomic knowledge and to provide clinical decision support functionalities. CONCLUSIONS: The MSC system provides a novel approach for enabling the implementation of personalized medicine in clinical routine.


Subject(s)
Biological Ontologies , Cell Phone , Decision Support Techniques , Pharmacogenetics/methods , Point-of-Care Systems , Genotype , Humans , Medication Errors/prevention & control
19.
J Med Syst ; 36 Suppl 1: S11-23, 2012 Nov.
Article in English | MEDLINE | ID: mdl-23149630

ABSTRACT

Genome sequencing projects generate vast amounts of data of a wide variety of types and complexities, and at a growing pace. Traditionally, the annotation of such sequences was difficult to share with other researchers. Despite the fact that this has improved with the development and application of biological ontologies, such annotation efforts remain isolated since the amount of information that can be used from other annotation projects is limited. In addition to this, they do not benefit from the translational information available for the genomic sequences. In this paper, we describe a system that supports genome annotation processes by providing useful information about orthologous genes and the genetic disorders which can be associated with a gene identified in a sequence. The seamless integration of such data will be facilitated by an ontological infrastructure which, following best practices in ontology engineering, will reuse existing biological ontologies like Sequence Ontology or Ontological Gene Orthology.


Subject(s)
Chromosome Mapping/methods , Genetic Diseases, Inborn/genetics , Information Systems/organization & administration , Databases, Genetic , Humans
20.
J Biomed Inform ; 45(4): 746-62, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22142945

ABSTRACT

Possibly the most important requirement for supporting co-operative work among health professionals and institutions is the ability to share EHRs in a meaningful way, and it is widely acknowledged that standardization of data and concepts is a prerequisite for achieving semantic interoperability in any domain. Different international organizations are working on the definition of EHR architectures, but the lack of tools that implement them hinders their broad adoption. In this paper we present ResearchEHR, a software platform whose objective is to facilitate the practical application of EHR standards as a way of reaching the desired semantic interoperability. This platform is suitable not only for developing new systems but also for increasing the standardization of existing ones. The work reported here describes how the platform allows for the editing, validation, and search of archetypes, converts legacy data into normalized archetype extracts, is able to generate applications from archetypes and, finally, transforms archetypes and data extracts into other EHR standards. We also describe how ResearchEHR has made possible the application of the CEN/ISO 13606 standard in a real environment and the lessons learned from this experience.


Subject(s)
Database Management Systems , Electronic Health Records/standards , Semantics , Humans , Reproducibility of Results , Systems Integration