Results 1 - 16 of 16
1.
J Pathol Inform ; 6: 37, 2015.
Article in English | MEDLINE | ID: mdl-26167381

ABSTRACT

BACKGROUND: Ontology is one strategy for promoting interoperability of heterogeneous data through consistent tagging. An ontology is a controlled structured vocabulary consisting of general terms (such as "cell" or "image" or "tissue" or "microscope") that form the basis for such tagging. These terms are designed to represent the types of entities in the domain of reality that the ontology has been devised to capture; the terms are provided with logical definitions, thereby also supporting reasoning over the tagged data. AIM: This paper provides a survey of the biomedical imaging ontologies that have been developed thus far. It outlines the challenges faced by ontologies, particularly in the fields of histopathological imaging and image analysis, and suggests a strategy for addressing these challenges in the example domain of quantitative histopathology imaging. RESULTS AND CONCLUSIONS: The ultimate goal is to support the multiscale understanding of disease that comes from using interoperable ontologies to integrate imaging data with clinical and genomics data.

2.
J Biomed Semantics ; 5(Suppl 1 Proceedings of the Bio-Ontologies Spec Interest G): S3, 2014.
Article in English | MEDLINE | ID: mdl-25093072

ABSTRACT

BACKGROUND: With the advent of inexpensive assay technologies, there has been an unprecedented growth in genomics data as well as in the number of databases in which such data are stored. In these databases, sample annotation using ontologies and controlled vocabularies is becoming more common. However, the annotation is rarely available as Linked Data, in a machine-readable format, or for standardized queries using SPARQL. This makes large-scale reuse, or integration with other knowledge bases, very difficult. METHODS: To address this challenge, we have developed the second generation of our eXframe platform, a reusable framework for creating online repositories of genomics experiments. This second-generation platform now publishes Semantic Web data. To accomplish this, we created an experiment model that covers provenance, citations, external links, assays, biomaterials used in the experiment, and the data collected during the process. The elements of our model are mapped to classes and properties from various established biomedical ontologies. Resource Description Framework (RDF) data are automatically produced using these mappings and indexed in an RDF store with a built-in SPARQL Protocol and RDF Query Language (SPARQL) endpoint. CONCLUSIONS: Using the open-source eXframe software, institutions and laboratories can create Semantic Web repositories of their experiments, integrate them with heterogeneous resources, and make them interoperable with the vast Semantic Web of biomedical knowledge.
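Since the abstract's key claim is that annotations become available for standardized SPARQL queries, here is a minimal sketch of such a query against a hypothetical eXframe endpoint. The endpoint URL, the experiment class, and the use of Dublin Core for titles are assumptions for illustration only; they are not taken from the eXframe documentation.

```python
# Minimal sketch: querying a genomics-experiment repository through its SPARQL endpoint.
# The endpoint URL and the experiment class are hypothetical placeholders.
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("https://example.org/exframe/sparql")  # hypothetical endpoint
endpoint.setQuery("""
    PREFIX dcterms: <http://purl.org/dc/terms/>
    SELECT ?experiment ?title WHERE {
        ?experiment a <http://example.org/vocab/GenomicsExperiment> ;  # hypothetical class
                    dcterms:title ?title .
    } LIMIT 10
""")
endpoint.setReturnFormat(JSON)

for row in endpoint.query().convert()["results"]["bindings"]:
    print(row["experiment"]["value"], "-", row["title"]["value"])
```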

3.
J Biomed Semantics ; 5: 28, 2014.
Article in English | MEDLINE | ID: mdl-26261718

ABSTRACT

BACKGROUND: Scientific publications are documentary representations of defeasible arguments, supported by data and repeatable methods. They are the essential mediating artifacts in the ecosystem of scientific communications. The institutional "goal" of science is publishing results. The linear document publication format, dating from 1665, has survived the transition to the Web. Intractable publication volumes, the difficulty of verifying evidence, and observed problems in evidence and citation chains suggest a need for a web-friendly and machine-tractable model of scientific publications. This model should support digital summarization, evidence examination, challenge, verification and remix, and incremental adoption. Such a model must be capable of expressing a broad spectrum of representational complexity, ranging from minimal to maximal forms. RESULTS: The micropublications semantic model of scientific argument and evidence provides these features. Micropublications support natural language statements; data; methods and materials specifications; discussion and commentary; and challenge and disagreement; they also allow many kinds of statement formalization. The minimal form of a micropublication is a statement with its attribution. The maximal form is a statement with its complete supporting argument, consisting of all relevant evidence, interpretations, discussion and challenges brought forward in support of or in opposition to it. Micropublications may be formalized and serialized in multiple ways, including in RDF. They may be added to publications as stand-off metadata. An OWL 2 vocabulary for micropublications is available at http://purl.org/mp. A discussion of this vocabulary, along with RDF examples from the case studies, appears as OWL Vocabulary and RDF Examples in Additional file 1. CONCLUSION: Micropublications, because they model evidence and allow qualified, nuanced assertions, can play essential roles in the scientific communications ecosystem in places where simpler, formalized and purely statement-based models, such as the nanopublications model, will not be sufficient. At the same time they will add significant value to, and are intentionally compatible with, statement-based formalizations. We suggest that micropublications, generated by useful software tools supporting such activities as writing, editing, reviewing, and discussion, will be of great value in improving the quality and tractability of biomedical communications.
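The abstract states that the minimal form of a micropublication is a statement with its attribution, serializable in RDF. The sketch below assembles that minimal form with rdflib; the mp: class and property names are assumptions inferred from the abstract's wording, so verify them against the OWL 2 vocabulary at http://purl.org/mp before reuse.

```python
# Minimal sketch of a micropublication in its minimal form: one statement plus its attribution.
# Class/property names under mp: are assumptions; the authoritative vocabulary is http://purl.org/mp.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DCTERMS, RDF, RDFS

MP = Namespace("http://purl.org/mp/")   # micropublications namespace cited in the abstract
EX = Namespace("http://example.org/")   # hypothetical data namespace

g = Graph()
g.bind("mp", MP)
g.bind("dcterms", DCTERMS)

mpub, statement = EX["micropub/1"], EX["statement/1"]
g.add((mpub, RDF.type, MP.Micropublication))   # assumed class name
g.add((mpub, MP.argues, statement))            # assumed property name
g.add((statement, RDF.type, MP.Statement))     # assumed class name
g.add((statement, RDFS.label, Literal("Gene X is upregulated in condition Y.")))
g.add((statement, DCTERMS.creator, EX["person/alice"]))  # the attribution

print(g.serialize(format="turtle"))
```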

4.
J Biomed Semantics ; 4(1): 37, 2013 Nov 22.
Article in English | MEDLINE | ID: mdl-24267948

ABSTRACT

BACKGROUND: Provenance is a critical ingredient for establishing trust in published scientific content. This is true whether we are considering a data set, a computational workflow, a peer-reviewed publication or a simple scientific claim with supportive evidence. Existing vocabularies such as Dublin Core Terms (DC Terms) and the W3C Provenance Ontology (PROV-O) are domain-independent and general-purpose, and they allow and encourage extensions to cover more specific needs. In particular, to track authoring and versioning information of web resources, PROV-O provides a basic methodology but no specific classes or properties for identifying or distinguishing between the various roles assumed by agents manipulating digital artifacts, such as author, contributor and curator. RESULTS: We present the Provenance, Authoring and Versioning ontology (PAV, namespace http://purl.org/pav/): a lightweight ontology for capturing "just enough" descriptions essential for tracking the provenance, authoring and versioning of web resources. We argue that such descriptions are essential for digital scientific content. PAV distinguishes between contributors, authors and curators of content and creators of representations, in addition to the provenance of originating resources that have been accessed, transformed and consumed. We explore five projects (and communities) that have adopted PAV, illustrating their usage through concrete examples. Moreover, we present mappings that show how PAV extends the W3C PROV-O ontology to support broader interoperability. METHOD: The initial design of the PAV ontology was driven by requirements from the AlzSWAN project, with further requirements incorporated later from the other projects detailed in this paper. The authors strove to keep PAV lightweight and compact by including only those terms that have proven pragmatically useful in existing applications, and by recommending terms from existing ontologies where appropriate. DISCUSSION: We analyze and compare PAV with related approaches, namely the Provenance Vocabulary (PRV), DC Terms and BIBFRAME. We identify similarities and analyze differences between those vocabularies and PAV, outlining strengths and weaknesses of our proposed model. We specify SKOS mappings that align PAV with DC Terms. We conclude the paper with general remarks on the applicability of PAV.
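To make the role distinctions concrete, the sketch below describes a hypothetical web resource with a few PAV terms commonly associated with the ontology (authoredBy, curatedBy, createdBy, version, createdOn). The resource and agent URIs are invented; check http://purl.org/pav/ for the authoritative term list.

```python
# Minimal sketch: provenance, authoring and versioning of a web resource described with PAV.
# Resource and agent URIs are hypothetical; verify term names against http://purl.org/pav/.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import XSD

PAV = Namespace("http://purl.org/pav/")  # PAV namespace given in the abstract
EX = Namespace("http://example.org/")    # hypothetical namespace

g = Graph()
g.bind("pav", PAV)

doc = EX["dataset/42"]
g.add((doc, PAV.authoredBy, EX["person/alice"]))   # author of the intellectual content
g.add((doc, PAV.curatedBy, EX["person/bob"]))      # curator who maintained/annotated it
g.add((doc, PAV.createdBy, EX["software/etl"]))    # creator of this particular representation
g.add((doc, PAV.version, Literal("2.1")))
g.add((doc, PAV.createdOn, Literal("2013-11-22", datatype=XSD.date)))

print(g.serialize(format="turtle"))
```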

5.
Database (Oxford) ; 2013: bat064, 2013.
Article in English | MEDLINE | ID: mdl-24048470

ABSTRACT

A vast amount of scientific information is encoded in natural language text, and the quantity of such text has become so great that it is no longer economically feasible to have a human as the first step in the search process. Natural language processing and text mining tools have become essential to facilitate the search for and extraction of information from text. This has led to vigorous research efforts to create useful tools and to create human-labeled text corpora, which can be used to improve such tools. To encourage combining these efforts into larger, more powerful and more capable systems, a common interchange format to represent, store and exchange the data in a simple manner between different language processing systems and text mining tools is highly desirable. Here we propose a simple Extensible Markup Language (XML) format to share text documents and annotations (see the illustrative sketch below). The proposed annotation approach allows a large number of different annotations to be represented, including sentences, tokens, parts of speech, named entities such as genes or diseases, and relationships between named entities. In addition, we provide simple code to hold this data, read it from and write it back to XML files, and perform some sample processing. We also describe completed as well as ongoing work to apply the approach in several directions. Code and data are available at http://bioc.sourceforge.net/. Database URL: http://bioc.sourceforge.net/


Subjects
Biomedical Research, Data Mining, Natural Language Processing, Software, Humans
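Below is a minimal sketch of the kind of document-passage-annotation layout the abstract describes, built with Python's standard library. The element and attribute names follow the commonly published BioC layout (collection, document, passage, annotation with infon and location children), but treat them as an approximation and consult http://bioc.sourceforge.net/ for the authoritative DTD.

```python
# Minimal sketch of a BioC-style XML document: one document, one passage, and one
# named-entity annotation anchored by offset/length. Names approximate the BioC DTD.
import xml.etree.ElementTree as ET

collection = ET.Element("collection")
ET.SubElement(collection, "source").text = "PubMed"

document = ET.SubElement(collection, "document")
ET.SubElement(document, "id").text = "24048470"

passage = ET.SubElement(document, "passage")
ET.SubElement(passage, "offset").text = "0"
ET.SubElement(passage, "text").text = "BRCA1 mutations are associated with breast cancer."

annotation = ET.SubElement(passage, "annotation", id="T1")
ET.SubElement(annotation, "infon", key="type").text = "Gene"   # annotation type as a key/value infon
ET.SubElement(annotation, "location", offset="0", length="5")  # stand-off anchor into the passage text
ET.SubElement(annotation, "text").text = "BRCA1"

print(ET.tostring(collection, encoding="unicode"))
```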
6.
J Biomed Semantics ; 3 Suppl 1: S1, 2012 Apr 24.
Article in English | MEDLINE | ID: mdl-22541592

ABSTRACT

BACKGROUND: Our group has developed a useful shared software framework for performing, versioning, sharing and viewing Web annotations of a number of kinds, using an open representation model. METHODS: The Domeo Annotation Tool was developed in tandem with this open model, the Annotation Ontology (AO). Development of both the Annotation Framework and the open model was driven by the requirements of several different types of alpha users, including bench scientists and biomedical curators from university research labs, online scientific communities, and publishing and pharmaceutical companies. Several use cases were incrementally implemented by the toolkit. These use cases in biomedical communications include personal note-taking, group document annotation, semantic tagging, claim-evidence-context extraction, reagent tagging, and curation of text-mining results from entity extraction algorithms. RESULTS: We report on the Domeo user interface here. Domeo has been deployed in beta release as part of the NIH Neuroscience Information Framework (NIF, http://www.neuinfo.org) and is scheduled for production deployment in the NIF's next full release. Future papers will describe other aspects of this work in detail, including Annotation Framework Services and components for integrating with external text-mining services, such as the NCBO Annotator web service, and with other text-mining applications using the Apache UIMA framework.

7.
J Biomed Semantics ; 2 Suppl 2: S4, 2011 May 17.
Article in English | MEDLINE | ID: mdl-21624159

ABSTRACT

BACKGROUND: There is currently a gap between the rich and expressive collection of published biomedical ontologies and the natural language expression of the biomedical papers consumed on a daily basis by scientific researchers. The purpose of this paper is to provide an open, shareable structure for dynamic integration of biomedical domain ontologies with the scientific document, in the form of an Annotation Ontology (AO), thus closing this gap and enabling application of formal biomedical ontologies directly to the literature as it emerges. METHODS: Initial requirements for AO were elicited by analysis of integration needs between biomedical web communities, and of needs for representing and integrating the results of biomedical text mining. An analysis of the strengths and weaknesses of previous efforts in this area was also performed. A series of increasingly refined annotation tools was then developed along with a metadata model in OWL, and deployed to users at a major pharmaceutical company and a major academic center for feedback and additional requirements on the ontology. Further requirements and critiques of the model were also elicited through discussions with many colleagues and incorporated into the work. RESULTS: This paper presents the Annotation Ontology (AO), an open ontology in OWL-DL for annotating scientific documents on the web. AO supports both human and algorithmic content annotation. It enables "stand-off" or independent metadata anchored to specific positions in a web document by any one of several methods. In AO, the document may be annotated without being under the update control of the annotator. AO contains a provenance model to support versioning, and a set model for specifying groups and containers of annotations. AO is freely available under an open source license at http://purl.org/ao/, and extensive documentation including screencasts is available on AO's Google Code page: http://code.google.com/p/annotation-ontology/. CONCLUSIONS: The Annotation Ontology meets critical requirements for an open, freely shareable OWL model of annotation metadata created against scientific documents on the Web. We believe AO can become a very useful common model for annotation metadata on Web documents, and will enable biomedical domain ontologies to be used quite widely to annotate the scientific literature. Potential collaborators and those with new relevant use cases are invited to contact the authors.
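The central idea above is "stand-off" annotation: metadata anchored to a position in a web document that the annotator does not control, with an ontology term as the annotation topic. The sketch below illustrates that pattern only; every class and property name in it is a placeholder under an invented namespace, not the actual AO vocabulary, which is published at http://purl.org/ao/.

```python
# Illustrative sketch of stand-off annotation: an annotation node points at the document it
# annotates, at a selector locating the exact text span, and at an ontology term used as its
# topic. All term names live under an invented namespace; they are NOT the real AO terms.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF

ANN = Namespace("http://example.org/ao-sketch#")  # hypothetical stand-in for the AO vocabulary
EX = Namespace("http://example.org/")

g = Graph()
annotation, selector = EX["annotation/1"], EX["selector/1"]

g.add((annotation, RDF.type, ANN.Annotation))
g.add((annotation, ANN.annotatesResource, EX["paper/123"]))  # the web document being annotated
g.add((annotation, ANN.context, selector))                   # where in the document it applies
g.add((annotation, ANN.hasTopic, EX["obo/GO_0008150"]))      # ontology term attached as the tag
g.add((selector, ANN.exact, Literal("amyloid beta")))        # the selected span of text

print(g.serialize(format="turtle"))
```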

8.
J Biomed Inform ; 41(5): 739-51, 2008 Oct.
Article in English | MEDLINE | ID: mdl-18583197

ABSTRACT

Developing cures for highly complex diseases, such as neurodegenerative disorders, requires extensive interdisciplinary collaboration and exchange of biomedical information in context. Our ability to exchange such information across sub-specialties today is limited by the current scientific knowledge ecosystem's inability to properly contextualize and integrate data and discourse in machine-interpretable form. This inherently limits the productivity of research and the progress toward cures for devastating diseases such as Alzheimer's and Parkinson's. SWAN (Semantic Web Applications in Neuromedicine) is an interdisciplinary project to develop a practical, common, semantically structured framework for biomedical discourse, initially applied to, but not limited to, significant problems in Alzheimer Disease (AD) research. The SWAN ontology has been developed in the context of building a series of applications for biomedical researchers, as well as in extensive discussions and collaborations with the larger bio-ontologies community. In this paper, we present and discuss the SWAN ontology of biomedical discourse. We ground its development theoretically, present its design approach, explain its main classes and their application, and show its relationship to other ongoing activities in biomedicine and bio-ontologies.


Subjects
Biomedical Research/methods, Database Management Systems, Information Storage and Retrieval/methods, Natural Language Processing, Animals, Humans, Information Dissemination/methods, Internet/supply & distribution, Knowledge Bases, Medicine/methods, Neurodegenerative Diseases/diagnosis, Neurodegenerative Diseases/physiopathology, Semantics, Controlled Vocabulary
9.
AMIA Annu Symp Proc ; : 146-50, 2006.
Article in English | MEDLINE | ID: mdl-17238320

ABSTRACT

This paper presents Tempo, a framework for the definition, generation and execution of data processing components. Its architecture is organized around pipelines of modules assembled according to a specific meta-model and contract-based communication rules (a generic sketch of this pipeline pattern appears below). Each pipeline wraps one or more data processing algorithms provided as reusable blocks in the default package. This package can be extended with custom solutions through a plug-in mechanism. The Tempo components can be delivered both as web services and as a software library, and can be reused in different contexts through parameter-based configuration. Although it has initially been tested in the medical field, Tempo is conceived as a general-purpose framework. To date, it has been integrated and tested within a medical guideline implementation tool and, as an embedded module, in a general-purpose web application prototype for the extraction of temporal patterns from generic time series.


Subjects
Algorithms, Electronic Data Processing/methods, Information Storage and Retrieval/methods, Software, Time
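The Tempo abstract describes pipelines of reusable processing blocks that communicate under a common contract and can be extended via plug-ins. The sketch below illustrates that general pattern in Python; it is not Tempo's actual API (which the abstract does not expose), only a minimal stand-in under assumed names.

```python
# Generic sketch of the pipeline-of-modules pattern described above: every block honours the
# same contract (a process() method over a numeric series), and a pipeline is an ordered
# composition of such blocks. This is a stand-in, not Tempo's real interface.
from typing import Protocol, Sequence


class Module(Protocol):
    def process(self, series: Sequence[float]) -> Sequence[float]: ...


class MovingAverage:
    """Smooths the series with a trailing window."""
    def __init__(self, window: int) -> None:
        self.window = window

    def process(self, series: Sequence[float]) -> Sequence[float]:
        out = []
        for i in range(len(series)):
            chunk = series[max(0, i - self.window + 1): i + 1]
            out.append(sum(chunk) / len(chunk))
        return out


class Threshold:
    """Keeps only the values at or above a limit (a crude temporal-pattern detector)."""
    def __init__(self, limit: float) -> None:
        self.limit = limit

    def process(self, series: Sequence[float]) -> Sequence[float]:
        return [x for x in series if x >= self.limit]


def run_pipeline(modules: Sequence[Module], series: Sequence[float]) -> Sequence[float]:
    for module in modules:  # modules are interchangeable blocks; their order is the pipeline
        series = module.process(series)
    return series


print(run_pipeline([MovingAverage(window=3), Threshold(limit=1.0)], [0.2, 0.9, 1.4, 2.0, 1.1]))
```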
10.
Int J Med Inform ; 74(7-8): 553-62, 2005 Aug.
Article in English | MEDLINE | ID: mdl-16043084

ABSTRACT

This paper describes the architecture of the Guide Project, a proposal for innovation of Health Information Systems that brings together medical and organizational issues through the Separation of Concerns paradigm. In particular, we focus on one building block of the architecture: the Guideline Management System, which handles the whole life cycle of computerized Clinical Practice Guidelines. The communication between the Guideline Management System and the other components of the project architecture is message-based, according to specific contracts that allow easy integration of components developed by different parties and, in particular, with legacy systems (i.e. existing electronic patient records). In turn, the Guideline Management System components are organized in a distributed architecture: an editor to formalize guidelines, a repository to store and publish them, an enactment system to implement guideline instances in a multi-user environment, and a reporting system able to completely trace any individual physician's guideline-based decision process. The repository is organized in different levels, from international, national and regional down to the specific health care organization, according to the healthcare delivery policy of a country. Different organizations can retrieve Clinical Practice Guidelines from the repository, adapt them, and introduce them into clinical practice.


Assuntos
Sistemas Computacionais , Sistemas de Apoio a Decisões Clínicas/organização & administração , Bases de Dados como Assunto , Itália , Conhecimento , Guias de Prática Clínica como Assunto/normas
11.
Stud Health Technol Inform ; 101: 75-87, 2004.
Article in English | MEDLINE | ID: mdl-15537207

ABSTRACT

Guidelines are often based on a mixture of evidence-based and consensus-based recommendations. Providing a series of "good" recommendations does not automatically result in a guideline that is easily applicable, nor does acting according to such recommendations automatically lead to an effective and efficient clinical practice. In this paper we summarize our experience in evaluating both the usability and the impact of a guideline for acute/subacute stroke management. A computerised version of the guideline has been implemented and linked to the electronic patient record. We collected data on 386 patients. Our analysis highlighted a number of non-compliances. Some of them can be easily justified, while others depend only on physician resistance to behavioural changes and on cultural biases. Our results show that health outcomes and costs are related to guideline compliance: a unit increase in the number of non-compliances results in a 7% increase in mortality at six months. Patients treated according to the guideline showed a 13% increase in treatment effectiveness at discharge, and an average cost of 2929 Euros versus 3694 Euros for the others.


Subjects
Guideline Adherence, Practice Guidelines as Topic, Costs and Cost Analysis, Evidence-Based Medicine, Humans, Computerized Medical Records Systems, Myocardial Ischemia/diagnosis, Stroke/therapy, User-Computer Interface
12.
Stud Health Technol Inform ; 107(Pt 1): 28-32, 2004.
Article in English | MEDLINE | ID: mdl-15360768

ABSTRACT

This paper describes the architecture of NewGuide, a guideline management system for handling the whole life cycle of a computerized clinical practice guideline. NewGuide components are organized in a distributed architecture: an editor to formalize guidelines, a repository to store them, an inference engine to implement guideline instances in a multi-user environment, and a reporting system storing the guideline logs so that any individual physician's guideline-based decision process can be completely traced. A "central level" of the system maintains official versions of the guidelines, and local Healthcare Organizations may download and implement them according to their needs. The architecture has been implemented using the Java 2 Enterprise Edition (J2EE) platform. Simple Object Access Protocol (SOAP) and a set of contracts are the key factors for the integration of NewGuide with healthcare legacy systems (see the illustrative sketch below): they allow legacy user interfaces to remain unchanged and connect the system with whatever electronic patient record is in use. The system functionality is illustrated in three different contexts: homecare-based pressure ulcer prevention, acute ischemic stroke treatment, and heart failure management by general practitioners.


Subjects
Clinical Decision Support Systems, Practice Guidelines as Topic, Computer Systems, Heart Failure/therapy, Humans, Computerized Medical Records Systems, Pressure Ulcer/prevention & control, Stroke/therapy, Systems Integration, Computer-Assisted Therapy
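The integration mechanism named above is SOAP messaging under agreed contracts between legacy systems and the guideline engine. The sketch below builds one such message with Python's standard library; only the SOAP 1.1 envelope structure is standard, while the operation and field names inside the body are invented for illustration.

```python
# Sketch of a contract-style SOAP message a legacy electronic patient record might send to a
# guideline inference engine. The envelope follows SOAP 1.1; the body elements are hypothetical.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_NS)

envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")

# Hypothetical contract: report a new patient observation to the guideline engine.
request = ET.SubElement(body, "ReportObservation")       # invented operation name
ET.SubElement(request, "patientId").text = "12345"
ET.SubElement(request, "observation").text = "systolic_blood_pressure"
ET.SubElement(request, "value").text = "165"

print(ET.tostring(envelope, encoding="unicode"))
```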
13.
Stud Health Technol Inform ; 102: 81-94, 2004.
Article in English | MEDLINE | ID: mdl-15853265

ABSTRACT

Evidence-based medicine relies on the execution of clinical practice guidelines and protocols. A great deal of effort has been invested in the development of tools that can automate the representation and execution of the recommendations contained within such guidelines by creating Computer Interpretable Guideline Models (CIGMs). Context-based task ontologies (CTOs), based on standard terminology systems like UMLS, form one of the core components of such models. We have created DAML+OIL-based CTOs for the tasks referred to in the WHO guideline for hypertension management, also drawing comparisons with other related guidelines. The advantages of CTOs include contextualization of ontologies, tailoring of ontologies to specific aspects of the phenomena of interest, division of the complex tasks involved in creating ontologies into different levels, and provision of a methodology by means of which the task recommendations contained within guidelines can be integrated into the clinical practices of a health care setting.


Assuntos
Informática Médica , Guias de Prática Clínica como Assunto , Terminologia como Assunto , Humanos , Hipertensão/terapia , Unified Medical Language System , Organização Mundial da Saúde
14.
Med Inform Internet Med ; 28(2): 99-115, 2003 Jun.
Article in English | MEDLINE | ID: mdl-14692587

ABSTRACT

One of the principal challenges in medical practice is keeping practitioners' knowledge up to date. One of the prime roles of Continuing Medical Education is to train medical practitioners in the latest advances in health care, specialized to their needs. Online courses and classroom teaching with computer-based representations have become an established mode of delivering medical education. This paper deals with the modularized representation of a medical text concerning clinical practice guidelines. The proposed system takes into consideration the semantics of the Unified Medical Language System and is based upon marking up and displaying the knowledge using the XML and XSLT languages. This modularization of the concepts leads to the determination of the context of a portion of the document or of the document as a whole. Thus, after mark-up using our system, the text components can be exchanged, modified or reconstructed, which, in turn, helps to keep medical knowledge up to date.


Subjects
Artificial Intelligence, Clinical Decision Support Systems, Continuing Medical Education/methods, Practice Guidelines as Topic, Programming Languages, Unified Medical Language System, Humans, Internet, National Library of Medicine (U.S.), PubMed, Semantics, United States
15.
Stud Health Technol Inform ; 95: 469-74, 2003.
Article in English | MEDLINE | ID: mdl-14664031

ABSTRACT

Medical knowledge in clinical practice guideline (GL) texts is the source of task-based computer-interpretable clinical guideline models (CIGMs). We have used Unified Medical Language System (UMLS) semantic types (STs) to measure the percentage of GL text that belongs to a particular ST (see the illustrative sketch below). We also use the UMLS semantic network, together with the CIGM-specific ontology, to derive the semantic meaning behind the GL text. To achieve this objective, we took nine GL texts from the National Guideline Clearinghouse (NGC) and marked up the text dealing with a particular ST. The STs we took into consideration were restricted according to the requirements of a task-based CIGM. We used the DARPA Agent Markup Language and Ontology Inference Layer (DAML+OIL) to create the UMLS- and CIGM-specific semantic network. For the latter, as a bench test, we used the 1999 WHO-International Society of Hypertension Guidelines for the Management of Hypertension. We took into consideration the UMLS STs closest to the clinical tasks. The percentage of the GL text dealing with the ST "Health Care Activity" (HCA) and its subtypes "Laboratory Procedure", "Diagnostic Procedure" and "Therapeutic or Preventive Procedure" was measured. The parts of text belonging to other STs or to comments were separated. Terms belonging to other STs were mapped to the STs under "HCA" for representation in DAML+OIL. As a result, we found that the three STs under "HCA" were the predominant STs present in the GL text. In cases where terms of related STs existed, they were mapped into one of the three STs. The DAML+OIL representation was able to describe the hierarchy in task-based CIGMs. To conclude, we found that the three STs could be used to represent the semantic network of task-based CIGMs. We also identified some mapping operators that could be used for mapping other STs into these.


Subjects
Medical Informatics, Practice Guidelines as Topic, Semantics, Unified Medical Language System, Humans, Hypertension/diagnosis, Hypertension/therapy, Italy, Programming Languages
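The measurement the abstract describes, the share of guideline text covered by each semantic type, can be sketched as below once the text has been marked up into typed spans. The span data and text length are invented for illustration; only the percentage computation reflects the method described.

```python
# Sketch: given marked-up spans tagged with UMLS semantic types, compute the share of the
# guideline text that each semantic type covers. The spans and text length are hypothetical.
from collections import defaultdict

guideline_text_length = 1200   # characters in the (hypothetical) guideline section
spans = [                      # (semantic_type, start_offset, end_offset) from mark-up
    ("Therapeutic or Preventive Procedure", 100, 260),
    ("Diagnostic Procedure", 300, 420),
    ("Laboratory Procedure", 430, 500),
    ("Therapeutic or Preventive Procedure", 650, 900),
]

covered = defaultdict(int)
for semantic_type, start, end in spans:
    covered[semantic_type] += end - start

for semantic_type, chars in sorted(covered.items(), key=lambda item: -item[1]):
    print(f"{semantic_type}: {100 * chars / guideline_text_length:.1f}% of text")
```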
16.
J Am Med Inform Assoc ; 10(1): 52-68, 2003.
Article in English | MEDLINE | ID: mdl-12509357

ABSTRACT

OBJECTIVES: Many groups are developing computer-interpretable clinical guidelines (CIGs) for use during clinical encounters. CIGs use "Task-Network Models" for representation but differ in their approaches to addressing particular modeling challenges. We have studied similarities and differences between CIGs in order to identify issues that must be resolved before a consensus on a set of common components can be developed. DESIGN: We compared six models: Asbru, EON, GLIF, GUIDE, PRODIGY, and PROforma. Collaborators from the groups that created these models represented, in their own formalisms, portions of two guidelines: the American College of Chest Physicians cough guidelines and the Sixth Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure. MEASUREMENTS: We compared the models according to eight components that capture the structure of CIGs. The components enable modelers to encode guidelines as plans that organize decision and action tasks in networks. They also enable the encoded guidelines to be linked with patient data, a key requirement for enabling patient-specific decision support. RESULTS: We found consensus on many components, including plan organization, expression language, conceptual medical record model, medical concept model, and data abstractions. Differences were most apparent in underlying decision models, goal representation, use of scenarios, and structured medical actions. CONCLUSION: We identified guideline components that the CIG community could adopt as standards. Some of the participants are pursuing standardization of these components under the auspices of HL7.


Subjects
Clinical Decision Support Systems, Decision Support Techniques, Practice Guidelines as Topic, Humans, Programming Languages, Software