Results 1 - 6 of 6
1.
Stud Health Technol Inform; 235: 416-420, 2017.
Article in English | MEDLINE | ID: mdl-28423826

ABSTRACT

ArchMS is a framework that represents clinical information and knowledge using ontologies in OWL, which facilitates semantic interoperability and thereby the exploitation and secondary use of clinical data. However, it does not yet support the automated assessment of quality of care. CLIF is a stepwise method for formalizing quality indicators; it has been implemented in the CLIF tool, which supports its users in generating computable queries based on a patient data model that can be based on archetypes. To enable the automated computation of quality indicators using ontologies and archetypes, we tested whether ArchMS and the CLIF tool can be integrated. We successfully automated the generation of SPARQL queries from quality indicators that had been formalized with CLIF, and integrated these queries into ArchMS. Hence, ontologies and archetypes can be combined for the execution of formalized quality indicators.
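To make the pipeline concrete, the sketch below shows how a generated SPARQL query might be executed against an RDF export of patient data using Python and rdflib. The query, prefix, and class/property names (ex:Patient, ex:hasDiagnosis, ex:DischargeDiagnosis) are illustrative assumptions, not the actual output of the CLIF tool or the ArchMS vocabulary.

```python
# Hypothetical sketch: executing a generated SPARQL query against an RDF
# patient-data graph. The IRIs below are illustrative placeholders, not
# ArchMS or SNOMED CT terms.
from rdflib import Graph

INDICATOR_QUERY = """
PREFIX ex: <http://example.org/clinical#>
SELECT (COUNT(DISTINCT ?patient) AS ?numerator)
WHERE {
    ?patient a ex:Patient ;
             ex:hasDiagnosis ?dx .
    ?dx a ex:DischargeDiagnosis .
}
"""

g = Graph()
g.parse("patient_data.ttl", format="turtle")  # assumed local RDF export

for row in g.query(INDICATOR_QUERY):
    print("Patients with a recorded discharge diagnosis:", row[0])
```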


Subject(s)
Medical Informatics; Quality Indicators, Health Care; Semantics; Biological Ontologies; Humans; Knowledge
2.
Artif Intell Med; 65(1): 29-34, 2015 Sep.
Article in English | MEDLINE | ID: mdl-25455563

ABSTRACT

OBJECTIVE: Intra-axiom redundancies are elements of concept definitions that are redundant because they are entailed by other elements of the same definition. While such redundancies are harmless from a logical point of view, they make concept definitions hard to maintain and may lead to content-related problems when concepts evolve. The objective of this study is to develop a fully automated method to detect intra-axiom redundancies in OWL 2 EL and apply it to SNOMED Clinical Terms (SNOMED CT). MATERIALS AND METHODS: We developed a software program in which we implemented, adapted and extended existing rules for redundancy elimination. With this program, we analysed the occurrence of redundancy in 11 releases of SNOMED CT (January 2009 to January 2014). We used the ELK reasoner to classify SNOMED CT, and Pellet to explain equivalences. We assessed the completeness and soundness of the results by an in-depth examination of the redundant elements identified in the July 2012 release of SNOMED CT. To determine whether concepts with redundant elements lead to maintenance issues, we analysed a small sample of solved redundancies. RESULTS: The number of redundantly defined concepts in SNOMED CT was consistently around 35,000 across releases. In the July 2012 version of SNOMED CT, 35,010 (12%) of the 296,433 concepts contained redundant elements in their definitions. The results of applying our method are sound and complete with respect to our evaluation. Analysis of solved redundancies suggests that redundancies in concept definitions lead to inadequate maintenance of SNOMED CT. CONCLUSIONS: Our analysis revealed that redundant elements are continuously introduced and removed, and that redundant elements may be overlooked when concept definitions are corrected. Applying our redundancy detection method to remove intra-axiom redundancies from the stated form of SNOMED CT, and to point knowledge modellers to newly introduced redundancies, can support creating and maintaining a redundancy-free version of SNOMED CT.
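As a minimal illustration of one redundancy-elimination rule, the sketch below flags a conjunct "role some FillerA" as redundant when the same definition also contains "role some FillerB" with FillerB subsumed by FillerA. The toy hierarchy and concept names are assumptions for demonstration; the paper's actual implementation operates on OWL 2 EL axioms with a DL reasoner.

```python
# Minimal sketch (assumption, not the paper's implementation): detecting one
# type of intra-axiom redundancy in a toy EL-style concept definition.
from typing import Dict, List, Set, Tuple

# Toy subsumption hierarchy: child -> set of direct parents.
HIERARCHY: Dict[str, Set[str]] = {
    "LeftVentricle": {"HeartStructure"},
    "HeartStructure": {"BodyStructure"},
}

def ancestors(concept: str) -> Set[str]:
    """All transitive superclasses of a concept in the toy hierarchy."""
    result: Set[str] = set()
    stack = list(HIERARCHY.get(concept, set()))
    while stack:
        parent = stack.pop()
        if parent not in result:
            result.add(parent)
            stack.extend(HIERARCHY.get(parent, set()))
    return result

def redundant_conjuncts(definition: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
    """Return (role, filler) conjuncts entailed by a more specific conjunct."""
    redundant = []
    for role, filler in definition:
        for other_role, other_filler in definition:
            if role == other_role and filler in ancestors(other_filler):
                redundant.append((role, filler))
                break
    return redundant

# 'finding_site some HeartStructure' is entailed by the more specific
# 'finding_site some LeftVentricle', so it is flagged as redundant.
definition = [("finding_site", "LeftVentricle"), ("finding_site", "HeartStructure")]
print(redundant_conjuncts(definition))  # [('finding_site', 'HeartStructure')]
```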


Subject(s)
Artificial Intelligence/standards; Systematized Nomenclature of Medicine; Humans; Vocabulary, Controlled
3.
BMC Med Inform Decis Mak; 14: 32, 2014 Apr 11.
Article in English | MEDLINE | ID: mdl-24721489

ABSTRACT

BACKGROUND: Our study aims to assess the influence of data quality on computed Dutch hospital quality indicators, and whether colorectal cancer surgery indicators can be computed reliably from routinely recorded data in an electronic medical record (EMR). METHODS: Cross-sectional study in a department of gastrointestinal oncology in a university hospital, in which a set of 10 indicators was computed (1) from data abstracted manually for the national quality register Dutch Surgical Colorectal Audit (DSCA), serving as the reference standard, and (2) from routinely collected data in an EMR. All 75 patients for whom data had been submitted to the DSCA for the reporting year 2011, and all 79 patients who underwent a resection of a primary colorectal carcinoma in 2011 according to structured data in the EMR, were included. We compared the results and investigated the causes of any differences through a data quality analysis. Main outcome measures were the computability of the quality indicators, the absolute indicator percentages, and data quality in terms of availability in a structured format, completeness and correctness. RESULTS: All indicators were fully computable from the DSCA dataset, but only three from the EMR data, two of which were percentages. For both percentages, the difference in proportions computed from the two datasets was significant. All required data items were available in a structured format in the DSCA dataset. Their average completeness was 86%, while the average completeness of these items in the EMR was 50%. Their average correctness was 87%. CONCLUSIONS: Our study showed that data quality can significantly influence indicator results, and that our EMR data were not suitable for reliably computing quality indicators. EMRs should be designed so that the data required for audits can be entered directly in a structured and coded format.
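The sketch below illustrates the kind of comparison described above: computing an indicator percentage from two sources and testing whether the proportions differ significantly. The counts are invented for demonstration and are not the study's data.

```python
# Illustrative sketch only: comparing an indicator percentage computed from
# the DSCA registry with the same percentage computed from EMR data.
# The counts below are made up; they are not results from the study.
from scipy.stats import chi2_contingency

dsca_numerator, dsca_denominator = 60, 75   # hypothetical registry counts
emr_numerator, emr_denominator = 40, 79     # hypothetical EMR counts

table = [
    [dsca_numerator, dsca_denominator - dsca_numerator],
    [emr_numerator, emr_denominator - emr_numerator],
]
chi2, p_value, _, _ = chi2_contingency(table)

print(f"DSCA: {dsca_numerator / dsca_denominator:.1%}, "
      f"EMR: {emr_numerator / emr_denominator:.1%}, p = {p_value:.3f}")
```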


Subject(s)
Quality Indicators, Health Care/standards; Registries; Research Design/standards; Carcinoma/epidemiology; Carcinoma/surgery; Clinical Audit/standards; Colorectal Neoplasms/epidemiology; Colorectal Neoplasms/surgery; Cross-Sectional Studies; Electronic Health Records/standards; Hospital Departments/standards; Humans; Netherlands
4.
J Am Med Inform Assoc; 21(2): 285-91, 2014.
Article in English | MEDLINE | ID: mdl-24192317

ABSTRACT

OBJECTIVE: Ambiguous definitions of quality measures in natural language impede their automated computability as well as the reproducibility, validity, timeliness, traceability, comparability, and interpretability of computed results. Therefore, quality measures should be formalized before their release. We have previously developed and successfully applied a method for clinical indicator formalization (CLIF). The objective of the present study is to test whether CLIF is generalizable, that is, applicable to a large set of heterogeneous measures of different types and from various domains. MATERIALS AND METHODS: We formalized the entire set of 159 Dutch quality measures for general practice, which contains structure, process, and outcome measures and covers seven domains. We used a web-based tool to facilitate the application of our method. Subsequently, we computed the measures on a large database of real patient data. RESULTS: Our CLIF method enabled us to fully formalize 100% of the measures. Owing to missing functionality, the accompanying tool could support full formalization of only 86% of the quality measures into Structured Query Language (SQL) queries; the remaining 14% required manual application of the CLIF method by directly translating the respective criteria into SQL. The results obtained by computing the measures correlate strongly with results computed independently by two other parties. CONCLUSIONS: The CLIF method covers all quality measures after having been extended with an additional step. Our web tool requires further refinement before CLIF can be applied fully automatically. We therefore conclude that CLIF is sufficiently generalizable to formalize the entire set of Dutch quality measures for general practice.
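As a hedged illustration of what a formalized measure might look like once translated into SQL, the sketch below computes a hypothetical process measure over an assumed table layout. The schema, measure definition, and database file are placeholders for demonstration, not the tool's actual output or the Dutch measure set.

```python
# Hypothetical sketch of the kind of SQL a formalized process measure might
# translate into. Table and column names (encounters, diagnosis_code,
# blood_pressure_recorded) are assumptions, not the tool's actual schema.
import sqlite3

MEASURE_SQL = """
SELECT
    100.0 * SUM(CASE WHEN blood_pressure_recorded = 1 THEN 1 ELSE 0 END)
          / COUNT(*) AS measure_score
FROM encounters
WHERE diagnosis_code = 'hypertension'
  AND encounter_year = 2011;
"""

with sqlite3.connect("gp_data.db") as conn:  # assumed extract of GP data
    score = conn.execute(MEASURE_SQL).fetchone()[0]
    print(f"Process measure score: {score:.1f}%")
```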


Subject(s)
Electronic Health Records; General Practice/standards; Quality Indicators, Health Care; Humans; Natural Language Processing; Netherlands; Practice Guidelines as Topic; Programming Languages
5.
Stud Health Technol Inform; 192: 313-7, 2013.
Article in English | MEDLINE | ID: mdl-23920567

ABSTRACT

Today, clinical data is routinely recorded in vast amounts, but its reuse can be challenging. One secondary use that should ideally be based on previously collected clinical data is the computation of clinical quality indicators. In the present study, we attempted to retrieve all data from our hospital that is required to compute a set of quality indicators in the domain of colorectal cancer surgery. We categorised the barriers that we encountered in the scope of this project according to an existing framework, and we provide recommendations on how to prevent or surmount these barriers. Assuming that our case is not unique, these recommendations may be applicable to the design, evaluation and optimisation of Electronic Health Records.


Subject(s)
Attitude of Health Personnel; Colorectal Neoplasms/surgery; Computer Literacy; Data Mining/methods; Electronic Health Records; Medical Record Linkage/methods; Colorectal Neoplasms/epidemiology; Humans; Netherlands/epidemiology; Prevalence
6.
Stud Health Technol Inform; 180: 113-7, 2012.
Article in English | MEDLINE | ID: mdl-22874163

ABSTRACT

To enable the automated calculation of clinical quality indicators, we have proposed CLIF, a stepwise method for clinical quality indicator formalisation. Quality indicators are used for external accountability and hospital comparison. As clinical quality indicators are computed in a decentralised manner by the hospitals themselves, the reproducibility of the formalisation method is essential to ensure the comparability of the calculated values. We therefore performed a case study to investigate the reproducibility of CLIF. Eight participants formalised the same sample quality indicator with the help of a web-based indicator-authoring tool that facilitates the application of CLIF. We analysed the results per step and concluded that the method itself leads to reproducible results. To further improve reproducibility, ambiguities in the indicator text must be clarified, and trained experts are needed to encode clinical concepts and to specify the relations between concepts.
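As an illustrative sketch (not the study's analysis code), per-step reproducibility could be summarised as the fraction of participants whose formalisation matches the most common choice at each CLIF step; the step names and choices below are hypothetical.

```python
# Minimal sketch under assumed data: per-step agreement among participants
# who formalised the same indicator, expressed as the share choosing the
# most common formalisation at each step.
from collections import Counter
from typing import Dict, List

# Hypothetical formalisation choices per CLIF step, one entry per participant.
step_results: Dict[str, List[str]] = {
    "select_numerator_concepts": ["A", "A", "A", "A", "A", "A", "B", "A"],
    "specify_relations":         ["X", "X", "Y", "X", "X", "Y", "X", "X"],
}

for step, choices in step_results.items():
    most_common_count = Counter(choices).most_common(1)[0][1]
    agreement = most_common_count / len(choices)
    print(f"{step}: {agreement:.0%} agreement")
```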


Subject(s)
Internet; Practice Guidelines as Topic; Quality Assurance, Health Care/methods; Quality Assurance, Health Care/standards; Quality Indicators, Health Care/standards; User-Computer Interface; Reproducibility of Results; Sensitivity and Specificity