1.
J Biomed Inform; 136: 104242, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36372346

ABSTRACT

BACKGROUND: Unexpected variability across healthcare datasets may indicate data quality issues and thereby affect the credibility of these data for reuse. No gold-standard reference dataset or methods for variability assessment are usually available for these datasets. In this study, we aim to describe the process of discovering data quality implications by applying a set of methods for assessing variability between sources and over time in a large hospital database.

METHODS: We described and applied a set of multisource and temporal variability assessment methods to a large Portuguese hospitalization database, in which variation in condition-specific hospitalization ratios derived from clinically coded data was assessed between hospitals (sources) and over time. We identified condition-specific admissions using the Clinical Classifications Software (CCS), developed by the Agency for Healthcare Research and Quality. A Statistical Process Control (SPC) approach based on funnel plots of condition-specific standardized hospitalization ratios (SHRs) was used to assess multisource variability, whereas temporal heat maps and Information-Geometric Temporal (IGT) plots were used to assess temporal variability by displaying abrupt temporal changes in data distributions. Results were presented for the 15 most common inpatient conditions (CCS) in Portugal.

MAIN FINDINGS: Funnel plot assessment allowed the detection of several outlying hospitals whose SHRs were much lower or higher than expected. Adjusting SHRs for hospital characteristics, beyond age and sex, considerably affected the degree of multisource variability for most diseases. Overall, probability distributions changed over time for most diseases, although heterogeneously. Abrupt temporal changes in data distributions for acute myocardial infarction and congestive heart failure coincided with the periods comprising the transition to the International Classification of Diseases, 10th revision, Clinical Modification, whereas changes in the Diagnosis-Related Groups software seem to have driven changes in data distributions for both acute myocardial infarction and liveborn admissions. The analysis of heat maps also allowed the detection of several discontinuities at the hospital level over time, in some cases also coinciding with the aforementioned factors.

CONCLUSIONS: This paper described the successful application of a set of reproducible, generalizable and systematic methods for variability assessment, including visualization tools that can be useful for detecting abnormal patterns in healthcare data, while also addressing some limitations of common approaches. The presented method for multisource variability assessment is based on SPC, which is an advantage given the lack of a gold standard for this process. Properly controlling for hospital characteristics and differences in case mix when estimating SHRs is critical for isolating data quality-related variability among data sources. The use of IGT plots provides an advantage over common methods for temporal variability assessment due to its suitability for multitype and multimodal data, which are common characteristics of healthcare data. The novelty of this work is the use of a set of methods to discover new data quality insights in healthcare data.
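As a brief illustration of the SPC funnel plot approach described in this abstract, the following Python sketch plots standardized hospitalization ratios (SHR = observed/expected admissions) against expected counts, with Poisson-based control limits forming the funnel. All hospital counts are invented for illustration and are not data from the study; the control-limit formula is the common normal approximation, not necessarily the exact variant the authors used.

```python
# Minimal sketch of an SPC funnel plot for standardized hospitalization
# ratios (SHR). Hypothetical data only; in a real analysis the expected
# counts would come from indirect standardization (age, sex, hospital
# characteristics).
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical per-hospital observed and expected admission counts
observed = np.array([120, 95, 210, 60, 330])
expected = np.array([100.0, 110.0, 190.0, 80.0, 300.0])
shr = observed / expected  # SHR = O / E; 1.0 means "as expected"

# Poisson-based control limits around the target SHR of 1.0:
# Var(O/E) is approximately 1/E, so limits are 1 +/- z / sqrt(E).
e_grid = np.linspace(expected.min() * 0.5, expected.max() * 1.2, 200)
for z, style in [(1.96, "--"), (3.09, ":")]:  # ~95% and ~99.8% limits
    plt.plot(e_grid, 1 + z / np.sqrt(e_grid), style, color="grey")
    plt.plot(e_grid, 1 - z / np.sqrt(e_grid), style, color="grey")

plt.scatter(expected, shr)       # hospitals outside the funnel are outliers
plt.axhline(1.0, color="black")  # target: observed equals expected
plt.xlabel("Expected admissions (E)")
plt.ylabel("SHR (O / E)")
plt.show()
```

Hospitals falling outside the funnel are the "outlying hospitals" the abstract refers to: their SHRs deviate from 1.0 by more than chance alone would explain given their volume.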


Subject(s)
Data Accuracy, Myocardial Infarction, Humans, Portugal, Hospitals, Hospitalization
2.
Int J Med Inform; 156: 104584, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34634526

ABSTRACT

INTRODUCTION: Administrative hospital databases are an important tool for hospital financing in many national health systems and an important data source for clinical, epidemiological and health services research. The data quality of such databases is therefore of utmost importance. This paper presents a systematic review of the root causes of data quality problems affecting administrative hospital data, creating a catalogue of potential issues for data quality analysts to explore.

METHODS: The MEDLINE and Scopus databases were searched using inclusion criteria based on the following two concept blocks: (1) administrative hospital databases and (2) data quality. Two reviewers independently screened the studies' titles and abstracts. Three researchers independently selected the screened studies based on their full texts and then extracted the potential root causes inferred from them. These were subsequently classified according to the Ishikawa model into six categories: "Personnel", "Material", "Method", "Machine", "Mission" and "Management".

RESULTS: The contribution of this paper is a classification of the 105 potential root causes found through a systematic review of the 77 relevant studies identified and analyzed, represented as an Ishikawa diagram. Most of the root causes (25.7%) were associated with the category "Personnel" (people's knowledge, preferences, education and culture), mostly related to the activities of clinical coders and health care providers. The quality of hospital documentation, within the category "Material", and aspects related to financial incentives or disincentives, within the category "Mission", were also frequently cited in the literature as relevant root causes of data quality issues.

CONCLUSIONS: The resulting catalogue, systematized using the Ishikawa framework, compiles potential root causes of data quality issues to be considered before reusing these data, and can point to actions aimed at improving data quality.
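The Ishikawa catalogue described in this abstract is, in effect, a tally of extracted root causes by category. The following Python sketch shows how such a tally might be computed; the example root causes are hypothetical and do not reproduce the paper's actual catalogue of 105 causes.

```python
# Illustrative sketch: tallying root causes into the six Ishikawa
# categories used in the review. Example entries are invented.
from collections import Counter

# (root cause, Ishikawa category) pairs, as an analyst might record
# them while reviewing the literature
root_causes = [
    ("coder unfamiliar with coding guidelines", "Personnel"),
    ("illegible or incomplete clinical notes", "Material"),
    ("financial incentive to upcode", "Mission"),
    ("grouper software version change", "Machine"),
    ("no routine coding audits", "Management"),
    ("ambiguous abstraction rules", "Method"),
]

counts = Counter(category for _, category in root_causes)
total = sum(counts.values())
for category, n in counts.most_common():
    print(f"{category}: {n} ({100 * n / total:.1f}%)")
```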


Subject(s)
Data Accuracy, Documentation/standards, Hospital Administration, Delivery of Health Care, Health Personnel, Health Services Research, Hospitals, Humans
3.
Sensors (Basel); 18(9), 2018 Sep 14.
Article in English | MEDLINE | ID: mdl-30223516

ABSTRACT

The Internet of Things (IoT) introduces several technical and managerial challenges regarding the use of data generated and exchanged by and between the various Smart, Connected Products (SCPs) that make up an IoT system (i.e., physical, intelligent devices with sensors and actuators). Given the volume of data and the heterogeneity of its exchange and consumption, it is paramount to ensure that data quality levels are maintained at every step of the data chain/lifecycle; otherwise, the system may fail to meet its expected function. While Data Quality (DQ) is a mature field, existing solutions are highly heterogeneous. We therefore propose that companies, developers and vendors align their data quality management mechanisms and artefacts with well-known best practices and standards, such as those provided by ISO 8000-61. This standard enables a process approach to data quality management, overcoming the difficulties of isolated data quality activities. This paper introduces DAQUA-MASS, a methodology based on ISO 8000-61 for data quality management in sensor networks. The methodology consists of four steps following Deming's Plan-Do-Check-Act cycle.
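To make the Plan-Do-Check-Act structure concrete, the following Python sketch models one iteration of the cycle over sensor data. The step contents (a plausibility-range check) and all names are illustrative assumptions, not the normative activities of ISO 8000-61 or the actual steps of DAQUA-MASS.

```python
# Minimal sketch of one Plan-Do-Check-Act iteration for sensor data
# quality. All specifics (value range, device IDs) are hypothetical.
from dataclasses import dataclass

@dataclass
class SensorReading:
    device_id: str
    value: float

def plan():
    # Plan: define a data quality requirement, e.g., a plausible range
    return {"min": -40.0, "max": 85.0}

def do(readings):
    # Do: collect/exchange data as the SCPs normally would
    return readings

def check(readings, spec):
    # Check: measure conformance against the planned requirement
    return [r for r in readings
            if not (spec["min"] <= r.value <= spec["max"])]

def act(nonconforming):
    # Act: trigger corrective action (recalibrate, quarantine, revise spec)
    for r in nonconforming:
        print(f"flag {r.device_id}: implausible value {r.value}")

spec = plan()
readings = do([SensorReading("s1", 21.5), SensorReading("s2", 120.0)])
act(check(readings, spec))
```

In practice the cycle repeats: findings from the Act step feed back into the next Plan step, which is what gives the process approach its advantage over isolated, one-off data quality activities.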
