1.
EGEMS (Wash DC); 6(1): 17, 2018 Jul 19.
Article in English | MEDLINE | ID: mdl-30094289

ABSTRACT

OBJECTIVE: To understand the impact of varying the measurement period on the calculation of electronic Clinical Quality Measures (eCQMs).
BACKGROUND: eCQMs have grown in importance in value-based programs, but progress toward accurate and timely measurement has been slow. This has required flexibility in key measure characteristics, including the measurement period, i.e., the timeframe the measure covers. The effects of variable measurement periods on accuracy and variability are not clear.
METHODS: 209 practices were asked to extract four eCQMs from their Electronic Health Records and submit them on a quarterly basis using a 12-month measurement period. Quarterly submissions were collected via REDCap. The measurement periods in the submitted data were categorized as non-standard (3, 6, 9 months, and other) or standard (12 months). For comparison, patient-level data from three clinics were collected and the measures recalculated in an eCQM registry to assess the impact of varying measurement periods. We assessed central tendency, the shape of the distributions, and variability across the four measures. Analysis of variance (ANOVA) was used to test for differences between standard and non-standard measurement period means and for variation among these groups.
RESULTS: Of 209 practices, 191 (91 percent) submitted data over three quarters. Of the 546 total submissions, 173 had non-standard measurement periods. Differences between measures with standard versus non-standard periods ranged from -3.3 percent to 14.2 percent between clinics (p < .05 for 3 of 4). Recalculating with the patient-level data yielded deltas of -1.6 percent to 0.6 percent between non-standard and standard periods.
CONCLUSION: Variations in measurement periods were associated with variation in performance between clinics for 3 of the 4 eCQMs, but not with significant differences when the measures were recalculated within clinics. Deviations from standard measurement periods may reflect poor data quality and accuracy.
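
The effect examined in this study can be illustrated with a small calculation. Below is a minimal sketch, not the study's actual pipeline, of how a performance rate shifts when the same patient-level data are restricted to a 12-month versus a 3-month measurement period; the example rows and their field names (encounter_date, in_denominator, in_numerator) are assumptions made for illustration only.

# Minimal sketch: recomputing an eCQM performance rate under different
# measurement periods. The rows and field names below are illustrative
# assumptions, not the study's data model.
from datetime import date

rows = [
    {"encounter_date": date(2016, 2, 10),  "in_denominator": True, "in_numerator": True},
    {"encounter_date": date(2016, 5, 3),   "in_denominator": True, "in_numerator": False},
    {"encounter_date": date(2016, 11, 21), "in_denominator": True, "in_numerator": True},
    {"encounter_date": date(2016, 12, 8),  "in_denominator": True, "in_numerator": True},
]

def ecqm_rate(rows, start, end):
    # Performance rate = numerator / denominator, restricted to encounters
    # that fall inside the measurement period.
    denom = [r for r in rows if start <= r["encounter_date"] <= end and r["in_denominator"]]
    num = [r for r in denom if r["in_numerator"]]
    return len(num) / len(denom) if denom else None

# Standard 12-month period vs a non-standard 3-month (Q4) period
print(ecqm_rate(rows, date(2016, 1, 1), date(2016, 12, 31)))   # 0.75
print(ecqm_rate(rows, date(2016, 10, 1), date(2016, 12, 31)))  # 1.0

The same patients yield different rates depending solely on the window chosen, which is the kind of between-clinic variation the study attributes to non-standard measurement periods.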

2.
AMIA Annu Symp Proc; 2017: 575-584, 2017.
Article in English | MEDLINE | ID: mdl-29854122

ABSTRACT

Clinical quality measures (CQMs) aim to identify gaps in care and to promote evidence-based guidelines. Official CQM definitions consist of a measure's logic and grouped, standardized codes that define key concepts. In this study, we used the official CQM update process to understand how CQMs' meanings change over time. First, we identified differences between the narrative descriptions, logic, and vocabulary specifications of four standardized CQMs across subsequent versions (2015, 2016, and 2017). Next, we implemented the various versions in a quality measure calculation registry to understand how the differences affected the calculated prevalence of risk and measure performance. Global performance rates changed by up to 5.32%, and an increase of up to 28% in new patients was observed for key conditions between versions. Updates that change a measure's logic, together with choices to include or exclude codes in value set vocabularies, change the measurement of quality and likely introduce variation across implementations.


Subject(s)
Quality Control; Quality Indicators, Health Care; Vocabulary, Controlled; Adolescent; Adult; Centers for Medicare and Medicaid Services, U.S.; Data Accuracy; Humans; Narration; United States
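
The version-to-version comparison described in the study above amounts to set differences over value set code groups. Below is a minimal sketch of such a diff; the code lists are illustrative placeholders, not the actual 2016 and 2017 value set contents.

# Minimal sketch: diffing two annual releases of a CQM value set to see which
# codes were added or removed. The code groups are made-up examples, not the
# real value set contents for any measure year.
valueset_2016 = {"I21.02", "I21.09", "I21.11", "I22.0"}
valueset_2017 = {"I21.02", "I21.09", "I21.11", "I21.9", "I22.1"}

added = sorted(valueset_2017 - valueset_2016)     # codes that newly qualify patients
removed = sorted(valueset_2016 - valueset_2017)   # codes whose patients drop out

print("added:", added)      # ['I21.9', 'I22.1']
print("removed:", removed)  # ['I22.0']

Each added or removed code can move patients into or out of a measure's population, which is how annual definition updates translate into the performance-rate and prevalence changes reported above.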
3.
EGEMS (Wash DC); 5(1): 19, 2017 Sep 04.
Article in English | MEDLINE | ID: mdl-29881739

ABSTRACT

OBJECTIVE: To understand the impact of distinct concept-to-value set mappings on the measurement of quality of care.
BACKGROUND: Clinical quality measures (CQMs) are intended to measure the quality of healthcare services provided and to help promote evidence-based therapies. Most CQMs consist of grouped codes from vocabularies, or 'value sets', comprising the unique identifiers (i.e., object identifiers), concepts (i.e., value set names), and concept definitions (i.e., code groups) that define a measure's specifications. In the development of a statin therapy CQM, two unique value sets were created by independent measure developers for the same global concepts.
METHODS: We first identified differences between the two value set specifications of the same CQM. We then implemented both specifications in a quality measure calculation registry to understand how the differences affected the calculated prevalence of risk and measure performance.
RESULTS: Global performance rates differed by only 0.8%, but up to 2.3 times as many patients with key conditions were included, and performance rates differed by 7.5% for patients with 'myocardial infarction' and 3.5% for those with 'ischemic vascular disease'.
CONCLUSION: The decisions CQM developers make about which concepts and code groups to include or exclude in value set vocabularies can lead to inaccuracies in the measurement of quality of care. One solution is for developers to document the rationale for these decisions. Endorsements are needed to encourage system vendors, payers, informaticians, and clinicians to collaborate on creating more integrated terminology sets.
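
To make the conclusion concrete, below is a minimal sketch of how two independently authored value sets for the same concept can pull different denominators into a statin therapy measure and shift its performance rate. The patients, code groups, and the statin_rate helper are hypothetical examples, not data or code from the study.

# Minimal sketch: two value sets for the same 'myocardial infarction' concept,
# built by different developers, capture different denominators and therefore
# different performance rates. All data below are made-up examples.
patients = [
    {"id": 1, "dx": {"I21.9"},  "on_statin": True},
    {"id": 2, "dx": {"I25.2"},  "on_statin": False},   # old MI, only in the broader set
    {"id": 3, "dx": {"I21.02"}, "on_statin": True},
    {"id": 4, "dx": {"I22.0"},  "on_statin": False},
]

valueset_dev_a = {"I21.9", "I21.02"}                     # narrower concept definition
valueset_dev_b = {"I21.9", "I21.02", "I25.2", "I22.0"}   # broader concept definition

def statin_rate(patients, mi_codes):
    # Denominator: patients with any qualifying MI code; numerator: those on a statin.
    denom = [p for p in patients if p["dx"] & mi_codes]
    num = [p for p in denom if p["on_statin"]]
    return len(num) / len(denom) if denom else None

print("developer A:", statin_rate(patients, valueset_dev_a))  # 1.0
print("developer B:", statin_rate(patients, valueset_dev_b))  # 0.5

The global rate may move only slightly, as the study found, while the population counted for a specific condition, and its apparent performance, changes substantially with the value set chosen.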
