Results 1 - 2 of 2
1.
Epidemiol Rev ; 44(1): 55-66, 2022 Dec 21.
Article in English | MEDLINE | ID: mdl-36065832

ABSTRACT

In clinical trials, harms (i.e., adverse events) are often reported by simply counting the number of people who experienced each event. Reporting only frequencies ignores other dimensions of the data that are important to stakeholders, including severity, seriousness, rate (recurrence), timing, and groups of related harms. Additionally, applying selection criteria to harms prevents most of them from being reported. Visualization could improve the communication of these multidimensional data. We replicated and compared the characteristics of 6 approaches for visualizing harms: dot plot, stacked bar chart, volcano plot, heat map, treemap, and tendril plot. We considered binary events, using individual participant data from a randomized trial of gabapentin for neuropathic pain, and assessed the value of each approach heuristically with a group of content experts. We produced all figures using R and share the open-source code on GitHub. Most of the original visualizations were proposed for presenting individual harms (e.g., dizziness, somnolence) alone or alongside higher-level (e.g., by body system) summaries, although they can be applied at either level. Visualizations can present different dimensions of all harms observed in a trial, and all of the plots except the tendril plot can be produced without individual participant data. In our value assessment, content experts favored the dot plot and volcano plot for presenting an overall summary of harms data. Using visualizations to report harms could improve communication, and trialists can use the code we provide to implement these approaches easily.
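The volcano plot favored by the content experts works from summary-level counts rather than individual participant data. The Python sketch below is a minimal, hypothetical illustration of that general idea, not the authors' R code from GitHub: each harm is placed by its risk difference between arms against the -log10 p-value from Fisher's exact test, with point size scaled by how many participants reported the event. The harm names, arm sizes, and counts are invented for illustration.

```python
# Minimal volcano-plot sketch for trial harms (adverse events).
# NOT the authors' published code; hypothetical counts, illustrative only.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import fisher_exact

n_treat, n_ctrl = 115, 111  # hypothetical arm sizes

# harm: (events in treatment arm, events in control arm) -- hypothetical
harms = {
    "dizziness":  (28, 9),
    "somnolence": (22, 8),
    "nausea":     (10, 11),
    "headache":   (12, 14),
    "dry mouth":  (15, 5),
}

x, y, sizes, labels = [], [], [], []
for name, (a, b) in harms.items():
    risk_diff = a / n_treat - b / n_ctrl
    # 2x2 table: event vs no event, by arm
    _, p = fisher_exact([[a, n_treat - a], [b, n_ctrl - b]])
    x.append(risk_diff)
    y.append(-np.log10(p))
    sizes.append(20 * (a + b))   # point size ~ total participants with the event
    labels.append(name)

fig, ax = plt.subplots(figsize=(6, 4))
ax.scatter(x, y, s=sizes, alpha=0.6, edgecolor="black")
for xi, yi, name in zip(x, y, labels):
    ax.annotate(name, (xi, yi), textcoords="offset points", xytext=(5, 5))
ax.axvline(0, color="grey", linewidth=0.8)                                # no difference
ax.axhline(-np.log10(0.05), color="grey", linestyle="--", linewidth=0.8)  # p = 0.05
ax.set_xlabel("Risk difference (treatment - control)")
ax.set_ylabel("-log10(p-value), Fisher's exact test")
ax.set_title("Volcano plot of harms (hypothetical data)")
plt.tight_layout()
plt.show()
```

The same summary-level inputs (counts of participants with each harm, by arm) would also feed a dot plot; only the tendril plot needs event timing at the participant level.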


Subject(s)
Data Visualization, Neuralgia, Humans, Gabapentin/adverse effects, Neuralgia/drug therapy, Neuralgia/chemically induced
2.
EGEMS (Wash DC) ; 4(1): 1244, 2016.
Article in English | MEDLINE | ID: mdl-27713905

ABSTRACT

OBJECTIVE: Harmonized data quality (DQ) assessment terms, methods, and reporting practices can establish a common understanding of the strengths and limitations of electronic health record (EHR) data for operational analytics, quality improvement, and research. Existing published DQ terms were harmonized into a comprehensive unified terminology with definitions and examples and organized into a conceptual framework to support a common approach to defining whether EHR data are 'fit' for specific uses.

MATERIALS AND METHODS: DQ publications, informatics and analytics experts, managers of established DQ programs, and operational manuals from several mature EHR-based research networks were reviewed to identify potential DQ terms and categories. Two face-to-face stakeholder meetings were used to vet an initial set of DQ terms and definitions, which were grouped into an overall conceptual framework. Feedback from data producers and users was used to construct a draft set of harmonized DQ terms and categories. Multiple rounds of iterative refinement resulted in a set of terms and an organizing framework consisting of DQ categories, subcategories, terms, definitions, and examples. The inclusiveness of the harmonized terminology and logical framework was evaluated against ten published DQ terminologies.

RESULTS: Existing DQ terms were harmonized and organized into a framework by defining three DQ categories, (1) Conformance, (2) Completeness, and (3) Plausibility, and two DQ assessment contexts, (1) Verification and (2) Validation. The Conformance and Plausibility categories were further divided into subcategories. Each category and subcategory was defined with respect to whether the data may be verified against organizational data or validated against an accepted gold standard, depending on the proposed context and uses. The coverage of the harmonized DQ terminology was validated by successfully aligning it to multiple published DQ terminologies.

DISCUSSION: Existing DQ concepts, community input, and expert review informed the development of a distinct set of terms organized into categories and subcategories. The resulting DQ terms successfully encompassed a wide range of disparate DQ terminologies. Operational definitions were developed to provide guidance for implementing DQ assessment procedures. The resulting structure is an inclusive DQ framework for standardizing DQ assessment and reporting. While our analysis focused on the DQ issues often found in EHR data, the new terminology may be applicable to a wide range of electronic health data, such as administrative, research, and patient-reported data.

CONCLUSION: A consistent, common DQ terminology, organized into a logical framework, is an initial step in enabling data owners and users, patients, and policy makers to evaluate and communicate data quality findings in a well-defined manner with a shared vocabulary. Future work will leverage the framework and terminology to develop reusable data quality assessment and reporting methods.
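As a rough illustration of how the resulting terminology might be operationalized in a DQ reporting tool, the Python sketch below encodes only what the abstract names: three categories (Conformance, Completeness, Plausibility) and two assessment contexts (Verification, Validation), attached as labels to simple checks. The check names, thresholds, and sample values are assumptions made for illustration; the paper defines the terminology and framework, not this code.

```python
# Minimal sketch: tagging data quality checks with the harmonized DQ
# categories and assessment contexts described in the abstract above.
# Check names, thresholds, and data are hypothetical.
from dataclasses import dataclass
from enum import Enum
from typing import Callable, Sequence

class Category(Enum):
    CONFORMANCE = "Conformance"    # do values obey structural/format constraints?
    COMPLETENESS = "Completeness"  # are expected values present?
    PLAUSIBILITY = "Plausibility"  # are values believable?

class Context(Enum):
    VERIFICATION = "Verification"  # checked against the organization's own data/metadata
    VALIDATION = "Validation"      # checked against an external gold standard

@dataclass
class DQCheck:
    name: str
    category: Category
    context: Context
    rule: Callable[[Sequence[float]], bool]  # returns True when the data pass

def run_checks(values: Sequence[float], checks: Sequence[DQCheck]) -> None:
    """Run each check and print a pass/fail line labeled with its DQ terms."""
    for check in checks:
        status = "PASS" if check.rule(values) else "FAIL"
        print(f"[{status}] {check.name} "
              f"({check.category.value} / {check.context.value})")

if __name__ == "__main__":
    heart_rates = [62, 75, 0, 410, 88]  # hypothetical EHR values
    checks = [
        DQCheck(
            name="Heart rate within an assumed physiologic range (20-300 bpm)",
            category=Category.PLAUSIBILITY,
            context=Context.VERIFICATION,
            rule=lambda xs: all(20 <= x <= 300 for x in xs),
        ),
        DQCheck(
            name="No zero-coded (missing) heart rates",
            category=Category.COMPLETENESS,
            context=Context.VERIFICATION,
            rule=lambda xs: all(x != 0 for x in xs),
        ),
    ]
    run_checks(heart_rates, checks)
```

Tagging each check with both a category and a context in this way is one possible route to the consistent, shared-vocabulary DQ reporting the abstract calls for.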
