ABSTRACT
Concerns regarding both the limited generalizability and the slow pace of traditional randomized trials have led to calls for greater use of real-world evidence (RWE) in the evaluation of new treatments or products. The RWE label has been used to refer to a variety of departures from the methods of traditional randomized controlled trials. Recognizing this complexity and potential confusion, the National Academies of Sciences, Engineering, and Medicine convened a series of workshops to clarify and address questions regarding the use of RWE to evaluate new medical treatments. Those workshops identified three specific dimensions in which RWE studies might differ from traditional clinical trials: use of real-world data (data extracted from health system records or captured by mobile devices), delivery of real-world treatment (open-label treatments delivered in community settings by community practitioners), and real-world treatment assignment (including nonrandomized comparisons and variations on random assignment, such as before-after or stepped-wedge designs). For any RWE study, decisions regarding each of these dimensions depend on the specific research question, the characteristics of the potential study settings, and the characteristics of the settings where study results would be applied.
Subject(s)
Decision Making, Delivery of Health Care/methods, Evidence-Based Practice/methods, Research Design, Electronic Health Records, Humans, Therapeutics
ABSTRACT
In research involving human subjects, large participation payments often are deemed undesirable because they may provide 'undue inducement' for potential participants to expose themselves to risk. However, although large incentives may encourage participation, they also may signal the riskiness of a study's procedures. In three experiments, we measured people's interest in participating in potentially risky research studies, and their perception of the risk associated with those studies, as functions of participation payment amounts. All experiments took place in 2007-2008 with an online nationwide sample or a sample from a northeastern U.S. city. We tested whether people judge studies that offer higher participation payments to be riskier, and, if so, whether this increased perception of risk increases the time and effort spent learning about the risks. We found that high participation payments increased willingness to participate, but, consistent with the idea that people infer riskiness from payment amount, high payments also increased perceived risk and time spent viewing risk information. Moreover, when a link between payment amount and risk level was made explicit in Experiment 3, the relationship between high payments and perceived risk strengthened. Research guidelines usually prohibit studies from offering participation incentives that compensate for risks, yet these experiments' results indicate that potential participants naturally assume that the magnitudes of risks and incentives are related. This discrepancy between research guidelines and participants' assumptions about those guidelines has implications for informed consent in human subjects research.