Results 1 - 5 of 5
1.
J Clin Epidemiol; 110: 74-81, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30826377

ABSTRACT

OBJECTIVES: To provide recommendations for the selection of comparators for randomized controlled trials of health-related behavioral interventions. STUDY DESIGN AND SETTING: The National Institutes of Health Office of Behavioral and Social Science Research convened an expert panel to critically review the literature on control or comparison groups for behavioral trials and to develop strategies for improving comparator choices and for resolving controversies and disagreements about comparators. RESULTS: The panel developed a Pragmatic Model for Comparator Selection in Health-Related Behavioral Trials. The model indicates that the optimal comparator is the one that best serves the primary purpose of the trial, but that the optimal comparator's limitations and barriers to its use must also be taken into account.
CONCLUSION: We developed best practice recommendations for the selection of comparators for health-related behavioral trials. Use of the Pragmatic Model for Comparator Selection in Health-Related Behavioral Trials can improve the comparator selection process and help resolve disagreements about comparator choices.


Subject(s)
Mental Disorders/diagnosis, Mental Disorders/therapy, National Institutes of Health (U.S.)/standards, Practice Guidelines as Topic, Female, Humans, Male, Patient Selection, Psychotherapy/methods, Randomized Controlled Trials as Topic, Research Design, United States
2.
Trials; 19(1): 557, 2018 Oct 16.
Article in English | MEDLINE | ID: mdl-30326967

ABSTRACT

BACKGROUND: Site performance is key to the success of large multicentre randomised trials. A standardised set of clear and accessible summaries of site performance could facilitate the timely identification and resolution of potential problems, minimising their impact. The aim of this study was to identify and agree a core set of key performance metrics for managing multicentre randomised trials. METHODS: We used a mixed methods approach to identify potential metrics and to achieve consensus about the final set, adapting methods that are recommended by the COMET Initiative for developing core outcome sets in health care. We used performance metrics identified from our systematic search and focus groups to create an online Delphi survey. We invited respondents to score each metric for inclusion in the final core set, over three survey rounds. Metrics scored as critical by ≥70% and unimportant by <15% of respondents were taken forward to a consensus meeting of representatives from key UK-based stakeholders. Participants in the consensus meeting discussed and voted on each metric, using anonymous electronic voting. Metrics with >50% of participants voting for inclusion were retained. RESULTS: Round 1 of the Delphi survey presented 28 performance metrics, and a further six were added in round 2. Of 294 UK-based stakeholders who registered for the Delphi survey, 211 completed all three rounds. At the consensus meeting, 17 metrics were discussed and voted on: 15 metrics were retained following survey round 3, plus two others that were preferred by consensus meeting participants. Consensus was reached on a final core set of eight performance metrics in three domains: (1) recruitment and retention, (2) data quality and (3) protocol compliance. A simple tool for visual reporting of the metrics is available from the Nottingham Clinical Trials Unit website. 
CONCLUSIONS: We have established a core set of metrics for measuring the performance of sites in multicentre randomised trials. These metrics could improve trial conduct by enabling researchers to identify and address problems before trials are adversely affected. Future work could evaluate the effectiveness of using the metrics and reporting tool.
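The threshold rules described in the methods above can be sketched in a few lines. This is an illustrative sketch only, using made-up survey figures; the thresholds (scored critical by ≥70% and unimportant by <15% of respondents to reach the consensus meeting, then >50% of votes to be retained) are taken from the abstract.

```python
# Sketch of the Delphi filtering rules described in the abstract above.
# Percentages and metric names below are hypothetical.

def reaches_consensus_meeting(critical_pct: float, unimportant_pct: float) -> bool:
    """A metric is taken forward if >=70% of respondents scored it
    critical AND <15% scored it unimportant."""
    return critical_pct >= 70.0 and unimportant_pct < 15.0

def retained_at_meeting(votes_for: int, votes_total: int) -> bool:
    """At the consensus meeting, a metric is retained if more than
    50% of participants vote for inclusion."""
    return votes_for / votes_total > 0.5

# Hypothetical round-3 survey results: metric -> (% critical, % unimportant)
survey_scores = {
    "recruitment rate": (82.0, 5.0),
    "query resolution time": (71.0, 20.0),  # too many 'unimportant' scores
    "site staff turnover": (65.0, 10.0),    # below the 70% 'critical' bar
}
shortlist = [m for m, (c, u) in survey_scores.items()
             if reaches_consensus_meeting(c, u)]
print(shortlist)
```

Only metrics clearing both survey thresholds reach the meeting vote, mirroring the two-stage filter the study describes.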


Subject(s)
Delphi Technique, Multicenter Studies as Topic/standards, Randomized Controlled Trials as Topic/standards, Research Design/standards, Consensus, Data Accuracy, Humans, Multicenter Studies as Topic/methods, Randomized Controlled Trials as Topic/methods, Stakeholder Participation
3.
Trials; 19(1): 197, 2018 Mar 27.
Article in English | MEDLINE | ID: mdl-29580260

ABSTRACT

BACKGROUND: Non-retention of participants seriously affects the credibility of clinical trial results and significantly reduces the potential of a trial to influence clinical practice. Non-retention can be defined as instances where participants leave the study prematurely. Examples include withdrawal of consent and loss to follow-up, with the result that outcome data cannot be obtained. The majority of existing interventions targeting retention fail to describe any theoretical basis for the observed improvement, or lack of improvement. Moreover, most of these interventions lack involvement of participants in their conception and/or design, raising questions about their relevance and acceptability. Many of the causes of non-retention involve people performing a behaviour (e.g. not returning a questionnaire). Behaviour change is difficult, and the importance of a strong theoretical basis for interventions that aim to change behaviour is increasingly recognised. This research aims to develop and pilot theoretically informed, participant-centred, evidence-based behaviour change interventions to improve retention in trials. METHODS: This research will generate data through semi-structured interviews on stakeholders' perspectives of the reasons for trial non-retention. It will identify perceived barriers and enablers to trial retention using the Theoretical Domains Framework. The intervention development work will involve identification of behaviour change techniques, using recognised methodology, and co-production of retention interventions through discussion groups with end-users. An evaluation of intervention acceptability and feasibility will be conducted in focus groups. Finally, a ready-to-use evaluation framework to deploy in Studies Within A Trial, as well as an explanatory retention framework, will be developed for identifying and tackling modifiable issues to improve trial retention.
DISCUSSION: We believe this to be one of the first studies to apply a theoretical lens to the development of interventions to improve trial retention that have been informed by, and are embedded within, participants' experiential accounts. By developing and identifying priority interventions this study will support efforts to reduce research waste.


Subject(s)
Health Knowledge, Attitudes, Practice, Patient Dropouts, Patient Selection, Randomized Controlled Trials as Topic/methods, Research Subjects/psychology, Attitude of Health Personnel, Humans, Interviews as Topic, Research Personnel/psychology, Risk Factors, Stakeholder Participation
4.
Trials; 16: 484, 2015 Oct 27.
Article in English | MEDLINE | ID: mdl-26507504

ABSTRACT

BACKGROUND: The process of obtaining informed consent for participation in randomised controlled trials (RCTs) was established as a mechanism to protect participants against undue harm from research and to allow people to recognise any potential risks or benefits associated with the research. A number of interventions have been put forward to improve this process. Outcomes reported in trials of interventions to improve the informed consent process for decisions about trial participation tend to focus on 'understanding' of trial information. However, the operationalisation of understanding as a concept, the tools used to measure it and the timing of the measurements are heterogeneous. A lack of clarity exists regarding which outcomes matter (to whom) and why. This inconsistency between studies results in difficulties when making comparisons across studies, as evidenced in two recent systematic reviews of informed consent interventions. As such, no optimal method for measuring the impact of these interventions aimed at improving informed consent for RCTs has been identified. METHODS/DESIGN: The project will adopt and adapt methodology previously developed and used in projects developing core outcome sets for assessment of clinical treatments. Specifically, the work will consist of three stages: 1) a systematic methodology review of existing outcome measures of trial informed consent interventions; 2) interviews with key stakeholders to explore additional outcomes relevant for trial participation decisions; and 3) a Delphi study to refine the core outcome set for evaluation of trial informed consent interventions. All stages will include the stakeholders involved in the various aspects of RCT consent: users (that is, patients), developers (that is, trialists), deliverers (focusing on research nurses) and authorisers (that is, ethics committees). A final consensus meeting including all stakeholders will be held to review outcomes.
DISCUSSION: The ELICIT study aims to develop a core outcome set for the evaluation of interventions intended to improve informed consent for RCTs for use in future RCTs and reviews, thereby improving the reliability and consistency of research in this area.


Subject(s)
Comprehension, Delphi Technique, Informed Consent, Patient Selection, Randomized Controlled Trials as Topic/methods, Research Design, Research Personnel/psychology, Research Subjects/psychology, Attitude of Health Personnel, Consensus, Health Knowledge, Attitudes, Practice, Humans, Informed Consent/ethics, Patient Selection/ethics, Randomized Controlled Trials as Topic/ethics, Research Personnel/ethics, Systematic Reviews as Topic
5.
J Am Med Inform Assoc; 9(4): 346-58, 2002.
Article in English | MEDLINE | ID: mdl-12087116

ABSTRACT

A systematic search of seven electronic databases was conducted to identify randomized controlled trials that assessed the effect of computer-generated patient education material (PEM) on professional practice. Three studies met the authors' criteria. All three studies involved preventive care. All used a complex intervention of which computer-generated PEM was a major component. Improvements in practice were seen in all studies, although these gains were generally modest. One study showed improvement in patient outcomes. Mann-Whitney statistics calculated for the studies' outcome measures ranged from 0.48 to 0.66, equivalent to risk differences of -4 to 32 percent. Computer-generated PEM seems to have a small, positive effect on professional practice. The small number of included studies and the complex nature of the interventions make it difficult to draw conclusions about the ability of computer-generated PEM to change professional practice. Future work should involve well-defined interventions that can be clearly evaluated in terms of effect and cost.
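The Mann-Whitney statistic reported above can be read as the probability that a randomly chosen observation from the intervention group exceeds a randomly chosen observation from the comparison group, so 0.5 indicates no effect and the reported range of 0.48 to 0.66 indicates effects from essentially null to modestly positive. A minimal sketch of that computation, using invented sample data rather than anything from the included studies:

```python
# Sketch of the Mann-Whitney (probability-of-superiority) statistic:
# the proportion of (treatment, control) pairs in which the treatment
# observation is larger, counting ties as half. Data below are invented.

def mann_whitney_stat(treatment, control):
    """Return P(T > C) + 0.5 * P(T == C) over all cross-group pairs."""
    wins = ties = 0
    for t in treatment:
        for c in control:
            if t > c:
                wins += 1
            elif t == c:
                ties += 1
    return (wins + 0.5 * ties) / (len(treatment) * len(control))

# Identical groups give 0.5 (no effect); a uniformly higher treatment
# group gives 1.0.
print(mann_whitney_stat([1, 2, 3], [1, 2, 3]))
print(mann_whitney_stat([4, 5, 6], [1, 2, 3]))
```

Values just above 0.5, like the 0.48-0.66 range reported, correspond to the small practice improvements the review describes.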


Subject(s)
Medical Informatics Applications, Patient Education as Topic/methods, Primary Health Care, Professional Practice, Humans, Primary Prevention, Randomized Controlled Trials as Topic, Statistics, Nonparametric