1.
Trials ; 25(1): 310, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38720375

ABSTRACT

BACKGROUND: Use of electronic methods to support informed consent ('eConsent') is increasingly popular in clinical research. This commentary reports the approach taken to implement electronic consent methods and subsequent experiences from a range of studies at the Leeds Clinical Trials Research Unit (CTRU), a large clinical trials unit in the UK. MAIN TEXT: We implemented a remote eConsent process using the REDCap platform. The process can be used in trials of investigational medicinal products and other intervention types or research designs. Our standard eConsent system focuses on documenting informed consent, with other aspects of consent (e.g. providing information to potential participants and a recruiter discussing the study with each potential participant) occurring outside the system, though trial teams can use electronic methods for these activities where they have ethical approval. Our overall process includes a verbal consent step prior to confidential information being entered onto REDCap and an identity verification step in line with regulator guidance. We considered the regulatory requirements around the system's generation of source documents, how to ensure data protection standards were upheld and how to monitor informed consent within the system. We present four eConsent case studies from the CTRU: two randomised clinical trials and two other health research studies. These illustrate the ways eConsent can be implemented and the lessons learned, including lessons about differences in uptake. CONCLUSIONS: We successfully implemented a remote eConsent process at the CTRU across multiple studies. Our case studies highlight benefits of study participants being able to give consent without having to be present at the study site. This may better align with patient preferences and trial site needs and therefore improve recruitment and resilience against external shocks (such as pandemics). Variation in uptake of eConsent may be influenced more by site-level factors than by patient preferences, which may not align well with the aspiration towards patient-centred research. Our current process has some limitations, including difficulty providing all consent-related text in more than one language and limited scalability when implementing more than one consent form version at a time. We consider how enhancements in CTRU processes, or external developments, might affect our approach.
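The remote eConsent workflow described above is, at its core, an ordered sequence of gated steps: information provision, a verbal consent discussion, identity verification, and only then electronic documentation of consent. The sketch below is purely illustrative of that ordering; it is not CTRU or REDCap code, and all names are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical, simplified model of the remote eConsent steps described above:
# verbal consent is taken before confidential data is entered, identity is
# verified in line with regulator guidance, then consent is documented electronically.
STEPS = ["information_provided", "verbal_consent", "identity_verified", "econsent_documented"]

@dataclass
class ConsentRecord:
    participant_ref: str                      # pseudonymous reference, not identifiable data
    completed: list = field(default_factory=list)

    def complete(self, step: str) -> None:
        expected = STEPS[len(self.completed)]
        if step != expected:
            raise ValueError(f"Step '{step}' attempted before '{expected}' was completed")
        self.completed.append(step)

record = ConsentRecord("P-001")
for step in STEPS:
    record.complete(step)
print(record.completed)
```

Calling complete() out of order raises an error, mirroring the requirement that verbal consent and identity verification precede entry of confidential information into the consent system.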


Subject(s)
Consent Forms; Informed Consent; Humans; Confidentiality; Clinical Trials as Topic/ethics; Clinical Trials as Topic/methods; Randomized Controlled Trials as Topic/ethics; Randomized Controlled Trials as Topic/methods; Research Subjects/psychology; England; Research Design
2.
Res Involv Engagem ; 10(1): 39, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38637845

ABSTRACT

BACKGROUND: Research study participants can stop taking part early, in various circumstances. Sometimes this experience can be stressful. Providing participants with the information they want or need when they stop could improve participants' experiences, and may benefit individual studies' objectives and research in general. A group of public contributors and researchers at the Clinical Trials Research Unit, University of Leeds, aimed to develop a communication template and researcher guidance. This would address how to provide information sensitively around the time when participants stop or significantly reduce their level of participation. METHODS: The project lead used scoping review methods to identify relevant prior evidence and derive a list of potential information topics to communicate to participants who stop taking part. The topic list was reviewed by research professionals and public contributors before finalisation. Further public contributors were identified from a range of networks. The contributors formed a 'development group', to work on the detail of the planned resources, and a larger 'review group' to review the draft output before finalisation. The involvement was planned so that the development group could shape the direction and pace of the work. RESULTS: The literature review identified 413 relevant reports, resulting in 94 information topics. The review suggested that this issue has not been well explored previously. Some evidence suggested early-stopping participants are sometimes excluded from important communications (such as study results) without clear justification. The development group agreed early to focus on guidance with reusable examples rather than a template. We took time to explore different perspectives and made decisions by informal consensus. Review group feedback was broadly positive but highlighted the need to improve resource navigability, leading to its final online form. CONCLUSIONS: We co-developed a resource to provide support to research participants who stop taking part. A strength of this work is that several of the public contributors have direct lived experience of stopping research participation. We encourage others to review the resource and consider how they support these participants in their studies. Our work highlights the value of researchers and participants working together, including on complex and ethically challenging topics.


Participants in research sometimes stop taking part early. This can sometimes be stressful or difficult for them. Giving them information they want or need around that time could help them and the research. Public contributors and researchers worked together on this project. We wanted to help researchers get information to research participants who stop taking part. Some of the public contributors had experiences of stopping research participation early. The project lead first made a rough plan for the project, with public contributors' help. He left the plan open so the public contributors could help shape the project. The project lead searched for relevant information in published literature. This search showed there has not been much work before on how to help participants who stop taking part. He used the search results to make a list of topics that could be useful to give participants who stop taking part. He asked public contributors and researchers to review the list. Public contributors then joined one of two groups. A smaller group worked on the detail of the planned guidance. A larger group reviewed the draft guidance. The smaller group worked together to make the final guidance in six online meetings. The guidance includes example wording for others to use in their own participant communications. The reviewer group generally liked the guidance but had comments on making it easier to use. The final resource is available online and a link is in the references to this article.

3.
Clin Trials ; 20(6): 649-660, 2023 12.
Article in English | MEDLINE | ID: mdl-37515519

ABSTRACT

BACKGROUND/AIMS: Sharing trial results with participants is an ethical imperative but often does not happen. Show RESPECT (ISRCTN96189403) tested ways of sharing results with participants in an ovarian cancer trial (ISRCTN10356387). Sharing results via a printed summary improved patient satisfaction. Little is known about staff experience and the costs of communicating results with participants. We report the costs of communication approaches used in Show RESPECT and the views of site staff on these approaches. METHODS: We allocated 43 hospitals (sites) to share results with trial participants through one of eight intervention combinations (2 × 2 × 2 factorial; enhanced versus basic webpage, printed summary versus no printed summary, email list invitation versus no invitation). Questionnaires elicited data from staff involved in sharing results. Open- and closed-ended questions covered resources used to share results and site staff perspectives on the approaches used. Semi-structured interviews were conducted. Interview and free-text data were analysed thematically. The mean additional site costs per participant from each intervention were estimated jointly as main effects by linear regression. RESULTS: We received questionnaires from 68 staff from 41 sites and interviewed 11 site staff. Sites allocated to the printed summary had mean total site costs of sharing results £13.71/patient higher (95% confidence interval (CI): -3.19, 30.60; p = 0.108) than sites allocated no printed summary. Sites allocated to the enhanced webpage had mean total site costs £1.91/patient higher (95% CI: -14, 18.74; p = 0.819) than sites allocated to the basic webpage. Sites allocated to the email list had costs £2.87/patient lower (95% CI: -19.70, 13.95; p = 0.731) than sites allocated to no email list. Most of these costs were staff time for mailing information and handling patients' queries. Most site staff reported no concerns about how they had shared results (88%) and no challenges (76%). Most (83%) found it easy to answer queries from patients about the results and thought the way they were allocated to share results with participants would be an acceptable standard approach (76%), with 79% saying they would follow the same approach for future trials. There were no significant effects of the randomised interventions on these outcomes. Site staff emphasised the importance of preparing patients to receive the results, including giving opt-in/opt-out options, and the need to offer further support, particularly if the results could confuse or distress some patients. CONCLUSIONS: Adding a printed summary to a webpage (which significantly improved participant satisfaction) may increase costs to sites by ~£14/patient, which is modest in relation to the cost of trials. The Show RESPECT communication interventions were feasible to implement. This information could help future trials ensure they have sufficient resources to share results with participants.
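The cost analysis estimates the mean additional per-participant site cost of each intervention jointly as main effects in a linear regression on the three binary allocation indicators. A minimal sketch of that kind of model on simulated site-level data (variable names and numbers are illustrative assumptions, not Show RESPECT data):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_sites = 43
# Simulated site-level data: 2x2x2 factorial allocation and cost per participant.
df = pd.DataFrame({
    "printed_summary": rng.integers(0, 2, n_sites),
    "enhanced_webpage": rng.integers(0, 2, n_sites),
    "email_list": rng.integers(0, 2, n_sites),
})
df["cost_per_participant"] = (
    20 + 14 * df["printed_summary"] + 2 * df["enhanced_webpage"]
    - 3 * df["email_list"] + rng.normal(0, 10, n_sites)
)

# Main effects of the three interventions estimated jointly, assuming no interactions.
model = smf.ols("cost_per_participant ~ printed_summary + enhanced_webpage + email_list",
                data=df).fit()
print(model.params)
print(model.conf_int())
```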


Subject(s)
Ovarian Neoplasms; Female; Humans; Feasibility Studies; Surveys and Questionnaires; Cost-Benefit Analysis
4.
Clin Trials ; 19(1): 71-80, 2022 02.
Article in English | MEDLINE | ID: mdl-34693794

ABSTRACT

BACKGROUND: Addressing recruitment and retention challenges in trials is a key priority for methods research, but navigating the literature is difficult and time-consuming. In 2016, ORRCA (www.orrca.org.uk) launched a free, searchable database of recruitment research that has been widely accessed and used to support the update of systematic reviews and the selection of recruitment strategies for clinical trials. ORRCA2 aims to create a similar database to map the growing volume and importance of retention research. METHODS: Searches of Medline (Ovid), CINAHL, PsycINFO, Scopus, Web of Science Core Collection and the Cochrane Library, restricted to English language and publications up to the end of 2017. Hand searches of key systematic reviews were undertaken and randomised evaluations of recruitment interventions within the ORRCA database on 1 October 2020 were also reviewed for any secondary retention outcomes. Records were screened by title and abstract before obtaining the full text of potentially relevant articles. Studies reporting or evaluating strategies, methods and study designs to improve retention within healthcare research were eligible. Case reports describing retention challenges or successes and studies evaluating participant reported reasons for withdrawal or losses were also included. Studies assessing adherence to treatments, attendance at appointments outside of research and statistical analysis methods for missing data were excluded. Eligible articles were categorised into one of the following evidence types: randomised evaluations, non-randomised evaluations, application of retention strategies without evaluation and observations of factors affecting retention. Articles were also mapped against a retention domain framework. Additional data were extracted on research outcomes, methods and host study context. RESULTS: Of the 72,904 abstracts screened, 4,364 full texts were obtained, and 1,167 articles were eligible. Of these, 165 (14%) were randomised evaluations, 99 (8%) non-randomised evaluations, 319 (27%) strategies without evaluation and 584 (50%) observations of factors affecting retention. Eighty-four percent (n = 979) of studies assessed the numbers of participants retained, 27% (n = 317) assessed demographic differences between retained and lost participants, while only 4% (n = 44) assessed the cost of retention strategies. The most frequently reported domains within the 165 studies categorised as 'randomised evaluations of retention strategies' were participant monetary incentives (32%), participant reminders and prompts (30%), questionnaire design (30%) and data collection location and method (26%). CONCLUSION: ORRCA2 builds on the success of ORRCA extending the database to organise the growing volume of retention research. Less than 15% of articles were randomised evaluations of retention strategies. Mapping of the literature highlights several areas for future research such as the role of research sites, clinical staff and study design in enhancing retention. Future studies should also include cost-benefit analysis of retention strategies.
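As a quick arithmetic check of the categorisation reported above, the 1,167 eligible articles break down by evidence type as follows (counts taken from the abstract; percentages rounded to whole numbers):

```python
# Evidence-type counts reported for the 1,167 eligible ORRCA2 articles.
counts = {
    "randomised evaluations": 165,
    "non-randomised evaluations": 99,
    "strategies without evaluation": 319,
    "observations of factors affecting retention": 584,
}
total = sum(counts.values())  # 1,167
for label, n in counts.items():
    print(f"{label}: {n} ({n / total:.0%})")
```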


Subject(s)
Databases, Bibliographic; Humans; Surveys and Questionnaires; Systematic Reviews as Topic
6.
Trials ; 22(1): 736, 2021 Oct 24.
Article in English | MEDLINE | ID: mdl-34689802

ABSTRACT

BACKGROUND: Eligibility criteria are a fundamental element of clinical trial design, defining who can and who should not participate in a trial. Problems with the design or application of criteria are known to occur and pose risks to participants' safety and trial integrity, sometimes also negatively impacting on trial recruitment and generalisability. We conducted a short, exploratory survey to gather evidence on UK recruiters' experiences interpreting and applying eligibility criteria and their views on how criteria are communicated and developed. METHODS: Our survey included topics informed by a wider programme of work at the Clinical Trials Research Unit, University of Leeds, on assuring eligibility criteria quality. Respondents were asked to answer based on all their trial experience, not only on experiences with our trials. The survey was disseminated to recruiters collaborating on trials run at our trials unit, and via other mailing lists and social media. The quantitative responses were descriptively analysed, with inductive analysis of free-text responses to identify themes. RESULTS: A total of 823 eligible respondents participated. In total, 79% of respondents reported finding problems with eligibility criteria in some trials, and 9% in most trials. The main themes in the types of problems experienced were criteria clarity (67% of comments), feasibility (34%), and suitability (14%). In total, 27% of those reporting some level of problem said these problems had led to patients being incorrectly included in trials; 40% said they had led to incorrect exclusions. Most respondents (56%) reported accessing eligibility criteria mainly in the trial protocol. Most respondents (74%) supported the idea of recruiter review of eligibility criteria earlier in the protocol development process. CONCLUSIONS: Our survey corroborates other evidence about the existence of suboptimal trial eligibility criteria. Problems with clarity were the most often reported, but the number of comments on feasibility and suitability suggest some recruiters feel eligibility criteria and associated assessments can hinder recruitment to trials. Our proposal for more recruiter involvement in protocol development has strong support and some potential benefits, but questions remain about how best to implement this. We invite other trialists to consider our other suggestions for how to assure quality in trial eligibility criteria.


Subject(s)
Text Messaging; Cross-Sectional Studies; Emotions; Humans; Surveys and Questionnaires; United Kingdom
7.
PLoS Med ; 18(10): e1003798, 2021 10.
Article in English | MEDLINE | ID: mdl-34606495

ABSTRACT

BACKGROUND: Sharing trial results with participants is an ethical imperative but often does not happen. We tested an Enhanced Webpage versus a Basic Webpage, Mailed Printed Summary versus no Mailed Printed Summary, and Email List Invitation versus no Email List Invitation to see which approach resulted in the highest patient satisfaction with how the results were communicated. METHODS AND FINDINGS: We carried out a cluster randomised, 2 by 2 by 2 factorial, nonblinded study within a trial, with semistructured qualitative interviews with some patients (ISRCTN96189403). Each cluster was a UK hospital participating in the ICON8 ovarian cancer trial. Interventions were shared with 384 ICON8 participants who were alive and considered well enough to be contacted, at 43 hospitals. Hospitals were allocated to share results with participants through one of the 8 intervention combinations based on random permutation within blocks of 8, stratified by number of participants. All interventions contained a written plain English summary of the results. The Enhanced Webpage also contained a short video. Both the Enhanced Webpage and Email contained links to further information and support. The Mailed Printed Summary was opt-out. Follow-up questionnaires were sent 1 month after patients had been offered the interventions. Patients' reported satisfaction was measured using a 5-point scale, analysed by ordinal logistic regression estimating main effects for all 3 interventions, with random effects for site, restricted to those who reported receiving the results and assuming no interaction. Data collection took place in 2018 to 2019. Questionnaires were sent to 275/384 randomly selected participants and returned by 180: 90/142 allocated Basic Webpage, 90/133 Enhanced Webpage; 91/141 no Mailed Printed Summary, 89/134 Mailed Printed Summary; 82/129 no Email List Invitation, 98/146 Email List Invitation. Only 3 patients opted out of receiving the Mailed Printed Summary; no patients signed up to the email list. Patients' satisfaction was greater at sites allocated the Mailed Printed Summary, where 65/81 (80%) were quite or very satisfied compared to sites with no Mailed Printed Summary 39/64 (61%), ordinal odds ratio (OR) = 3.15 (1.66 to 5.98, p < 0.001). We found no effect on patient satisfaction from the Enhanced Webpage, OR = 1.47 (0.78 to 2.76, p = 0.235) or Email List Invitation, OR = 1.38 (0.72 to 2.63, p = 0.327). Interviewees described the results as interesting, important, and disappointing (the ICON8 trial found no benefit). Finding out the results made some feel their trial participation had been more worthwhile. Regardless of allocated group, patients who received results generally reported that the information was easy to understand and find, were glad and did not regret finding out the results. The main limitation of our study is the 65% response rate. CONCLUSIONS: Nearly all respondents wanted to know the results and were glad to receive them. Adding an opt-out Mailed Printed Summary alongside a webpage yielded the highest reported satisfaction. This study provides evidence on how to share results with other similar trial populations. Further research is needed to look at different results scenarios and patient populations. TRIAL REGISTRATION: ISRCTN: ISRCTN96189403.
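Satisfaction on the 5-point scale was analysed by ordinal logistic regression with main effects for the three interventions and random effects for site. The sketch below fits the fixed-effects part of such a model on simulated data using statsmodels' OrderedModel; the random site effect is omitted for simplicity, and all numbers and variable names are illustrative assumptions rather than study data.

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n = 180
# Simulated patient-level data: the three allocated interventions and 5-point satisfaction.
df = pd.DataFrame({
    "printed_summary": rng.integers(0, 2, n),
    "enhanced_webpage": rng.integers(0, 2, n),
    "email_list": rng.integers(0, 2, n),
})
latent = 1.1 * df["printed_summary"] + 0.4 * df["enhanced_webpage"] + rng.logistic(size=n)
df["satisfaction"] = pd.cut(latent, bins=[-np.inf, -1, 0, 1, 2, np.inf],
                            labels=[1, 2, 3, 4, 5])  # ordered categorical outcome

# Proportional-odds (ordinal logistic) model with main effects only; the published
# analysis additionally included a random effect for site, omitted here.
model = OrderedModel(df["satisfaction"],
                     df[["printed_summary", "enhanced_webpage", "email_list"]],
                     distr="logit")
res = model.fit(method="bfgs", disp=False)
print(np.exp(res.params[:3]))  # ordinal odds ratios for the three interventions
```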


Subject(s)
Information Dissemination; Aged; Cluster Analysis; Health Communication; Humans; Interviews as Topic; Middle Aged; Outcome Assessment, Health Care; Patient Satisfaction; Patient Selection
8.
Clin Trials ; 18(2): 245-259, 2021 04.
Article in English | MEDLINE | ID: mdl-33611927

ABSTRACT

BACKGROUND/AIMS: It is increasingly recognised that reliance on frequent site visits for monitoring clinical trials is inefficient. Regulators and trialists have recently encouraged more risk-based monitoring. Risk assessment should take place before a trial begins to define the overarching monitoring strategy. It can also be done on an ongoing basis, to target sites for monitoring activity. Various methods have been proposed for such prioritisation, often using terms like 'central statistical monitoring', 'triggered monitoring' or, as in the International Conference on Harmonization Good Clinical Practice guidance, 'targeted on-site monitoring'. We conducted a scoping review to identify such methods, to establish if any were supported by adequate evidence to allow wider implementation, and to guide future developments in this field of research. METHODS: We used seven publication databases, two sets of methodological conference abstracts and an Internet search engine to identify methods for using centrally held trial data to assess site conduct during a trial. We included only reports in English, and excluded reports published before 1996 or not directly relevant to our research question. We used reference and citation searches to find additional relevant reports. We extracted data using a predefined template. We contacted authors to request additional information about included reports. RESULTS: We included 30 reports in our final dataset, of which 21 were peer-reviewed publications. In all, 20 reports described central statistical monitoring methods (of which 7 focussed on detection of fraud or misconduct) and 9 described triggered monitoring methods; 21 reports included some assessment of their methods' effectiveness, typically exploring the methods' characteristics using real trial data without known integrity issues. Of the 21 with some effectiveness assessment, most contained limited information about whether or not concerns identified through central monitoring constituted meaningful problems. Several reports demonstrated good classification ability based on more than one classification statistic, but never without caveats of unclear reporting or other classification statistics being low or unavailable. Some reports commented on cost savings from reduced on-site monitoring, but none gave detailed costings for the development and maintenance of central monitoring methods themselves. CONCLUSION: Our review identified various proposed methods, some of which could be combined within the same trial. The apparent emphasis on fraud detection may not be proportionate in all trial settings. Despite some promising evidence and some self-justifying benefits for data cleaning activity, many proposed methods have limitations that may currently prevent their routine use for targeting trial monitoring activity. The implementation costs, or uncertainty about these, may also be a barrier. We make recommendations for how the evidence-base supporting these methods could be improved.
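For readers unfamiliar with the term, central statistical monitoring typically means comparing centrally held trial data across sites to flag sites whose data look atypical, without visiting them. The sketch below is a generic, simplified illustration of that idea (flagging sites with unusually low within-site variability, a pattern sometimes associated with data problems); it does not reproduce any specific method identified in this review, and all data are simulated.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
# Simulated trial data: one key continuous variable recorded at 20 sites.
df = pd.DataFrame({
    "site": rng.integers(1, 21, 1000),
    "value": rng.normal(50, 10, 1000),
})
# Make one site's data implausibly uniform, as an atypical pattern might appear.
mask = df["site"] == 7
df.loc[mask, "value"] = rng.normal(50, 1, mask.sum())

site_stats = df.groupby("site")["value"].agg(["mean", "std", "count"])
# Flag sites whose within-site standard deviation is an outlier relative to other sites.
z_sd = (site_stats["std"] - site_stats["std"].mean()) / site_stats["std"].std()
flagged = site_stats.assign(z_sd=z_sd)
print(flagged[flagged["z_sd"].abs() > 2])
```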


Subject(s)
Clinical Trials as Topic; Risk Assessment; Costs and Cost Analysis; Humans
9.
Clin Trials ; 18(1): 115-126, 2021 02.
Article in English | MEDLINE | ID: mdl-33231127

ABSTRACT

BACKGROUND/AIMS: Clinical trials should be designed and managed to minimise important errors with potential to compromise patient safety or data integrity, employ monitoring practices that detect and correct important errors quickly, and take robust action to prevent repetition. Regulators highlight the use of risk-based monitoring, making greater use of centralised monitoring and reducing reliance on centre visits. The TEMPER study was a prospective evaluation of triggered monitoring (a risk-based monitoring method), whereby centres are prioritised for visits based on central monitoring results. Conducted in three UK-based randomised cancer treatment trials of investigational medicine products with time-to-event outcomes, it found high levels of serious findings at triggered centre visits but also at visits to matched control centres that, based on central monitoring, were not of concern. Here, we report a detailed review of the serious findings from TEMPER centre visits. We sought to identify feasible, centralised processes which might detect or prevent these findings without a centre visit. METHODS: The primary outcome of this study was the proportion of all 'major' and 'critical' TEMPER centre visit findings theoretically detectable or preventable through a feasible, centralised process. To devise processes, we considered a representative example of each finding type through an internal consensus exercise. This involved (a) agreeing the potential, by some described process, for each finding type to be centrally detected or prevented and (b) agreeing a proposed feasibility score for each proposed process. To further assess feasibility, we ran a consultation exercise, whereby the proposed processes were reviewed and rated for feasibility by invited external trialists. RESULTS: In TEMPER, 312 major or critical findings were identified at 94 visits. These findings comprised 120 distinct issues, for which we proposed 56 different centralised processes. Following independent review of the feasibility of the proposed processes by 87 consultation respondents across eight different trial stakeholder groups, we conclude that 306/312 (98%) findings could theoretically be prevented or identified centrally. Of the processes deemed feasible, those relating to informed consent could have the most impact. Of processes not currently deemed feasible, those involving use of electronic health records are among those with the largest potential benefit. CONCLUSIONS: This work presents a best-case scenario, where a large majority of monitoring findings were deemed theoretically preventable or detectable by central processes. Caveats include the cost of applying all necessary methods, and the resource implications of enhanced central monitoring for both centre and trials unit staff. Our results will inform future monitoring plans and emphasise the importance of continued critical review of monitoring processes and outcomes to ensure they remain appropriate.


Subject(s)
Randomized Controlled Trials as Topic; Research Design; Antineoplastic Agents/adverse effects; Drugs, Investigational/adverse effects; Humans; Informed Consent; Neoplasms/drug therapy
10.
Clin Trials ; 17(1): 106-112, 2020 02.
Article in English | MEDLINE | ID: mdl-31665920

ABSTRACT

BACKGROUND/AIMS: Clinical trial oversight is central to the safety of participants and production of robust data. The United Kingdom Medical Research Council originally set out an oversight structure comprising three committees in 1998. The first committee ('Trial Management Group'), led by the trial team, is hands-on with trial conduct/operations and is essential. The second committee (Data Monitoring Committee), usually completely independent of the trial, reviews accumulating trial evidence and is used by most later phase trials. The Independent Data Monitoring Committee makes recommendations to the third oversight committee. The third committee ('Trial Steering Committee') facilitates in-depth interactions of independent and non-independent trial members and gives broader oversight (blinded to comparative analysis). We investigated the roles and functioning of the third oversight committee with multiple research methods. We reflect upon these findings to standardise the committee's remit and operation and to potentially increase its usage. METHODS: We utilised findings from our recent published suite of research on the third oversight committee to inform guideline revision. In brief, we conducted a survey of 38 United Kingdom-registered Clinical Trials Units, reviewed a cohort of 264 published trials, observed 8 third oversight committee meetings and interviewed 52 trialists. We convened an expert panel to discuss third oversight committees. Subsequently, we interviewed nine patient/lay third committee members and eight committee Chairs. RESULTS: In the survey, most Clinical Trials Units required a third committee for all their trials (27/38, 71%) with independent members (ranging from 1 to 6). In the survey and interviews, the independence of the third committee was valued for enabling unbiased consideration of Independent Data Monitoring Committee recommendations and for advising on trial progress, protocol changes and recruitment issues in conjunction with the trial leadership. The third committee also advised funders and sponsors about trial continuation and represented patients and the public by including lay members. Of the cohort of 264 published trials, 144 reported a 'steering' committee (55%), but the independence of these members was not described, so these may have been internal Trial Management Groups. Sixty percent of papers reported having an Independent Data Monitoring Committee, and 26.9% reported neither a steering committee nor an Independent Data Monitoring Committee. However, before revising the third committee charter (Terms of Reference), greater standardisation is needed around defining member independence, composition, primacy of decision-making, interactions with other committees and committee lifespan. CONCLUSION: A third oversight committee has benefits for trial oversight and conduct, and a revised charter will facilitate greater standardisation and wider adoption.


Subject(s)
Clinical Trials Data Monitoring Committees/organization & administration; Randomized Controlled Trials as Topic/methods; Biomedical Research; Cohort Studies; Humans; Patient Advocacy; Research Design; Surveys and Questionnaires; United Kingdom
11.
Trials ; 20(1): 241, 2019 Apr 27.
Article in English | MEDLINE | ID: mdl-31029148

ABSTRACT

BACKGROUND: Monitoring and managing data returns in multi-centre randomised controlled trials is an important aspect of trial management. Maintaining consistently high data return rates has various benefits for trials, including enhancing oversight, improving reliability of central monitoring techniques and helping prepare for database lock and trial analyses. Despite this, there is little evidence to support best practice, and current standard methods may not be optimal. METHODS: We report novel methods from the Trial of Imaging and Schedule in Seminoma Testis (TRISST), a UK-based, multi-centre, phase III trial using paper Case Report Forms to collect data over a 6-year follow-up period for 669 patients. Using an automated database report which summarises the data return rate overall and per centre, we developed a Microsoft Excel-based tool to allow observation of per-centre trends in data return rate over time. The tool allowed us to distinguish between forms that can and cannot be completed retrospectively, to inform understanding of issues at individual centres. We reviewed these statistics at regular trials unit team meetings. We notified centres whose data return rate appeared to be falling, even if they had not yet crossed the pre-defined acceptability threshold of an 80% data return rate. We developed a set method for agreeing targets for gradual improvement with centres having persistent data return problems. We formalised a detailed escalation policy to manage centres who failed to meet agreed targets. We conducted a post-hoc, descriptive analysis of the effectiveness of the new processes. RESULTS: The new processes were used from April 2015 to September 2016. By May 2016, data return rates were higher than they had been at any time previously, and there were no centres with return rates below 80%, which had never been the case before. In total, 10 centres out of 35 were contacted regarding falling data return rates. Six out of these 10 showed improved rates within 6-8 weeks, and the remainder within 4 months. CONCLUSIONS: Our results constitute preliminary effectiveness evidence for novel methods in monitoring and managing data return rates in randomised controlled trials. We encourage other researchers to work on generating better evidence-based methods in this area, whether through more robust evaluation of our methods or of others.
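The approach described above boils down to computing, per centre and over time, the proportion of expected forms that have been returned, then flagging centres that fall below the 80% threshold or whose rate is trending downwards even while still above it. The sketch below illustrates that calculation with pandas on hypothetical data; it is not the TRISST tool, and all column names and numbers are made up.

```python
import pandas as pd

# Hypothetical per-centre snapshots of expected vs received case report forms.
snapshots = pd.DataFrame({
    "centre":   ["A", "A", "B", "B", "C", "C"],
    "month":    ["2016-03", "2016-04", "2016-03", "2016-04", "2016-03", "2016-04"],
    "expected": [40, 44, 60, 66, 30, 33],
    "received": [38, 42, 55, 52, 28, 30],
})
snapshots["return_rate"] = snapshots["received"] / snapshots["expected"]

ordered = snapshots.sort_values("month")
latest = ordered.groupby("centre").tail(1).set_index("centre")
trend = ordered.groupby("centre")["return_rate"].agg(lambda s: s.iloc[-1] - s.iloc[0])

# Flag centres below the 80% acceptability threshold, or trending downwards even if still above it.
flags = (latest["return_rate"] < 0.80) | (trend < 0)
print(latest.assign(trend=trend, flag=flags))
```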


Subject(s)
Data Accuracy; Data Management/statistics & numerical data; Forms and Records Control/statistics & numerical data; Forms as Topic; Neoplasm Recurrence, Local/diagnostic imaging; Research Design/statistics & numerical data; Seminoma/diagnostic imaging; Testicular Neoplasms/diagnostic imaging; Data Management/trends; Forms and Records Control/trends; Humans; Magnetic Resonance Imaging; Male; Orchiectomy; Predictive Value of Tests; Radiation Exposure; Research Design/trends; Seminoma/surgery; Testicular Neoplasms/surgery; Time Factors; Tomography, X-Ray Computed; Treatment Outcome; United Kingdom
12.
Trials ; 20(1): 227, 2019 Apr 17.
Article in English | MEDLINE | ID: mdl-30995932

ABSTRACT

BACKGROUND: Triggered monitoring in clinical trials is a risk-based monitoring approach where triggers (centrally monitored, predefined key risk and performance indicators) drive the extent, timing, and frequency of monitoring visits. The TEMPER study used a prospective, matched-pair design to evaluate the use of a triggered monitoring strategy, comparing findings from triggered monitoring visits with those from matched control sites. To facilitate this study, we developed a bespoke risk-based monitoring system: the TEMPER Management System. METHODS: The TEMPER Management System comprises a web application (the front end), an SQL server database (the back end) to store the data generated for TEMPER, and a reporting function to aid users in study processes such as the selection of triggered sites. Triggers based on current practice were specified for three clinical trials and were implemented in the system. Trigger data were generated in the system using data extracted from the trial databases to inform the selection of triggered sites to visit. Matching of the chosen triggered sites with untriggered control sites was also performed in the system, while data entry screens facilitated the collection and management of the findings gathered at monitoring visits. RESULTS: There were 38 triggers specified for the participating trials. Using these, 42 triggered sites were chosen and matched with control sites. Monitoring visits were carried out at all of these sites, and visit findings were entered into the TEMPER Management System. Finally, data extracted from the system were used for analysis. CONCLUSIONS: The TEMPER Management System made possible the completion of the TEMPER study. It standardised the automation of current-practice triggers and the generation of trigger data to inform the selection of triggered sites to visit. It also implemented a matching algorithm informing the selection of matched control sites. We hope that publishing this paper encourages other trialists to share their approaches to, and experiences of, triggered monitoring and other risk-based monitoring systems.
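Conceptually, the system turns centrally held trial data into per-site trigger summaries, selects sites crossing a chosen level of concern, and pairs each with a similar untriggered control site. The sketch below is a generic illustration of that idea, not the TEMPER Management System's actual schema or matching algorithm; all names, thresholds and matching variables are hypothetical.

```python
import pandas as pd

# Hypothetical trigger data extracted from a trial database: one row per site,
# with a count of fired triggers and a crude size measure used for matching.
sites = pd.DataFrame({
    "site": ["S01", "S02", "S03", "S04", "S05", "S06"],
    "triggers_fired": [5, 0, 4, 1, 0, 1],
    "patients_recruited": [120, 115, 40, 45, 80, 78],
})

threshold = 3
triggered = sites[sites["triggers_fired"] >= threshold]
controls_pool = sites[sites["triggers_fired"] < threshold].copy()

# Match each triggered site to the untriggered site closest in recruitment size.
pairs = []
for _, row in triggered.iterrows():
    diffs = (controls_pool["patients_recruited"] - row["patients_recruited"]).abs()
    match = controls_pool.loc[diffs.idxmin()]
    pairs.append((row["site"], match["site"]))
    controls_pool = controls_pool.drop(diffs.idxmin())  # each control used at most once

print(pairs)
```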


Subject(s)
Data Collection/standards; Data Management/standards; Multicenter Studies as Topic/standards; Randomized Controlled Trials as Topic/standards; Research Design/standards; Algorithms; Clinical Trials Data Monitoring Committees/standards; Data Accuracy; Humans; Risk Assessment; Risk Factors; Time Factors
13.
Clin Trials ; 15(6): 600-609, 2018 12.
Article in English | MEDLINE | ID: mdl-30132361

ABSTRACT

BACKGROUND/AIMS: In multi-site clinical trials, where trial data and conduct are scrutinised centrally with pre-specified triggers for visits to sites, targeted monitoring may be an efficient way to prioritise on-site monitoring. This approach is widely used in academic trials, but has never been formally evaluated. METHODS: TEMPER assessed the ability of targeted monitoring, as used in three ongoing phase III randomised multi-site oncology trials, to distinguish sites at which higher and lower rates of protocol and/or Good Clinical Practice violations would be found during site visits. Using a prospective, matched-pair design, each site that had been prioritised for a visit after activating 'triggers' was matched with a control ('untriggered') site, which would not usually have been visited at that time. The paired sites were visited within 4 weeks of each other, and visit findings were recorded and categorised according to the seriousness of the deviation. The primary outcome measure was the proportion of sites with ≥1 'Major' or 'Critical' finding not previously identified centrally. The study was powered to detect an absolute difference of ≥30% between triggered and untriggered visits. A sensitivity analysis, recommended by the study's blinded endpoint review committee, excluded findings related to re-consent. Additional analyses assessed the prognostic value of individual triggers and data from pre-visit questionnaires completed by site and trials unit staff. RESULTS: In total, 42 matched pairs of visits took place between 2013 and 2016. In the primary analysis, 88.1% of triggered visits had ≥1 new Major/Critical finding, compared to 81.0% of untriggered visits, an absolute difference of 7.1% (95% confidence interval -8.3%, +22.5%; p = 0.365). When re-consent findings were excluded, these figures reduced to 85.7% versus 59.5% (difference = 26.2%, 95% confidence interval 8.0%, 44.4%; p = 0.007). Individual triggers had modest prognostic value, but knowledge of the trial-related activities carried out by site staff may be useful. CONCLUSION: Triggered monitoring approaches, as used in these trials, were not sufficiently discriminatory. The rate of Major and Critical findings was higher than anticipated, but the majority related to consent and re-consent, with no indication of systemic problems that would impact trial-wide safety or the integrity of the results in any of the three trials. Sensitivity analyses suggest triggered monitoring may be of use, but it needs improvement, and investigation of further central monitoring triggers is warranted. TEMPER highlights the need to question and evaluate methods in trial conduct, and should inform further developments in this area.
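The primary comparison above is a difference in the proportions of triggered and untriggered visits with at least one new Major/Critical finding. The sketch below reproduces the headline figures from the reported percentages (counts back-calculated from 42 visits per arm) using a simple unpaired normal-approximation confidence interval; this happens to match the published interval closely but ignores the matched-pair structure of the actual analysis.

```python
import math

n = 42                         # matched pairs of visits
triggered_with_finding = 37    # 88.1% of triggered visits with >=1 new Major/Critical finding
untriggered_with_finding = 34  # 81.0% of untriggered visits (counts inferred from percentages)

p1, p2 = triggered_with_finding / n, untriggered_with_finding / n
diff = p1 - p2
se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)  # unpaired normal approximation
ci = (diff - 1.96 * se, diff + 1.96 * se)
print(f"{p1:.1%} vs {p2:.1%}, difference {diff:.1%}, 95% CI ({ci[0]:.1%}, {ci[1]:.1%})")
```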


Subject(s)
Clinical Trials Data Monitoring Committees/standards; Multicenter Studies as Topic; Randomized Controlled Trials as Topic; Humans; Prospective Studies; Research Design/standards
14.
Trials ; 19(1): 95, 2018 Feb 07.
Article in English | MEDLINE | ID: mdl-29415751

ABSTRACT

BACKGROUND: Patient and public involvement (PPI) in clinical trials aims to ensure that research is carried out collaboratively with patients and/or members of the public. However, current guidance on involving clinical trial participants in PPI activities is not consistent. METHODS: We reviewed the concept of participant involvement, based on our experience. Two workshops were held at the MRC CTU at UCL with the aims of defining participant involvement; considering its rationale, benefits and challenges; and identifying appropriate models for participant involvement in clinical trials. We considered how participant involvement might complement the involvement of other public contributors. Both workshops were attended by two patient representatives and seven staff members with experience of PPI in trials. Two of the staff members had also been involved in studies that had actively involved participants. They shared details of that work to inform discussions. RESULTS: We defined trial participants as individuals taking part in the study in question, including those who had already completed their trial treatment and/or follow-up. Because of their direct experience, involving participants may offer advantages over involving other public contributors; for example, in studies of new interventions or procedures, and where it is hard to identify or reach patient or community groups that include or speak for the study population. Participant involvement is possible at all stages of a trial; however, because there are no participants to involve during the design stage of a trial, prior to enrolment, participant involvement should complement and not replace the involvement of other PPI stakeholders. A range of models, including those with managerial, oversight or responsive roles, are appropriate for involving participants; however, involvement in data safety and monitoring committees may not be appropriate where there is a potential risk of unblinding. Involvement of participants can improve the trial experience for other participants, for example by optimising study procedures and improving communications; however, there are some specific challenges, notably managing participant confidentiality and the practicalities of payments. CONCLUSIONS: Participant involvement in clinical trials is feasible and complements other forms of PPI in clinical trials. Involving active participants offers significant advantages, particularly in circumstances where trials are assessing new, or otherwise unavailable, therapies or processes. We recommend that current guidance on PPI be updated to routinely consider participants as valid PPI stakeholders and a potentially useful approach to PPI.


Subject(s)
Clinical Trials as Topic/methods; Community-Institutional Relations; Patient Participation; Public Opinion; Research Design; Research Subjects/psychology; Stakeholder Participation; Consensus; Consensus Development Conferences as Topic; Humans; London
16.
Trials ; 17: 376, 2016 07 29.
Article in English | MEDLINE | ID: mdl-27473060

ABSTRACT

BACKGROUND: Patient and public involvement (PPI) in studies carried out by the UK Medical Research Council Clinical Trials Unit (MRC CTU) at University College London varies by research type and setting. We developed a series of case studies of PPI to document and share good practice. METHODS: We used purposive sampling to identify studies representing the scope of research at the MRC CTU and different approaches to PPI. We carried out semi-structured interviews with staff and patient representatives. Interview notes were analysed descriptively to categorise the main aims and motivations for involvement; activities undertaken; their impact on the studies and lessons learned. RESULTS: We conducted 19 interviews about ten case studies, comprising one systematic review, one observational study and eight randomised controlled trials in HIV and cancer. Studies were either open or completed, with start dates between 2003 and 2011. Interviews took place between March and November 2014 and were updated in summer 2015 where there had been significant developments in the study (i.e. if the study had presented results subsequent to the interview taking place). A wide range of PPI models, including representation on trial committees or management groups, community engagement, one-off task-focused activities, patient research partners and participant involvement, had been used. Overall, interviewees felt that PPI had a positive impact, leading to improvements, for example in the research question; study design; communication with potential participants; study recruitment; confidence to carry out or complete a study; interpretation and communication of results; and influence on future research. CONCLUSIONS: A range of models of PPI can benefit clinical studies. Researchers should consider different approaches to PPI, based on the desired impact and the people they want to involve. Use of multiple models may increase the potential impacts of PPI in clinical research.


Subject(s)
Clinical Trials as Topic; Patient Participation; Biomedical Research; Humans; Research Design; Universities