Results 1 - 17 of 17
1.
Article in German | MEDLINE | ID: mdl-38175194

ABSTRACT

The increasing digitization of the healthcare system is leading to a growing volume of health data. Leveraging this data beyond its initial collection purpose for secondary use can provide valuable insights into diagnostics, treatment processes, and the quality of care. The Health Data Lab (HDL) will provide infrastructure for this purpose. Both the protection of patient privacy and optimal analytical capabilities are of central importance in this context, and artificial intelligence (AI) can contribute to both. First, it enables the analysis of large volumes of data with flexible models, allowing hidden correlations and patterns to be discovered. Second, synthetic (that is, artificial) data generated by AI can protect privacy. This paper describes the KI-FDZ project, which aims to investigate innovative technologies that can support the secure provision of health data for secondary research purposes. A multi-layered approach is investigated in which data-level measures can be combined in different ways with processing in secure environments. To this end, anonymization and data synthesis methods, among others, are evaluated based on two concrete application examples. Moreover, it is examined how the creation of machine learning pipelines and the execution of AI algorithms can be supported in secure processing environments. Preliminary results indicate that this approach can achieve a high level of protection while maintaining data validity. The approach investigated in the project can be an important building block in the secure secondary use of health data.


Subject(s)
Algorithms , Artificial Intelligence , Humans , Germany , Delivery of Health Care
2.
PLoS Med ; 20(3): e1004175, 2023 03.
Article in English | MEDLINE | ID: mdl-36943836

ABSTRACT

BACKGROUND: University Medical Centers (UMCs) must do their part for clinical trial transparency by fostering practices such as prospective registration, timely results reporting, and open access. However, research institutions are often unaware of their performance on these practices. Baseline assessments of these practices would highlight where there is room for change and empower UMCs to support improvement. We performed a status quo analysis of established clinical trial registration and reporting practices at German UMCs and developed a dashboard to communicate these baseline assessments with UMC leadership and the wider research community. METHODS AND FINDINGS: We developed and applied a semiautomated approach to assess adherence to established transparency practices in a cohort of interventional trials and associated results publications. Trials were registered in ClinicalTrials.gov or the German Clinical Trials Register (DRKS), led by a German UMC, and reported as complete between 2009 and 2017. To assess adherence to transparency practices, we identified results publications associated with trials and applied automated methods at the level of registry data (e.g., prospective registration) and publications (e.g., open access). We also obtained summary results reporting rates of due trials registered in the EU Clinical Trials Register (EUCTR) and conducted at German UMCs from the EU Trials Tracker. We developed an interactive dashboard to display these results across all UMCs and at the level of single UMCs. Our study included and assessed 2,895 interventional trials led by 35 German UMCs. Across all UMCs, prospective registration increased from 33% (n = 58/178) to 75% (n = 144/193) for trials registered in ClinicalTrials.gov and from 0% (n = 0/44) to 79% (n = 19/24) for trials registered in DRKS over the period considered. Of trials with a results publication, 38% (n = 714/1,895) reported the trial registration number in the publication abstract. In turn, 58% (n = 861/1,493) of trials registered in ClinicalTrials.gov and 23% (n = 111/474) of trials registered in DRKS linked the publication in the registration. In contrast to recent increases in summary results reporting of drug trials in the EUCTR, 8% (n = 191/2,253) and 3% (n = 20/642) of due trials registered in ClinicalTrials.gov and DRKS, respectively, had summary results in the registry. Across trial completion years, timely results reporting (within 2 years of trial completion) as a manuscript publication or as summary results was 41% (n = 1,198/2,892). The proportion of openly accessible trial publications steadily increased from 42% (n = 16/38) to 74% (n = 72/97) over the period considered. A limitation of this study is that some of the methods used to assess the transparency practices in this dashboard rely on registry data being accurate and up-to-date. CONCLUSIONS: In this study, we observed that it is feasible to assess and inform individual UMCs on their performance on clinical trial transparency in a reproducible and publicly accessible way. Beyond helping institutions assess how they perform in relation to mandates or their institutional policy, the dashboard may inform interventions to increase the uptake of clinical transparency practices and serve to evaluate the impact of these interventions.


Subject(s)
Research Design , Humans , Prospective Studies , Registries , Universities , Clinical Trials as Topic
3.
PLoS Biol ; 21(1): e3001949, 2023 01.
Article in English | MEDLINE | ID: mdl-36693044

ABSTRACT

The state of open science needs to be monitored to track changes over time and identify areas where interventions could drive improvement. To monitor open science practices, they first need to be well defined and operationalized. To reach consensus on which open science practices to monitor at biomedical research institutions, we conducted a modified 3-round Delphi study. Participants were research administrators, researchers, specialists in dedicated open science roles, and librarians. In rounds 1 and 2, participants completed an online survey evaluating a set of potential open science practices, and for round 3, we hosted two half-day virtual meetings to discuss and vote on items that had not reached consensus. Ultimately, participants reached consensus on 19 open science practices. This core set of open science practices will form the foundation for institutional dashboards and may also be of value for the development of policy, education, and interventions.


Subject(s)
Biomedical Research , Humans , Consensus , Delphi Technique , Surveys and Questionnaires , Research Design
4.
PLoS Biol ; 20(9): e3001783, 2022 09.
Article in English | MEDLINE | ID: mdl-36095010

ABSTRACT

Western blotting is a standard laboratory method used to detect proteins and assess their expression levels. Unfortunately, poor western blot image display practices and a lack of detailed methods reporting can limit a reader's ability to evaluate or reproduce western blot results. While several groups have studied the prevalence of image manipulation or provided recommendations for improving western blotting, data on the prevalence of common publication practices are scarce. We systematically examined 551 articles published in the top 25% of journals in neurosciences (n = 151) and cell biology (n = 400) that contained western blot images, focusing on practices that may omit important information. Our data show that most published western blots are cropped and blot source data are not made available to readers in the supplement. Publishing blots with visible molecular weight markers is rare, and many blots additionally lack molecular weight labels. Western blot methods sections often lack information on the amount of protein loaded on the gel, blocking steps, and antibody labeling protocol. Important antibody identifiers like company or supplier, catalog number, or RRID were omitted frequently for primary antibodies and regularly for secondary antibodies. We present detailed descriptions and visual examples to help scientists, peer reviewers, and editors to publish more informative western blot figures and methods. Additional resources include a toolbox to help scientists produce more reproducible western blot data, teaching slides in English and Spanish, and an antibody reporting template.


Subject(s)
Neurosciences , Proteins , Antibodies , Blotting, Western
5.
Clin Sci (Lond) ; 136(15): 1139-1156, 2022 08 12.
Article in English | MEDLINE | ID: mdl-35822444

ABSTRACT

Recent work has raised awareness about the need to replace bar graphs of continuous data with informative graphs showing the data distribution. The impact of these efforts is not known. The present observational meta-research study examined how often scientists in different fields use various graph types, and assessed whether visualization practices have changed between 2010 and 2020. We developed and validated an automated screening tool, designed to identify bar graphs of counts or proportions, bar graphs of continuous data, bar graphs with dot plots, dot plots, box plots, violin plots, histograms, pie charts, and flow charts. Papers from 23 fields (approximately 1000 papers/field per year) were randomly selected from PubMed Central and screened (n = 227,998). F1 scores for different graphs ranged between 0.83 and 0.95 in the internal validation set. While the tool also performed well in external validation sets, F1 scores were lower for uncommon graphs. Bar graphs are more often used incorrectly to display continuous data than they are used correctly to display counts or proportions. The proportion of papers that use bar graphs of continuous data varies markedly across fields (range in 2020: 4-58%), with high rates in biochemistry and cell biology, complementary and alternative medicine, physiology, genetics, oncology and carcinogenesis, pharmacology, microbiology and immunology. Visualization practices have improved in some fields in recent years. Fewer than 25% of papers use flow charts, which provide information about attrition and the risk of bias. The present study highlights the need for continued interventions to improve visualization and identifies fields that would benefit most.

6.
J Clin Epidemiol ; 144: 1-7, 2022 04.
Article in English | MEDLINE | ID: mdl-34906673

ABSTRACT

OBJECTIVE: Timely publication of clinical trial results is central to evidence-based medicine. In this follow-up study, we benchmark the performance of German university medical centers (UMCs) regarding the timely dissemination of clinical trial results in recent years. METHODS: Following the same search and tracking methods used in our previous study for the years 2009-2013, we identified trials led by German UMCs completed between 2014 and 2017 and tracked results dissemination for the identified trials. RESULTS: We identified 1,658 trials in the 2014-2017 cohort. Of these trials, 43% published results as either a journal publication or summary results within 24 months after the completion date, an improvement of 3.8 percentage points compared to the previous study. At the UMC level, the proportion published after 24 months ranged from 14% to 71%. Five years after completion, 30% of the trials still remained unpublished. CONCLUSION: Despite minor improvements compared to the previously investigated cohort, the proportion of timely reported trials led by German UMCs remains low. German UMCs should take further steps to improve the proportion of timely reported trials.


Subject(s)
Academic Medical Centers , Evidence-Based Medicine , Benchmarking , Clinical Trials as Topic , Cohort Studies , Follow-Up Studies , Humans
7.
PLoS Biol ; 19(3): e3001107, 2021 03.
Article in English | MEDLINE | ID: mdl-33647013

ABSTRACT

Recent concerns about the reproducibility of science have led to several calls for more open and transparent research practices and for the monitoring of potential improvements over time. However, with tens of thousands of new biomedical articles published per week, manually mapping and monitoring changes in transparency is unrealistic. We present an open-source, automated approach to identify 5 indicators of transparency (data sharing, code sharing, conflicts of interest disclosures, funding disclosures, and protocol registration) and apply it across the entire open access biomedical literature of 2.75 million articles on PubMed Central (PMC). Our results indicate remarkable improvements in some (e.g., conflict of interest [COI] disclosures and funding disclosures), but not other (e.g., protocol registration and code sharing) areas of transparency over time, and map transparency across fields of science, countries, journals, and publishers. This work has enabled the creation of a large, integrated, and openly available database to expedite further efforts to monitor, understand, and promote transparency and reproducibility in science.


Subject(s)
Information Dissemination/methods , Scholarly Communication/economics , Scholarly Communication/trends , Biomedical Research/economics , Conflict of Interest , Databases, Factual , Disclosure , Humans , Open Access Publishing/economics , Open Access Publishing/trends , Publications , Reproducibility of Results
9.
F1000Res ; 9: 1193, 2020.
Article in English | MEDLINE | ID: mdl-33082937

ABSTRACT

Background: Never before have clinical trials drawn as much public attention as those testing interventions for COVID-19. We aimed to describe the worldwide COVID-19 clinical research response and its evolution over the first 100 days of the pandemic. Methods: Descriptive analysis of planned, ongoing, or completed trials by April 9, 2020, testing any intervention to treat or prevent COVID-19, systematically identified in trial registries, preprint servers, and literature databases. A survey was conducted of all trials to assess their recruitment status up to July 6, 2020. Results: Most of the 689 trials (overall target sample size 396,366) were small (median sample size 120; interquartile range [IQR] 60-300) but randomized (75.8%; n=522) and were often conducted in China (51.1%; n=352) or the USA (11%; n=76). 525 trials (76.2%) planned to include 155,571 hospitalized patients, and 25 (3.6%) planned to include 96,821 health-care workers. Treatments were evaluated in 607 trials (88.1%), frequently antivirals (n=144) or antimalarials (n=112); 78 trials (11.3%) focused on prevention, including 14 vaccine trials. No trial investigated social distancing. Interventions tested in 11 trials with >5,000 participants were also tested in 169 smaller trials (median sample size 273; IQR 90-700). Hydroxychloroquine alone was investigated in 110 trials. While 414 trials (60.0%) expected completion in 2020, only 35 trials (4.1%; 3,071 participants) were completed by July 6. Of 112 trials with detailed recruitment information, 55 had recruited <20% of the targeted sample; 27 between 20% and 50%; and 30 over 50% (median 14.8% [IQR 2.0-62.0%]). Conclusions: The size and speed of the COVID-19 clinical trials agenda are unprecedented. However, most trials were small, investigating only a small fraction of treatment options. The feasibility of this research agenda is questionable, and many trials may end in futility, wasting research resources. Much better coordination is needed to respond to global health threats.


Subject(s)
Biomedical Research/trends , Clinical Trials as Topic , Coronavirus Infections , Pandemics , Pneumonia, Viral , Betacoronavirus , COVID-19 , China , Coronavirus Infections/prevention & control , Coronavirus Infections/therapy , Humans , Pandemics/prevention & control , Pneumonia, Viral/prevention & control , Pneumonia, Viral/therapy , SARS-CoV-2 , United States
10.
Clin Sci (Lond) ; 134(20): 2729-2739, 2020 10 30.
Article in English | MEDLINE | ID: mdl-33111948

ABSTRACT

Statistically significant findings are more likely to be published than non-significant or null findings, leaving scientists and healthcare personnel to make decisions based on distorted scientific evidence. Continuously expanding 'file drawers' of unpublished data from well-designed experiments waste resources and create problems for researchers, the scientific community, and the public. There is limited awareness of the negative impact that publication bias and selective reporting have on the scientific literature. Alternative publication formats have recently been introduced that make it easier to publish research that is difficult to publish in traditional peer-reviewed journals. These include micropublications, data repositories, data journals, preprints, publishing platforms, and journals focusing on null or neutral results. While these alternative formats have the potential to reduce publication bias, many scientists are unaware that these formats exist and do not know how to use them. Our open-source file drawer data liberation effort (fiddle) tool (RRID:SCR_017327, available at: http://s-quest.bihealth.org/fiddle/) is a match-making Shiny app designed to help biomedical researchers identify the most appropriate publication format for their data. Users can search for a publication format that meets their needs, compare and contrast different publication formats, and find links to publishing platforms. This tool will assist scientists in getting otherwise inaccessible, hidden data out of the file drawer and into the scientific community and literature. We briefly highlight essential details that should be included to ensure reporting quality, which will allow others to use and benefit from research published in these new formats.


Subject(s)
Biomedical Research , Publication Bias , Software , Publishing
11.
Sci Eng Ethics ; 26(6): 2893-2910, 2020 12.
Article in English | MEDLINE | ID: mdl-32592136

ABSTRACT

Promoting translational research as a means of overcoming chasms in the translation of knowledge through successive fields of research, from basic science to public health impacts and back, is a central challenge for research managers and policymakers. Organizational leaders need to assess baseline conditions, identify areas needing improvement, and judge the impact of specific initiatives to sustain or improve translational research practices at their institutions. Currently, there is a lack of such an assessment tool addressing the specific context of translational biomedical research. To close this gap, we have developed a new survey for assessing the organizational climate for translational research. This self-assessment tool measures employees' perceptions of the translational research climate and underlying research practices in organizational environments, and builds on the established Survey of Organizational Research Climate, which assesses research integrity. Using this tool, we show that scientists at a large university hospital (Charité Berlin) perceive translation as a central and important component of their work. Importantly, local resources and direct support are the main contributing factors for the practical implementation of translation into their own research practice. We identify and discuss potential leverage points for improving the research climate to foster successful translational research.


Subject(s)
Biomedical Research , Translational Research, Biomedical , Humans , Organizational Culture , Surveys and Questionnaires
12.
BMJ Open ; 10(1): e034666, 2020 01 22.
Article in English | MEDLINE | ID: mdl-31974090

ABSTRACT

OBJECTIVES: To establish the rates of publication and reporting of results for interventional clinical trials across Polish academic medical centres (AMCs) completed between 2009 and 2013. We also aimed to compare publication and reporting success between adult and paediatric trials. DESIGN: Cross-sectional study. SETTING: AMCs in Poland. PARTICIPANTS: AMCs with interventional trials registered on ClinicalTrials.gov. MAIN OUTCOME MEASURE: Results reporting on ClinicalTrials.gov and publication in journals. RESULTS: We identified 305 interventional clinical trials registered on ClinicalTrials.gov, completed between 2009 and 2013, and affiliated with at least one AMC. Overall, 243 of the 305 trials (79.7%) had been published as articles or had posted their summary results on ClinicalTrials.gov. Results were posted within a year of study completion and/or published within 2 years of study completion for 131 trials (43.0%). Dissemination by both posting and publishing results in a timely manner was achieved by four trials (1.3%). CONCLUSIONS: Our cross-sectional analysis revealed that Polish AMCs fail to meet the expectation of timely dissemination of the findings of all interventional clinical trials. Delayed dissemination and non-dissemination of trial results negatively affect decisions in healthcare.


Subject(s)
Academic Medical Centers , Clinical Trials as Topic/statistics & numerical data , Information Dissemination/methods , Registries , Cross-Sectional Studies , Databases, Factual , Humans , Poland , Prospective Studies
13.
J Clin Epidemiol ; 115: 37-45, 2019 11.
Article in English | MEDLINE | ID: mdl-31195110

ABSTRACT

OBJECTIVES: Timely and comprehensive reporting of clinical trial results builds the backbone of evidence-based medicine and responsible research. The proportion of timely disseminated trial results can inform alternative national and international benchmarking of university medical centers (UMCs). STUDY DESIGN AND SETTING: For all German UMCs, we tracked all registered trials completed between 2009 and 2013. The results and an interactive website benchmark German UMCs regarding their performance in result dissemination. RESULTS: We identified and tracked 2,132 clinical trials. For 1,509 trials, one of the German UMCs took the academic lead. Of these 1,509 "lead trials," 39% published their results (mostly via journal publications) in a timely manner (<24 months after completion). More than 6 years after study completion, 26% of all eligible lead trials still had not disseminated results. CONCLUSION: Despite substantial attention from many stakeholders to the topic, there is still a strong delay or even absence of result dissemination for many trials. German UMCs have several opportunities to improve this situation. Further research should evaluate whether and how a transparent benchmarking of UMC performance in result dissemination helps to increase value and reduce waste in medical research.


Subject(s)
Clinical Trials as Topic , Publishing/statistics & numerical data , Academic Medical Centers , Benchmarking , Evidence-Based Medicine , Germany , Humans , Time Factors
14.
PLoS Biol ; 17(4): e3000188, 2019 04.
Article in English | MEDLINE | ID: mdl-30964856

ABSTRACT

The need for replication of initial results has been rediscovered only recently in many fields of research. In preclinical biomedical research, it is common practice to conduct exact replications with the same sample sizes as those used in the initial experiments. Such replication attempts, however, have a lower probability of successful replication than is generally appreciated. Indeed, in the common scenario of an effect just reaching statistical significance, the statistical power of the replication experiment assuming the same effect size is approximately 50%: in essence, a coin toss. Accordingly, we use the provocative analogy of "replicating" a neuroprotective drug animal study with a coin flip to highlight the need for larger sample sizes in replication experiments. Additionally, we provide detailed background for the probability of obtaining a significant p value in a replication experiment and discuss the variability of p values as well as pitfalls of simple binary significance testing in both initial preclinical experiments and replication studies with small sample sizes. We conclude that power analysis for determining the sample size for a replication study is obligatory within the currently dominant hypothesis testing framework. Moreover, publications should include effect size point estimates and corresponding measures of precision, e.g., confidence intervals, to allow readers to assess the magnitude and direction of reported effects and to potentially combine the results of the initial and replication studies later through Bayesian or meta-analytic approaches.
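The "approximately 50%" figure in this abstract follows directly from normal theory: if the original estimate landed exactly at the two-sided 5% critical value, a same-size replication with the same true effect is centered on that threshold and exceeds it half the time. The following stdlib-only Python sketch illustrates this arithmetic (not the authors' code; the critical value z ≈ 1.96 is hardcoded because the `math` module has no inverse normal CDF):

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Two-sided 5% critical value of the standard normal, Phi^-1(0.975).
z_crit = 1.959964

# Suppose the original experiment's effect estimate sat exactly at the
# significance threshold, i.e. observed z = z_crit. A replication with
# the same sample size, whose true effect equals that estimate, has a
# test statistic centered on z_crit, so its power is:
power = 1.0 - normal_cdf(z_crit - z_crit)  # = 1 - Phi(0) = 0.5
print(round(power, 2))  # 0.5

# Doubling the replication sample size scales the noncentrality by
# sqrt(2), raising power to roughly 0.79:
power_2n = 1.0 - normal_cdf(z_crit - sqrt(2.0) * z_crit)
print(round(power_2n, 2))  # 0.79
```

This is why the abstract argues that a replication planned at the original sample size is close to a coin toss, and that a formal power analysis is obligatory before replicating.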


Subject(s)
Biomedical Research/methods , Reproducibility of Results , Research Design/statistics & numerical data , Animals , Bayes Theorem , Biomedical Research/statistics & numerical data , Data Interpretation, Statistical , Humans , Models, Statistical , Probability , Publications , Sample Size
15.
Sci Rep ; 8(1): 3526, 2018 02 23.
Article in English | MEDLINE | ID: mdl-29476115

ABSTRACT

Body temperature is a valuable parameter in determining the wellbeing of laboratory animals. However, using body temperature to refine humane endpoints during acute illness generally lacks comprehensiveness and is prone to inter-observer bias. Here we compared two methods of assessing body temperature in mice, namely implanted radio frequency identification (RFID) temperature transponders (method 1) and non-contact infrared thermometry (method 2), in 435 mice for up to 7 days during normothermia and lipopolysaccharide (LPS) endotoxin-induced hypothermia. There was excellent agreement between core and surface temperature as determined by methods 1 and 2, respectively, whereas the intra- and inter-subject variation was higher for method 2. Nevertheless, using machine learning algorithms to determine temperature-based endpoints, both methods had excellent accuracy in predicting death as an outcome event. Therefore, the less expensive and less cumbersome non-contact infrared thermometry can serve as a reliable alternative to implantable transponder-based systems for monitoring hypothermic responses, although it requires standardization between experimenters.


Subject(s)
Body Temperature , Hypothermia/diagnosis , Infrared Rays , Radio Frequency Identification Device/methods , Sepsis/diagnosis , Thermometry/methods , Acute Disease , Animals , Electrodes, Implanted , Female , Hypothermia/chemically induced , Hypothermia/mortality , Hypothermia/physiopathology , Lipopolysaccharides/administration & dosage , Machine Learning , Mice , Mice, Inbred C57BL , Sepsis/chemically induced , Sepsis/mortality , Sepsis/physiopathology , Survival Analysis , Thermometers/classification , Thermometry/instrumentation
16.
F1000Res ; 7: 1863, 2018.
Article in English | MEDLINE | ID: mdl-31131084

ABSTRACT

Background: Several meta-research studies and benchmarking activities have assessed how comprehensively and how quickly academic institutions and private companies publish their clinical studies. These current "clinical trial tracking" activities differ substantially in how they sample relevant studies and how they follow up on their publication. Methods: To allow informed policy and decision making on future publication assessment and benchmarking of institutions and companies, this paper outlines and discusses 10 variables that influence the tracking of timely publications. Tracking variables were initially selected by experts and by the authors through discussion. To validate the completeness of our set of variables, we conducted i) an explorative review of tracking studies and ii) an explorative tracking of registered clinical trials of three leading German university medical centres. Results: We identified the following 10 relevant variables impacting the tracking of clinical studies: 1) responsibility for clinical studies, 2) type and characteristics of clinical studies, 3) status of clinical studies, 4) source for sampling, 5) timing of registration, 6) determination of completion date, 7) timeliness of dissemination, 8) format of dissemination, 9) source for tracking, and 10) inter-rater reliability. Based on the description of these tracking variables and their influence, we discuss which variables could serve, and in what ways, as a standard assessment of "timely publication". Conclusions: To facilitate the tracking, and consequent benchmarking, of how often and how quickly academic institutions and private companies publish clinical study results, we have two core recommendations. First, improve the link between registration and publication, for example via institutional policies for academic institutions and private companies. Second, report tracking studies comprehensively and transparently according to the 10 variables presented in this paper.


Subject(s)
Clinical Studies as Topic/statistics & numerical data , Clinical Studies as Topic/methods , Clinical Studies as Topic/standards , Decision Making , Evidence-Based Medicine/methods , Evidence-Based Medicine/standards , Evidence-Based Medicine/statistics & numerical data , Humans , Publications
17.
Genetics ; 201(1): 305-22, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26139839

ABSTRACT

Trait differences between species may be attributable to natural selection. However, quantifying the strength of evidence for selection acting on a particular trait is a difficult task. Here we develop a population genetics test for selection acting on a quantitative trait that is based on multiple-line crosses. We show that using multiple lines increases both the power and the scope of selection inferences. First, a test based on three or more lines detects selection with strongly increased statistical significance, and we show explicitly how the sensitivity of the test depends on the number of lines. Second, a multiple-line test can distinguish between different lineage-specific selection scenarios. Our analytical results are complemented by extensive numerical simulations. We then apply the multiple-line test to QTL data on floral character traits in plant species of the Mimulus genus and on photoperiodic traits in different maize strains, where we find a signature of lineage-specific selection not seen in two-line tests.


Subject(s)
Mimulus/genetics , Quantitative Trait Loci , Selection, Genetic , Chromosome Mapping , Crosses, Genetic , Flowers/genetics , Genes, Plant , Genetics, Population , Models, Genetic