Results 1 - 20 of 49
1.
Perspect ASHA Spec Interest Groups ; 9(3): 836-852, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38912383

ABSTRACT

Purpose: One manifestation of systemic inequities in communication sciences and disorders (CSD) is the chronic underreporting and underrepresentation of sex, gender, race, and ethnicity in research. The present study characterized recent demographic reporting practices and representation of participants across CSD research. Methods: We systematically reviewed and extracted key reporting and participant data from empirical studies conducted in the United States (US) with human participants published in the year 2020 in journals by the American Speech-Language-Hearing Association (ASHA; k = 407 articles comprising a total n = 80,058 research participants, search completed November 2021). Sex, gender, race, and ethnicity were operationalized per National Institutes of Health guidelines (National Institutes of Health, 2015a, 2015b). Results: Sex or gender was reported in 85.5% of included studies; race was reported in 33.7%; and ethnicity was reported in 13.8%. Sex and gender were clearly differentiated in 3.4% of relevant studies. Where reported, median proportions for race and ethnicity were significantly different from the US population, with underrepresentation noted for all non-White racial groups and Hispanic participants. Moreover, 64.7% of studies that reported sex or gender and 67.2% of studies that reported race or ethnicity did not consider these respective variables in analyses or discussion. Conclusion: At present, research published in ASHA journals frequently fails to report key demographic data summarizing the characteristics of participants. Moreover, apparent gaps in representation of minoritized racial and ethnic groups threaten the external validity of CSD research and broader health care equity endeavors in the US. Although our study is limited to a single year and publisher, our results point to several steps for readers that may bring greater accountability, consistency, and diversity to the discipline.
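As a rough illustration of this kind of representation benchmarking, the sketch below compares a hypothetical sample's racial composition against assumed census proportions with a chi-square goodness-of-fit test. All counts and benchmark figures are placeholders, and the study's actual analysis (of median proportions across articles) is not reproduced here.

```python
# Sketch: compare a study's reported racial composition against census
# benchmarks. All numbers are illustrative placeholders, not study data.
from scipy.stats import chisquare

# Hypothetical participant counts: White, Black, Asian, AIAN, Other
observed = [160, 10, 8, 1, 6]
# Assumed US census proportions for the same groups (sum to 1.0)
census_props = [0.61, 0.13, 0.06, 0.01, 0.19]

n = sum(observed)
expected = [p * n for p in census_props]

stat, pval = chisquare(f_obs=observed, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {pval:.4f}")  # small p: composition departs from census
```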

2.
J Sports Sci ; 42(7): 566-573, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38767324

ABSTRACT

Sport and sports research are inherently complex systems. This appears to be somewhat at odds with the current research paradigm in sport, in which interventions are aimed at fixing or solving singular broken components within the system. In any complex system, such as sport, there are places where we can intervene to change behaviour and, ideally, system outcomes. Meadows' influential work describes 12 different points at which to intervene in complex systems (termed "Leverage Points"), which are ordered from shallow to deep based on their potential effectiveness in influencing transformational change. Whether research in sport is aimed at shallow or deeper Leverage Points is unknown. This study aimed to assess highly impactful research in sports science, sports nutrition/metabolism, sports medicine, sport and exercise psychology, sports management, motor control, sports biomechanics and sports policy/law through a Leverage Points lens. The 10 most highly cited original-research manuscripts from each journal representing these fields were analysed for the Leverage Point on which the intervention described in the manuscript focused. The results indicate that highly impactful research in sports science, sports nutrition/metabolism, sports biomechanics and sports medicine is predominantly focused at the shallow end of the Leverage Points hierarchy. Conversely, the interventions drawn from journals representing sports management and sports policy/law were focused on the deeper end. The other journals analysed had a mixed profile. Explanations for these findings include the dual practitioner/academic needing to "think fast" to solve immediate questions in sports science/medicine/nutrition, limited engagement with "working slow" systems and methods experts, and differences in incremental vs. non-incremental research strategies.


Subject(s)
Sports Medicine , Sports , Humans , Sports/physiology , Biomechanical Phenomena , Journal Impact Factor , Periodicals as Topic , Bibliometrics
3.
Cogn Res Princ Implic ; 9(1): 27, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38700660

ABSTRACT

The .05 boundary within Null Hypothesis Statistical Testing (NHST) "has made a lot of people very angry and been widely regarded as a bad move" (to quote Douglas Adams). Here, we move past meta-scientific arguments and ask an empirical question: What is the psychological standing of the .05 boundary for statistical significance? We find that graduate students in the psychological sciences show a boundary effect when relating p-values across .05. We propose this psychological boundary is learned through statistical training in NHST and reading a scientific literature replete with "statistical significance". Consistent with this proposal, undergraduates do not show the same sensitivity to the .05 boundary. Additionally, the size of a graduate student's boundary effect is not associated with their explicit endorsement of questionable research practices. These findings suggest that training creates distortions in initial processing of p-values, but these might be dampened through scientific processes operating over longer timescales.
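To make the "boundary effect" concrete, here is a speculative simulation: similarity ratings for pairs of p-values decline with numeric distance, with an extra penalty when a pair straddles .05, and a regression recovers that penalty. This illustrates the construct under invented parameters; it is not the authors' task or analysis.

```python
# Illustrative simulation of a .05 "boundary effect" in p-value comparisons.
# Parameters are invented; this is a conceptual sketch only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
p1 = rng.uniform(0.01, 0.10, 2000)
p2 = rng.uniform(0.01, 0.10, 2000)

straddle = ((p1 < 0.05) != (p2 < 0.05)).astype(float)  # pair crosses .05?
distance = np.abs(p1 - p2)

# Simulated similarity ratings: farther apart -> less similar, plus an
# extra drop when the pair straddles the boundary (the boundary effect).
rating = 10 - 50 * distance - 1.5 * straddle + rng.normal(0, 1, 2000)

df = pd.DataFrame({"rating": rating, "distance": distance, "straddle": straddle})
fit = smf.ols("rating ~ distance + straddle", data=df).fit()
print(fit.params)  # straddle coefficient near -1.5 signals a boundary effect
```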


Subject(s)
Statistics as Topic , Humans , Adult , Young Adult , Data Interpretation, Statistical , Male , Psychology , Female
4.
Perspect Psychol Sci ; 19(3): 590-601, 2024 May.
Article in English | MEDLINE | ID: mdl-38652780

ABSTRACT

In the spirit of America's Shakespeare, August Wilson (1997), I have written this article as a testimony to the conditions under which I, and too many others, engage in scholarly discourse. I hope to make clear from the beginning that although the ideas presented here are not entirely my own (they have been inherited from the minority of scholars who dared and managed to bring the most necessary, unpalatable, and unsettling truths about our discipline to the broader scientific community), I do not write for anyone but myself and those scholars who have felt similarly marginalized, oppressed, and silenced. And I write as a race scholar, meaning simply that I believe that race, and racism, affect the sociopolitical conditions in which humans, and scholars, develop their thoughts, feelings, and actions. I believe that it is important for all scholars to have a basic understanding of these conditions, as well as the landmines and pitfalls that define them, as they shape how research is conducted, reviewed, and disseminated. I also believe that for a discipline to evolve into one that is truly robust and objective, it must first become diverse and self-aware. Any effort to suggest otherwise, no matter how scholarly it might present itself, is intellectually unsound.


Subject(s)
Cultural Diversity , Psychology , Humans , Racism , Politics
5.
Behav Res Methods ; 2024 Feb 22.
Article in English | MEDLINE | ID: mdl-38389030

ABSTRACT

Monte Carlo simulation studies are among the primary scientific outputs contributed by methodologists, guiding application of various statistical tools in practice. Although methodological researchers routinely extend simulation study findings through follow-up work, few studies are ever replicated. Simulation studies are susceptible to factors that can contribute to replicability failures, however. This paper sought to conduct a meta-scientific study by replicating one highly cited simulation study (Curran et al., Psychological Methods, 1, 16-29, 1996) that investigated the robustness of normal theory maximum likelihood (ML)-based chi-square fit statistics under multivariate nonnormality. We further examined the generalizability of the original study findings across different nonnormal data generation algorithms. Our replication results were generally consistent with original findings, but we discerned several differences. Our generalizability results were more mixed. Only two results observed under the original data generation algorithm held completely across other algorithms examined. One of the most striking findings we observed was that results associated with the independent generator (IG) data generation algorithm vastly differed from other procedures examined and suggested that ML was robust to nonnormality for the particular factor model used in the simulation. Findings point to the reality that extant methodological recommendations may not be universally valid in contexts where multiple data generation algorithms exist for a given data characteristic. We recommend that researchers consider multiple approaches to generating a specific data or model characteristic (when more than one is available) to optimize the generalizability of simulation results.
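A minimal sketch of the independent generator (IG) idea follows, assuming a one-factor model with invented loadings: factors and unique errors are drawn independently from a standardized skewed distribution and mixed through the model, so nonnormality propagates to the observed variables. The paper's actual generation conditions (following Curran et al., 1996) may differ.

```python
# Sketch of an independent-generator (IG) approach to nonnormal data:
# draw factors and errors independently from a skewed distribution,
# standardize them, and mix through a factor model. The loadings and the
# chi-square(1) source distribution are illustrative assumptions.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(42)
n, n_items = 1000, 6
loadings = np.full(n_items, 0.7)  # one-factor model, assumed loadings

def skewed(size):
    x = rng.chisquare(df=1, size=size)   # heavily skewed source
    return (x - 1) / np.sqrt(2)          # standardize: mean 0, variance 1

eta = skewed(n)                                        # latent factor scores
errors = skewed((n, n_items)) * np.sqrt(1 - loadings**2)  # unique errors
y = np.outer(eta, loadings) + errors                   # observed nonnormal data

print("marginal skew:", np.round(skew(y, axis=0), 2))
print("marginal excess kurtosis:", np.round(kurtosis(y, axis=0), 2))
```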

6.
Elife ; 12, 2024 Jan 19.
Article in English | MEDLINE | ID: mdl-38240745

ABSTRACT

Many postdoctoral fellows and scholars who hope to secure tenure-track faculty positions in the United States apply to the National Institutes of Health (NIH) for a Pathway to Independence Award. This award has two phases (K99 and R00) and provides funding for up to 5 years. Using NIH data for the period 2006-2022, we report that ~230 K99 awards were made every year, representing an annual investment of up to ~$250 million. About 40% of K99 awardees were women, and ~89% of K99 awardees went on to receive an R00 award. Institutions with the most NIH funding produced the most recipients of K99 awards and recruited the most recipients of R00 awards. The time between a researcher starting an R00 award and receiving a major NIH award (such as an R01) ranged between 4.6 and 7.4 years, and was significantly longer for women, for those who remained at their home institution, and for those hired by an institution that was not one of the 25 institutions with the most NIH funding. Shockingly, there has yet to be a K99 awardee at a historically Black college or university. We go on to show how K99 awardees flow to faculty positions, and to identify various factors that influence the future success of individual researchers and, therefore, also influence the composition of biomedical faculty at universities in the United States.


Subject(s)
Awards and Prizes , Biomedical Research , Humans , Female , United States , Male , National Institutes of Health (U.S.) , Health Personnel , Research Personnel
7.
EPJ Data Sci ; 12(1): 58, 2023.
Article in English | MEDLINE | ID: mdl-38098785

ABSTRACT

Puberty is a phase in which individuals often test the boundaries of themselves and surrounding others and further define their identity - and thus their uniqueness compared to other individuals. Similarly, as Computational Social Science (CSS) grows up, it must strike a balance between its own practices and those of neighboring disciplines to achieve scientific rigor and refine its identity. However, there are certain areas within CSS that are reluctant to adopt rigorous scientific practices from other fields, which can be observed through an overreliance on passively collected data (e.g., through digital traces, wearables) without questioning the validity of such data. This paper argues that CSS should embrace the potential of combining both passive and active measurement practices to capitalize on the strengths of each approach, including objectivity and psychological quality. Additionally, the paper suggests that CSS would benefit from integrating practices and knowledge from other established disciplines, such as measurement validation, theoretical embedding, and open science practices. Based on this argument, the paper provides ten recommendations for CSS to mature as an interdisciplinary field of research.

8.
Elife ; 12, 2023 Nov 03.
Article in English | MEDLINE | ID: mdl-37922198

ABSTRACT

The peer review process is a critical step in ensuring the quality of scientific research. However, its subjectivity has raised concerns. To investigate this issue, I examined over 500 publicly available peer review reports from 200 published neuroscience papers in 2022-2023. OpenAI's generative artificial intelligence ChatGPT was used to analyze language use in these reports, which demonstrated superior performance compared to traditional lexicon- and rule-based language models. As expected, most reviews for these published papers were seen as favorable by ChatGPT (89.8% of reviews), and language use was mostly polite (99.8% of reviews). However, this analysis also demonstrated high levels of variability in how each reviewer scored the same paper, indicating the presence of subjectivity in the peer review process. The results further revealed that female first authors received less polite reviews than their male peers, indicating a gender bias in reviewing. In addition, published papers with a female senior author received more favorable reviews than papers with a male senior author, for which I discuss potential causes. Together, this study highlights the potential of generative artificial intelligence in performing natural language processing of specialized scientific texts. As a proof of concept, I show that ChatGPT can identify areas of concern in scientific peer review, underscoring the importance of transparent peer review in studying equitability in scientific publishing.
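A minimal sketch of this kind of LLM-based review scoring is shown below using the OpenAI Python client; the model name, prompt wording, and rating scales are assumptions for illustration and do not reproduce the study's actual prompts or parsing.

```python
# Sketch: score a peer-review report for favorability and politeness with
# an LLM. The model name, prompt, and output format are assumptions, not
# the study's configuration. Requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def score_review(review_text: str) -> str:
    prompt = (
        "Rate the following peer review on two 1-10 scales, favorability "
        "and politeness, and reply as 'favorability=X, politeness=Y'.\n\n"
        + review_text
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; the study used ChatGPT
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(score_review("The manuscript is interesting but the sample is too small."))
```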


Peer review is a vital step in ensuring the quality and accuracy of scientific research before publication. Experts assess research manuscripts, advise journal editors on publishing them, and provide authors with recommendations for improvement. But some scientists have raised concerns about potential biases and subjectivity in the peer review process. Author attributes, such as gender, reputation, or how prestigious their institution is, may subconsciously influence reviewers' scores. Studying peer review to identify potential biases is challenging. The language reviewers use is very technical, and some of their commentary may be subjective and vary from reviewer to reviewer. The emergence of OpenAI's ChatGPT, which uses machine learning to process large amounts of information, may provide a new tool to analyze peer review for signs of bias. Verharen demonstrated that ChatGPT can be used to analyze peer review reports and found potential indications of gender bias in scientific publishing. In the experiments, Verharen asked ChatGPT to analyze more than 500 reviews of 200 neuroscience studies published in the scientific journal Nature Communications over the past year. The experiments found no evidence that institutional reputation influenced reviews. Yet, female first authors were more likely to receive impolite comments from reviewers. Female senior authors were more likely to receive higher review scores, which may indicate they had to clear a higher bar for publication. The experiments indicate that ChatGPT could be used to analyze peer review for fairness. Verharen suggests that reviewers might apply this tool to ensure their reviews are polite and accurate reflections of their opinions. Scientists or publishers might also use it for large-scale analyses of peer review in individual journals or in scientific publishing more widely. Journals might also use ChatGPT to assess the impact of bias-prevention interventions on review fairness.


Subject(s)
Artificial Intelligence , Publishing , Female , Male , Humans , Sexism , Peer Review , Research Report
9.
Cogn Sci ; 47(10): e13365, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37817646

ABSTRACT

Given the recent call to strengthen collaboration between researchers and relevant practitioners, we consider participatory design as a way to advance Cognitive Science. Building on examples from the Learning Sciences and Human-Computer Interaction, we (a) explore what, why, who, when, and where researchers can collaborate with community members in Cognitive Science research; (b) examine the ways in which participatory-design research can benefit the field; and (c) share ideas to incorporate participatory design into existing basic and applied research programs. Through this article, we hope to spark deeper discussions on how cognitive scientists can collaborate with community members to benefit both research and practice.


Subject(s)
Cognition , Computers , Research Design , Humans
10.
Neuron ; 111(22): 3505-3516, 2023 Nov 15.
Article in English | MEDLINE | ID: mdl-37738981

ABSTRACT

Adversarial collaboration has been championed as the gold standard for resolving scientific disputes but has gained relatively limited traction in neuroscience and allied fields. In this perspective, we argue that adversarial collaborative research has been stymied by an overly restrictive concern with the falsification of scientific theories. We advocate instead for a more expansive view that frames adversarial collaboration in terms of Bayesian belief updating, model comparison, and evidence accumulation. This framework broadens the scope of adversarial collaboration to accommodate a wide range of informative (but not necessarily definitive) studies while affording the requisite formal tools to guide experimental design and data analysis in the adversarial setting. We provide worked examples that demonstrate how these tools can be deployed to score theoretical models in terms of a common metric of evidence, thereby furnishing a means of tracking the amount of empirical support garnered by competing theories over time.
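The "common metric of evidence" can be illustrated with a toy ledger of log Bayes factors: each study contributes a log-likelihood ratio for theory A over theory B, and the running sum tracks accumulated support. The predicted and observed effects below are invented, not taken from the paper's worked examples.

```python
# Toy evidence-accumulation ledger: each study reports an observed effect
# with a standard error; two theories predict different true effects.
# Per-study log Bayes factors (normal likelihoods) are summed over time.
# All numbers are invented for illustration.
from scipy.stats import norm

pred_A, pred_B = 0.4, 0.0          # theories' predicted effect sizes (assumed)
studies = [(0.35, 0.15), (0.10, 0.20), (0.45, 0.10)]  # (observed effect, SE)

total_log_bf = 0.0
for obs, se in studies:
    log_bf = norm.logpdf(obs, loc=pred_A, scale=se) - norm.logpdf(obs, loc=pred_B, scale=se)
    total_log_bf += log_bf
    print(f"obs={obs:+.2f} (SE {se}): log BF_AB = {log_bf:+.2f}, "
          f"cumulative = {total_log_bf:+.2f}")
# Positive cumulative log BF: the evidence so far favors theory A.
```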


Subject(s)
Models, Theoretical , Neurosciences , Bayes Theorem , Research Design
11.
R Soc Open Sci ; 10(7): 230448, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37476516

ABSTRACT

Theoretical arguments and empirical investigations indicate that a high proportion of published findings do not replicate and are likely false. This position paper provides a broad perspective on the scientific errors that may lead to replication failures, focusing on the history of reform and on opportunities for future reform. We organize our perspective along four main themes: institutional reform, methodological reform, statistical reform and publishing reform. For each theme, we illustrate potential errors by narrating the story of a fictional researcher during the research cycle, and we discuss future opportunities for reform. The resulting agenda provides a resource to usher in an era marked by a research culture that is less error-prone and a scientific publication landscape with fewer spurious findings.

12.
R Soc Open Sci ; 10(6): 230235, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37293356

ABSTRACT

The past decade has witnessed a proliferation of big team science (BTS), endeavours where a comparatively large number of researchers pool their intellectual and/or material resources in pursuit of a common goal. Despite this burgeoning interest, there exists little guidance on how to create, manage and participate in these collaborations. In this paper, we integrate insights from a multi-disciplinary set of BTS initiatives to provide a how-to guide for BTS. We first discuss initial considerations for launching a BTS project, such as building the team, identifying leadership, governance, tools and open science approaches. We then turn to issues related to running and completing a BTS project, such as study design, ethical approvals and issues related to data collection, management and analysis. Finally, we address topics that present special challenges for BTS, including authorship decisions, collaborative writing and team decision-making.

13.
J Med Internet Res ; 25: e45482, 2023 Mar 30.
Article in English | MEDLINE | ID: mdl-36995753

ABSTRACT

BACKGROUND: Scientists often make cognitive claims (eg, the results of their work) and normative claims (eg, what should be done based on those results). Yet, these types of statements contain very different information and implications. This randomized controlled trial sought to characterize the granular effects of using normative language in science communication. OBJECTIVE: Our study examined whether viewing a social media post containing scientific claims about face masks for COVID-19 using both normative and cognitive language (intervention arm) would reduce perceptions of trust and credibility in science and scientists compared with an identical post using only cognitive language (control arm). We also examined whether effects were mediated by political orientation. METHODS: This was a 2-arm, parallel group, randomized controlled trial. We aimed to recruit 1500 US adults (age 18+) from the Prolific platform who were representative of the US population census by cross sections of age, race/ethnicity, and gender. Participants were randomly assigned to view 1 of 2 images of a social media post about face masks to prevent COVID-19. The control image described the results of a real study (cognitive language), and the intervention image was identical, but also included recommendations from the same study about what people should do based on the results (normative language). Primary outcomes were trust in science and scientists (21-item scale) and 4 individual items related to trust and credibility; 9 additional covariates (eg, sociodemographics, political orientation) were measured and included in analyses. RESULTS: From September 4, 2022, to September 6, 2022, 1526 individuals completed the study. For the sample as a whole (eg, without interaction terms), there was no evidence that a single exposure to normative language affected perceptions of trust or credibility in science or scientists. When including the interaction term (study arm × political orientation), there was some evidence of differential effects, such that individuals with liberal political orientation were more likely to trust scientific information from the social media post's author if the post included normative language, and political conservatives were more likely to trust scientific information from the post's author if the post included only cognitive language (β=0.05, 95% CI 0.00 to 0.10; P=.04). CONCLUSIONS: This study does not support the authors' original hypotheses that single exposures to normative language can reduce perceptions of trust or credibility in science or scientists for all people. However, the secondary preregistered analyses indicate the possibility that political orientation may differentially mediate the effect of normative and cognitive language from scientists on people's perceptions. We do not submit this paper as definitive evidence thereof but do believe that there is sufficient evidence to support additional research into this topic, which may have implications for effective scientific communication. TRIAL REGISTRATION: OSF Registries osf.io/kb3yh; https://osf.io/kb3yh. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): RR2-10.2196/41747.
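A hedged sketch of the preregistered interaction analysis (study arm × political orientation) is given below on simulated data; the variable coding, effect sizes, and OLS specification are placeholders rather than the trial's dataset or exact model.

```python
# Sketch of an arm x political-orientation interaction model on simulated
# data. Variable coding and effect sizes are placeholders, not trial data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1526
arm = rng.integers(0, 2, n)            # 0 = cognitive only, 1 = +normative
politics = rng.uniform(-3, 3, n)       # negative = conservative, positive = liberal

# Simulated trust outcome with a small arm x politics interaction (0.05),
# echoing the direction reported in the abstract.
trust = 3.5 + 0.0 * arm + 0.02 * politics + 0.05 * arm * politics + rng.normal(0, 1, n)

df = pd.DataFrame({"trust": trust, "arm": arm, "politics": politics})
fit = smf.ols("trust ~ arm * politics", data=df).fit()
print(fit.summary().tables[1])  # inspect the arm:politics coefficient
```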


Subject(s)
COVID-19 , Communication , Trust , Adult , Humans , COVID-19/epidemiology , COVID-19/prevention & control , Language , Social Media , Masks
14.
R Soc Open Sci ; 10(2): 221460, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36756064

ABSTRACT

Open Research aims to make research more accessible, transparent, reproducible, shared and collaborative. Doing so is meant to democratize and diversify access to knowledge and knowledge production, and ensure that research is useful outside of academic contexts. Increasing equity is therefore a key aim of the Open Research movement, yet mounting evidence demonstrates that the practices of Open Research are implemented in ways that undermine this. In response, we convened a diverse community of researchers, research managers and funders to co-create actionable recommendations for supporting the equitable implementation of Open Research. Using a co-creative modified Delphi method, we generated consensus-driven recommendations that address three key problem areas: the resource-intensive nature of Open Research, the high cost of article processing charges, and obstructive reward and recognition practices at funders and research institutions that undermine the implementation of Open Research. In this paper, we provide an overview of these issues, a detailed description of the co-creative process, and present the recommendations and the debates that surrounded them. We discuss these recommendations in relation to other recently published ones and conclude that implementing ours requires 'global thinking' to ensure that a systemic and inclusive approach to change is taken.

15.
Am J Epidemiol ; 192(4): 658-664, 2023 Apr 06.
Article in English | MEDLINE | ID: mdl-36627249

ABSTRACT

Starting in the 2010s, researchers in the experimental social sciences rapidly began to adopt increasingly open and reproducible scientific practices. These practices include publicly sharing deidentified data when possible, sharing analytical code, and preregistering study protocols. Empirical evidence from the social sciences suggests such practices are feasible, can improve analytical reproducibility, and can reduce selective reporting. In academic epidemiology, adoption of open-science practices has been slower than in the social sciences (with some notable exceptions, such as registering clinical trials). Epidemiologic studies are often large, complex, conceived after data have already been collected, and difficult to replicate directly by collecting new data. These characteristics make it especially important to ensure their integrity and analytical reproducibility. Open-science practices can also pay immediate dividends to researchers' own work by clarifying scientific reasoning and encouraging well-documented, organized workflows. We consider how established epidemiologists and early-career researchers alike can help midwife a culture of open science in epidemiology through their research practices, mentorship, and editorial activities.


Subject(s)
Epidemiology , Research Design , Humans , Reproducibility of Results
16.
Brain Commun ; 5(1): fcac322, 2023.
Article in English | MEDLINE | ID: mdl-36601624

ABSTRACT

The replication crisis poses important challenges to modern science. Central to this challenge is re-establishing ground truths or the most fundamental theories that serve as the bedrock to a scientific community. However, the goal to identify hypotheses with the greatest support is non-trivial given the unprecedented rate of scientific publishing. In this era of high-volume science, the goal of this study is to sample from one research community within clinical neuroscience (traumatic brain injury) and track major trends that have shaped this literature over the past 50 years. To do so, we first conduct a decade-wise (1980-2019) network analysis to examine the scientific communities that shape this literature. To establish the robustness of our findings, we utilized searches from separate search engines (Web of Science; Semantic Scholar). As a second goal, we sought to determine the most highly cited hypotheses influencing the literature in each decade. In a third goal, we then searched for any papers referring to 'replication' or efforts to reproduce findings within our >50 000 paper dataset. From this search, 550 papers were analysed to determine the frequency and nature of formal replication studies over time. Finally, to maximize transparency, we provide a detailed procedure for the creation and analysis of our dataset, including a discussion of each of our major decision points, to facilitate similar efforts in other areas of neuroscience. We found that the unparalleled rate of scientific publishing within the brain injury literature combined with the scarcity of clear hypotheses in individual publications is a challenge to both evaluating accepted findings and determining paths forward to accelerate science. Additionally, while the conversation about reproducibility has increased over the past decade, the rate of published replication studies continues to be a negligible proportion of the research. Meta-science and computational methods offer the critical opportunity to assess the state of the science and illuminate pathways forward, but ultimately there is structural change needed in the brain injury literature and perhaps others.
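One plausible implementation of the decade-wise community analysis is sketched below: build a co-citation graph and extract communities by modularity. The toy edge list and the networkx routine are assumptions about how such a pipeline might look, not the authors' actual code.

```python
# Toy sketch of community detection on a paper co-citation graph, in the
# spirit of the decade-wise network analysis described above. The edge
# list is invented; the real study built graphs from >50,000 papers.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

edges = [  # (paper_a, paper_b, times co-cited) -- illustrative only
    ("p1", "p2", 9), ("p1", "p3", 7), ("p2", "p3", 8),   # cluster 1
    ("p4", "p5", 6), ("p5", "p6", 5), ("p4", "p6", 4),   # cluster 2
    ("p3", "p4", 1),                                      # weak bridge
]

G = nx.Graph()
G.add_weighted_edges_from(edges)

communities = greedy_modularity_communities(G, weight="weight")
for i, community in enumerate(communities, 1):
    print(f"community {i}: {sorted(community)}")
```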

17.
Br J Soc Psychol ; 62(4): 1621-1634, 2023 Oct.
Article in English | MEDLINE | ID: mdl-36068662

ABSTRACT

Qualitative data sharing practices in psychology have not developed as rapidly as those in parallel quantitative domains. This is often explained by numerous epistemological, ethical and pragmatic issues concerning qualitative data types. In this article, I provide an alternative to the frequently expressed, often reasonable, concerns regarding the sharing of qualitative human data by highlighting three advantages of qualitative data sharing. I argue that sharing qualitative human data is not by default 'less ethical', 'riskier' and 'impractical' compared with quantitative data sharing, but in some cases more ethical, less risky and easier to manage for sharing because (1) informed consent can be discussed, negotiated and validated; (2) the shared data can be curated by special means; and (3) the privacy risks are mainly local instead of global. I hope this alternative perspective further encourages qualitative psychologists to share their data when it is epistemologically, ethically and pragmatically possible.


Subject(s)
Informed Consent , Privacy , Humans , Information Dissemination , Knowledge
18.
J Law Med Ethics ; 51(S2): 21-23, 2023.
Article in English | MEDLINE | ID: mdl-38433677

ABSTRACT

Kesselheim proposes doubling the NIH's budget to promote clinically meaningful pharmaceutical innovation. Since the effects of a previous doubling (from 1998-2003) were mixed, I argue that policymakers should couple future budget growth with investments in experimentation and evaluation.


Subject(s)
Budgets , Investments , Humans , Empirical Research , Research Design
19.
Eur J Philos Sci ; 12(4): 61, 2022.
Article in English | MEDLINE | ID: mdl-36407486

ABSTRACT

Despite continued attention, finding adequate criteria for distinguishing "good" from "bad" scholarly journals remains an elusive goal. In this essay, I propose a solution informed by the work of Imre Lakatos and his methodology of scientific research programmes (MSRP). I begin by reviewing several notable attempts at appraising journal quality - focusing primarily on the impact factor and development of journal blacklists and whitelists. In doing so, I note their limitations and link their overarching goals to those found within the philosophy of science. I argue that Lakatos's MSRP and specifically his classifications of "progressive" and "degenerative" research programmes can be analogized and repurposed for the evaluation of scholarly journals. I argue that this alternative framework resolves some of the limitations discussed above and offers a more considered evaluation of journal quality - one that helps account for the historical evolution of journal-level publication practices and attendant contributions to the growth (or stunting) of scholarly knowledge. By doing so, the seeming problem of journal demarcation is diminished. In the process I utilize two novel tools (the mistake index and scite index) to further illustrate and operationalize aspects of the MSRP.

20.
JMIR Res Protoc ; 11(9): e41747, 2022 Sep 09.
Article in English | MEDLINE | ID: mdl-36044639

ABSTRACT

BACKGROUND: Trust in science and scientists has received renewed attention because of the "infodemic" occurring alongside COVID-19. A robust evidence basis shows that such trust is associated with belief in misinformation and willingness to engage in public and personal health behaviors. At the same time, trust and the associated construct of credibility are complex meta-cognitive concepts that often are oversimplified in quantitative research. The discussion of research often includes both normative language (what one ought to do based on a study's findings) and cognitive language (what a study found), but these types of claims are very different, since normative claims make assumptions about people's interests. Thus, this paper presents a protocol for a large randomized controlled trial to experimentally test whether some of the variability in trust in science and scientists and perceived message credibility is attributable to the use of normative language when sharing study findings in contrast to the use of cognitive language alone. OBJECTIVE: The objective of this trial will be to examine if reading normative and cognitive claims about a scientific study, compared to cognitive claims alone, results in lower trust in science and scientists as well as lower perceived credibility of the scientist who conducted the study, perceived credibility of the research, trust in the scientific information on the post, and trust in scientific information coming from the author of the post. METHODS: We will conduct a randomized controlled trial consisting of 2 parallel groups and a 1:1 allocation ratio. A sample of 1500 adults aged ≥18 years who represent the overall US population distribution by gender, race/ethnicity, and age will randomly be assigned to either an "intervention" arm (normative and cognitive claims) or a control arm (cognitive claims alone). In each arm, participants will view and verify their understanding of an ecologically valid claim or set of claims (ie, from a highly cited, published research study) designed to look like a social media post. Outcomes will be trust in science and scientists, the perceived credibility of the scientist who conducted the study, the perceived credibility of the research, trust in the scientific information on the post, and trust in scientific information coming from the author of the post. Analyses will incorporate 9 covariates. RESULTS: This study will be conducted without using any external funding mechanisms. CONCLUSIONS: If there is a measurable effect attributable to the inclusion of normative language when writing about scientific findings, it should generate discussion about how such findings are presented and disseminated. TRIAL REGISTRATION: Open Science Framework n7yfc; https://osf.io/n7yfc. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): PRR1-10.2196/41747.
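For the 1:1 allocation described here, a permuted-block randomization sketch follows; block size and seeding are assumptions, as the protocol does not specify the allocation mechanism in this abstract.

```python
# Sketch of 1:1 permuted-block randomization for a two-arm trial.
# Block size and seed are assumptions; the protocol's actual allocation
# mechanism (e.g., the survey platform's randomizer) may differ.
import random

def block_randomize(n_participants: int, block_size: int = 4, seed: int = 7):
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n_participants:
        block = ["intervention", "control"] * (block_size // 2)
        rng.shuffle(block)          # equal arms within every block
        allocations.extend(block)
    return allocations[:n_participants]

arms = block_randomize(1500)
print(arms[:8], "... counts:", {a: arms.count(a) for a in set(arms)})
```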
