Results 1 - 3 of 3
1.
Elife ; 10, 2021 10 19.
Article in English | MEDLINE | ID: mdl-34665132

ABSTRACT

Background: Blinding reviewers to applicant identity has been proposed as a way to reduce bias in peer review.
Methods: This experimental test used 1200 NIH grant applications: 400 from Black investigators, 400 matched applications from White investigators, and 400 randomly selected applications from White investigators. Applications were reviewed by mail in standard and redacted formats.
Results: Redaction reduced, but did not eliminate, reviewers' ability to correctly guess features of applicant identity. The primary, preregistered analysis hypothesized a differential effect of redaction according to investigator race in the matched applications. A set of secondary analyses (not preregistered) used the randomly selected applications from White scientists and tested the same interaction. Both analyses revealed similar effects: standard-format applications from White investigators scored better than those from Black investigators, and redaction cut the size of the difference roughly in half (e.g., from a Cohen's d of 0.20 to 0.10 in the matched applications). Redaction caused applications from White scientists to score worse but had no effect on scores for applications from Black scientists.
Conclusions: Grant-writing considerations and halo effects are discussed as competing explanations for this pattern. The findings support further evaluation of peer review models that diminish the influence of applicant identity.
Funding: Funding was provided by the NIH.
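The scoring gap above is reported as a standardized mean difference (Cohen's d). A minimal sketch of how such a d is computed from two groups of scores; the group sizes, score scale, and values below are illustrative placeholders, not the study's data.

```python
import numpy as np

def cohens_d(a, b):
    """Standardized mean difference between two groups, using the pooled SD."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Illustrative scores only (not study data): 400 applications per group,
# on a scale where a higher score means a worse (less fundable) application.
rng = np.random.default_rng(0)
scores_white = rng.normal(loc=5.0, scale=1.0, size=400)
scores_black = rng.normal(loc=5.2, scale=1.0, size=400)
print(f"Cohen's d = {cohens_d(scores_black, scores_white):.2f}")
```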


Subject(s)
Biomedical Research/statistics & numerical data; Financing, Organized/statistics & numerical data; Peer Review, Research; Research Personnel/psychology; Research Personnel/statistics & numerical data; Humans
2.
Sci Adv ; 6(23): eaaz4868, 2020 06.
Article in English | MEDLINE | ID: mdl-32537494

ABSTRACT

Previous research has found that funding disparities are driven by applications' final impact scores and that only a portion of the black/white funding gap can be explained by bibliometrics and topic choice. Using National Institutes of Health R01 applications for council years 2014-2016, we examine assigned reviewers' preliminary overall impact and criterion scores to evaluate whether racial disparities in impact scores can be explained by application and applicant characteristics. We hypothesize that differences in commensuration (the process of combining criterion scores into overall impact scores) disadvantage black applicants. Using multilevel models and matching on key variables, including career stage, gender, and area of science, we find little evidence that racial disparities emerge in the process of combining preliminary criterion scores into preliminary overall impact scores. Instead, preliminary criterion scores fully account for racial disparities in preliminary overall impact scores, although they do not explain all of the variability in those scores.
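The commensuration question above asks whether applicant race predicts the preliminary overall impact score once the criterion scores are controlled for, with reviewer scores nested within applications. A hedged sketch of what such a multilevel regression could look like with statsmodels' mixed-effects API; the synthetic data and all column names (impact, significance, approach, applicant_black, application_id) are assumptions for illustration, not the study's variables or results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per reviewer-application pair, with the reviewer's
# preliminary overall impact score and five criterion scores on a 1-9 scale.
rng = np.random.default_rng(0)
n_apps, n_rev = 300, 3
app_id = np.repeat(np.arange(n_apps), n_rev)
applicant_black = np.repeat(rng.integers(0, 2, n_apps), n_rev)
criteria = {c: rng.normal(5, 1.5, n_apps * n_rev).clip(1, 9)
            for c in ["significance", "approach", "innovation",
                      "investigator", "environment"]}
df = pd.DataFrame({"application_id": app_id,
                   "applicant_black": applicant_black, **criteria})
df["impact"] = df[list(criteria)].mean(axis=1) + rng.normal(0, 0.5, len(df))

# Random intercept per application, since several reviewers score each one.
# The applicant_black coefficient asks whether race predicts the overall impact
# score beyond what the criterion scores already explain.
model = smf.mixedlm(
    "impact ~ significance + approach + innovation + investigator"
    " + environment + applicant_black",
    data=df, groups=df["application_id"],
)
print(model.fit().summary())
```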

3.
PLoS One ; 10(6): e0126938, 2015.
Article in English | MEDLINE | ID: mdl-26039440

ABSTRACT

The predictive validity of peer review at the National Institutes of Health (NIH) has not yet been demonstrated empirically. It might be assumed that the most efficient and expedient test of the predictive validity of NIH peer review would be to examine the correlation between percentile scores from peer review and bibliometric indices of the publications produced by funded projects. The present study used a large dataset to examine the rationale for such a study and to determine whether it would satisfy the requirements for a test of predictive validity. The results show significant restriction of range in the applications selected for funding. Furthermore, the few applications that are funded despite slightly worse peer review scores are neither selected at random nor representative of other applications in the same score range. The funding institutes also negotiate with applicants to address issues identified during peer review; therefore, the peer review scores assigned to the submitted applications, especially for the few funded applications with slightly worse scores, do not reflect the changed and improved projects that are eventually funded. In addition, citation metrics by themselves are not valid or appropriate measures of scientific impact, and using them on their own would likely worsen the inefficiencies and replicability problems already attributed in large part to the current over-emphasis on bibliometric indices. Therefore, retrospective analyses of the correlation between percentile scores from peer review and bibliometric indices of the publications resulting from funded grant applications are not valid tests of the predictive validity of peer review at the NIH.
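The restriction-of-range point above is that funding only the best-scoring applications truncates the score distribution, which by itself attenuates any correlation between review scores and later outcomes. A small simulation, not taken from the paper (the payline, slope, and noise level are invented), illustrating that attenuation:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
percentile = rng.uniform(0, 100, n)                # peer-review percentile (lower = better)
impact = -0.5 * percentile + rng.normal(0, 30, n)  # invented bibliometric outcome

full_r = np.corrcoef(percentile, impact)[0, 1]

# Fund only the best-scoring ~10% of applications, as a payline would.
funded = percentile < 10
restricted_r = np.corrcoef(percentile[funded], impact[funded])[0, 1]

print(f"correlation across all applications:   {full_r:.2f}")
print(f"correlation among funded applications: {restricted_r:.2f}")
```

With these invented parameters the full-sample correlation is sizeable while the correlation within the funded subset is close to zero, even though the underlying score-outcome relationship is identical in both groups.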


Subject(s)
Databases, Factual; National Institutes of Health (U.S.); Peer Review, Research; Humans; United States