ABSTRACT
The similarity of documents in a large database of published Fractals articles was examined for redundancy. Three different text-matching techniques were applied to the published abstracts to identify redundancy candidates, and the predictions were verified by reading the full-text versions of the candidate articles. A small fraction of the total articles in the database was judged to be redundant. This was viewed as a lower limit, because it excluded cases where the concepts remained the same but the text was altered substantially. Far more pervasive than redundant publications were publications that violated the spirit, rather than the letter, of redundancy. Publication maximization strategies appeared to be widespread: studies that decades ago would have resulted in one comprehensive paper now result in multiple papers that address one major problem but are differentiated by parameter ranges or other stratifying variables. This 'paper inflation' is due in large part to the increasing use of metrics (publications, patents, citations, etc.) to evaluate research performance, and to researchers' motivation to maximize those metrics.
Subject(s)
Bibliometrics; Databases, Factual/statistics & numerical data; Duplicate Publications as Topic; Fractals; Plagiarism; Algorithms; Databases, Bibliographic; Ethics, Research; Humans; Information Storage and Retrieval/methods

ABSTRACT
Three generic types of research impact assessment (RIA) approaches used by the federal government were described: retrospective, peer review, and quantitative methods. Peer review is the method used most frequently. Each method examined has its own shortcomings. A fundamental problem is that many research impact targets exist, including impacts on the research field itself, allied research fields, technology, systems, operations, education, etc. The strength of the research's impact on each of these targets, and the weighting assigned to the value of that impact, depend on the technical, organizational, and personal perspectives of the reviewers. Much of the research evaluation community has come to believe that the simultaneous use of many techniques is the preferred approach. However, there is little evidence of multiple-technique use by the federal government in impact assessment, especially bibliometrics to support peer review. This area is ripe for exploitation.