Abstract
In recent years, several online tools have appeared that can identify potential plagiarism in the scientific literature. While such tools may help to maintain or even increase the originality and ethical quality of published science, no apparent consensus exists among editors on the degree of plagiarism or self-plagiarism necessary to reject or retract a manuscript. In this study, two entire volumes of published original papers and reviews from Basic & Clinical Pharmacology & Toxicology were retrospectively scanned for similarity in anonymized form using iThenticate software, in order to explore measures that predictively identify true plagiarism and self-plagiarism and to potentially provide guidelines for future screening of incoming manuscripts. Several filters were applied, all of which appeared to lower the noise from irrelevant hits. The main conclusions were that plagiarism software offers a unique opportunity to screen for plagiarism easily, but also that it has to be employed with caution, as automated or uncritical use is far too unreliable to provide a fair basis for judging the degree of plagiarism in a manuscript; this remains the job of senior editors. Although a few cases of self-plagiarism were identified that would likely not have been accepted under today's guidelines, no cases of fraud or serious plagiarism were found. Potential guidelines are discussed.
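The filtering idea summarized above (reducing irrelevant similarity hits before an editor reviews them) can be illustrated with a toy sketch. The Python snippet below is purely illustrative and is not the iThenticate workflow used in the study; the n-gram size, the file names, and the specific exclusion rules (quoted passages, the reference list) are assumptions chosen only to show why filtering lowers noise.

```python
# Toy similarity screen: NOT the iThenticate pipeline from the study,
# only an illustration of how exclusion filters reduce irrelevant hits.
import re
from typing import Set, Tuple


def ngrams(text: str, n: int = 8) -> Set[Tuple[str, ...]]:
    """Return the set of word n-grams in a text (n = 8 is an arbitrary choice)."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def apply_filters(text: str) -> str:
    """Rough analogues of 'exclude quotes' and 'exclude bibliography' filters."""
    text = re.sub(r'"[^"]*"', " ", text)  # drop quoted passages
    # drop everything after a "References" heading, if present
    text = re.split(r"\n\s*References\b", text, flags=re.IGNORECASE)[0]
    return text


def similarity(manuscript: str, source: str, n: int = 8) -> float:
    """Fraction of the manuscript's n-grams that also occur in the source."""
    m = ngrams(apply_filters(manuscript), n)
    s = ngrams(apply_filters(source), n)
    return len(m & s) / len(m) if m else 0.0


if __name__ == "__main__":
    # hypothetical file names, for illustration only
    new_paper = open("manuscript.txt").read()
    earlier_paper = open("earlier_paper.txt").read()
    score = similarity(new_paper, earlier_paper)
    # A raw percentage alone cannot establish plagiarism; as the study
    # concludes, an editor still has to inspect the overlapping passages.
    print(f"Filtered similarity: {score:.1%}")
```

Even in this toy form, the filters remove the overlap that any two papers legitimately share (quotations and reference lists), which is the kind of "noise from irrelevant hits" the abstract refers to.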
| Original language | English |
|---|---|
| Journal | Basic & Clinical Pharmacology & Toxicology |
| Volume | 119 |
| Issue number | 2 |
| Pages (from-to) | 161-164 |
| ISSN | 1742-7835 |
| DOIs | |
| Publication status | Published - 1 Aug 2016 |