Papers that lean heavily on retracted studies are drawing fresh scrutiny.
Red flags, citations and plagiarism checkers: how publishers should respond when concerns arise about the research they publish
It’s important that false results do not spread through the scientific literature. No one wants to base their reasoning on false premises. The scientific community does not want researchers, the public or artificial-intelligence systems to rely on flawed data or conclusions, just as no one would accept a medical treatment backed by shaky clinical trials.
Publishers should scale up their efforts to investigate and correct errors. They should take firm responsibility for the articles they have published, and conduct regular checks so that retractions affecting papers in their portfolios do not go unnoticed.
Overall, to facilitate all these steps, publishers must update their practices and allocate more resources to both editorial and research-integrity teams.
Several examples of papers citing retracted work suggest that publishers could do a better job of scrutinizing manuscripts. A review in the journal Frontiers in Oncology cites 20 studies on genetics and ovarian cancer that had been retracted before the review was published. The paper will not be revised or withdrawn, according to a co-author, a pharmacist at the University of Medical Sciences in Iran. The publisher, Frontiers, says it is investigating.
Publishers should also be more open about issues of concern, warning readers when the reliability of a paper’s conclusions has been called into question.
The Committee on Publication Ethics has guidelines for how publishers should identify retracted articles. Most publishers edit the article’s PDF to include a watermark or banner noting the retraction. But this crucial caveat will not appear in any copy downloaded before the retraction took place.
RetractoBot is a tool that alerts scholars when papers they have cited are retracted. The Feet of Clay Detector can be used to check whether the reference list of an article contains any red flags. Checks can be run on literature of interest, from a single article, identified by its title, to entire publisher portfolios.
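For readers who want to automate such a check themselves, the sketch below shows one way to do it in Python, using the public Crossref REST API’s ‘updates’ filter to ask whether a retraction or correction notice points at each cited DOI. It is a minimal illustration of the general idea, not the actual code behind RetractoBot or the Feet of Clay Detector, and the example DOI at the bottom is hypothetical.

```python
# Minimal sketch of a reference-list check in the spirit of the
# Feet of Clay Detector: for each cited DOI, ask the Crossref REST
# API whether any retraction or correction notice points at it.
import requests

CROSSREF_WORKS = "https://api.crossref.org/works"

def update_notices(doi: str) -> list[dict]:
    """Return Crossref records that declare themselves editorial
    updates (retractions, corrections, errata) to the given DOI."""
    resp = requests.get(
        CROSSREF_WORKS,
        params={"filter": f"updates:{doi}", "rows": 5},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

def flag_references(reference_dois: list[str]) -> None:
    """Print a red flag for every cited DOI that has a retraction notice."""
    for doi in reference_dois:
        for notice in update_notices(doi):
            types = [u.get("type", "") for u in notice.get("update-to", [])]
            if any("retract" in t.lower() for t in types):
                print(f"RED FLAG: {doi} is retracted (notice: {notice['DOI']})")

flag_references(["10.1234/example-reference"])  # hypothetical DOI
```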
Publishers are best placed to make impactful changes to their practices and processes. They should routinely check submitted manuscripts for red flags, including plagiarism, artificial-intelligence-generated text and manipulated citations, with the help of automated screening tools.
Reviewers, too, should be aware of red flags such as tortured phrases and possibly machine-generated text. Suspicious phrases that look as if they might be machine-generated can be checked using tools such as the Problematic Paper Screener’s Tortured Phrases detector2.
Cabanac has flagged more than 1,700 papers that caught his eye because of their reliance on retracted work, on his website and elsewhere online. Some authors have thanked him for alerting them to problems in their references. Others argue that it is unfair to effectively cast aspersions on their work because of retractions that occurred after publication and that, they say, do not affect their paper.
Authors should check for any post-publication criticism or retraction when using a study, and certainly before including a reference in a manuscript draft.
Detecting bad research in the scientific literature: human-curated fingerprinting and post-publication comments on PubPeer
Two PubPeer browser extensions are also useful. One automatically flags any paper that has received comments on PubPeer, which can include corrections and retractions, as readers browse journal websites. The other identifies the same articles within a user’s digital library. For local copies of downloaded PDFs, the publishing industry offers Crossmark: readers can click the Crossmark button to check an article’s current status on its landing page at the publisher’s website.
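As a rough illustration, the sketch below scans a folder of downloaded PDFs, extracts a DOI from each file and asks Crossref whether any update notice exists, which is essentially the question the Crossmark button answers. The folder path, the DOI regular expression and the choice of the pypdf and requests packages are assumptions made for the example; this is not Crossmark’s own implementation.

```python
# Sketch: check a folder of downloaded PDFs for update notices
# (such as retractions) recorded in Crossref. Requires the pypdf
# and requests packages; the folder path is an assumption.
import re
from pathlib import Path

import requests
from pypdf import PdfReader

DOI_RE = re.compile(r"\b10\.\d{4,9}/[^\s\"<>]+")

def first_doi(pdf_path: Path) -> str | None:
    """Return the first DOI-like string found in a PDF's opening pages."""
    reader = PdfReader(str(pdf_path))
    for page in list(reader.pages)[:3]:
        match = DOI_RE.search(page.extract_text() or "")
        if match:
            return match.group(0).rstrip(".,;")
    return None

def has_update_notice(doi: str) -> bool:
    """True if Crossref lists any editorial update for this DOI."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"filter": f"updates:{doi}", "rows": 1},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["message"]["total-results"] > 0

for pdf in Path("~/papers").expanduser().glob("*.pdf"):  # assumed folder
    doi = first_doi(pdf)
    if doi and has_update_notice(doi):
        print(f"{pdf.name}: {doi} has an update notice; check the publisher's site")
```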
Cabanac created a tool to comb the literature for such tortured phrases. Each phrase must first be spotted by a human reader and added as a ‘fingerprint’ to the tool, which then screens a corpus of some 130 million scientific documents indexed by Dimensions. More than 5,000 fingerprints have been collected so far. In a third step, humans check the matches for false positives. (Dimensions is in the portfolio of Digital Science, which is part of Holtzbrinck, the majority shareholder in Nature’s publisher, Springer Nature.)
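A toy version of this fingerprint matching can be sketched in a few lines. The three tortured phrases below (‘counterfeit consciousness’ for ‘artificial intelligence’, ‘profound learning’ for ‘deep learning’ and ‘bosom peril’ for ‘breast cancer’) are documented examples; the function name and sample abstract are invented for illustration, and the real screener operates over roughly 130 million documents rather than a single string.

```python
# Toy version of fingerprint-based screening for tortured phrases.
# Each known tortured phrase maps to the established term it mangles;
# the screener searches a text for those fingerprints.
import re

FINGERPRINTS = {
    "counterfeit consciousness": "artificial intelligence",
    "profound learning": "deep learning",
    "bosom peril": "breast cancer",
}

def screen(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, expected term) pairs found in the text."""
    hits = []
    for phrase, expected in FINGERPRINTS.items():
        if re.search(r"\b" + re.escape(phrase) + r"\b", text, re.IGNORECASE):
            hits.append((phrase, expected))
    return hits

abstract = "We apply profound learning to the detection of bosom peril."
for phrase, expected in screen(abstract):
    print(f"Tortured phrase found: '{phrase}' (expected term: '{expected}')")
```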
Cabanac, a research-integrity sleuth, has already created software to flag thousands of problematic papers in the literature for issues such as computer-written text or disguised plagiarism. He hopes that his new detector, which he describes in a Nature article this week, will provide another way to stop the spread of bad research through the scientific literature.
Other avenues exist to question a study after publication, such as commenting on the PubPeer platform, where a growing number of papers are being discussed. The vast majority of comments posted on PubPeer are critical. But the authors of a criticized paper are not obliged to respond, and because publishers do not systematically monitor the platform, journal editors can overlook important issues raised in post-publication comments.
And journal investigations are notoriously slow. The process requires journal staff to mediate a conversation between all parties, a discussion that the authors of the criticized paper are often reluctant to join and that sometimes involves extra data and post-publication reviewers. Investigations can take months or years before the outcome is made public.
Scientists can flag a paper by contacting the editorial team of the journal in which it appeared, although it can be difficult to work out how to raise concerns. Furthermore, this process is typically not anonymous and, depending on the power dynamics at play, some researchers might be unwilling or unable to enter such conversations.
Paper mills have popped up that take advantage of the system. They produce manuscripts based on made-up, manipulated or plagiarised data, sell those fake manuscripts as well as authorship and citations, and rig the peer-review process.
A researcher’s performance metrics, including the number of papers published, citations acquired and peer-review reports submitted, all serve to build reputation and visibility, leading to invitations to speak at conferences, review manuscripts, guest-edit special issues and join editorial boards. A strong record can add weight to job applications, help to attract funding and bring further citations, building a high-profile career. Institutions generally seem happy to host scientists who publish a lot, are highly cited and attract funding.
Article retractions have been growing steadily over the past few decades, soaring to a record of nearly 14,000 last year, compared with fewer than 1,000 per year before 2009 (see go.nature.com/3azcxan and go.nature.com/3x9uxfn).
In January, a review paper1 about ways to detect human illnesses by examining the eye appeared in conference proceedings published by the Institute of Electrical and Electronics Engineers (IEEE) in New York City. The authors had seemingly not noticed that most of the papers it cited had already been retracted.
The review ranks highly on a list, compiled for Nature, of papers ordered by the number of retracted studies they cite. Its authors didn’t respond to requests for comment, but IEEE integrity director Luigi Longobardi says that the publisher didn’t know about the issue until Nature asked, and that it is investigating.
“We are not accusing anybody of doing something wrong,” Cabanac says. Rather, the tool notes that some of a paper’s references have been retracted, which means that the paper may be unreliable. He calls his tool the Feet of Clay Detector, referring to an analogy, originally from the Bible, about statues or edifices that collapse because of their weak clay foundations.
These include engineering researcher Ali Nazari, who was dismissed from Swinburne University of Technology in Melbourne, Australia, in 2019 after a university misconduct investigation into his activities. He has also worked at Islamic Azad University in Saveh, Iran; his current location is unclear. After Nature alerted the publishers of his extant papers2,3 that top Cabanac’s lists, including Elsevier and Fap-Unifesp, a non-profit foundation that supports the Federal University of São Paulo in Brazil, they said that they would look into the articles. One of the journals involved has since been discontinued.
Cabanac’s detector also flags papers4 by Chen-Yuan Chen, a computer scientist who worked at the National Pingtung University of Education in Taiwan until 2014. He was behind a syndicate that faked peer review and boosted citations, which came to light in 2014 after an investigation by the publisher SAGE. Some of Chen’s papers that are still in the literature were published by Springer Nature, which says it hadn’t been aware of the issue but is now investigating. Neither Chen nor Nazari responded to Nature’s requests for comment.
Almost all of the flagged references were retracted after the review had already been published. The corresponding author, Mara Sol Brassesco, says that she sent a revised version of the paper to the journal, which has not published it, and that removing the references did not change the conclusions of the review. The expression of concern, she says, felt like being punished for something the authors could not have foreseen. Jia says that editors felt that adding the notice was the most appropriate action.
A study suggests that authors are reluctant to update reviews even when they are told that the work cites retracted studies. Researchers e-mailed the authors of 88 systematic reviews that cited studies which had since been discredited. The researchers told Nature last year that only 11 of the reviews had been updated.
In the past few years, paper-management tools for researchers have begun to flag retracted papers, but authors are still not routinely alerted when work they have cited is withdrawn. Cabanac thinks publishers could use tools such as his to create similar alerts.
The team has contacted more than 100,000 researchers. DeVito says that a minority of authors are annoyed about being contacted, but others are grateful. They are simply trying to provide a service that reduces the citation of retracted work, he says.