
Some researchers are using ChatGPT to help write papers without declaring it. Credit: Jonathan Rae/NurPhoto via Getty
On August 9, the journal Physica Scripta published a paper that aimed to find new solutions to complex mathematical equations1. It looked genuine, but computer scientist Guillaume Cabanac noticed a strange phrase on the third page of the manuscript: 'Regenerate response'.
The phrase is the label of a button on ChatGPT, a free-to-use AI chatbot that generates fluent text when users ask it questions. Cabanac, a computer scientist at the University of Toulouse in France, promptly posted a screenshot of the page in question on PubPeer, a website where scientists discuss published research.
The authors have since confirmed to the journal that they used ChatGPT to help draft their manuscript, says Kim Eggleton, head of peer review and research integrity at IOP Publishing, Physica Scripta's publisher in Bristol, UK. The anomaly was not detected during two months of peer review (the paper was submitted in May and a revised version sent in July) or during typesetting. The publisher has now decided to retract the paper, because the authors did not declare their use of the tool when they submitted it. "This is a violation of our ethical policies," Eggleton says. Corresponding author Abdullahi Yusuf, who is jointly affiliated with Biruni University in Istanbul and the Lebanese American University in Beirut, did not respond to Nature's request for comment.
‘Tip of the iceberg’
This is not the only case of a ChatGPT-assisted manuscript slipping into a peer-reviewed journal undeclared. Since April, Cabanac has flagged more than a dozen journal articles containing the telltale ChatGPT phrases 'Regenerate response' or 'As an AI language model, I …' and posted them on PubPeer. Several publishers, including Elsevier and Springer Nature, have said that authors may use ChatGPT and other large language model (LLM) tools to help prepare their manuscripts, as long as they declare it. (Nature's news team is editorially independent of its publisher, Springer Nature.)
Searching for key phrases catches only naive undeclared uses of ChatGPT, in which authors forgot to edit out the telltale signs, so the number of undisclosed peer-reviewed papers written with the chatbot's undeclared assistance is likely to be much higher. "This is just the tip of the iceberg," Cabanac says. (The telltale signs change, too: ChatGPT's 'Regenerate response' button became 'Regenerate' in an update to the tool earlier this year.)
Cabanac has discovered typical ChatGPT phrases in a handful of papers published in Elsevier journals. The latest is a paper, published on August 3 in Resources Policy, that explored the impact of e-commerce on fossil-fuel efficiency in developing countries2. Cabanac noticed that some of the equations in the paper didn't make sense, but the giveaway sat above a table: 'Please note that as an AI language model, I am unable to create specific tables or perform tests …'
An Elsevier spokesperson told Nature that the publisher is "aware of the issue" and is investigating it. The paper's authors, from Liaoning University in Shenyang, China, and the Chinese Academy of International Trade and Economic Cooperation in Beijing, did not respond to Nature's request for comment.
A telltale fluency
Papers written in whole or in part by computer software, without the authors disclosing that fact, are nothing new. However, they usually contain subtle but detectable traces, such as specific patterns of language or mistranslated 'tortured phrases', that distinguish them from their human-written counterparts, says Matt Hodgkinson, research integrity manager at the UK Research Integrity Office, headquartered in London. But if researchers delete the boilerplate ChatGPT phrases, the more sophisticated chatbot's fluent text is "almost impossible" to spot, Hodgkinson says. "It's essentially an arms race," he says, "the scammers versus the people trying to keep them out."
Cabanac and others have also found undeclared use of ChatGPT in peer-reviewed conference papers and in preprints, manuscripts that have not been through peer review. When these issues were raised on PubPeer, authors sometimes admitted that they had used ChatGPT, undeclared, to help create the work.
Elisabeth Bik, a microbiologist and independent research-integrity consultant in San Francisco, California, says that the rise of ChatGPT and other generative AI tools will give firepower to paper mills, companies that create and sell fake manuscripts to researchers looking to boost their publication output. "That will make the problem a hundred times worse," says Bik. "I'm very worried that we already have an influx of these papers that we don't even recognize."
Stretched to the limit
The problem of undeclared LLM-produced papers in journals points to a deeper issue: stretched peer reviewers often don't have time to thoroughly check manuscripts for red flags, says David Bimler, who uncovers fake papers under the pseudonym Smut Clyde. "The whole science ecosystem is publish or perish," says Bimler, a retired psychologist formerly at Massey University in Palmerston North, New Zealand. "The number of gatekeepers can't keep up."
ChatGPT and other LLMs have a tendency to spit out false references, which could be a signal for peer reviewers looking for use of these tools in manuscripts, Hodgkinson says. "If the reference doesn't exist, that's a red flag," he says. For example, the website Retraction Watch reported on a preprint about millipedes written using ChatGPT; it was spotted by a researcher whom the work cited, who noticed that its references were fake.
Rune Stensvold, a microbiologist at the State Serum Institute in Copenhagen, ran into the fake-reference problem when a student asked him for a copy of a paper that Stensvold had apparently co-authored with a colleague in 2006. The paper didn't exist. The student had asked an AI chatbot to suggest papers on Blastocystis, a genus of intestinal parasite, and the chatbot had cobbled together a reference bearing Stensvold's name. "It looked so real," he says. "It taught me that when I get papers to review, I should start by looking at the reference section."
Additional reporting by Chris Stokel-Walker.