Thomas G. Dietterich said on 14 May that arXiv authors should face penalties for papers containing hallucinated content, arguing in an X post that every listed author is responsible for a paper’s contents even when text or references were generated with AI.
The immediate flashpoint was a proposed one-year arXiv ban for papers containing hallucinated references or other obvious LLM and generative AI artifacts, a penalty Dietterich said had drawn surprisingly strong backlash.
Dietterich, an emeritus professor at Oregon State University and former president of the Association for the Advancement of Artificial Intelligence, wrote that arXiv’s code of conduct already makes the principle clear. “By signing your name as an author of a paper, each author takes full responsibility for all its contents, irrespective of how the contents were generated,” he said.
That matters because the argument online was not really about whether fake references are acceptable. It was about whether modern research labs are structured in a way that lets senior authors plausibly claim they did not check. In other words: the social norm some commenters defended appears to be exactly the one the proposed penalty is trying to break.
In his post, Dietterich described the sanction as applying to authors and coauthors who publish papers with hallucinated references and “other obvious LLM/Gen AI artifacts.” The post did not spell out a formal evidentiary standard or list all artifacts that would qualify, but the cited examples were fabricated references and visibly uncorrected AI-generated material.
The backlash he highlighted centered on workload and scale. Commenters argued that principal investigators cannot be expected to read every reference a student adds, that some academics publish more than 20 papers a year, and that teams with hundreds of contributors cannot realistically verify every citation. One reply boiled the norm down even further: “Who reads references in depth anyway!?”
Other objections were about abuse, not just practicality. One commenter raised the risk that someone could submit a paper with an adversary’s name attached and get that person banned if arXiv did not verify that every listed author had signed off on the exact submitted version. That is the sort of edge case that sounds contrived until you remember academia runs on email threads, last-minute edits and a surprising amount of trust.
Still, several researchers responding to the discussion said the basic standard was not controversial: authors can use AI tools, but they should verify the output before submission. Some said labs may need stricter internal review processes if LLM-assisted drafting is now normal.
The main caveat is that this was a social media post and not an official arXiv policy announcement. Dietterich referred to a proposed one-year ban, but the source here does not include a formal arXiv notice setting out the final rule, enforcement process or start date.
For now, the most concrete point is narrower than the online argument over hallucinated arXiv papers: Dietterich is publicly tying author credit to full responsibility, and the dispute is now over whether large labs are willing to operate as if that has always been true.
Key Takeaways
- Thomas G. Dietterich said each arXiv author is fully responsible for a paper’s contents, even when AI generated part of them.
- The proposed penalty discussed in his post was a one-year ban for authors and coauthors tied to papers with hallucinated references or other obvious AI artifacts.
- Backlash focused on whether supervisors and large research teams can realistically verify every reference and final edit.
- One commenter warned that any ban system would need protections against fraudulent authorship listings and unsigned submissions.
- The source is a social post, not an official arXiv policy document, so enforcement details remain unclear.
Further Reading
- Thomas G. Dietterich on XCancel: mirror of Dietterich’s May 14, 2026 post on author responsibility and proposed penalties for AI-generated paper artifacts.
- arXiv submission policies: arXiv’s submission guidance and moderation framework.
- arXiv moderation and endorsement: background on how arXiv reviews submissions and handles policy enforcement.
