Wikipedia editors just adopted a new policy to help them deal with the slew of AI-generated articles flooding the online encyclopedia. The new policy, which gives an administrator the authority to quickly delete an AI-generated article that meets certain criteria, isn’t only important to Wikipedia; it’s also an example of how to deal with the growing AI slop problem, coming from a platform that has so far managed to withstand the various forms of enshittification that have plagued the rest of the internet.
Wikipedia is maintained by a global, collaborative community of volunteer contributors and editors, and part of the reason it remains a reliable source of information is that this community takes a lot of time to discuss, deliberate, and argue about everything that happens on the platform, be it changes to individual articles or the policies that govern how those changes are made. It is normal for entire Wikipedia articles to be deleted, but the main process for deletion usually requires a week-long discussion phase during which Wikipedians try to come to consensus on whether to delete the article.
However, in order to deal with common problems that clearly violate Wikipedia’s policies, Wikipedia also has a “speedy deletion” process, where one person flags an article, an administrator checks if it meets certain conditions, and then deletes the article without the discussion period.
For example, articles composed entirely of gibberish, meaningless text, or what Wikipedia calls “patent nonsense,” can be flagged for speedy deletion. The same is true for articles that are just advertisements with no encyclopedic value. If someone flags an article for deletion because it is “most likely not notable,” that is a more subjective evaluation that requires a full discussion.
At the moment, most articles that Wikipedia editors flag as being AI-generated fall into the latter category because editors can’t be absolutely certain that they were AI-generated. Ilyas Lebleu, a founding member of WikiProject AI Cleanup and an editor who contributed some of the critical language in the recently adopted policy on AI-generated articles and speedy deletion, told me that this is why previous proposals on regulating AI-generated articles on Wikipedia have struggled.
“While it can be easy to spot hints that something is AI-generated (wording choices, em-dashes, bullet lists with bolded headers, …), these tells are usually not so clear-cut, and we don’t want to mistakenly delete something just because it sounds like AI,” Lebleu told me in an email. “In general, the rise of easy-to-generate AI content has been described as an ‘existential threat’ to Wikipedia: as our processes are geared towards (often long) discussions and consensus-building, the ability to quickly generate a lot of bogus content is problematic if we don’t have a way to delete it just as quickly. Of course, AI content is not uniquely bad, and humans are perfectly capable of writing bad content too, but certainly not at the same rate. Our tools were made for a completely different scale.”
The solution Wikipedians came up with is to allow the speedy deletion of clearly AI-generated articles that broadly meet two conditions. The first is if the article includes “communication intended for the user.” This refers to language in the article that is clearly an LLM responding to a user prompt, like “Here is your Wikipedia article on…,” “Up to my last training update …,” and “as a large language model.” This is a clear tell that the article was generated by an LLM, and a method we’ve previously used to identify AI-generated social media posts and scientific papers.
Lebleu, who told me they’ve seen these tells “quite a few times,” said that more importantly, they indicate the user hasn’t even read the article they’re submitting.
“If the user hasn’t checked for these basic things, we can safely assume that they haven’t reviewed anything of what they copy-pasted, and that it is about as useful as white noise,” they said.
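To make the first condition concrete: those tell phrases are literal strings, so even a trivial script could surface them for a human reviewer. The sketch below is purely illustrative and is not part of Wikipedia’s tooling; the phrase list and function name are my own assumptions.

```python
# Illustrative sketch only, not Wikipedia's process: flag leftover
# "chatbot talking to its user" boilerplate with simple phrase matching.
import re

# Hypothetical list of phrases an LLM addresses to the person who prompted it,
# which should never appear in encyclopedic prose.
TELL_PHRASES = [
    r"here is your wikipedia article on",
    r"up to my last training update",
    r"as a large language model",
    r"as an ai language model",
]

TELL_PATTERN = re.compile("|".join(TELL_PHRASES), re.IGNORECASE)

def find_llm_tells(article_text: str) -> list[str]:
    """Return any LLM-to-user boilerplate phrases found in the text."""
    return [match.group(0) for match in TELL_PATTERN.finditer(article_text)]

if __name__ == "__main__":
    sample = "Here is your Wikipedia article on the history of tea ceremonies..."
    print(find_llm_tells(sample))  # ['Here is your Wikipedia article on']
```

A match like this only tells a human administrator where to look; the new policy still leaves the deletion decision to a person.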
The other condition that would make an AI-generated article eligible for speedy deletion is if its citations are clearly wrong, another type of error LLMs are prone to. This can include external links to books, articles, or scientific papers that don’t exist or don’t resolve, as well as links that lead to completely unrelated content. Wikipedia’s new policy gives the example of “a paper on a beetle species being cited for a computer science article.”
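The second condition is also partly checkable by machine, at least in principle: a cited link that doesn’t resolve at all is a strong hint, though only a human can judge that a working link points to an unrelated beetle paper. The following is a rough, hypothetical sketch, not Wikipedia’s actual process, and the function name and user agent are assumptions of mine.

```python
# Illustrative sketch only: check whether a citation's URL resolves at all.
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

def citation_resolves(url: str, timeout: float = 10.0) -> bool:
    """Return True if the cited URL answers with a non-error HTTP status."""
    req = Request(url, method="HEAD",
                  headers={"User-Agent": "citation-check-sketch/0.1"})
    try:
        with urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (HTTPError, URLError, TimeoutError):
        # Dead link, nonexistent domain, or timeout: the citation doesn't resolve.
        return False

if __name__ == "__main__":
    print(citation_resolves("https://en.wikipedia.org/wiki/Beetle"))
```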
Lebleu said that speedy deletion is a “band-aid” that can take care of the most obvious cases, and that the AI problem will persist because editors see a lot of AI-generated content that doesn’t meet these new conditions for speedy deletion. They also noted that AI can be a useful tool that could be a positive force for Wikipedia in the future.
“However, the present situation is very different, and speculation on how the technology might develop in the coming years can easily distract us from solving issues we are facing now,” they said. “A key pillar of Wikipedia is that we have no firm rules, and any decisions we take today can be revisited in a few years when the technology evolves.”
Lebleu said that ultimately the new policy leaves Wikipedia in a better position than before, but not a perfect one.
“The good news (beyond the speedy deletion thing itself) is that we have, formally, made a statement on LLM-generated articles. This has been a controversial aspect in the community before: while the vast majority of us are opposed to AI content, exactly how to deal with it has been a point of contention, and early attempts at wide-ranging policies had failed. Here, building up on the previous incremental wins on AI images, drafts, and discussion comments, we workshopped a much more specific criterion, which nonetheless clearly states that unreviewed LLM content is not compatible in spirit with Wikipedia.”