In March 2025, Wikipedia blocked editors from using generative AI to write or substantially rewrite articles, a policy the Wikimedia Foundation and the editing community approved in response to a surge of fabricated content. The ban took immediate effect on English Wikipedia, restricting how editors may deploy large language models.
AI-generated text often contains false claims, known as hallucinations, that violate Wikipedia's core policies of verifiability and neutral point of view. These fabrications threaten the encyclopedia's credibility with the millions of readers who rely on it for accurate information. The policy targets generative tools that compose or rewrite entire passages, not assistive software that corrects existing work.
Under the new rules, editors may still use AI tools for spell checking, grammar polishing, and simple translation. These narrow uses operate on already verified content, reducing hallucination risk while speeding routine tasks. The distinction turns on whether a tool creates new claims or refines established text.
Detecting AI-written edits remains imperfect: OpenAI's own 2023 classifier achieved only a 26% true-positive rate alongside a 9% false-positive rate. Volunteers flag suspicious revisions for manual review, and repeated violations can lead to temporary editing blocks. The Wikimedia Foundation has stated that enforcement depends on community vigilance rather than automated detection.
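Those two rates, taken together, show why automated enforcement is impractical. A minimal sketch of the arithmetic, assuming a hypothetical base rate of 10% AI-written edits (the classifier figures come from the reported numbers above; the base rate is an illustrative assumption, not a Wikipedia statistic):

```python
# Bayes' rule: what fraction of flagged edits would actually be AI-written?
# TPR and FPR are the classifier figures cited in the text; the base rate
# of AI-written edits is a hypothetical assumption for illustration.
tpr = 0.26        # true-positive rate: AI text correctly flagged
fpr = 0.09        # false-positive rate: human text wrongly flagged
base_rate = 0.10  # assumed share of edits that are AI-written

flagged_ai = tpr * base_rate            # AI-written edits that get flagged
flagged_human = fpr * (1 - base_rate)   # human edits wrongly flagged
precision = flagged_ai / (flagged_ai + flagged_human)

print(f"Share of flagged edits actually AI-written: {precision:.0%}")  # ~24%
```

Under this assumed base rate, roughly three out of four flags would hit human-written edits, while the 26% true-positive rate means most AI-written text would slip through unflagged, which is consistent with the Foundation's reliance on human review.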
Other language editions are watching the English Wikipedia rollout to inform their own policies. The Wikimedia Foundation's FY2025-26 plan lists monthly active editor counts as a key metric for deciding on future policy extensions. Editors and readers should watch for updates and continue applying rigorous source checks to preserve the encyclopedia's integrity.





















