Wikipedia’s Existential Threats Feel Greater Than Ever


In 2010, the FBI sent Wikipedia a letter that would be intimidating for any organization to receive.

The missive demanded that the free online encyclopedia remove the FBI’s logo from an entry about the agency, claiming that reproducing the emblem was illegal and punishable with fines, imprisonment, “or both.” Rather than back down, a lawyer for the Wikimedia Foundation, which hosts Wikipedia, shot back a sharp refusal outlining how the FBI’s interpretation of the relevant statute was incorrect and saying that Wikipedia was “prepared to argue our view in court.” It worked—the FBI dropped the matter.

But the spat presupposed a society based on the rule of law, where a government agency would hear a legal argument in good faith rather than overriding it with power. Fast-forward to the present day, and things are very different. Elon Musk has dubbed the site "Wokepedia" and alleged that it's controlled by far-left activists. Last fall, Tucker Carlson devoted an entire 90-minute podcast to railing against Wikipedia as "completely dishonest and completely controlled on questions that matter." And after Republican congresspeople James Comer and Nancy Mace accused Wikipedia of "information manipulation" in a congressional investigation, the foundation replied with a respectful explainer about how Wikipedia works, choosing conciliation over a fight about government overreach. The pragmatic shift reflects a world where the Trump administration selects winners and losers based on political preference.

As the world’s most famous free internet encyclopedia turns 25 today, it’s facing a host of challenges. Forces on the political right have attacked Wikipedia for alleged liberal bias, with the conservative Heritage Foundation going so far as to say that it will “identify and target” the site’s volunteer editors. AI bots have relentlessly scraped Wikipedia’s information, straining the site’s servers. Compounding these issues is the struggle to replenish the project’s volunteer community, the so-called graying of Wikipedia.

Beneath these threats is the foreboding feeling that the culture has drifted away from Wikipedia's founding ideals. Aiming for neutrality, evaluating sources, volunteering for the public benefit, sustaining a noncommercial online project: these concepts seem at best old-fashioned and at worst useless in today's overtly partisan, lawless, antihuman, "greed is good" phase of the internet.

Still, there remains the possibility that Wikipedia’s most influential days lie in its future, assuming it recasts itself inside the crucible.

Bernadette Meehan, Wikimedia Foundation’s new CEO, whose résumé includes stints as a foreign service officer and ambassador, is well poised to meet these attacks, according to chief communications officer Anusha Alikhan. “The diplomacy and negotiation skills are things that I think will lend well to the current environment,” she told WIRED. But even the best diplomat would struggle with the current slate of challenges: The UK has proposed age-gating Wikipedia under its Online Safety Act. In Saudi Arabia, Wikipedia editors have been imprisoned after documenting the country’s human rights abuses on the platform. And the Great Firewall continues to block every version of the site for mainland China.

What’s perhaps more telling is that even inside the Wikipedia community, longtime contributors are worried about its diminishing relevance. In a widely circulated essay, veteran editor Christopher Henner said he fears that Wikipedia will increasingly become a “temple” filled with aging volunteers, self-satisfied by work nobody looks at anymore.

Beyond these ongoing censorship battles, Wikipedia is also struggling to explain why human labor still matters in the age of artificial intelligence. Although nearly every major AI system trains on Wikipedia's freely licensed content, the tech industry's message since 2022 has been that human-powered knowledge production has been rendered irrelevant by AI. Except that's not true. While we are still in the early days of the AI revolution, it seems for now that AI applications perform better when they are trained on human-written and human-vetted information, the kind that comes from human-centered editorial processes like Wikipedia's. When an AI system trains recursively on its own AI-generated synthetic data, it is likely to suffer from model collapse.
