Business

Is any AI safe from your boycott?

OpenAI's Pentagon problem and Anthropic's convenient conscience put Big Tech's "altruism" to the test.

A decade ago, Sam Altman, Elon Musk, and several fellow researchers launched OpenAI. Established as a not-for-profit organisation, its core purpose was to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”

Dario Amodei, an AI researcher in his early thirties, joined the firm the following year. He quickly rose to the rank of VP of Research, only to leave the company in late 2020 with his sister, Daniela, and a small group of senior colleagues following disagreements over leadership’s allegedly lax approach to AI safety.

Sam Altman and Dario Amodei refuse to hold hands at AI summit with Indian Prime Minister Narendra Modi and other major tech figures. Indian Prime Minister's Office, 2026.

The group of OpenAI alumni went on to start Anthropic – which it defined as an “AI safety and research company” – in 2021, with Dario Amodei at the helm as CEO.

From the outset, Amodei sought to place safety at the centre of Anthropic’s mission. The company was established as a public benefit corporation (PBC), meaning it holds a legal responsibility to pursue both profit and a stated public mission – which in Anthropic’s case was responsibly developing and maintaining advanced AI “for the long-term benefit of humanity.”

Following pressure from investors, and after experimenting with a hybrid “capped profit” structure from 2019, OpenAI opted for the same PBC structure in late 2025.

Despite both firms hard-coding “the greater good” into their governance structures, the eye-watering pace at which their models have progressed – and the rising financial pressures they face as they race to turn a profit and go public – are putting this altruism to the test. And so far, it looks like OpenAI’s Sam Altman is failing.

A boycott campaign called “QuitGPT” has been gaining ground since reports emerged that OpenAI’s chatbot was being deployed to support ICE’s mass deportation campaign, and that the company’s President donated $25 million to MAGA Inc., President Donald Trump’s main political fundraising body.

A deal struck between OpenAI and the Pentagon in March allowing for the use of OpenAI’s technology in classified US military operations brought yet another unwelcome spotlight onto the company. Critics have voiced concerns that the partnership could supercharge autonomous weapons systems and mass surveillance of US citizens.

Despite Altman’s quick amendments to the deal to implement safeguards against these concerns materialising, the damage was already done. Even he seemed to concede as much, admitting in a memo to employees that the hasty change made OpenAI look “opportunistic and sloppy,” and allegedly stating separately that the company could not fully control how the Pentagon used its technology.

In contrast to Altman’s embarrassing scramble, Amodei made Anthropic’s refusal of the deal clear from the outset, stating its belief that “using these systems for mass domestic surveillance is incompatible with democratic values.” The Trump administration responded furiously by declaring the company a supply chain risk to national security, a designation that has never been used against an American company before.

This debacle is unfolding as both firms continue their race to IPO later this year. Anthropic’s Claude initially seemed a meek contender to trailblazing ChatGPT, but it has managed to pinch market share from the incumbent in recent months. Analysts credit the success of its coding agent for businesses, Claude Enterprise, and its lucrative off-take agreements with the likes of Nvidia and Microsoft, with bringing in much-needed cash as the broader AI sector vies to finally turn a profit. Meanwhile, OpenAI this year resorted to running ads to generate revenue – which Altman once labelled a “last resort” – and is chipping away at its “side quests,” including its generative video app, Sora.

In the fallout of the Pentagon deal, Claude seized the top spot on Apple’s App Store. The #QuitGPT campaign itself estimates that 4 million people have “taken action as part of the boycott.”

In a poll shared to Felix’s Instagram story, two-thirds of respondents said the Pentagon deal pushed them to abandon ChatGPT – with two-thirds of those switchers opting for Claude instead. Google’s Gemini captured 19%, Perplexity 4%, and other providers the remaining 10%.

But is Anthropic really the safe and altruistic alternative it claims to be?

Sure, Amodei’s refusal of Trump’s deal made for effective virtue-signalling, and he has publicly endorsed the need for better regulation of AI safety (including by donating $20 million to a group lobbying for this purpose). He has openly shared his concerns about AI leading to widespread job losses, increasingly autonomous weaponry, and more empowered authoritarian regimes.

But just this month, Anthropic launched Mythos, a coding and cybersecurity model, to a small selection of major tech companies. The model reportedly found “thousands” of major vulnerabilities across even the most established operating systems. The launch has sparked panicked meetings in boardrooms worldwide, as executives fear existential cybersecurity inadequacies for companies and even the “entire banking system.”

Responding to concerns, Amodei posted on X: “The dangers of getting this wrong are obvious, but if we get it right, there is a real opportunity to create a fundamentally more secure internet and world than we had before the advent of AI-powered cyber capabilities.”

The worry is that Amodei’s words mean as little in reality as Anthropic and its competitor’s PBC status – which legal experts say lacks real enforcement power to ensure the greater good and profit really do go hand-in-hand. Claude might not be the upstanding philanthropist you think he is.



From Issue 1896

24 April 2026
