AI Companies Learn the Word No

2026-05-04 10:00 • Katherine Mangu-Ward




An illustration of Dario Amodei and Pete Hegseth | Illustration: Algi Febri Sugita/ZUMAPRESS/Jen Golbeck/SOPA Images/Sipa USA/BONNIE CASH/UPI/Newscom/Tech Crunch/Wikimedia Commons


One of the more encouraging developments in artificial intelligence is that some of the people building it have started acting like it might be dangerous. Not in the Skynet sense or the HAL 9000 sense or even the "oops, it deleted all my emails" sense, though AI might be dangerous in all of those ways too. The worry here is more concrete: that the latest models are dangerous to infrastructure, to privacy, to security, and to the blurry line between public and private. For years, Big Tech has been heavy on the gas, light on the brakes—and we have all benefited tremendously, even as angry debates about the downsides have raged. But with AI, at least in a few notable cases, the companies themselves have begun doing something unusual. They have started saying no.


Anthropic has announced that it will not broadly release Claude Mythos Preview, a frontier model that it says has already found "thousands of high-severity vulnerabilities," including in every major operating system and web browser. Instead, it is confining access to a consortium it calls Project Glasswing, which includes Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, Palo Alto Networks, and other organizations that build or maintain critical software infrastructure. Anthropic says the point is defensive: to use the model to find and patch catastrophic flaws before less scrupulous actors get their hands on similar capabilities.


There is no shortage of self-interest in the decision to launch Project Glasswing. These companies would rather be seen as stewards than as Visigoths. Delaying general release will also slow the cycle of copycatting by rivals. But in an industry that spent years insisting that every new capability had to be shipped immediately because progress was inevitable, it is genuinely notable to see a major player conclude that a sufficiently powerful model should not simply be tossed into the public square.


***


The same instinct showed up, more dramatically, in Anthropic's recent fight with the Pentagon. The company has publicly said it maintains only two "narrow exceptions" to military use of its models: mass domestic surveillance and fully autonomous weapons. On surveillance, Anthropic CEO Dario Amodei argued that AI makes it possible to turn commercially available data into "a comprehensive picture of any person's life—automatically and at massive scale." On autonomous weapons, he said today's frontier systems are "not reliable enough" to "take humans out of the loop entirely and automate selecting and engaging targets."


Like nearly every other major tech company, Anthropic remains perfectly happy to assist the government with most of its run-of-the-mill murder and destruction. But it is notable, and praiseworthy, that the company insisted on some contractual limits on how its product is used.


The response reflected the Pentagon's chaotic, vengeful new normal. The military insisted it would contract only with AI companies willing to accept "any lawful use" and remove those safeguards. When Anthropic refused, Defense Secretary Pete Hegseth designated the company a risk to national security, a label that blocks Pentagon contracts and could widen into a broader blacklist. Anthropic sued, arguing the move was retaliation. The status as of mid-April is messy: A California federal judge blocked one set of punitive actions and forced the government to remove the stigmatizing labels, while on April 8 the D.C. Circuit declined, for now, to pause the Pentagon's separate supply-chain-risk designation as litigation continues.


The California judge, Rita Lin, had it right when she wrote: "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government."


***


A great many people, on both the right and the left, have spent the past two years demanding that somebody, somewhere, place meaningful limits on AI. Now a major company has done exactly that, and it turns out many of the same people are uncomfortable with what limits actually look like in practice.


Project Glasswing is, after all, a cartel. There are many ways such an arrangement might go wrong. There is every reason to worry that the major AI players could use "safety" language as a way to consolidate their own power and freeze out smaller competitors. Antitrust regulators must be salivating at the prospect. But one can also imagine a far worse alternative, in which we wait for some combination of Congress, the Federal Trade Commission, the Commerce Department, the European Union, and 17 blue-state attorneys general to act.


The informal coordination of the major players may, for a time, be the best bet. It is more flexible, more reversible, and more tightly connected to the people who actually understand the technology.


This new phase in the story of AI arrives at a strange moment. As this issue goes to press, the United States and Iran have agreed to a two-week ceasefire after a six-week war that killed thousands and disrupted global trade and energy markets. Everything is uncertain, and that's one reason the Pentagon AI fight matters so much: It is happening in the middle of an actual war, with actual stakes, as AI becomes ever more entangled with military planning, intelligence, cyber operations, and state power. A question hangs over all of it: Was this the first AI war, or the last war of a previous age?

