What if Anthropic Has Found a Better Business Model than Selling Tokens?
As the frontier lab builds ever more powerful models, every major company might soon be required to pay for model previews.
Imagine Anthropic releases a model so capable that, before it goes public, the company quietly contacts Google, Microsoft, OpenAI, Meta, and a handful of critical infrastructure operators. The message is polite, clinical, almost bureaucratic in tone. It goes something like this:
We have identified that our forthcoming model poses novel security risks to your systems. We would like to offer you a structured preview period to assess and remediate vulnerabilities before general availability.
On its face, this sounds like responsible disclosure: the same norm that governs how security researchers handle zero-day exploits. You find a flaw, you tell the vendor privately, you give them time to patch. Responsible. Mature. Ethical, even.
But look at it from a slightly different angle, and the same transaction starts to resemble something your local crime novelist would recognize immediately. A powerful party has created a threat. They are offering to delay the threat — for a fee, or a relationship, or both. They will be reasonable about it. They are always reasonable about it.
“They have created a problem only they fully understand, and they are selling the map to the minefield.”
The peculiar economics of frontier risk
Here is what makes this scenario structurally interesting: it is not obviously wrong. Anthropic, in this hypothetical, is doing something genuinely useful. If their model can find and exploit novel attack surfaces that no existing tool can detect, then yes, Google probably does want to know about that before the model is deployed to 100 million users. The preview has real value. The question is who captures that value, and under what terms.
To be clear: At the moment, Anthropic is providing this security service for free! (WSJ)
The closest analogy in the existing economy is the cybersecurity industry’s relationship with vulnerability research. Firms like CrowdStrike or Mandiant generate revenue partly because the threat environment is perpetually escalating. They do not cause the threats, but they do have a structural incentive to operate in a world where threats are real and scary. A world of perfect security would bankrupt them. They are in the business of navigating danger, which means danger is a precondition for their business model.
Anthropic’s position in this hypothetical is more extreme, because they would be the threat’s proximate cause. They built the model. They created the asymmetric information: they know what it can do, and for a brief, commercially valuable window, nobody else does. The security preview is a product born entirely from that asymmetry. They have created a problem only they fully understand, and they are selling the map to the minefield.
How the pricing conversation would actually go
Let’s be concrete. Suppose Anthropic sells enterprise-tier access to a 90-day security preview. Microsoft pays. Google pays. Amazon, whose AWS infrastructure is plausibly vulnerable, pays. What exactly are they buying?
They are buying time. They are buying the ability to audit their systems against a model whose capabilities they could not otherwise anticipate. They are buying the assurance that when Claude-X goes public, they will not be the company whose flagship product gets pwned on day one, live on Hacker News, with a screenshot of the model casually defeating their authentication layer.
That is an extraordinarily high-value purchase. The willingness-to-pay is enormous. And Anthropic, if they wanted to, could set the price accordingly.
“No one is being threatened. No one is saying: pay us or we will release this. They are just setting a price for something extremely valuable that they happen to uniquely possess.”
Now here is the critical thing: this would be, in a narrow legal and commercial sense, entirely legitimate. No one is being threatened. No one is saying: pay us or we will release this. Anthropic is going to release it regardless. They are just setting a price for something extremely valuable that they happen to uniquely possess: advance knowledge of a new class of risk. The coercion, if it exists at all, is structural rather than explicit. The alternative to paying is not punishment. It is merely exposure.
The grammar of extortion, without the crime
Extortion, in its traditional legal form, requires a threat: pay me or I will do something harmful to you. What I am describing does not meet that threshold. Anthropic is not threatening to attack Google. They are offering Google the chance to prepare for a risk that exists independently of the commercial relationship. The harm is not contingent on the negotiation.
And yet the deep grammar of the transaction is the same. There is a powerful party. There is a weaker party that needs something the powerful party controls. The price of access is set entirely by the powerful party, because the powerful party is the only one who knows the full shape of the threat. The weaker party cannot shop around, cannot get a second opinion, cannot credibly refuse. Paying is just obviously the correct decision.
This is what sophisticated rent-extraction looks like in a world of genuine knowledge asymmetry. It does not need to be illegal. It does not need to be cruel. It just needs to be structured correctly, and the structure is provided, for free, by the fact that Anthropic would be the only entity in the world that truly understands what their own model can do.
The safety mission as commercial moat
There is one more twist that makes this scenario genuinely strange: Anthropic’s entire public identity is built around being the responsible actor. Their stated mission is the safe development of AI for the long-term benefit of humanity. They publish safety research. They talk openly about existential risk. They have structured themselves as a public benefit corporation.
And in this hypothetical, every single one of those things would be true and also, simultaneously, the mechanism by which they extract billions from their competitors. The safety mission is not a cover. It is the product. The seriousness with which Anthropic takes catastrophic risk is precisely what makes the security preview worth purchasing. If you did not believe the model was genuinely dangerous, you would not pay to preview it.
This is not cynicism about Anthropic specifically. It is an observation about the position that any sufficiently dominant frontier lab eventually occupies. Being the most careful actor in the most dangerous industry is, structurally, a form of market power. The more seriously the world takes your warnings, the more valuable your reassurances become. The more dangerous your technology, the higher the premium on access to your safety apparatus.
“Good intentions and extractive power are not mutually exclusive. They can be the same thing, dressed in different language depending on who is holding the pen.”
Nobody has to be villainous for this to be a problem
The scenario I have described does not require anyone at Anthropic to be acting in bad faith. The engineers running the security previews might be deeply motivated by genuine concern. The executives setting the pricing might believe, correctly, that they are doing the world a favor by giving companies time to prepare. The lawyers might have constructed the arrangement with careful attention to every applicable regulation.
None of that changes the basic shape of what is happening, which is this: a private company has accrued so much leverage over critical infrastructure through the act of building something extremely powerful that they can charge for the privilege of not being caught off guard by what they built. Good intentions and extractive power are not mutually exclusive. They can be the same thing, dressed in different language depending on who is holding the pen.
The question worth asking, before we arrive at the moment this becomes real, is whether we want this power concentrated in any single private entity, however well-intentioned, or whether the advance knowledge of civilizational-scale risk should be, in some meaningful sense, a public good. Because once the model exists, and the previews begin, that question will already have been answered for us.
This is a speculative essay exploring the structural incentives of AI frontier development. It does not describe any actual Anthropic product, policy, or intention. It describes a hypothetical that the industry’s current trajectory makes worth thinking about now, rather than later.

Since the frontier AI labs have a massive incentive to protect their own infrastructure, would they pay for (and get) early access to each other's models as well?
Would Google, for example, pay for a preview of Claude, and OpenAI for a preview of Gemini?
Might this even speed up the flywheel of AI development, since the labs would be sharing information at an earlier stage?