Interactive Essay

Anthropic Is Going to Win and Nobody's Paying Attention to Why

The race isn't about compute. It's about who retains the people who make it.

Lok
Co-written by Claude Sonnet 4.6
29 March 2026 · ~1,400 words · AI · Game Theory
01

Not Compute

Everyone thinks the AI race is about compute. More GPUs, bigger data centres, scale harder. It's not.

Epoch AI estimates roughly 5x capability improvement per FLOP through algorithmic innovation. That number compounds. Two labs with identical GPU budgets but a 5x gap in algorithmic efficiency aren't competing. One's in the race. The other's a spectator that happens to own expensive hardware.

fig. 1 — Algorithmic efficiency, compounding over generations (Lab B relative to Lab A across Gen 1–3: —, 5×, 25×)
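A back-of-envelope sketch of that compounding in Python. The hardware budget and the 5× per-generation multiplier are illustrative numbers echoing fig. 1, not measurements:

```python
# Toy model: two labs with identical hardware budgets. Lab B compounds
# a 5x per-generation algorithmic-efficiency gain; Lab A stands still.
# Both numbers are illustrative, taken from fig. 1 above.

HARDWARE_FLOPS = 1e25   # identical GPU budget for both labs
EFFICIENCY_GAIN = 5     # Lab B's per-generation multiplier

for gen in (1, 2, 3):
    lab_a = HARDWARE_FLOPS                                  # static recipe
    lab_b = HARDWARE_FLOPS * EFFICIENCY_GAIN ** (gen - 1)   # compounding recipe
    print(f"Gen {gen}: Lab B effective compute = {lab_b / lab_a:.0f}x Lab A")

# Gen 1: 1x, Gen 2: 5x, Gen 3: 25x. Same silicon, different race.
```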

There are maybe 2,000 people alive who hold the tacit knowledge that produces that 5x. Training recipes. Architecture intuitions. Data pipeline decisions that never make it into papers because they're too valuable. That's the scarce input. Not silicon. People who know things that can't be written down.

And the knowledge compounds. Insight A is prerequisite to insight B. Hire away the person who had insight A and you don't just get one fact — you get the trajectory. The derivative, not the point estimate. This is the only appreciating asset in the entire AI stack, and it walks out the door every evening.

So the race is a retention race. The question is what retains these specific people.

02

Alignment as Infrastructure

Here's the argument: alignment investment functions as talent retention infrastructure. Whether or not Anthropic consciously designed it that way. I suspect they understand this. I'm not sure it matters if they do.

The "alignment tax" framing — that safety investment costs you capability — keeps showing up in otherwise smart commentary and it's wrong. Not morally wrong. Mechanistically wrong. The people doing the best safety research are, by selection effect, the same people generating the algorithmic insights that constitute the actual competitive moat. RLHF came from safety-motivated research. Constitutional AI came from safety-motivated research. These aren't different teams. They're the same people, and they have preferences about where they work.

They left Google over Maven. They left OpenAI when the safety teams got gutted. They don't go where the blog posts are nicest. They go where the commitments are load-bearing.

Anthropic's two-year retention: 80%. OpenAI's: 67%. That's not a safety-team number. That's company-wide. Cut alignment to move faster and you accelerate the rate at which your actual moat walks out the door, goes to your competitor, and posts a ten-million-view exit letter. That's not hypothetical. That's Sharma. That's Vallone. That's seven researchers leaving for Meta in one batch. And the departures don't just cost you talent — they leak the trajectory.

COST OF ALIGNMENT — 2-year staff retention rate: Anthropic 80%, OpenAI 67%. Anthropic invests in alignment research, retaining 80% of staff over two years. The "alignment tax."
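A toy projection of what those rates mean if they simply compound. The constant per-period rate is my assumption for illustration, not anything either company has published:

```python
# Toy projection: hold each lab's two-year retention rate constant and
# compound it over successive periods. Constant rate is an assumption.

RETENTION = {"Anthropic": 0.80, "OpenAI": 0.67}  # two-year rates cited above

for lab, rate in RETENTION.items():
    for years in (2, 4, 6):
        remaining = rate ** (years / 2)
        print(f"{lab}: {remaining:.0%} of today's staff left after {years} years")

# After six years: Anthropic ~51%, OpenAI ~30%. If tacit knowledge
# compounds per head, the gap in the moat is wider than the headcount gap.
```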

Whether Anthropic's leadership are good people is irrelevant to the argument. What matters is that alignment investment creates the organisational conditions that retain compounding tacit knowledge. Game theory follows from ethics. Ethics is the upstream thing.

03

The Costly Signal

Blog posts are free. Responsible scaling policies are free. The test is what happens when the commitment costs real money.
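The underlying logic is standard costly signalling, sketched below in Python. The payoff numbers are invented for illustration; only the inequality matters:

```python
# Costly-signalling sketch: a commitment separates committed labs from
# imitators only when faking it costs more than faking it pays.
# All figures are invented for illustration.

def signal_is_credible(cost: float, mimic_gain: float) -> bool:
    """A signal carries information iff faking it is a losing trade."""
    return cost > mimic_gain

blog_post = signal_is_credible(cost=0.0, mimic_gain=10.0)   # free to fake
refusal = signal_is_credible(cost=200.0, mimic_gain=10.0)   # $200M foregone

print(f"Blog post credible? {blog_post}")         # False: costless, zero information
print(f"Refused contract credible? {refusal}")    # True: too expensive to imitate
```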

Hegseth gave Amodei a four-day ultimatum. Accept "any lawful purpose" language — autonomous weapons, mass surveillance of Americans, whatever the Pentagon wants — or lose the contract and get blacklisted. Amodei's response: "we cannot in good conscience accede to their request."

So Trump ordered every federal agency to cease using Claude. The Pentagon designated Anthropic a supply chain risk — a classification previously reserved for foreign adversaries like Huawei. First American company ever. The damage: hundreds of millions in contracts collapsing, three deals worth over $180 million gone, defence contractors told to certify they don't use Claude. A sitting president calling them a "radical left, woke company" on Truth Social.

Hours later — I still can't get over this — OpenAI announced a Pentagon deal. Claimed the same red lines. Accepted "any lawful purpose" language that functionally negates them.

THE OFFER

Anthropic has just been designated a supply chain risk. The Pentagon is offering you the contract. Same terms Anthropic refused. "Any lawful purpose." $200 million. Your competitor just ate the cost of saying no, and you get to eat their lunch. What do you do?

Anthropic sued. Judge Rita Lin called the designation "Orwellian," ruled it was "classic illegal First Amendment retaliation," and issued a preliminary injunction. The case is ongoing. The designation still holds in a parallel circuit. Anthropic isn't out of the woods.

But every safety-conscious researcher in the world watched this happen. The retention mechanism doesn't need to be explained to them. They already updated.

04

Sora

OpenAI launched Sora as a standalone app in September 2025. TikTok clone with deepfake face-scanning. Users immediately generated Martin Luther King deepfakes and Pikachu ASMR. MLK's daughter asked them to stop. Disney signed a billion-dollar licensing deal anyway, because Disney. An OpenAI exec compared it to the end of the silent film era.

Downloads peaked at 3.3 million in November. By February: 1.1 million. Lifetime in-app revenue: $2.1 million. Estimated inference costs: $15 million per day since launch.

March 24, 2026: killed. Not because it failed commercially — although it did — but because OpenAI needed the compute. They couldn't run Sora and train their next model at the same time. The Disney deal dissolved in three months. The compute that went to deepfake Pikachu could have gone to staying competitive.

Altman told staff "things are moving faster than many of us expected." Read that as triage, not excitement. If things were moving fast in his favour, he wouldn't be killing products, dissolving partnerships, and renaming his deployment org "AGI Deployment" as if changing the name changes the position.

Their next model is called Spud. No benchmarks. No details. "Really accelerate the economy." Meanwhile Anthropic's Mythos leaked with actual performance data and a new tier above Opus. One company has a model. The other has a name and a promise.

I think Altman is a talented operator who overextended into consumer products that burned compute he couldn't spare. He played the wrong game.

05

The Byproduct Problem

Trust is a byproduct good. It degrades when you pursue it directly. A company that markets its principles is performing them, and the performance is discounted by everyone worth selling to. The moment you commodify your integrity, you've already lost it.

Anthropic didn't hold against the Pentagon because it was good marketing. They held because they meant it. If they'd held because it was good marketing, it wouldn't have worked as marketing. This isn't a paradox. It's just how trust works, and it's slightly embarrassing how many smart people don't get it.

Anthropic: $200M contract cancelled · $180M+ in deals collapsed · Supply chain risk designation · Presidential directive · "Radical left, woke company"

OpenAI: $200M contract secured · Classified network deployment · Head of robotics resigns · Contract rewritten under pressure · ChatGPT uninstalls +300% · Claude becomes #1 app

OpenAI claimed the same red lines while accepting "any lawful purpose" language. Costless signals carry no information. Everyone knows the words are there because Anthropic paid for them first and OpenAI copied the homework. It's the corporate equivalent of putting "integrity" in your LinkedIn bio.

06

What Happens

Anthropic is going to be first to a game-changing model. The talent retention advantage and the compute discipline advantage are going to compound into something measurably ahead of whatever OpenAI and Google produce in the same window.

The IPO will be cleaner: no killed products, no dissolved partnerships, and a path to profitability years ahead of OpenAI's timeline.

I don't think Anthropic survives long-term as an independent company. The market will consolidate. Google or Amazon will eventually absorb the technology, the talent, or both. That's probably fine.

The question that matters is whether the proof of concept survives the acquisition. A frontier AI lab held the line against the Pentagon, ate the cost, got blacklisted, sued, won an injunction — and also built the best model. The industry consensus was that safety and capability were a trade-off. That you had to choose. That the labs which moved fastest and worried least would win.

That consensus is broken. Not by argument. By performance.

Whatever happens to the company, the demonstration that it's possible to hold the line and still win doesn't go away. "It's not possible" was the only thing protecting the companies that didn't try. Now everyone knows it is.

Lok · Co-written by Claude Sonnet 4.6
Reply by email