Theo - t3.gg · Primagen is (mostly) right about AI
TL;DR
AI pricing restrictions from Anthropic, Microsoft, and Google stem from compute scarcity, not profit-squeezing — a nuance Primagen's otherwise accurate analysis missed.
Key Points
1. Primagen's core argument is correct but misses the compute angle. His video about the AI economy breaking down is largely right — pricing models are unsustainable — but he frames it as a money problem rather than a compute scarcity problem.
2. The subsidy collapse began with Cursor in mid-2024, not recently. Cursor had to abandon message-based pricing because some users burned $10 of inference while others burned thousands — and Cursor, unlike the labs, pays cash to Anthropic and OpenAI per request.
3. Anthropic's Claude Code pricing change is about reclaiming compute, not upselling users. Restricting Claude Code on the $20 plan mirrors GitHub Copilot pausing signups entirely — both signal compute exhaustion, not revenue extraction.
4. Some heavy users cost Anthropic money on electricity alone, before counting any other expense. API inference costs labs roughly 15–20% of what users pay, but a $200/month subscriber consuming up to $5,000 of compute puts Anthropic at a loss even on electricity by itself.
5. Pre-training vs. post-training cost structure explains why model updates vary wildly in expense. Opus 4.5 likely cost nearly $1 billion in pre-training; subsequent versions (4.6, 4.7) were cheaper post-training refinements — so not every model drop represents a billion-dollar bet.
6. Microsoft's Copilot repricing reflects compute reservation for enterprise, not greed. GPT-4.5 is rated at 1x messages while GPT-5 is 7.5x — not because GPT-5 costs 7.5x more to run, but because Microsoft is rationing GPU availability it needs to sell to Fortune 500 clients.
7. Google is actually the most aggressive subsidizer and was the first to restrict — not a safe harbor. Google runs free AI Overviews for signed-out users and included Claude Opus 4.5 access in Google One subscriptions, then banned developers building Antigravity plugins when compute demand exploded.
8. The cost of intelligence is dropping fast even as frontier token prices rise. GPT-5 medium matched GPT-4.5's intelligence score while costing less than half as much ($1,200 vs. $2,850 per benchmark run); GPT-5 low scored higher than DeepSeek V4 using only 7 million tokens vs. DeepSeek's far higher token count.
9. Enterprise users pay full API rates — consumer subscription economics are irrelevant to them. Uber reportedly burned a full year's AI budget in 4 months at API prices; engineers at some companies spend more on inference than their own salaries, and Anthropic's consumer ToS explicitly prohibits commercial use on Pro/Max plans.
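The unit economics in points 4 and 8 can be sanity-checked with simple arithmetic. This is an illustrative sketch, not anything from the video: the `subsidy_ratio` helper and variable names are my own, and every dollar figure is taken from the points above.

```python
# Back-of-the-envelope arithmetic for the figures cited in the key
# points. The numbers come from the summary; the helper is illustrative.

def subsidy_ratio(monthly_price: float, compute_used: float) -> float:
    """Dollars of compute consumed per dollar of subscription paid."""
    return compute_used / monthly_price

# Point 4: a $200/month subscriber burning up to $5,000 of compute.
heavy_user = subsidy_ratio(200, 5_000)
print(f"heavy user consumes {heavy_user:.0f}x what they pay")  # 25x

# Point 4 also puts API inference cost at ~15-20% of list price, i.e. a
# 5x-6.7x margin on API traffic -- nowhere near enough to offset a 25x
# subsidy on the heaviest subscribers.
margin_low, margin_high = 1 / 0.20, 1 / 0.15
print(f"API margin: {margin_low:.1f}x to {margin_high:.1f}x")

# Point 8: GPT-5 medium matched GPT-4.5's score at $1,200 vs $2,850 per
# benchmark run -- under half the cost, as claimed.
print(f"frontier-match cost ratio: {1_200 / 2_850:.0%}")  # 42%
```

Even taking the most lab-friendly numbers here, the gap between a 5–6.7x API margin and a 25x subscriber subsidy is why flat-rate plans get restricted first.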