The AGI Dream is Held Hostage by a Spreadsheet
I'll be honest, I've started scrolling past the AGI timeline posts. Not because the people writing them are wrong, but because there's a quieter, more grounded question nobody seems to want to sit with. Which is: do these companies actually have enough runway to build this thing?
Anthropic is spending around $12 billion just on training their models this year. That's before servers, salaries, or any of the boring infrastructure that keeps the whole thing alive. They've pushed their "we're finally not losing money" date back to 2028. And this isn't a small company scraping by - it's one of the best-funded companies in the world. Still, structurally, there's a clock running.
OpenAI crossed $7 billion in operating costs in 2024. They just raised $110 billion, which sounds enormous until you see what goes out the door every year. Across the whole industry, companies are planning to spend over $650 billion on infrastructure this year alone. Some are quietly cancelling stock buybacks just to pay the bills. The spending is growing faster than the revenue, and that gap is the part of this story that doesn't get talked about enough.
So when I think about whether AGI is five years away or fifteen, I'm not really thinking about benchmarks. I'm thinking about which of these companies is still around when it matters.
The way they stay around is pretty straightforward, actually.
One - make the product so useful and so woven into how people work that nobody wants to leave. That's what the newer Claude models are quietly doing. It's not just "better outputs." It's building habits. Getting a developer's whole workflow running through it. Getting a team comfortable enough that switching feels like a real cost. That's what turns a model into a business.
Two - start charging real money for things people genuinely need. Anthropic launched a code review tool inside Claude Code this week. It reads your pull requests, understands what changed, and flags the parts that look off. They're charging $15 to $25 per review - not per month, per review. And honestly, for the people buying it, the math works. If your team is shipping more code than ever and some of it has quiet bugs baked in, you'll pay $25 to catch them before they hit production. It's a fair trade. And it's how the lights stay on.
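To see why "the math works," here's a back-of-the-envelope sketch. The $25 price comes from the article; every other number (the bug rate, the catch rate, the cost of a production bug) is a hypothetical assumption for illustration, not real data:

```python
# Back-of-the-envelope check on per-review pricing.
# Only the $25 review price is from the article; the bug rate,
# catch rate, and production-bug cost below are illustrative
# assumptions, not measured figures.

def expected_savings(bug_rate: float, catch_fraction: float,
                     cost_per_prod_bug: float) -> float:
    """Expected production-bug cost avoided by one automated review."""
    return bug_rate * catch_fraction * cost_per_prod_bug

REVIEW_PRICE = 25  # dollars per review, per the article

# Assumed: 1 in 10 PRs carries a bug that would reach production,
# the review catches 60% of those, and a production bug costs
# roughly $2,000 in engineer time to diagnose, fix, and redeploy.
savings = expected_savings(bug_rate=0.10, catch_fraction=0.60,
                           cost_per_prod_bug=2_000)

print(f"expected savings per review: ${savings:.0f}")
print(f"net value per review:        ${savings - REVIEW_PRICE:.0f}")
```

Under those assumptions the review pays for itself several times over, which is the whole point: the price feels steep per unit, but it's small against what a shipped bug costs.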
Here's what I find kind of fascinating though.
This whole space has quietly turned into an attention game. Every company gets a window - a few days, maybe a week - where everyone is talking about them. Then it moves on. Grok launched and it was the whole conversation for a moment. DeepSeek dropped and it felt like a genuine reckoning for a couple of weeks. Every Anthropic launch sends developer communities into a small frenzy. The spikes are real, the sign-ups go up, but the window closes faster each time because everyone is playing the same game now.
So companies are going to keep investing in moments. Planned, timed, carefully framed moments to pull the spotlight back. OpenAI's $110 billion raise was a funding round, yes - but it was also a message. A quiet signal to every company choosing a vendor that OpenAI is the safe, serious choice. Anthropic turning down a government request to loosen their safety guidelines wasn't just a policy decision - it was a statement about who they are, aimed at exactly the kind of people who needed to hear it. These things are intentional. And they work, for a while.
The tricky part is that the more everyone does this, the shorter each window becomes. What feels like a bold move today is a standard feature six months later. The bar keeps rising and the windows keep shrinking.
The companies that actually get to the finish line - whatever that even looks like - probably won't be the ones who had the best launch week. They'll be the ones who quietly built something people are genuinely embedded in. Developers who've reorganised how they work around a tool. Teams whose whole output runs through an API. Products that have stopped feeling like products and started feeling like infrastructure.
The race to AGI gets framed as a research challenge. And at some level it is. But from where I'm sitting right now, in early 2026, watching the numbers and the launches and the attention cycles - it looks a lot more like a slow, unglamorous financial endurance test with a very big prize waiting at the other end.
And I genuinely have no idea who outlasts who.