🌊 Hidden Economics of the AI Boom
AI infrastructure bottlenecks, economic risks, and the real limits of the current AI boom.
I’m Ivan. This is a weekly founder-first market intelligence brief: a faster way for founders & investors to understand startup markets.
This week’s sponsor is AI CRM Attio:
Attio is the AI-native CRM for the next era of companies: Connect your email and calendar, and Attio instantly builds a CRM that matches your business model.
Instantly prospect and route leads with research agents
Get real-time insights from AI during customer conversations
Build powerful AI automations for your most complex workflows
Hello there!
It has been a massive quarter for AI, with a record-breaking number of >$100M deals (the list of 40+ deals + 3 dominant categories here). So massive that I keep reaching for historical parallels to understand what might (really) be going on in our economy.
So, on this very topic, I ran into a must-listen interview between Paul Krugman (Nobel-winning economist, NYT) and Paul Kedrosky (investor, MIT research fellow, long-time tech + markets nerd).
It is one of the most contrarian and interesting conversations I’ve come across.
Here are the top 10 insights + my 2 cents:
1. The AI boom looks a lot like Dutch disease
What happened:
Krugman and Kedrosky frame AI as a classic “Dutch disease” story.
The details:
Dutch disease is what happens when one sector gets so hot that it quietly starves everything else of capital and attention. In the 70s, Dutch gas exports pushed wages up, strengthened the currency, and made every other industry less competitive. The gas boom was real, but it hollowed out manufacturing because capital, talent, and policy chased the “easy returns”.
We see capital, talent, and political focus being sucked into this one AI vertical today. I see it every time I talk to founders, LPs, VCs, banks, you name it (to the detriment of anyone not building a “pure” AI company, for now).
So what:
When an economy gets Dutch disease, founders don’t feel it as a “macro phenomenon”, but they do feel it as harder fundraising, more hype expectations (as you’ve probably heard on most VC podcasts…) and less (perceived) room to build outside the narrative.
2. We already burned through the Saudi Arabia of data
What happened:
Transformers worked so well because we discovered a giant free data reservoir: the open internet.
The details:
Large language models got good by training on public text at internet scale. Once we found a way to turn raw text into predictions, scaling laws kicked in: more data, more compute, better models. That trick got us from “this is a cat?” to GPT-5-level systems today. The problem is that most of the high-quality public data is now used up, and new gains take far more work for smaller jumps. There’s a must-watch interview on this by Ilya Sutskever (co-founder of OpenAI), which I summarised here.
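To make the diminishing-returns point concrete, here’s a minimal sketch of a Chinchilla-style scaling law. The constants are roughly the published Hoffmann et al. (2022) fit, but treat the whole thing as an illustration, not a forecast:

```python
# Chinchilla-style scaling law: loss falls with parameters (N) and tokens (D).
# Constants roughly follow the Hoffmann et al. (2022) fit; illustrative only.

def loss(n_params: float, n_tokens: float) -> float:
    E, A, alpha, B, beta = 1.69, 406.4, 0.34, 410.7, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

prev = loss(70e9, 1.4e12)  # a 70B-parameter model on 1.4T tokens
for doublings in range(1, 4):
    cur = loss(70e9, 1.4e12 * 2**doublings)
    print(f"data doubling #{doublings}: loss drops by {prev - cur:.3f}")
    prev = cur
# doubling #1: 0.029, #2: 0.024, #3: 0.020 -- each doubling of data buys a
# smaller drop in loss, and once the open internet is exhausted, every
# extra token gets more expensive to source.
```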
So what:
The easy part of the curve is behind us, and future progress looks less like “just add more data” and more like “we need new sources, new architectures, and smarter training”. Which might well be slower and more expensive.
3. LLMs are great at code, less great at the rest
What happened:
Kedrosky is very skeptical that current LLMs are on a straight road to AGI.
The details:
Models are incredibly good at software because code gives a sharp learning signal: it either runs and passes the tests or it doesn’t. Natural language is fuzzy: you and I can disagree on which sentence is better. So the model learns much faster from code than from prose. Benchmarks that show models “crushing coding tasks” can mislead people into thinking we are close to general intelligence. Kedrosky argues that we are not; we are closer to a “very good stochastic programmer”.
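A toy sketch of that asymmetry (a hypothetical reward function, not how any specific lab actually trains):

```python
# Code can be graded automatically: run the candidate, count passing tests.
# Prose has no equivalent oracle -- "which sentence is better?" needs a judge,
# and two judges can reasonably disagree.

def code_reward(candidate_fn, test_cases) -> float:
    passed = 0
    for args, expected in test_cases:
        try:
            if candidate_fn(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crash simply counts as a failed test
    return passed / len(test_cases)  # crisp, objective signal in [0, 1]

tests = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]
print(code_reward(lambda a, b: a + b, tests))  # 1.0 -> unambiguous success
```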
So what:
Betting on AI to keep eating software workflows (and in some sectors, parts of labour) is far more realistic than betting on near-term “god-level intelligence”. LLMs have clear strengths (e.g. code, text, structured tasks) and clear ceilings. The architecture is biased toward certain domains and plateaus in others.
4. GPUs age like fruit, not factories
What happened:
The chips inside AI data centers do not behave like normal industrial capital.
The details:
Training runs keep GPUs at full throttle, 24/7, under heavy thermal stress. That is not like your laptop; it is more like running a race car flat out all season. What you end up with is high early failure rates, slowdowns, and constant replacement.
In a 10,000-GPU data center, that translates into a failure every few hours (rough math below), so long before “the next generation chip” arrives, a lot of your fleet has already worn out.
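The back-of-the-envelope version, with an assumed annualized failure rate (reported rates under sustained load vary by site and chip generation):

```python
# How often does a 10,000-GPU cluster see a hardware failure?
FLEET_SIZE = 10_000
ANNUAL_FAILURE_RATE = 0.09   # assumed: 9% of GPUs fail per year at full load
HOURS_PER_YEAR = 8_760

failures_per_year = FLEET_SIZE * ANNUAL_FAILURE_RATE          # 900
hours_between_failures = HOURS_PER_YEAR / failures_per_year   # ~9.7
print(f"~{failures_per_year:.0f} failures/year, one every ~{hours_between_failures:.0f} hours")
# Add interruptions from memory errors, networking, and software on top, and
# "every few hours" is exactly the right order of magnitude.
```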
So what:
AI infra is not laying railroads you can use for 50 years (there have been a lot of analogies on this lately). It is closer to a warehouse full of bananas, which matters for depreciation, returns, and how fragile the whole buildout might be.
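To see why the banana framing matters financially, here is straight-line depreciation on a hypothetical $1B fleet under different assumed lifespans:

```python
# Same asset, wildly different annual bills depending on how long it lives.
CAPEX = 1_000_000_000  # a hypothetical $1B of accelerators

for label, years in [("railroad-style", 25), ("laptop-ish", 6), ("banana-case", 3)]:
    print(f"{label} ({years}y life): ${CAPEX / years / 1e6:.0f}M/year in depreciation")
# railroad-style (25y life): $40M/year
# banana-case (3y life): $333M/year
# An ~8x gap in the annual cost of simply standing still -- the difference
# between "infrastructure" and "consumable".
```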
5. Power (not compute) is the real bottleneck
What happened:
We are hitting the limits of the electrical grid faster than the limits of model size.
The details:
Data centers want hundreds of megawatts, and even gigawatts, of power: the kind of load normally reserved for aluminum smelters or small cities. The problem is that the grid was never built for this. Substations are full, transmission lines take years to approve, and local communities push back when they hear “your bills might go up”. So utilities do strange things:
speculate on power for future AI sites
cancel projects when the demand slips
push hyperscalers into “bring your own power” deals
slow-roll approvals because the grid can’t move at startup speed
All of which ties into the broader fragility of this boom: the physical world moves slower than the hype cycle (quick math below).
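For a sense of scale, a rough estimate of the continuous load of a hypothetical 100,000-GPU campus (per-GPU wattage and PUE are ballpark assumptions):

```python
# Continuous power draw of a hypothetical 100,000-GPU AI campus.
GPUS = 100_000
WATTS_PER_GPU = 700   # an H100-class accelerator is rated ~700W
PUE = 1.3             # assumed overhead for cooling, networking, CPUs, losses

megawatts = GPUS * WATTS_PER_GPU * PUE / 1e6
homes = megawatts * 1e6 / 1_200   # a US household averages ~1.2 kW
print(f"~{megawatts:.0f} MW continuous, roughly {homes:,.0f} average US homes")
# ~91 MW -- and the frontier campuses being announced are an order of
# magnitude larger, i.e. gigawatt-class. Grids take years, not quarters,
# to add that kind of load.
```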
So what:
You can raise a billion dollars in a quarter, but you can’t build a gigawatt in a quarter. If your AI forecast assumes infinite, instant electricity, you are not modelling reality; you are modelling a wish, and that makes the whole economy more fragile.
6. The financing stack looks like the “sum of all bubbles”
What happened:
Kedrosky calls this the first big bubble that combines all four classic ingredients.
The details:
Real estate: speculative land grabs around “powered land”.
Tech: massive bets on one architecture and one use case.
Loose credit: private credit, SPVs, structured products wrapped around data centers.
Geopolitics: “we must win the AI race with China at any cost”.
So what:
Everyone inside their silo thinks they are being rational, but in aggregate it looks like 2000 plus 2008 plus an arms race. Crises are endogenous to this kind of system.
7. Our economic data is stuck in 1929
What happened:
Even the people measuring the economy do not really see what is going on.
The details:
National accounts lump AI CapEx into broad buckets where data centers, old IT gear, and random software all live together. Surveys that ask firms about “AI adoption” are still framed like 2012 machine learning and not 2025 LLMs. Kedrosky estimates that without AI CapEx the US might have been in recession in early 2025, yet the official story points to other policies and tariffs.
So what:
If the dashboard is wrong, policymakers misread cause and effect, which means bad narratives, bad decisions, and no real plan for what happens when this CapEx wave slows.
8. Geography: San Francisco is back, the rest is starving
What happened:
AI investment is not only concentrated by sector but also by geography, more than in almost any tech cycle before it (I mapped 16 new unicorns in the Bay Area so far this year).
The details:
Capital is clustering in two places: San Francisco and New York. If you’re an AI startup in those cities, you feel tailwinds everywhere (I talked about this after my recent trip to a16z’s speedrun demo day). There’s an undeniable pull toward SF, and I’d argue even a (sometimes borderline misinformation) social media campaign amplifying it.
Kedrosky frames this as a Dutch disease symptom: when one sector and one region pull in the attention, talent, and narrative, everything else has to fight harder for oxygen.
So what:
The fundraising map is pretty skewed for now, although there are good and bad investors everywhere, which means there are solid funds out there thinking for themselves as well (hit us up!).
Founders outside the AI hot zones are not doomed, but they do need to assume an “uphill” gradient, especially when competing against startups with a strong foothold in SF/NY; that means showing up with stronger fundamentals, clearer unit economics, and so on. The concentration isn’t fatal, but it is definitely real (at least in the short run), and it shapes how the next few years of venture will feel on the ground.
9. The human side: hype, fear, and hidden usage
What happened:
Adoption is ahead of what official stats say, but trust and incentives are messy.
The details:
Anthropic’s own research shows around 70 percent of people who use AI at work hide it. To me this means two things:
People are increasingly scared (it reminds me of the origins of the word “sabotage” during the industrial revolution). A big chunk of “knowledge work” today will be displaced, especially (as we discussed last week) work that relies on chains of people passing information along because no software could understand the content before (now models can). A lot of workers know this, and so they hide their usage.
This LLM wave exposes how much human work was already pattern-following and performative effort, and one important way to stay ahead of it (if you fall under this category) is to become someone who designs the system, not someone who executes inside it.
So what:
On the ground this looks like chaos: overpromising headlines at the top, quiet experimentation at the bottom (and I bet you’ll hear both during your holiday dinners in the next couple of weeks). Any sober view of AI needs to hold both, imho: it is genuinely powerful in some domains and wildly overhyped in others.
10. This wave ends (in crisis) when physics and unit economics say “no”
What happened:
Kedrosky’s key point is that AI won’t fail because the technology “doesn’t work”. It will fail the way most financial cycles fail: fragility builds slowly until one moment cracks confidence and triggers a reset.
The details:
AI already works well enough to justify real, sticky spend. You see it in:
quality and testing (e.g. Galtea)
customer support (e.g. Rauda, Konvo, HappyRobot)
robotics (e.g. Theker)
cybersecurity (e.g. Equixly)
The risk is not in capability but in the broader bottlenecks:
Training cannot scale if data is tapped out.
Model size cannot scale if the grid cannot deliver power.
Hardware economics can break if GPUs behave like consumables.
At some point, the financial system that inflated the boom hits its own limits.
So what:
The opportunity is not “bet on collapse” or “bet on AGI”; the opportunity is to build things that still make sense when the hype premium disappears. That means targeting real ROI, efficiency, and the underrated parts of the economy everyone is ignoring while chasing heat.
🔒 Go Deeper: Join Startup Riders Pro
Clear, founder-first intelligence on markets and capital:
Market Research Drops
🔒 Accel’s AI Report - The “Race to Compute in 2025”.
🔒 Infinite Interns - Ben Evans on the AI platform shift.
🔒 Atomico’s EU Tech - Data behind a $4T tech economy.
🔒 AI Voice Agents - Deep dive into the future of voice AI.
🔒 Agentic Revolution - Deep dive into the platform shift.
🔒 Private Communities - The rise of curated networks.
🔒 Vertical Social Networks - The unbundling of social media.
Founder Playbooks
🔒 Fundraising toolbox - a list of key resources and benchmarks.
🔒 Founder Info-Diet - 40+ quality info resources to stay ahead.
🔒 How not to raise money - a checklist of fundraising mistakes.
🔒 Fundraising Frameworks - 3 frameworks to tighten your pitch.
🔒 Founder Compensation - salary data from top founders.
🔒 Bottleneck Thinking - a founder prioritisation framework.
🔒 AI GTM Stack - How founders are rebuilding sales in 2025.
🔒 HubSpot’s GTM Playbook - $0 to $100M sales strategy.
That’s it!
If you want to help me out, the best thing you can do is share this newsletter with someone who’d find it useful. 🙏🤙