The AI Bubble Debate - Why Jensen Huang and Clem Delangue Can't Both Be Right (Or Can They?)
Nitin Ahirwal / November 20, 2025
So here's a fun question: What do you do when one CEO says "we're printing money faster than we can count it" while another CEO says "yeah, that money printer is about to jam"?
You grab popcorn and watch the show, that's what.
Last week, two of the most important people in AI said things that are completely opposite. And somehow, they might both be right. Which is either genius-level market analysis or we're all collectively losing our minds. Maybe both.
Let me explain.
TL;DR: Nvidia just posted a jaw-dropping $57 billion quarter while Hugging Face's CEO says the LLM bubble is about to pop. Who's right? Spoiler: both of them, and that's exactly why this gets interesting.
The Setup: Two CEOs Walk Into an Earnings Call...
Picture this: It's November 2025, and the tech world is having what I can only describe as an existential crisis wrapped in an earnings report.
In one corner, we have Jensen Huang, Nvidia's leather-jacket-wearing CEO, declaring on the earnings call that "Blackwell sales are off the charts" and "forget about the bubble, there is only growth." The man sounds like he's mainlining pure confidence. His company just reported $57 BILLION in quarterly revenue (up 62% year-over-year), and he's basically telling everyone worrying about an AI bubble to touch grass.
In the other corner, Clem Delangue from Hugging Face is at an Axios event, calmly dropping bombs: "I think we're in an LLM bubble, and I think the LLM bubble might be bursting next year."
Wait, what? How can both of these things be true at the same time?
Buckle up, because we're about to dive deep into the most fascinating contradiction in tech right now.
Nvidia's Reality: Money Printer Go BRRR
Let's start with the numbers, because holy hell, these numbers are insane.
Nvidia didn't just beat expectations in Q3 2025; they demolished them:
- $57 billion in revenue (that's more than some countries' GDP)
- $32 billion in net income (65% higher than last year)
- $51.2 billion came from data centers alone (up 66% YoY)
- They're forecasting $65 billion for Q4
To put this in perspective: Nvidia made more profit in ONE QUARTER than most Fortune 500 companies make in a year. Their data center business is growing so fast it's basically becoming its own economy.
And Jensen? The man is vibing. During the earnings call, he said:
"There's been a lot of talk about an AI bubble. From our vantage point, we see something very different."
He's not just optimistic; he's looking at hard order books. The company announced AI infrastructure projects totaling 5 million GPUs in Q3 alone. That's not speculation; that's concrete demand from cloud providers, governments, enterprises, and supercomputing centers all screaming "TAKE MY MONEY."
Their Blackwell Ultra chips? Selling faster than concert tickets to a Taylor Swift show. Cloud GPUs? Sold out. Jensen literally said compute demand is "accelerating and compounding" across both training and inference.
From where Nvidia sits, this isn't a bubble; it's the gold rush, and they're selling shovels made of pure silicon and dreams.
Hugging Face's Reality: Pop Goes the LLM
Now let's flip the coin.
Clem Delangue isn't some doom-and-gloom pessimist trying to get attention. He's been in AI for 15 years, survived multiple hype cycles, and built one of the most important platforms in the AI ecosystem. When he talks, people listen.
And what he's saying is fascinating: we're not in an AI bubble, we're in an LLM bubble.
Here's his argument, and honestly, it's pretty compelling:
The "One Model to Rule Them All" Problem
Right now, the entire AI industry is betting on this idea: train one massive language model, throw ungodly amounts of compute at it, and boom, it'll solve every problem for every company and every person.
Sounds great in theory. In practice? Delangue thinks it's nonsense.
His example really hits home: imagine you're building a chatbot for a bank. Does it need to explain the meaning of life? Does it need to write poetry or generate images? No. It needs to help customers check their balance, dispute charges, and maybe explain why their card got declined at 2 AM.
For that, you don't need GPT-7-Ultra-Mega-Plus. You need a smaller, specialized, cheaper, faster model that runs on your own infrastructure and doesn't send your customers' financial data to some cloud provider.
This is the core of Delangue's thesis: the future of AI isn't one giant model, it's thousands of specialized models solving specific problems better, faster, and cheaper than any general-purpose LLM ever could.
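To make that concrete, here's a minimal sketch of what "right-sized AI" looks like in practice, using Hugging Face's transformers library. The model name below is hypothetical, a stand-in for any small intent classifier fine-tuned on banking conversations:

```python
# A small, specialized model running on your own hardware instead of a
# frontier-LLM API call. Requires: pip install transformers torch
from transformers import pipeline

# Loads once and runs locally: no per-token API bill, and no customer
# financial data leaving your infrastructure.
# NOTE: "your-org/banking-intent-distilbert" is a hypothetical model id;
# swap in whatever intent classifier you've actually fine-tuned.
classify_intent = pipeline(
    "text-classification",
    model="your-org/banking-intent-distilbert",
)

print(classify_intent("Why was my card declined at 2 AM?"))
# Hypothetical output: [{'label': 'card_declined', 'score': 0.97}]
```

A DistilBERT-sized classifier like this weighs a few hundred megabytes and answers in milliseconds on a CPU, which is exactly the cost-and-privacy argument Delangue is making.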
The Money Plot Twist
Here's where it gets really interesting.
Hugging Face raised $400 million in funding. You know how much they still have in the bank? Half of it. $200 million, just sitting there.
In the AI world right now, that's called... wait for it... profitability. Or at least, responsible financial management that looks like profitability compared to everyone else.
Meanwhile, OpenAI is burning through billions. Anthropic is burning through billions. Every major AI lab is in an arms race, training bigger and bigger models, buying more and more compute, hoping to be the last one standing when the music stops.
Delangue is watching this and basically saying: "Yeah, we're good. We'll just build sustainable infrastructure for the entire AI ecosystem while everyone else panic-spends their way to either glory or bankruptcy."
The guy is playing 4D chess while everyone else is playing hot potato with venture capital.
So Who's Actually Right?
Plot twist: they both are, and that's exactly why this moment in tech is so wild.
Let me explain.
Jensen Huang is right because:
- Hardware demand is real and immediate: Companies need GPUs NOW to train models, run inference, and stay competitive. That demand isn't going away anytime soon.
- The infrastructure build-out is just beginning: We're in the phase where every major company is building AI capabilities. That requires hardware. Lots of it.
- Nvidia has diversified beyond LLMs: Gaming, professional visualization, automotive, robotics; they're not putting all their eggs in the ChatGPT basket.
- Short-term momentum is undeniable: Those $65 billion Q4 projections aren't coming from thin air. Real customers with real budgets are placing real orders.
Clem Delangue is right because:
- The economics of giant LLMs don't make sense for most use cases: Why pay $20/month for ChatGPT Plus when a local, specialized model costs pennies and works better for your specific task? (There's a back-of-envelope sketch after this list.)
- The training-cost curve is getting brutal: Going from GPT-4 to GPT-5 might cost 10x more for only marginal improvements. That math eventually breaks.
- Open-source and specialized models are catching up fast: The gap between closed-source mega-models and open alternatives is shrinking rapidly. Hugging Face's own platform proves this daily.
- History rhymes: Every tech bubble follows the same pattern of massive infrastructure investment, then a shakeout, then the real winners emerging. We're probably in act two.
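That first bullet is easy to sanity-check with napkin math. Here's the back-of-envelope sketch I promised; every number in it is an assumption I picked for illustration, not a quoted price:

```python
# Toy cost comparison: per-token API pricing vs. self-hosted inference.
# ALL numbers below are illustrative assumptions, not real prices.
API_COST_PER_1K_TOKENS = 0.01      # assumed frontier-LLM API rate, USD
LOCAL_COST_PER_1K_TOKENS = 0.0004  # assumed amortized GPU + power cost

tokens_per_query = 1_000
queries_per_day = 50_000
days_per_month = 30

def monthly_cost(rate_per_1k: float) -> float:
    """Monthly spend at the given per-1k-token rate."""
    return (tokens_per_query / 1_000) * rate_per_1k * queries_per_day * days_per_month

print(f"API:   ${monthly_cost(API_COST_PER_1K_TOKENS):,.0f}/month")    # $15,000
print(f"Local: ${monthly_cost(LOCAL_COST_PER_1K_TOKENS):,.0f}/month")  # $600
```

Even if my made-up prices are off by a lot, the shape of the result holds: at high volume, per-token API pricing compounds fast, and that's exactly the pressure Delangue expects to pop the bubble.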
The Real Story: It's About Timing
Here's what I think is actually happening, and why both perspectives make sense:
2025-2026: The Nvidia Years
- Companies are still figuring out AI
- They're throwing money at the problem
- "Better safe than sorry" = "better buy GPUs now"
- Jensen's victory lap is earned
2027-2028: The Reckoning
- CFOs start asking "what ROI are we getting on these AI investments?"
- Specialized, efficient models start winning contracts
- The "spend whatever it takes" era ends
- Delangue's thesis proves out
Think of it like the dot-com era. Cisco sold routers like crazy from 1996 to 2000 (that's Nvidia today). Then the bubble popped in 2000-2001. But you know what? The internet didn't go away. It just became more efficient, specialized, and useful.
Same thing here. AI isn't going anywhere. But the way we build and deploy AI is absolutely going to change.
The China Factor Nobody's Talking About
Buried in Nvidia's earnings was this little gem: their H20 chip sales to China came in at roughly $50 million, a fraction of what was expected.
Nvidia CFO Colette Kress said: "Sizable purchase orders never materialized in the quarter due to geopolitical issues and the increasingly competitive market in China."
Translation: Export restrictions are killing a massive revenue stream, and Chinese companies are building their own alternatives.
This is huge. China represents potentially the largest AI market in the world, and Nvidia is getting frozen out. Meanwhile, Hugging Face's model? Works everywhere. Open-source doesn't care about export controls.
Score another point for the "specialized, distributed AI" thesis.
What This Means for You (Yes, You)
If you're an AI engineer:
Short term: Keep grinding on those LLM projects. The money is flowing, the jobs are real, and companies are hiring.
Long term: Start learning about model optimization, quantization, fine-tuning, and deployment. The future isn't "who can prompt ChatGPT better," it's "who can build and deploy specialized models efficiently."
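If you want a concrete place to start, here's a minimal sketch of the quantization piece. It assumes Hugging Face's transformers plus the bitsandbytes and accelerate packages and a CUDA GPU; the model id is just one example of an open model, not a recommendation:

```python
# 4-bit quantization: run a 7B model in a fraction of the memory.
# Requires: pip install transformers accelerate bitsandbytes torch
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Store weights in 4-bit NF4, do the matmuls in bfloat16 for quality.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # example open model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",  # let accelerate place layers on available GPUs
)

# A 7B model that needs ~28 GB in fp32 now fits in roughly 4-5 GB of VRAM.
prompt = "Explain why a debit card might be declined at 2 AM."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The point isn't this exact model or config; it's that "deploy specialized models efficiently" is a learnable, hands-on skill, and it's where the leverage moves if Delangue's timeline is right.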
If you're a startup founder:
Don't build another "ChatGPT wrapper" and hope OpenAI doesn't eat your lunch.
Do find specific verticals where a specialized model can outperform general-purpose LLMs on cost, speed, privacy, or accuracy.
The winners in the next wave will be companies solving real problems with right-sized AI, not companies trying to be "OpenAI but for X."
If you're an investor:
Nvidia is probably still a good bet for the next 12-24 months. But start looking at companies building the infrastructure for specialized AI: think MLOps platforms, edge deployment tools, model optimization startups.
The Hugging Faces of the world are positioning for the long game, and they might be right.
The Bottom Line
The AI bubble debate isn't actually a debate. It's two people describing different parts of the same elephant.
Nvidia is describing the present: massive infrastructure build-out, unlimited demand, rockets to the moon.
Hugging Face is describing the future: efficiency, specialization, sustainability, the actual useful applications of AI that will outlast the hype.
Both are correct. The question isn't "is there a bubble?" The question is: what happens after the bubble?
History suggests:
- The infrastructure companies (Nvidia) print money during the build-out
- Then there's a correction when people realize they over-invested
- Then the real winners emerge: companies that use the now-cheaper infrastructure to build actually valuable products
We're probably in phase one, heading toward phase two. Smart money is already positioning for phase three.
My Prediction (For What It's Worth)
Here's what I think happens:
2025-2026: Nvidia continues crushing it. Jensen keeps wearing leather jackets and talking about exponential growth. Stock goes up.
2027: First major LLM company shuts down or gets acquired for parts. Market gets spooked. "Is AI over?" headlines everywhere.
2028-2030: The real AI revolution happens, but it looks nothing like ChatGPT. It's invisible, specialized models embedded in everything from your fridge to your car to your doctor's diagnostic tools. Hugging Face becomes the GitHub of AI. Nvidia is still profitable but not growing 60% YoY anymore.
2031+: We laugh about how we thought ChatGPT was the pinnacle of AI, the same way we laugh about thinking Pets.com represented the peak of e-commerce.
The Final Word
Look, I don't have a crystal ball. Maybe Nvidia keeps going to infinity. Maybe the LLM bubble never pops. Maybe we're all wrong and AGI drops tomorrow and none of this matters.
But here's what I know for sure: both Nvidia's execution and Hugging Face's caution are teaching us something important.
The lesson isn't "who's right?" The lesson is: success in tech requires knowing which game you're playing and when.
Nvidia is playing the short game perfectly. Hugging Face is playing the long game wisely.
The real question is: which game are you playing?
What do you think? Are we in a bubble, or is this just the beginning? Drop your hottest takes in the comments. Bonus points if you can predict what Jensen's jacket will look like in Q4 earnings.
And if you're building something in AI, whether it's a massive LLM or a tiny specialized model, I'd love to hear about it. The future is being built right now, and honestly, it's the most exciting time to be in tech since... well, maybe ever.
Now if you'll excuse me, I need to go check if I can still afford Nvidia stock, or if I should just buy a lottery ticket instead. The odds are probably about the same at this point.