Nvidia CEO Jensen Huang Predicts $1 Trillion AI Chip Backlog, Declares 'The Inference Inflection Has Arrived'
At Nvidia's GTC event in San Jose, CEO Jensen Huang predicted that AI chip orders will double to a $1 trillion backlog by year-end and announced a multibillion-dollar licensing deal with inference-chip startup Groq; Wedbush analysts now predict Nvidia's market value will hit $6 trillion.
Introduction
Nvidia CEO Jensen Huang took the stage in San Jose on Monday with a bold prediction: the AI chip market is heading toward a $1 trillion order backlog by year-end — doubling his estimate from just a year ago. Speaking to a packed arena at Nvidia’s flagship GTC conference, the leather-jacketed executive declared that “the inference inflection has arrived,” positioning Nvidia to dominate the next phase of the AI boom as the industry shifts from training large language models to running them at scale.
The $1 Trillion Backlog
Huang’s headline claim represents a staggering acceleration in AI infrastructure demand. Nvidia has already leveraged its dominant position to grow annual revenue from $27 billion in 2022 to $216 billion last year, an eightfold increase in three years. The company briefly became the first to surpass a $5 trillion market value in October 2025.
“We reinvented computing, just like the PC revolution and the internet revolution,” Huang proclaimed. “We are now at the beginning of a new platform change.”
The numbers:
- 2022 revenue: $27 billion
- 2025 revenue: $216 billion
- Projected 2026 revenue: $330+ billion (analyst estimates)
- Current market cap: $4.5 trillion
- Predicted market cap: $6 trillion within a year (Wedbush estimate)
- Order backlog target: $1 trillion by end of 2026
The Inference Inflection
The strategic centerpiece of Huang’s keynote was the transition from AI training to inference — the phase where trained models generate responses for users.
“Once an AI tool is trained, inference chips enable the technology to take what it has learned and produce responses — whether it be writing a document or creating an image — more efficiently than the processors that were used while the large language models were being built,” Huang explained.
This shift matters because inference workloads are vastly larger than training workloads. While training a model like GPT-4 requires massive compute upfront, running that model for millions of daily users requires sustained, efficient inference capacity — exactly the market Nvidia is targeting with its next-generation processors.
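A rough back-of-envelope comparison shows why serving a model can dwarf the cost of building it. Every figure below is an illustrative assumption for the sake of the arithmetic; none comes from Nvidia, OpenAI, or this article's reporting:

```python
# Illustrative comparison: one-time training compute vs. cumulative
# inference compute over one year of serving a large model.
# ALL numbers are assumed placeholders, not disclosed figures.

TRAINING_FLOPS = 2e25          # assumed one-time training budget (FLOPs)

FLOPS_PER_TOKEN = 2e12         # assumed inference cost per generated token
TOKENS_PER_QUERY = 500         # assumed average response length
QUERIES_PER_DAY = 1e9          # assumed daily queries across all users
DAYS_PER_YEAR = 365

# Total inference compute consumed in a year of serving
inference_flops_per_year = (
    FLOPS_PER_TOKEN * TOKENS_PER_QUERY * QUERIES_PER_DAY * DAYS_PER_YEAR
)

ratio = inference_flops_per_year / TRAINING_FLOPS
print(f"Yearly inference compute is ~{ratio:.1f}x the one-time training compute")
```

Under these assumptions, a year of serving consumes roughly 18x the compute of the original training run, and that gap widens every additional year the model stays in service — the dynamic behind Huang's "inference inflection" framing.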
The Groq Deal
To accelerate its push into inference, Nvidia struck a multi-billion dollar licensing deal with Groq, a startup specializing in inference processors. The deal includes hiring Groq’s top engineers — a talent acquisition play as much as a technology one.
“Nvidia isn’t going to cede any market share to Google or Meta,” said Wedbush Securities analyst Dan Ives, who predicts Nvidia’s market value will eclipse $6 trillion within the next year.
Competitive Pressure
Despite the optimism, Nvidia faces its first serious competitive challenges:
- Google: Developing custom TPU (Tensor Processing Unit) chips for both training and inference
- Meta: Building its own MTIA (Meta Training and Inference Accelerator) processors
- Groq: Specialized in inference-optimized architectures prior to its deal with Nvidia
- China: US trade barriers have blocked Nvidia from selling advanced chips to the Chinese market — a significant revenue constraint
Nvidia’s stock has cooled from its October highs, down 6% even after a strong quarterly report in late February. The market is pricing in competitive risk alongside the opportunity.
“A White-Knuckle Period”
Wedbush analyst Dan Ives captured the market’s ambivalence: “This is just a white-knuckle period for the technology industry.”
The tension is clear: Nvidia’s fundamentals have never been stronger, but the expectations priced into the stock demand near-perfect execution. The $1 trillion backlog prediction is Huang’s way of signaling that demand isn’t just holding — it’s accelerating.
After the keynote, Nvidia shares edged up nearly 2% to close Monday at $183.22.
The Broader AI Economy
Huang’s vision extends beyond chips to the entire AI infrastructure stack:
- ChatGPT, Gemini, and competitors require massive inference capacity to serve billions of users
- Enterprise AI adoption is creating demand for on-premises and edge inference
- Autonomous vehicles, robotics, and IoT represent emerging inference markets
- Sovereign AI — nations building their own AI infrastructure — is a new growth vector