

Stratechery Overturns the AI Bubble Theory: What Should We Do with AI?

律动BlockBeats
13 hours ago
Original Title: Agent Over Bubbles
Original Author: Ben Thompson, Stratechery
Translation: Peggy, BlockBeats

Editor's Note: Against the backdrop of intensifying AI investment and industry narratives, "Is this a bubble?" has become the question the market debates most. On one hand, extreme risk narratives keep reinforcing fears of technological overreach; on the other, rapidly expanding capital expenditure and valuations keep the bubble theory alive. Caught between these two views, market judgment remains deeply uncertain.

The author of this article, Ben Thompson, is the founder of the tech analysis platform Stratechery, focusing for a long time on the structural evolution of the tech industry and business models. As the Nvidia GTC 2026 takes place, he has revised his previous judgment on "whether AI is in a bubble": he no longer views the current state as a bubble but rather understands it as a round of structural growth driven by a change in technological paradigms.

This judgment is based on three key leaps in LLMs. Since ChatGPT first showcased the capabilities of large language models in 2022, LLMs have evolved from "usable but unreliable" to "capable of reasoning," and then to "able to execute tasks independently." By the end of 2025 in particular, with the release of Anthropic's Opus 4.5 and OpenAI's GPT-5.2-Codex, agentic workloads began to move from concept to reality.

The key here is not the model itself, but the emergence of the "agent harness." Agents decouple users from models, managing the scheduling of models, calling tools, and validating results, transforming AI from a tool requiring continuous human intervention into an execution system capable of handling tasks autonomously. This change not only improves reliability but also expands the application boundaries of AI.

Based on this paradigm shift, the author further points out that the expansion of AI demand no longer depends on the number of users, but more on the scheduling capabilities of each user; at the same time, agentic workloads exhibit a "winner-takes-all" feature, continuously driving up the demand for high-performance computing power and bringing structural opportunities to chip manufacturers and cloud service providers.

In this framework, current large-scale capital expenditure is no longer merely speculative bets on the future but more likely a preemptive reflection of real demand. As AI transitions from "assistive tools" to "execution infrastructure," its economic impact might just be beginning to emerge.

Here is the original text:

In the past, I leaned more towards the latter, even believing that bubbles may not necessarily be a bad thing at certain stages.

But at this moment, standing in March 2026, at the opening of the Nvidia GTC, my judgment has changed: this may not be a bubble. (Ironically, this judgment itself may very well be a signal of a bubble.)

Three Paradigm Shifts in LLMs

In recent weeks, while discussing Nvidia and Oracle's earnings reports, I repeatedly mentioned that LLMs have undergone three key shifts.

First Stage: ChatGPT

The first turning point was the release of ChatGPT in November 2022, which needs little elaboration. Although transformer-based large language models had emerged as early as 2017, their capabilities had been steadily improving but were long undervalued. Even in October 2022, I mentioned in an interview at Stratechery that while this technology was impressive, it lacked productization and entrepreneurial momentum.

But just weeks later, everything changed dramatically. ChatGPT made the world truly realize the capabilities of LLMs for the first time.

However, the early versions left two profound impressions that were particularly noted by "bubble theorists":

First, the model often made mistakes, even fabricating answers in a "hallucinatory" manner when it didn't know the answer. This made it more of a "show-off tool," stunning but unreliable.

Second, despite this, it was still very useful, but only if you knew how to use it, and you had to constantly verify its outputs and correct its errors.

Second Stage: o1

The second turning point was the release of the o1 model by OpenAI in September 2024. By then, LLMs had significantly progressed due to stronger foundational models and post-training techniques, providing more accurate outputs and fewer hallucinations.

But the key breakthrough of o1 was that it would first "think," and then answer.

Traditional LLMs are path-dependent: once they take a wrong turn during generation, they keep going down the wrong path. This is the fundamental weakness of autoregressive models. A reasoning model, by contrast, evaluates its own answers: it generates an answer first, judges its correctness, and tries other paths if necessary.

This means that the model begins to actively manage errors, reducing the burden of user intervention. The results are also significant. If ChatGPT's breakthrough was in "making LLMs usable," then o1's breakthrough was in "making LLMs reliable."

Third Stage: Agent (Opus 4.5 / Codex)

At the end of 2025, the third leap occurred.

In November 2025, Anthropic released Opus 4.5, which initially received a lukewarm response. But by December, Claude Code, powered by this model, suddenly demonstrated unprecedented capabilities; almost simultaneously, OpenAI released GPT-5.2-Codex, which exhibited a similar level of capability.

People had been talking about "agents" for a while, but at this moment, they finally began to truly accomplish tasks, even complex tasks that took hours, and they did so correctly.

The key is not the model itself but the control layer, the "harness": the software that schedules the model, calls tools, executes processes, and verifies results. In other words, users no longer operate the model directly; they set a goal, and the agent handles the rest.

For example, in programming:

· First Stage: The model generates code

· Second Stage: The model performs reasoning during code generation

· Third Stage: The agent generates code → runs tests automatically → if the tests fail, it retries, without continuous user intervention
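The third-stage loop described above can be sketched in a few lines. This is only an illustration of the generate-test-retry pattern, not any vendor's actual harness; `generate_code` and `run_tests` are hypothetical stand-ins for a real model call and a real test runner.

```python
def generate_code(task, feedback=None):
    # Stand-in for a model call: a real harness would send the task and any
    # test feedback to an LLM and receive candidate code back.
    if feedback is None:
        return "def add(a, b): return a - b"  # first attempt is buggy
    return "def add(a, b): return a + b"      # revised after feedback

def run_tests(code):
    # Stand-in for the automatic verification step: execute the candidate
    # and check it against a known expectation.
    scope = {}
    exec(code, scope)
    if scope["add"](2, 3) == 5:
        return True, None
    return False, "add(2, 3) should equal 5"

def agent(task, max_attempts=5):
    # The harness loop: generate, verify, and retry with feedback,
    # with no user intervention between attempts.
    feedback = None
    for _ in range(max_attempts):
        code = generate_code(task, feedback)
        ok, feedback = run_tests(code)
        if ok:
            return code  # only verified code is returned to the user
    raise RuntimeError("agent gave up after max_attempts")

result = agent("write an add function")
```

The point of the sketch is the control flow, not the stubs: the user states a goal once, and the loop handles scheduling, verification, and retries.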

This means that the core deficiencies of the ChatGPT era are being systematically resolved, with higher accuracy rates, stronger reasoning capabilities, and automatic verification mechanisms.

The only remaining question is: what exactly should we use it for?

The Threshold of "Proactivity" is Lowering

The reason I repeatedly emphasize these three turning points is to illustrate why the entire industry is facing a serious shortage of computing power and why ultra-large-scale capital expenditure is reasonable.

The three paradigms exhibit entirely different demands for computing power:

· First Stage: Training consumes computing power, but inference costs are relatively low

· Second Stage: Inference costs skyrocket (more tokens + higher usage frequencies)

· Third Stage (Agent): Multiple calls to reasoning models; the agent itself also consumes computing power (even leaning towards CPU), with frequencies further exploding

More important is the third point: the structural change in demand is seriously underestimated.

Currently, far more people use chatbots than use agents, and many people hardly use AI at all. The reason is that using AI requires "proactivity." LLMs are tools: they have no goals and no will of their own, and can only be called upon actively.

But agents have changed this; they have lowered the requirement for human proactivity. In the future, one person can command multiple agents simultaneously.

This means that even if only a few individuals possess "proactivity," it is sufficient to drive tremendous computing power demand and economic output.

AI still needs "humans to drive it," but it no longer requires "many people."

Enterprise's Payment Drivers

Consumer willingness to pay for AI is limited, which has gradually become clear. The ones truly willing to pay for productivity are enterprises.

What excites enterprises most isn't just that AI improves efficiency; it's that AI can replace human labor and do the work more efficiently.

The current reality is that, in large companies, it is often a small number of individuals who drive the business forward; yet the organization is very large, resulting in considerable coordination costs. The role of the agent is to amplify the influence of "value-driving individuals" while reducing organizational friction.

The result is "fewer people → higher output → lower costs." This is also why future layoffs may not merely be "cyclical adjustments," but rather structural changes.

Companies will rethink not just whether they "hired too many during the pandemic," but whether, in the AI era, they inherently need so many people at all.

Why Isn’t This a Bubble?

From this perspective, the logic of "not a bubble" becomes clearer:

1. The core deficiencies of LLMs are being continuously resolved by computing power and architectures

2. The threshold of the number of individuals driving demand is lowering

3. The benefits brought by agents are not just cost reduction but also revenue generation

Therefore, it’s not difficult to understand why all cloud providers are saying there is a shortage of computing power and are continuously increasing capital expenditure significantly.

Agent and Value Chain Reconstruction

Another key issue is, if models ultimately become commoditized, can OpenAI and Anthropic still profit?

Conventional wisdom suggests no, but agents have changed this. The key is that real value does not lie in the model itself, but in the integration of "model + control system."

Profits often flow to the "integration layer," rather than interchangeable modules. Just like Apple, its hardware is not commoditized because it is deeply integrated with software. Similarly, agents require deep synergy between models and harnesses, making OpenAI and Anthropic key integrators in the value chain, rather than interchangeable links.

Microsoft's transformation is a signal: it originally emphasized "model interchangeability," but after launching a true agent product, it had to abandon that position.

This means that models may not be fully commoditized because agents require integrated capabilities.

The Final Paradox

I must return to that initial paradox.

I have always believed that as long as everyone is still worried about a bubble, it is not a bubble; the real bubble is when no one questions it anymore.

And now, my conclusion is: this is not a bubble.

But if "my saying this is not a bubble" itself proves it to be a bubble, then so be it.

[Original Link]
