

8. Will the profits of the three HBM companies exceed NVIDIA's?

BTCdayu
3 hours ago


After breaking down the entire inference supply chain, the most interesting finding is that the most profitable segment in the chain is not the designers of AI chips, but the suppliers of the memory those chips depend on.

Is it possible that the three HBM companies can earn more than NVIDIA?

NVIDIA's FY26 revenue is approximately $216 billion, GAAP net profit is around $120 billion, and the current market capitalization is about $4.85 trillion.

It is possible that SK Hynix, Samsung DS, and Micron can collectively generate profits reaching or exceeding $150 billion during the 2026 memory super cycle.

The combined net profit of the three companies is already close to NVIDIA's alone, yet the market-capitalization gap remains large, partly because Samsung's semiconductor division is not listed separately. If we impute a valuation for Samsung DS from the multiples of the other two and add everything up, the three are worth roughly $2-2.5 trillion, about 40-50% of NVIDIA's market cap.
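A quick back-of-the-envelope check of that ratio, using only the figures quoted above (the $2-2.5 trillion combined value is the article's rough estimate, since Samsung DS has no standalone listing):

```python
# Market-cap figures from the article, in trillions of USD.
nvidia_mcap = 4.85
combined_range = (2.0, 2.5)  # rough combined value of the three HBM makers

for combined in combined_range:
    ratio = combined / nvidia_mcap
    print(f"${combined}T combined is {ratio:.0%} of NVIDIA's market cap")
```

This comes out to roughly 41-52%, in line with the rough 40-50% figure above.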

It wouldn't be too surprising if the three companies' combined profits eventually exceed NVIDIA's, mainly for several reasons:

First, the HBM process threshold is not inferior to that of GPUs.

The physical manufacturing and packaging yield thresholds of HBM have reached levels comparable to advanced logic chips; moreover, it is not just "designing a chip," but rather stacking 12-16 layers of DRAM in a stable, low-power, high-yield manner.

The real threshold for GPUs lies in three aspects—architecture design (CUDA + software stack + NVLink), wafer manufacturing (TSMC's advanced process), and ecosystem (developers + libraries + frameworks). Among these, NVIDIA controls architecture and ecosystem, while manufacturing relies on TSMC.

The thresholds for HBM are in four areas—DRAM process, 3D stacking + TSV, MR-MUF / hybrid bonding packaging processes, and customer collaborative design. These four areas are under the control of the three companies.

Comparing process difficulties horizontally:

GPUs are planar single-layer wafers, while HBM involves 12-16 layers of vertical stacking.

The yield challenge for GPUs lies in transistor density, while HBM must clear stacking, TSV, soldering, and packaging, four dimensions at once.

GPU architecture breakthroughs rely on NVIDIA's own design, while HBM architecture breakthroughs depend on the three companies co-designing the base die with their customers.

Samsung has repeatedly stumbled on NVIDIA's qualification of its HBM3E 12-Hi parts, which is direct evidence of the engineering difficulty.

The difficulty of HBM's physical manufacturing does not appear lower than that of GPUs. This is also why this market only has three players plus one follower, while the GPU market has over 10 players.

Second, HBM is a "basic supply," while GPUs represent a "single architecture."

This layer is more critical than the process threshold.

The GPU market is NVIDIA vs AMD vs Trainium vs TPU vs Maia vs MTIA vs Cerebras vs Groq vs Huawei Ascend—at least 10+ players competing for the same computing orders. In the next five years, NVIDIA's share in AI accelerators is likely to drop from 80%+ to a range of 50-60%—not because NVIDIA isn't doing well, but because cloud vendors are unwilling to tie their fortunes to one company; all hyperscalers are developing their own ASICs.

However, HBM is different. All these GPUs, ASICs, TPUs, Trainium, Maia, MTIA need HBM—except for the extreme path taken by Cerebras with on-chip SRAM (which constitutes a very small share).

Thus, the true moat for the three HBM companies is not "monopolizing market share within HBM," but rather "regardless of who wins the computing war, all three HBM companies will get a piece of the pie."

Share shifts among GPU makers will spread out NVIDIA's profits but will not shrink total orders for the three HBM companies. HBM is the basic supply of AI computing power, while GPUs are individual architectures competing within the track. The former is closer to the "selling shovels" position.

Combining the process threshold with industry structure, the probability of the three companies' combined net profit exceeding that of NVIDIA alone within five years is not low.

Third, HBM's bargaining power in the supply chain will exceed that of GPUs.

In the bill of materials of an NVIDIA GPU, HBM accounts for about 30-40%. AMD's MI455X may have an even higher HBM share of material cost, since it carries 432 GB of HBM4, more than NVIDIA's parts. The profit captured by the three HBM companies is catching up to the share of the chain that goes to the GPU designers themselves.

GPUs can expand production—TSMC can increase a few more 3nm/2nm production lines; NVIDIA and AMD can quickly increase GPU shipments. However, expanding HBM capacity requires building new fabs, which takes 3-5 years. Therefore, in the next 6-12 months, HBM will be the segment with the strongest bargaining power in the entire chain.

For the memory sector, there are mainly three things to watch:

First is the real growth rate of the HBM TAM. Micron's forecast of $35 billion growing to $100 billion (2025-2028) is very aggressive. If reasoning models, Agents, and long context keep the HBM demand curve steep, the number will be realized; if any of these falls back (for example, if a compression technique released by Google in 2026 cuts per-token HBM consumption), it will be discounted.
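As a sketch of how aggressive that forecast is, the implied compound annual growth rate from $35 billion (2025) to $100 billion (2028) can be computed directly (figures are Micron's projections as quoted above):

```python
# Implied CAGR of Micron's HBM TAM forecast: $35B in 2025 -> $100B in 2028.
start_tam, end_tam, years = 35e9, 100e9, 3
cagr = (end_tam / start_tam) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 42% per year
```

Sustaining roughly 42% annual growth for three straight years is what the demand-side drivers above (reasoning models, Agents, long context) would have to deliver.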

Second is the speed of capacity expansion. HBM is a branch of DRAM, which historically has a cycle of every 3-4 years. The Samsung P5 plant is set to start production in 2028, SK Hynix's M15X plant in 2027, and Micron's Boise/Clay plants will gradually come online in 2026-2027, which means a significant wave of HBM capacity will emerge in 2028-2029.

Third is competition from wafer-scale, ASIC, and edge-inference alternatives. Google TPU 8i and Microsoft Maia 200 are expanding on-chip SRAM, Cerebras solders all memory onto the wafer itself, NVIDIA Rubin CPX uses GDDR7, and Intel Crescent Island uses LPDDR5X; all of these attempt to reduce dependence on HBM. If such solutions gain broad market acceptance, HBM TAM growth will slow.


Disclaimer: This article represents only the personal views of the author and does not reflect the position or views of this platform. It is shared for informational purposes only and does not constitute investment advice of any kind. Any dispute between users and the author is unrelated to this platform. If any article or image on this page involves infringement, please send the relevant proof of rights and proof of identity to support@aicoin.com, and platform staff will investigate.



