
a16z: AI makes everyone ten times more efficient, but no company is worth ten times more because of it.

深潮TechFlow
The problem lies not in the technology itself, but in the organization's failure to evolve alongside it.

Author: George Sivulka

Translated by: Deep Tide TechFlow

Deep Tide Introduction: AI has increased individual productivity tenfold, yet no company has become ten times more valuable because of it. a16z investor George Sivulka (also the founder of the AI company Hebbia) argues that the problem is not the technology itself but that organizations have not been rebuilt alongside it. He proposes seven key dimensions that distinguish "institutional AI" from "personal AI": coordination, signal, bias, edge advantage, results, empowerment, and no prompt. In essence: swapping in electric motors is not enough; you must redesign the entire factory.

The full text is as follows:

AI has just elevated everyone's productivity by tenfold.

No company has become ten times more valuable because of this.

Where has the productivity gone?

This has happened before.

In the 1890s, electricity promised a tremendous boost in productivity.

New England textile mills, originally built around the rotational power of steam engines, quickly swapped their steam engines for faster electric motors.

Yet for a full thirty years, electrified factories saw little increase in output. Technology was far ahead. But the organization did not keep up.

It was not until the 1920s that factories completely redesigned their production lines—conveyor belts, independent motors for each machine, and workers executing completely different jobs from the machines—that electrification finally yielded real returns.

image

Caption: The three evolutions of the Lowell Textile Mill. From left to right: 1890 steam-powered factory, 1900 electric-powered factory, 1920 "unit-driven" factory (which was rebuilt from scratch as an electric production line).

The returns do not come from the technology itself, nor from making individual workers or machines spin faster. The actual benefits are realized when we finally redesign both the institutions and the technology together.

This is the most expensive lesson in the history of technology, and we are now relearning it.

By 2026, AI will bring tenfold productivity increases to those who know how to leverage it. But that's not enough. We have changed to electric motors, but we have yet to redesign the factory.

Because of a simple fact: efficient individuals do not equate to an efficient organization.

The vast majority of AI products create an impression of "efficiency" without actually driving value. Most of the AI use cases you see are individuals on Twitter or in company Slack channels indulging in a feeling of maximum efficiency, with zero real impact.

image

Last year's oft-repeated "software as a service" framing is on the right track but provides no blueprint. It also misses the larger picture. The real shift is not from tools to services, but toward building technology and institutions together (whether restructuring old ones or starting from scratch). A truly efficient future requires an entirely new class of products: the production line of tomorrow.

An efficient organization needs "institutional intelligence."

This article will deeply analyze the seven dimensions that distinguish "institutional AI" from "personal AI." Over the next decade, all companies in the B2B AI field will be built on these differences:

image

Caption: Comparison table of the seven pillars of institutional intelligence

Seven Pillars of Institutional Intelligence

1. Coordination

Personal AI creates chaos.

Institutional AI creates coordination.

Let’s start with a thought experiment. Suppose you double the number of people in your organization, cloning all your best employees.

Each of these employees has slight differences in preferences, quirks, and perspectives (especially your best ones). If management is inadequate, communication is insufficient, and responsibilities, OKRs, and role boundaries are not clearly defined… you create chaos.

Measured individual by individual, the organization might look more efficient. But if thousands of Agents (or humans) are each paddling in a different direction, the best case is standing still, and the worst case is shattering the organization's cohesion.

This is not a hypothetical situation. Every organization adopting AI without a layer of coordination is currently experiencing this. Each employee has their own ChatGPT usage habits, their own prompt styles, their own outputs—with no connection to others’ outputs. The organizational chart may still exist, but the AI-generated work is actually following a different path.

image

Caption: Efficient individuals (or Agents) paddling in different directions. Without coordination, it is chaos.

Coordination is an absolute necessity, for both humans and Agents.

Institutional intelligence will give rise to a whole industry of "Agent management"—focused on the roles and responsibilities of Agents, communication between Agents, and communication between Agents and humans, as well as how to measure the value of Agents (solely billing by volume is far from sufficient).
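The "Agent management" idea above can be made concrete with a toy sketch. This is a minimal, hypothetical coordination layer (the class and method names are invented for illustration, not any real product's API): agents register under explicit roles, and messages are routed by role rather than to whichever agent happens to be available.

```python
from collections import defaultdict


class AgentRegistry:
    """Minimal coordination layer: each agent has an explicit role and a scoped inbox."""

    def __init__(self):
        self.roles: dict[str, str] = {}                    # agent name -> role
        self.inbox: dict[str, list] = defaultdict(list)    # agent name -> messages

    def register(self, agent: str, role: str) -> None:
        self.roles[agent] = role

    def send(self, to_role: str, message: str) -> None:
        # Route by role, not by individual agent, so responsibilities stay explicit.
        for agent, role in self.roles.items():
            if role == to_role:
                self.inbox[agent].append(message)


reg = AgentRegistry()
reg.register("analyst-1", "research")
reg.register("drafter-1", "writing")
reg.send("research", "screen the new deal list")
```

Routing by role keeps responsibilities legible, which is exactly the organizational-chart property the article argues uncoordinated personal AI erodes.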

2. Signal

Personal AI generates noise.

Institutional AI finds the signal.

Today, humans can create, or rather generate, anything imaginable: articles, presentations, spreadsheets, photos, videos, songs, websites, AI-written software. What a wonderful gift.

The problem is, the vast majority of AI-generated content is utter garbage. The proliferation of AI rubbish has become so serious that some organizations have overcorrected by outright prohibiting all AI output. To be honest, I feel the same way—I run an AI company but have instructed the executive team not to use AI for any final written products. I can’t stand that garbage.

Think about what the PE (private equity) industry is becoming. Last year, you might have received ten deals on your desk. This quarter, you'll receive fifty opportunities, each polished to perfection by AI, while your evaluation time stays the same, and you need to find the one truly solid lead among them.

Generating anything is no longer the problem. The current challenge for any serious organization is to generate and filter the right content. In an AI-driven world, finding that one good output, that one good deal, the signal amid the noise, becomes increasingly crucial. The core economic driver of the next decade will be extracting signals from mountains of exponentially growing rubbish.

image

Caption: AI garbage generated by personal productivity tools is proliferating at an exponential rate. Humans are already unable to sort noise, necessitating a new class of institutional AI products.

Institutional intelligence must find the signal, must structure the noise to cut through the rubbish, and its work must be definable, deterministic, and auditable.

Personal AI may emphasize an "always-on" productivity in the style of Clawdbot, meeting your demands in unpredictable ways 24/7: essentially a non-deterministic Agent. Institutional AI, by contrast, relies on the reliability of deterministic Agents. Only Agents with predictable checkpoints, steps, and processes can scale, identify signals, and convert those signals into revenue for the organization.
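The contrast between non-deterministic and deterministic Agents can be sketched as a fixed pipeline whose steps run in a set order, each gated by a checkpoint. This is an illustrative toy under my own assumptions (the step names, data, and thresholds are invented), not any real product's design.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]       # transforms the working state
    check: Callable[[dict], bool]     # checkpoint: must pass before continuing


def run_pipeline(state: dict, steps: list[Step]) -> dict:
    """Execute steps in a fixed order; halt at the first failed checkpoint."""
    for step in steps:
        state = step.run(state)
        if not step.check(state):
            raise ValueError(f"checkpoint failed at step: {step.name}")
    return state


# Toy deal-screening pipeline: extract a figure, then score it on a bounded scale.
steps = [
    Step("extract",
         lambda s: {**s, "revenue": s["raw"]["revenue"]},
         lambda s: s["revenue"] is not None),
    Step("score",
         lambda s: {**s, "score": min(s["revenue"] / 1e6, 10.0)},
         lambda s: 0.0 <= s["score"] <= 10.0),
]

result = run_pipeline({"raw": {"revenue": 4_200_000}}, steps)
```

Because the steps and checkpoints are fixed, the same input always yields the same output or a named failure, which is what makes the work auditable at scale.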

image

Caption: Matrix is a tool that utilizes generative technology to penetrate noise, thus opening a world of deterministic Agents and checkpoints.

3. Bias

Personal AI feeds bias.

Institutional AI creates objectivity.

Discussions of social and political bias dominated AI discourse for several years. Foundation model labs ultimately sidestepped the issue with enough RLHF, tuning every model into a sycophant. Today, models like ChatGPT and Claude are over-aligned, agreeing with you on any topic within the Overton window (sometimes even slightly overstepping it; looking at you, @Grok). The debate over social and political bias has faded. But a new problem has replaced it.

This reflexive agreement with everything has become absurd. It is now a meme: Claude's conditioned "You are absolutely right!", regardless of whether what you said is actually right.

image

This may seem harmless. It is not.

Some of the most enthusiastic adopters of AI inside organizations may soon be their historically worst-performing employees. Think about why.

The worst-performing employees in an organization receive almost no positive feedback day to day, and they will soon have an ASI that agrees with them about everything. They can tell themselves: "The smartest agent ever built agrees with me. It's my manager who is wrong."

This is addictive. And it is toxic to the organization.

image

Caption: The echo chamber of personal AI exacerbates division, making two individuals drift further apart, and this dynamic will create factions within a formerly cohesive organization at scale.

This reveals an important truth. Personal productivity tools reinforce the user. But what really needs reinforcing is the facts.

Human organizations, having evolved over thousands of years, have established systems specifically designed to combat this issue:

  • Investment committee meetings
  • Third-party due diligence
  • Boards of directors
  • The three branches of the U.S. government—executive, legislative, and judicial
  • Representative democracy, and the democratic system itself

image

Caption: Objectivity can even alleviate coordination issues—suppressing rather than amplifying minor disagreements.

Organizations rarely fail due to employees lacking confidence. They fail because no one is willing or able to say "no."

Institutional AI must play this role. It will not be trained by RLHF to please users or echo their beliefs but will challenge their biases. It should provide positive feedback when behavior is efficient and draw hard lines, enforcing corrections when deviation occurs.

Thus, the most important Agents within organizations will not be "yes men," but rather disciplined "vetoers"—questioning reasoning, exposing risks, and enforcing standards. Some of the most influential AI applications in the future will be designed around institutional constraints: AI board members, AI auditors, AI third-party testers, AI compliance…

4. Edge Advantage

Personal AI optimizes usage.

Institutional AI optimizes edge advantages.

The boundaries of AI capability are shifting weekly, even daily. Foundation model companies are rapidly iterating capabilities to compete for everyone and every organization.

But the classic innovator's dilemma tells us that in specific applications, depth always beats breadth:

  • @Midjourney's work keeps a slight edge in design images.
  • @Elevenlabsio's work maintains a slight edge in voice models.
  • @DecagonAI's work always leads in full-stack customer service experience.

Even as foundation models close the gap, the true frontier is what matters to domain experts. Many of the best designers use @Midjourney, and many of the best voice AI companies use @Elevenlabsio, because even as foundation models improve, the relentless focus of specialized applications on pushing their specific edge is what defines the advantage.

As long as specialized solutions keep evolving, the capabilities that truly matter for economic outcomes (the ones enterprises care about) will always favor specialized products.

This is vividly illustrated in the financial sector, the hottest field for LLM deployment. Once a capability becomes commonplace, by definition it cannot help you beat the market. But what if cutting-edge technology yields a fleeting 1% niche advantage? That 1% can be leveraged into billion-dollar returns.

image

Caption: For any sufficiently specific task, edge advantage is defined by the institutional-level solution you build on top of cutting-edge technology.

Our users have consistently been surpassing the frontier. The context window of LLMs has expanded from 4K to 1 million tokens over four years. Some of our users process 30 billion tokens in a single task. This year, we have already seen pathways to tasks handling 100 billion tokens. Each time the capabilities of foundation models improve, we have gone further.

image

Caption: The context window, like other capabilities, is a moving target. A comparison of the evolution of context windows at leading labs and Hebbia over the past three years.

Universality for a broad user base certainly matters, especially during employees' introductory phase with AI. But the future will not be people using either ChatGPT/Claude or vertical solutions; it will be ChatGPT/Claude plus vertical solutions.

Institutional intelligence must leverage domain-specific, even task-specific Agents.

We will ask ourselves a question that sounds absurd but is not:

"What Agents would an AGI choose as shortcuts?"

Even superintelligence will want specialized tools for specific domains.

The boundaries of AI capability are always shifting, and organizations that leverage true edge advantages will be the winners. Others are merely paying for a very expensive commodity.

5. Results

Personal AI saves time.

Institutional AI amplifies revenue.

@MaVolpi once told me something that reshaped my perception of selling AI to enterprises: "If you ask any CEO whether to prioritize cutting costs or expanding revenue, almost everyone will say revenue."

Yet nearly every AI product on the market today delivers cost reduction—promising to help you save time, accomplish more with fewer people, or replace human labor.

Institutional AI must deliver incremental revenue. And incremental revenue is much harder to commodify than saved time.

Take AI-assisted software development as an example. Code IDEs are among the best personal AI productivity tools ever, but they are facing immense competition from Claude Code (another personal AI tool). Cognition is playing an entirely different game. Their most stable growth business is selling transformation through technology, not selling tools. I bet this model has longevity.

image

Pure software is "quickly becoming uninvestable." Pure service is not scalable. The solutions layer—binding technology to results—is where enduring value will emerge.

Now look at M&A. Personal AI helps analysts model faster. Institutional AI identifies the one worthy target from one hundred, then expands the search to a thousand. One saves time, the other generates revenue.

image

Caption: Foundation model companies are moving toward vertical application layers. Vertical application layer companies are moving toward solutions layers.

"Moving upstream" is the current natural force in the market. Foundation models are moving towards application layers, and application layer companies are moving towards solutions layers.

Institutional intelligence is the solutions layer. And the solutions layer—where results are—will harbor enduring value, capturing the greatest potential for returns.

6. Empowerment

Personal AI gives you a tool.

Institutional AI teaches you how to use it.

No matter how smart humans are, they resist change.

Believe it or not, there are still successful shops in New York that do not accept credit cards. They know that refusing cards costs them money, yet they do nothing. Similarly, for the foreseeable future, certain employees in some organizations will refuse to use AI.

The transformation from purely manual organizations to AI-first hybrid organizations will be the most lasting and defining challenge of the next decade. Often, the highest-level and most important people in organizations are the last to adopt.

image

Caption: The upper echelons of organizations—the people farthest from "operating productivity tools"—are often the slowest but most critical group to adopt new technologies.

Palantir is the only "software" company that has maintained an exceptionally high valuation multiple during the trillion-dollar tech stock sell-off over the past two months. There is a reason for that. Palantir is one of the first true "process engineering" companies. Whether you call it "process engineering" or "writing Claude skill documents," future institutional AI will give rise to an industry: encoding corporate processes into Agents and implementing the necessary change management.

image

Caption: Comprehensive adoption of AI in organizations will cross multiple chasms, each with its own challenges. Implementing processes with AI will be the primary driving force.

I dare say, process engineering will become the most important "technology" in the near term.

And in process engineering, business and industry expertise, rather than software expertise, will be paramount. Vertical solutions will cultivate talent with frontline expertise in deployment engineering, implementation, and change management.

A leading investment bank (one of the top three) that chose to fully deploy Hebbia aptly stated: the reason they don’t partner with a major model lab is that "we have to explain to their team what a CIM (Confidential Information Memorandum) is." Claude and GPT certainly understand this domain, but the teams responsible for implementation do not…

This difference determines everything.

7. No Prompt

Personal AI responds to human prompts.

Institutional AI takes proactive action without needing prompts.

There is much discussion about communication between Agents, and about whether future enterprises and institutions will still need humans.

But a better question is: will future AI Agents still need prompts?

Writing prompts for AGI is akin to connecting an electric motor to a hand loom. It is fundamentally and irreversibly limited by the weakest link in the organizational supply chain—ourselves. Humans do not inherently know what the right questions to ask are, let alone when to ask them.

The most valuable work that AI can do is the work that no one thought to ask about. AI should find risks that no one has discovered, counterparties that no one has thought of, and sales pipelines that no one knows exist.

This will completely open up the boundaries of AI use cases.

A system that requires no prompts continuously monitors the data streams of an entire portfolio. It finds that the operating capital cycle of a portfolio company has been quietly deteriorating for three consecutive months and cross-references this with covenant terms in the credit agreements, notifying the operating partners before anyone in the fund opens that PDF.
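The working-capital example can be reduced to a tiny illustrative trigger. This is a hypothetical sketch of the alert condition only (a real system would read live portfolio data feeds and parse the credit agreements themselves); the function name, window length, and figures are assumptions.

```python
def working_capital_alert(history: list[float], months: int = 3) -> bool:
    """Fire when the metric has declined for `months` consecutive periods."""
    if len(history) < months + 1:
        return False  # not enough history to establish a trend
    recent = history[-(months + 1):]
    # Strictly decreasing across every adjacent pair in the window.
    return all(later < earlier for earlier, later in zip(recent, recent[1:]))


# Working capital (in $M) quietly deteriorating for three straight months.
alert = working_capital_alert([12.0, 11.4, 10.9, 10.1])
```

The point of the sketch is that no one asked a question: the condition runs continuously against the data stream, and the notification arrives before anyone opens the PDF.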

When you no longer need humans to write prompts for AI, new interfaces and new ways of working will emerge. We at @Hebbia have strong thoughts on this. More to come.

Conclusion

None of the above denies the value of chatbots, Agents, and personal AI.

Personal AI will be the vehicle through which most businesses globally first experience the transformative magic of AI. Driving usage and ease of use is the critical first step in the change management required to build an AI-first economy.

At the same time, the demand for institutional intelligence is clear, urgent, and immense.

In the future, every organization will have a chatbot from a major model lab. Every organization will also have institutional AI specifically tailored for domain-specific issues—while personal AI will use institutional AI as the most critical tool in its toolbox.

The "better integration" of institutional AI and personal AI is an inevitable trend.

But remember the lessons from the 1890s textile mills. The first factories to electrify lost to those that redesigned their workshops.

We already have electricity. It's time to redesign our factories.

Thanks to @aleximm and @WillManidis for the review, and to Will's "tool-shaped objects" for the inspiration behind this article.
