
David Sacks | Aug 09, 2025 18:14
A BEST CASE SCENARIO FOR AI?
The Doomer narratives were wrong. Predicated on a “rapid take-off” to AGI, they predicted that the leading AI model would use its intelligence to self-improve, leave the others in the dust, and quickly achieve a godlike superintelligence. Instead, we are seeing the opposite:
— the leading models are clustering around similar performance benchmarks;
— model companies continue to leapfrog each other with their latest versions (which shouldn’t be possible if any one of them had achieved rapid take-off);
— models are developing areas of competitive advantage, becoming increasingly specialized in personality, modes, coding, and math, as opposed to one model becoming all-knowing.
None of this is to gainsay the progress. We are seeing strong improvement in quality, usability, and price/performance across the top model companies. This is the stuff of great engineering and should be celebrated. It’s just not the stuff of apocalyptic pronouncements. Oppenheimer has left the building.
The AI race is highly dynamic, so this could change. But right now the situation is Goldilocks:
— We have 5 major American companies vigorously competing on frontier models. This brings out the best in everyone and helps America win the AI race. As @BalajiS has written: “We have many models from many factions that have all converged on similar capabilities, rather than a huge lead between the best model and the rest. So we should expect a balance of power between various human/AI fusions rather than a single dominant AGI that will turn us all into paperclips/pillars of salt.”
— So far, we have avoided a monopolistic outcome that vests all power and control in a single entity. In my view, the most likely dystopian outcome with AI is a marriage of corporate and state power similar to what we saw exposed in the Twitter Files, where “Trust & Safety” gets weaponized into government censorship and control. At least when you have multiple strong private sector players, that gets harder. By contrast, winner-take-all dynamics are more likely to produce Orwellian outcomes.
— There is likely to be a major role for open source. These models excel at providing 80-90% of the capability at 10-20% of the cost. This tradeoff will be highly attractive to customers who value customization, control, and cost over frontier capabilities. China has gone all-in on open source, so it would be good to see more American companies competing in this area, as OpenAI just did. (Meta also deserves credit.)
— There is likely to be a division of labor between generalized foundation models and specific verticalized applications. Instead of a single superintelligence capturing all the value, we are likely to see numerous agentic applications solving “last mile” problems. This is great news for the startup ecosystem.
— There is also an increasingly clear division of labor between humans and AI. Despite all the wondrous progress, AI models are still at zero in terms of setting their own objective function. Models need context, they must be heavily prompted, the output must be verified, and this process must be repeated iteratively to achieve meaningful business value. This is why Balaji has said that AI is not end-to-end but middle-to-middle. This means that apocalyptic predictions of job loss are as overhyped as AGI itself. Instead, the truism that “you’re not going to lose your job to AI but to someone who uses AI better than you” is holding up well.
In summary, the latest releases of AI models show that model capabilities are more decentralized than many predicted. While there is no guarantee that this continues — there is always the potential for the market to consolidate around a small number of players once the investment super-cycle ends — the current state of vigorous competition is healthy. It propels innovation forward, helps America win the AI race, and avoids centralized control. This is good news that the Doomers did not expect.