
"The New Yorker" In-Depth Investigation Article Interpretation: Why do OpenAI Insiders Consider Altman Untrustworthy?


Original author: Xiao Bing, Deep Tide TechFlow

In the autumn of 2023, OpenAI's chief scientist Ilya Sutskever sat in front of a computer, completing a 70-page document.

This document compiled Slack message records, HR communication files, and internal meeting minutes, aimed solely at answering one question: Can Sam Altman, who oversees perhaps the most dangerous technology in human history, be trusted?

Sutskever's answer appeared on the first line of the document's first page, under a list titled "Sam exhibits a consistent pattern of behavior…"

First item: Lying.

Now, two and a half years later, journalists Ronan Farrow and Andrew Marantz have published an extensive investigative report in The New Yorker. They interviewed more than 100 people and obtained previously unreleased internal memos, along with more than 200 pages of private notes left by Anthropic founder Dario Amodei from his time at OpenAI. The story these documents piece together is far uglier than the "palace intrigue" of 2023: how OpenAI transformed from a nonprofit created for the safety of humanity into a commercial machine, with nearly every safety barrier dismantled by the same person.

Amodei's conclusion in the notes was more straightforward: "The problem with OpenAI is Sam himself."

The "Original Sin" of OpenAI

To understand the weight of this report, it is essential to clarify just how special OpenAI is.

In 2015, Altman and a group of Silicon Valley elites did something almost unprecedented in commercial history: they used a nonprofit organization to develop what might be the most powerful technology in human history. The board's responsibilities were clearly stated: safety takes precedence over the company's success, even over the company's survival. In plain terms, if one day OpenAI's AI becomes dangerous, the board has the obligation to shut down the company.

The entire structure relies on one assumption: the person who controls AGI must be extremely honest.

What if they were wrong?

The core bombshell of the report is that 70-page document. Sutskever is not one to play office politics; he is one of the world's top AI scientists. But by 2023, he had become increasingly convinced of one thing: Altman was repeatedly lying to executives and the board.

One specific example: in December 2022, Altman assured the board during a meeting that several features of the soon-to-be-released GPT-4 had passed safety review. Board member Helen Toner asked to see the approval documents and discovered that two of the most controversial features (user-customizable fine-tuning and personal-assistant deployment) had never been approved by the safety panel.

Something even more absurd happened in India. An employee reported a violation to another board member: Microsoft had released an early version of ChatGPT in India without completing the required safety review.

Sutskever recorded another incident in the memo: Altman had told then-CTO Mira Murati that the safety approval process did not matter much because the company's general counsel had already signed off. Murati went to the general counsel to confirm, and was told, "I don't know where Sam got that impression."

Amodei's 200 Pages of Private Notes

Sutskever's document reads like a prosecutor's indictment. Amodei's over 200 pages of notes feel more like a diary written by a witness at the crime scene.

During his years as safety lead at OpenAI, Amodei watched the company retreat step by step under commercial pressure. His notes record a key detail about the 2019 Microsoft investment deal: he had inserted a "merge and assist" clause into OpenAI's charter, meaning that if another company found a safer path to AGI, OpenAI should stop competing and help that company instead. This was the safety guarantee he valued most in the entire deal.

As the deal was about to be signed, Amodei discovered something: Microsoft had acquired veto power over this clause. What does that mean? Even if a competitor one day found a better route, Microsoft could single-handedly block OpenAI from fulfilling its obligation to assist. The clause remained on paper, but from the day of signing it was worthless.

Amodei later left OpenAI and founded Anthropic. The competition between the two companies fundamentally stems from this disagreement over how AI should be developed.

The Disappearing 20% Compute Commitment

One detail in the report about OpenAI's "superalignment team" is chilling.

In mid-2023, Altman emailed a PhD student researching "deceptive alignment" (AI that behaves well during testing but acts differently once deployed), saying he was deeply concerned about the problem and was considering establishing a $1 billion global research prize. Greatly encouraged, the student took a leave from his program and joined OpenAI.

Then Altman changed his mind: instead of an external prize, he set up a "superalignment team" inside the company. The company announced that it would allocate "20% of existing compute" to this team, potentially worth more than $1 billion. The announcement struck an extremely grave tone, stating that if the alignment problem was not solved, AGI could lead to "the disempowerment of humanity or even human extinction."

Jan Leike, who was appointed to lead this team, later told reporters that the commitment itself was a very effective "talent retention tool."

The reality? Four people who worked on or closely with the team said the compute actually allocated was only 1% to 2% of the company's total, and on the oldest hardware. The team was later dissolved, its mission unfulfilled.

When reporters requested to interview personnel responsible for "existential safety" research at OpenAI, the company's PR response was laughably absurd: "That's not something that... actually exists."

Altman himself was quite nonchalant. He told reporters that his "instinct doesn't align well with many traditional AI safety approaches," and that OpenAI would still engage in "safety projects, or at least projects related to safety."

The Marginalized CFO and the Upcoming IPO

The New Yorker's report was only half of that day's bad news. The same day, The Information dropped another bombshell: serious disagreements between OpenAI CFO Sarah Friar and Altman.

Friar privately told colleagues that she felt OpenAI was not ready for an IPO this year, for two reasons: too much procedural and organizational work remained, and the financial risks of Altman's promised $600 billion in compute spending over five years were too high. She was not even sure OpenAI's revenue growth could support those commitments.

But Altman wanted to push for the IPO in the fourth quarter of this year.

Even more outrageously, Friar no longer reports directly to Altman. Since August 2025 she has reported to Fidji Simo (CEO of OpenAI's applications business), who went on medical leave just last week. Consider the situation: a company racing toward an IPO, fundamental disagreements between the CEO and CFO, a CFO who does not report to the CEO, and the CFO's direct superior on medical leave.

Even executives at Microsoft have lost patience, saying that Altman "distorts facts, goes back on his word, and constantly overturns agreements." One Microsoft executive went so far as to say, "I think there is a certain probability he will ultimately be remembered as a fraud on the level of Bernie Madoff or SBF."

Altman's "Two-Faced" Portrait

A former OpenAI board member described two traits of Altman to reporters. This description might be the most incisive character sketch in the entire report.

This board member said Altman possesses an extremely rare combination of traits: in every face-to-face interaction, he has an intense desire to please and be liked; at the same time, he shows a near-sociopathic indifference to the consequences of deceiving others.

It is extremely rare for both traits to coexist in one person. For a salesman, it is the perfect talent.

One metaphor in the report captures this well: Steve Jobs was famous for his "reality distortion field," his ability to make the whole world believe in his vision. But even Jobs never told customers, "If you don't buy my MP3 player, the people you love will die."

Altman has said similar things regarding AI.

A CEO's Integrity Issue: Why It’s Everyone’s Risk

If Altman were merely the CEO of an ordinary tech company, these accusations would amount to juicy business gossip at most. But OpenAI is not ordinary.

By its own account, it is developing what could be the most powerful technology in human history: a technology that could reshape the global economy and labor market (OpenAI itself recently released a policy white paper on AI-driven unemployment), but could also be used to create large-scale biological weapons or launch cyberattacks.

All safety barriers have become a façade. The founders' nonprofit mission has given way to the IPO sprint. The former chief scientist and former safety lead both deemed the CEO "untrustworthy." Partners likened the CEO to SBF. In this context, what justification does this CEO have to unilaterally decide when to release an AI model that could alter the fate of humanity?

Gary Marcus (NYU AI professor and long-time AI safety advocate) wrote a line after reading the report: if a future OpenAI model could produce large-scale biological weapons or launch catastrophic cyber attacks, would you really feel comfortable letting Altman decide whether or not to deploy it?

OpenAI's response to The New Yorker was succinct: "Most of the content in this article is a rehash of already reported events, based on anonymous claims and selective anecdotes, with sources that clearly have personal agendas."

The style of this response is telling: no rebuttal of specific accusations, no denial of the memos' authenticity, only questions about the sources' motives.

A Money Tree Grows on the Corpse of a Nonprofit

The story of OpenAI's ten years can be outlined as follows:

A group of idealists worried about AI risk created a mission-driven nonprofit. The organization made extraordinary technological breakthroughs. The breakthroughs attracted massive capital. Capital demanded returns. The mission began to yield. The safety team was dissolved. Dissenters were purged. The nonprofit structure was converted into a for-profit entity. The board that once had the power to shut down the company is now filled with the CEO's allies. The company that once promised 20% of its compute to safeguard humanity now has PR staff saying, "That's not something that actually exists."

The protagonist of the story has received the same label from over a hundred witnesses: "Unconstrained by the truth."

He is preparing to take this company public, valued at over $850 billion.

This information is sourced from public reports by The New Yorker, Semafor, Tech Brew, Gizmodo, Business Insider, The Information, and other media outlets.
