Written by: Xiaobing, Deep Tide TechFlow
In the autumn of 2023, OpenAI's Chief Scientist Ilya Sutskever sat in front of his computer, finishing a 70-page document.
The document was compiled from Slack messages, HR communication records, and internal meeting minutes, all to answer a single question: can Sam Altman, the man in charge of what may be the most dangerous technology in human history, actually be trusted?
Sutskever's answer was written on the first line of the document's first page, under a list heading that read: "Sam exhibits a consistent pattern of behavior..."
The first item: lying.
Today, two and a half years later, investigative journalists Ronan Farrow and Andrew Marantz have published a lengthy report in The New Yorker. They interviewed more than 100 people involved, obtaining previously unreleased internal memos as well as over 200 pages of private notes kept by Anthropic founder Dario Amodei during his time at OpenAI. The story these documents piece together is far uglier than the "palace intrigue" of 2023: how OpenAI turned from a non-profit created to keep humanity safe into a commercial machine, with nearly every safety barrier dismantled by the same individual.
Amodei's conclusion in the notes is even more straightforward: "The problem with OpenAI is Sam himself."
OpenAI's "Original Sin" Setup
To understand why this report matters, one must first understand how unusual a company OpenAI is.
In 2015, Altman and a group of Silicon Valley elites did something almost unprecedented in commercial history: they set out to build what might be the most powerful technology ever created inside a non-profit organization. The board's duty was stated plainly: safety trumped the company's success, and even its survival. In plain terms, if OpenAI's AI ever became dangerous, the board was obligated to shut the company down itself.
The entire structure is based on one assumption: the person in charge of AGI must be extremely honest.
What if that assumption is wrong?
The core bombshell of the report is that 70-page document. Sutskever avoids office politics; he is one of the world's top AI scientists. But by 2023, he had grown increasingly certain of one thing: Altman was repeatedly lying to executives and the board.
A specific example: in December 2022, Altman assured the board that multiple features of the soon-to-be-released GPT-4 had passed safety review. Board member Helen Toner asked to see the approval documents, only to find that two of the most controversial features (user-defined fine-tuning and personal-assistant deployment) had never been approved by the safety panel at all.
An even more outrageous incident occurred in India. An employee reported a violation to another board member: Microsoft had prematurely released an early version of ChatGPT in India without completing the required safety review.
Sutskever's memo recorded another incident: Altman once told then-CTO Mira Murati that the safety approval process didn't matter much because the company's legal counsel had already signed off. When Murati went to the legal counsel to confirm, the reply was: "I don't know where Sam got that impression."
Amodei's 200 Pages of Private Notes
If Sutskever's document reads like a prosecutor's indictment, Amodei's 200-plus pages of notes read more like the diary of an eyewitness at the scene of the crime.
During his years as safety lead at OpenAI, Amodei watched the company retreat step by step under commercial pressure. He recorded a key detail about the 2019 Microsoft investment deal: he had written a "merge and assist" clause into OpenAI's charter, meaning that if another company found a safer path to AGI, OpenAI should stop competing and assist that company instead. This was the safety assurance he valued most in the entire transaction.
As signing day approached, Amodei discovered something: Microsoft had obtained veto power over this clause. What did that mean? Even if a competitor one day found a safer path, Microsoft could block OpenAI from honoring its obligation to assist with a single word. The clause remained on paper, but from the day of signing it was worthless.
Amodei later left OpenAI and founded Anthropic. The competition between the two companies is, at bottom, a dispute over how AI should be developed.
The Missing 20% Compute Commitment
One detail in the report, concerning OpenAI's "superalignment" team, is chilling.
In mid-2023, Altman emailed a PhD student researching "deceptive alignment" (AI that behaves well in tests but does its own thing once deployed), saying he was deeply concerned about the problem and was considering establishing a $1 billion global research prize. Encouraged, the student took leave from his program and joined OpenAI.
Then Altman changed his mind: there would be no external prize; instead, the company would establish an internal "superalignment" team. OpenAI announced with great fanfare that it would dedicate 20% of the compute it had secured to date to the team, a commitment potentially worth more than $1 billion. The announcement struck a deadly serious tone, warning that unsolved alignment problems could lead to "the disempowerment of humanity or even human extinction."
Jan Leike, appointed to lead this team, later told reporters that this commitment itself was a very effective "talent retention tool."
The reality? Four people who worked on or closely with the team said its actual allocation was only 1% to 2% of the company's total computational power, and on the oldest hardware. The team was later disbanded, its mission unfulfilled.
When the reporters asked to interview the OpenAI staff responsible for "existential safety" research, the company's PR response was almost comical: "That's not a... real thing."
Altman himself appeared unbothered. He told the reporters that his "intuition does not align well with a lot of traditional AI safety thinking," though OpenAI would still pursue "safety projects, or at least projects related to safety."
The Marginalized CFO and the Looming IPO
The New Yorker report was only half of the day's bad news. The same day, The Information broke another bombshell: a serious rift between OpenAI CFO Sarah Friar and Altman.
Friar had privately told colleagues that she felt OpenAI was not ready to go public this year, for two reasons: the procedural and organizational work still to be done was too great, and the financial risk from the $600 billion in computational power spending Altman had committed to over five years was too high. She was not even certain that OpenAI's revenue growth could support those commitments.
But Altman wanted to push for an IPO in the fourth quarter of this year.
Stranger still, Friar no longer reports directly to Altman. Since August 2025 she has reported to Fidji Simo, CEO of OpenAI's applications business. And Simo went on sick leave just last week. Consider the picture: a company racing toward an IPO, a fundamental disagreement between its CEO and CFO, a CFO who does not report to the CEO, and a CFO's direct superior who is on leave.
Even executives inside Microsoft have lost patience, saying Altman "distorts facts, goes back on his word, and constantly overturns agreements." One Microsoft executive went further: "I think there is a significant chance he will ultimately be remembered as a con artist on the level of Bernie Madoff or SBF."
Altman's "Two-Faced" Portrait
A former OpenAI board member described two of Altman's traits to the reporters, in what may be the harshest character sketch in the entire report.
Altman, the board member said, has an extremely rare combination of traits: in every face-to-face interaction he has a strong desire to please and be liked, and at the same time he shows a nearly sociopathic indifference to the consequences of deceiving others.
Having both traits in one person is extremely rare. But for a salesman, this is the perfect gift.
There’s a metaphor in the report that sums it up well: Steve Jobs was known for his "reality distortion field," his ability to make the entire world believe in his vision. But even Jobs never told customers, "If you don’t buy my MP3 player, the person you love will die."
Altman has said similar things regarding AI.
Why One CEO's Integrity Is Everyone's Risk
If Altman were merely the CEO of an ordinary tech company, these accusations would amount to little more than juicy business gossip. But OpenAI is not ordinary.
By its own account, it is developing what may be the most powerful technology in human history: technology that can reshape the global economy and labor market (OpenAI itself just published a policy white paper on AI-induced unemployment), and that could also be used to create large-scale biological weapons or launch cyberattacks.
The safety barriers have become mere formalities. The founding non-profit mission has given way to the IPO rush. Both the former chief scientist and the former safety lead have deemed the CEO untrustworthy. Partners have compared him to SBF. Under these circumstances, what right does this CEO have to unilaterally decide when to release an AI model that could change the fate of humanity?
Gary Marcus, the NYU professor emeritus and longtime AI safety advocate, wrote a line after reading the report: if a future OpenAI model could create large-scale biological weapons or launch catastrophic cyberattacks, would you really feel comfortable letting Altman decide whether to release it?
OpenAI's response to The New Yorker was terse: "Most of this article is a rehash of previously reported events, pieced together from anonymous claims and selective anecdotes, from sources clearly motivated by personal agendas."
The response is classic Altman: it addresses no specific accusation, does not deny the authenticity of the memos, and questions only the sources' motives.
On the Corpse of a Non-Profit, a Money Tree Has Grown
OpenAI's decade can be summarized in an outline like this:
A group of idealists worried about AI risk created a mission-driven non-profit. The organization made extraordinary technological breakthroughs. The breakthroughs attracted massive capital. The capital demanded returns. The mission began to yield. The safety team was disbanded. Those who questioned were purged. The non-profit structure was converted into a for-profit entity. The board that once had the authority to shut the company down is now filled with the CEO's allies. And the company that once promised 20% of its computational power to safeguard humanity now has PR staff saying, "That's not a... real thing."
The protagonist of this story has been given the same label by more than a hundred firsthand witnesses: "unconstrained by the truth."
He is preparing to take this company public, with a valuation exceeding $850 billion.
Information in this article is compiled from public reports by various media outlets, including The New Yorker, Semafor, Tech Brew, Gizmodo, Business Insider, and The Information.