The New Yorker’s In-Depth Investigation Decoded: Why Do OpenAI Insiders Believe Altman Is Untrustworthy?
In the autumn of 2023, OpenAI’s Chief Scientist Ilya Sutskever sat at his computer and completed a 70-page document.
This document was compiled from Slack message logs, HR communication archives, and internal meeting minutes, all to answer one question: Can Sam Altman, the man in charge of what might be the most dangerous technology in human history, be trusted?
Sutskever’s answer appeared at the top of the document’s first page, under the list heading “Sam exhibits a consistent pattern of behavior…”
First item: Lying.
Two and a half years later, investigative journalists Ronan Farrow and Andrew Marantz published a sweeping report in *The New Yorker*. They interviewed more than 100 people involved, obtained previously unpublished internal memos, and gained access to over 200 pages of private notes that Anthropic founder Dario Amodei kept during his time at OpenAI. The story these documents piece together is far uglier than the 2023 “palace intrigue”: how OpenAI, a nonprofit founded in the name of humanity’s safety, was transformed step by step into a commercial machine, with nearly every safety guardrail dismantled by the same person.
Amodei’s conclusion in his notes was more blunt: “The problem at OpenAI is Sam himself.”
OpenAI’s “Original Sin”: The Founding Structure
To understand the weight of this report, one must first clarify how unique OpenAI is as a company.
In 2015, Altman and a group of Silicon Valley elites did something almost unprecedented in business history: they used a nonprofit organization to develop what might be the most powerful technology in human history. The board’s duty was written down explicitly: safety takes precedence over the company’s success, even over the company’s survival. In plain terms, if OpenAI’s AI ever became dangerous, the board was obligated to shut the company down itself.
The entire structure bet on one assumption: the person in charge of AGI must be an extremely honest person.
What if the bet was wrong?
The core bombshell of the report is that 70-page document. Sutskever does not play office politics; he is one of the world’s top AI scientists. But by 2023 he had become increasingly convinced of one thing: Altman was consistently lying to executives and the board.
A specific example: in December 2022, Altman assured the board in a meeting that multiple features of the upcoming GPT-4 had passed safety review. Board member Helen Toner asked to see the approval documents, only to find that two of the most controversial features (user-customizable fine-tuning and personal-assistant deployment) had never been approved by the safety panel.
An even more outrageous incident happened in India. An employee reported a violation to another board member: Microsoft had launched an early version of ChatGPT in India ahead of schedule, before completing the required safety reviews.
Sutskever also recorded another incident in his memo: Altman once told then-CTO Mira Murati that the safety approval process wasn’t so important, and that the company’s general counsel had already signed off on it. When Murati went to the general counsel to confirm, he responded: “I don’t know where Sam got that impression.”
Amodei’s 200 Pages of Private Notes
Sutskever’s document reads like a prosecutor’s indictment. Amodei’s over 200 pages of notes are more like a diary written by a witness at the crime scene.
During his years as head of safety at OpenAI, Amodei watched the company retreat step by step under commercial pressure. In his notes, he recorded a key detail from the 2019 Microsoft investment deal: he had inserted a “merge and assist” clause into OpenAI’s charter, which said, in essence, that if another company found a safer path to AGI, OpenAI should stop competing and instead help that company. It was the safety guarantee he valued most in the entire deal.
Just before the deal was signed, Amodei discovered something: Microsoft had secured veto power over this clause. What does that mean? Even if one day a competitor truly found a better path, Microsoft could, with a single word, block OpenAI’s obligation to assist. The clause remained on paper, but from the day of signing, it was worthless.
Amodei later left OpenAI to found Anthropic. The competition between the two companies is rooted in a fundamental disagreement about “how AI should be developed.”
The Vanished 20% Compute Commitment
There’s a detail in the report that sends chills down the spine, concerning OpenAI’s “Superalignment Team.”
In mid-2023, Altman emailed a Berkeley PhD student researching “deceptive alignment” (where an AI behaves well during testing but pursues its own agenda after deployment), saying he was deeply concerned about the issue and was considering establishing a $10 billion global research prize. Encouraged, the student took a leave of absence and joined OpenAI.
Then Altman changed his mind: there would be no external prize. Instead, the company would set up an internal “Superalignment” team. OpenAI publicly announced it would allocate “20% of existing compute” to this team, a commitment potentially worth more than $10 billion. The announcement’s wording was deadly serious: if the alignment problem went unsolved, AGI could lead to “human disempowerment or even human extinction.”
Jan Leike, appointed to lead this team, later told reporters that the promise itself was a very effective “talent retention tool.”
The reality? Four people who worked on or closely with the team said the compute it actually received was only 1% to 2% of the company’s total, and on the oldest hardware at that. The team was later disbanded, its mission unfulfilled.
When reporters requested interviews with OpenAI personnel responsible for “existential safety” research, the company’s PR response was laughable: “That’s not a… thing that exists.”
Altman himself was candid. He told reporters that his “intuition doesn’t align with a lot of traditional AI safety stuff,” and OpenAI would still do “safety projects, or at least projects adjacent to safety.”
The Sidestepped CFO and the Impending IPO
*The New Yorker* report was only half of that day’s bad news. The Information broke another bombshell the same day: OpenAI’s CFO, Sarah Friar, was in serious disagreement with Altman.
Friar privately told colleagues she felt OpenAI was not ready to go public this year, for two reasons: the procedural and organizational work still to be done was enormous, and the financial risk of the $600 billion in compute spending Altman had promised over five years was too high. She was not even sure OpenAI’s revenue growth could support those commitments.
But Altman wanted to push for an IPO in the fourth quarter of this year.
Even more absurdly, Friar no longer reports directly to Altman. Since August 2025, she has reported to Fidji Simo, OpenAI’s CEO of Applications. And Simo went on medical leave just last week. Consider the situation: a company sprinting toward an IPO, a fundamental disagreement between the CEO and the CFO, a CFO who does not report to the CEO, and the CFO’s direct superior on leave.
Even executives at Microsoft have lost patience, saying Altman “distorts facts, goes back on his word, and constantly overturns agreements already reached.” One Microsoft executive went further: “I think there’s a non-zero chance he ends up being remembered as a Bernie Madoff or SBF-level fraudster.”
A Portrait of Altman’s “Two-Faced” Nature
A former OpenAI board member described two of Altman’s traits to reporters, in what may be the most scathing character sketch in the entire report.
This board member said Altman combines two traits: in every face-to-face interaction, an intense desire to please the other person and be liked by them; and, simultaneously, an almost sociopathic indifference to the consequences that deceiving others might bring.
Both traits appearing in one person is extremely rare. For a salesman, it is the perfect talent.
There’s a fitting analogy in the report: Jobs was famous for his “reality distortion field,” his ability to make the world believe his vision. But even Jobs never told customers, “If you don’t buy my MP3 player, the people you love will die.”
Altman has said similar things about AI.
Why a CEO’s Character Problem Is Everyone’s Risk
If Altman were just the CEO of an ordinary tech company, these allegations would at most be a juicy business gossip piece. But OpenAI is not ordinary.
According to its own statements, it is developing what might be the most powerful technology in human history. It could reshape the global economy and labor market (OpenAI itself just released a policy white paper on AI-induced unemployment), and it could also be used to create large-scale bioweapons or launch cyberattacks.
All the safety guardrails now exist in name only. The founders’ nonprofit mission has given way to an IPO sprint. The former chief scientist and the former head of safety both deem the CEO “untrustworthy.” Partners compare him to SBF. Under these conditions, on what basis does this one CEO get to unilaterally decide when to release AI models that could alter humanity’s fate?
Gary Marcus (professor emeritus at NYU and long-time AI-safety advocate) wrote a line after reading the report: if some future OpenAI model could create large-scale bioweapons or launch catastrophic cyberattacks, would you really be comfortable letting Altman alone decide whether to release it?
OpenAI’s response to *The New Yorker* was concise: “Much of this article recycles previously reported events, using anonymous claims and selective anecdotes, with sources clearly having personal agendas.”
A very Altman-style response: it addresses none of the specific allegations, does not deny the authenticity of the memos, and questions only the sources’ motives.
A Cash Cow Grows on the Corpse of a Nonprofit
OpenAI’s decade can be written as a story outline like this:
A group of idealists concerned about AI risks create a mission-driven nonprofit organization. The organization achieves extraordinary technological breakthroughs. The breakthroughs attract massive capital. Capital demands returns. The mission begins to give way. The safety team is disbanded. Those who question are purged. The nonprofit structure is changed to a for-profit entity. The board, once empowered to shut down the company, is now filled with the CEO’s allies. The company that once promised to dedicate 20% of its compute to safeguarding humanity now has PR personnel saying “that’s not a thing that exists.”
The protagonist of this story has been given the same label by over a hundred firsthand witnesses: “unconstrained by the truth.”
He is now preparing to take this company public, with a valuation exceeding $850 billion.
This article synthesizes information from public reports by *The New Yorker*, Semafor, Tech Brew, Gizmodo, Business Insider, The Information, and other media outlets.