The starting point was an employee memo obtained by The Information. In the letter, Dresser did three things simultaneously: praised the new Amazon partnership as having “staggering demand,” acknowledged that the Microsoft partnership “limits our ability to reach customers where they are,” and spent considerable space dissecting Anthropic’s revenue figures. The timing of this leak was precisely one week after Anthropic announced it had surpassed the $30 billion annualized revenue milestone.
Superficially an internal company communication, it was in essence a meticulously constructed information campaign. The most direct way to decipher it is to examine three dimensions separately (revenue accounting standards, the enterprise competitive landscape, and the compute arms race) and then place all three within the same structural picture of cloud partnerships.
Where Does the $8 Billion Accounting Gap Come From?
Anthropic reports $30 billion in annualized revenue; OpenAI says the actual figure is $22 billion. The $8 billion difference stems from the two companies making fundamentally different choices in revenue recognition standards.
Anthropic uses a gross accounting method: when an enterprise purchases Claude usage credits through AWS, Anthropic records the full amount of that payment as top-line revenue, treating the platform share paid to Amazon as a cost. OpenAI does the opposite, recording only the net amount it actually receives from Microsoft, with Microsoft’s share not appearing on the top line.

Both methods comply with U.S. Generally Accepted Accounting Principles (GAAP). Anthropic’s logic is that it is the “principal” in customer transactions, with cloud vendors merely being distribution channels. OpenAI’s logic is that it treats Microsoft as an “agent,” booking only the portion it actually receives. The root of the divergence is not about who is fabricating numbers, but about who more aggressively asserts their dominant position in the sales chain.
In the memo, Dresser wrote that Anthropic “uses accounting that makes its revenue numbers appear larger,” including booking the gross amounts of AWS's and Google's shares into its top-line revenue. The subtext is not hard to read: when Anthropic submits its S-1 filing to the SEC, auditors will rule on this standard and may require adjustments and disclosures to align the accounting. Converted to the same standard, Anthropic sits at $22 billion and OpenAI at $24 billion, and the lead flips.
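The principal-versus-agent distinction above reduces to simple arithmetic. The sketch below uses the article's round figures; the implied channel take rate is inferred from the $30 billion gross versus $22 billion net gap and is my assumption, not a disclosed number:

```python
# Sketch: gross (principal) vs. net (agent) revenue recognition.
# The ~27% channel take rate is inferred from the $8B gap, not disclosed.

def principal_view(customer_payments, channel_share):
    """Book the full customer payment as revenue; the platform's cut is a cost."""
    revenue = customer_payments
    cost_of_revenue = customer_payments * channel_share
    return revenue, cost_of_revenue

def agent_view(customer_payments, channel_share):
    """Book only the net amount received after the platform's cut."""
    return customer_payments * (1 - channel_share)

gross = 30.0               # $B, annualized run rate booked gross
implied_take = 8.0 / 30.0  # inferred blended channel share (~26.7%)

rev, cost = principal_view(gross, implied_take)
net = agent_view(gross, implied_take)
print(f"principal: revenue ${rev:.0f}B, channel cost ${cost:.0f}B")
print(f"agent:     revenue ${net:.0f}B")
```

Same cash, same customers: only the line on which the platform's cut appears changes, which is why both treatments can be GAAP-compliant.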
It must be noted that Anthropic’s revenue growth rate itself is already historic. According to data from Bloomberg and Sacra, its annualized revenue grew from approximately $9 billion at the end of Q4 2025 to the current $30 billion, more than tripling in less than five months. This is primarily driven by real customer procurement, not something explainable by accounting standard adjustments. The core of this accounting controversy is not that Anthropic is shrinking, but that OpenAI is wielding accounting standards as a knife to redraw the boundary of who leads.
Enterprise-Side Catch-up Is Faster Than Most Anticipated
Ramp, which tracks the actual AI spending behavior of thousands of enterprises on its platform, is a primary data source for judging real enterprise choices.
Ramp AI Index data for April: Anthropic’s share among enterprise paying customers rose to 30.6%, against OpenAI’s 35.2%. The gap narrowed from 11 percentage points in February to 4.6 percentage points. Anthropic’s share has grown by an average of 6.3 percentage points per month over the past two months (a record pace for this metric); if that pace holds, it would overtake OpenAI on this metric within approximately two months.
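The overtake estimate is a linear extrapolation, and it is worth seeing how rough it is. The sketch below extrapolates the gap itself at its February-to-April closing rate; adoption curves rarely stay linear, and the result is sensitive to whether OpenAI's share also keeps moving, so treat it as an order-of-magnitude check rather than a forecast:

```python
# Back-of-the-envelope extrapolation of the Ramp AI Index share gap.
# Assumes the Feb->Apr closing rate continues unchanged (a strong assumption).

gap_feb = 11.0   # percentage points, OpenAI's lead in February
gap_apr = 4.6    # percentage points, OpenAI's lead in April
months = 2       # elapsed months between the two readings

closing_rate = (gap_feb - gap_apr) / months   # pp closed per month
months_to_parity = gap_apr / closing_rate     # months until the gap hits zero

print(f"gap closing at ~{closing_rate:.1f} pp/month")
print(f"parity in ~{months_to_parity:.1f} months under linear extrapolation")
```

On these inputs the crossover lands within one to two months, which is the order of magnitude behind the article's rough estimate.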

More noteworthy are the structural signals. In three high-purchasing-power industries, Anthropic’s lead has become a reality: Information Technology/Software (63% vs. 54%), Financial Services (52% vs. 46%), and Professional Services (47% vs. 44%), all surpassing OpenAI. These three industries happen to be the areas where enterprise AI budgets are most concentrated and procurement decisions are most sophisticated. This means that the companies with the greatest say in the AI purchasing chain have begun collectively tilting towards Anthropic.
In the memo, Dresser unusually admitted that Anthropic “holds a significant lead among enterprise customers,” citing programming capabilities. Coming from within OpenAI, this statement carries a weight entirely different from external evaluations—it’s a company internally telling its own employees that the opponent is winning on the core battlefield. She simultaneously added a warning: “You do not want to be a single-product company in a platform war.” This was a reminder to employees that if Claude’s advantage in programming cannot extend to the platform layer, it is ultimately just an entry ticket, not a boarding pass.
Compute Gap: Similar Today, Fourfold by 2030
Compute capacity is the most difficult competitive dimension for AI companies to close in the short term, as its construction cycle is measured in years and its capital threshold in tens of billions.
The current numbers don’t show a huge gap: OpenAI has about 1.9 gigawatts (GW), Anthropic about 1.4 GW, a difference of roughly 35%. In the memo, Dresser described Anthropic as “operating on a meaningfully smaller curve,” but this description isn’t an exaggeration given the current capacity comparison. The gap is real, just not yet decisive.
The real fork comes after 2027. OpenAI plans to reach 30 GW of compute by 2030, backed by a $30 billion five-year cloud computing contract with Oracle, the entire Stargate infrastructure project, and a total construction commitment of $1.4 trillion.
Anthropic’s path relies on a Broadcom custom chip agreement with a capacity of 3.5 GW, deployed via Google Cloud, effective from 2027. Combined with existing training clusters on AWS, the target by the end of 2027 is 7-8 GW.

Even if Anthropic fully delivers on its 2027 target, there remains a fourfold gap between it and OpenAI’s 2030 plan. This chasm is not technically insurmountable; if improvements in model efficiency can generate more output per unit of compute, Anthropic could build sufficiently good products with less compute.
But it must do so under the premise that Claude’s momentum in the enterprise sector continues, using sustained subscription revenue to support its compute procurement costs: according to Sacra estimates, Anthropic will pay cloud partners approximately $1.9 billion this year, rising to about $6.4 billion in 2027.
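The capacity arithmetic above can be checked in a few lines. The figures are the article's; the 7.5 GW value is the midpoint of Anthropic's stated 7-8 GW 2027 target and is my assumption:

```python
# Rough compute-capacity ratios from the article's figures (GW).
# 7.5 GW is the midpoint of Anthropic's 7-8 GW 2027 target (assumption).

openai_now, anthropic_now = 1.9, 1.4   # current installed capacity
openai_2030 = 30.0                     # OpenAI's 2030 plan
anthropic_2027 = (7.0 + 8.0) / 2       # midpoint of Anthropic's 2027 target

print(f"today: OpenAI at {openai_now / anthropic_now:.2f}x Anthropic")
print(f"2030 plan vs. 2027 target: {openai_2030 / anthropic_2027:.1f}x")
```

The ratio today is modest (roughly 1.4x), while the planned trajectories diverge to the fourfold gap the article describes, which is why the fork matters more than the current snapshot.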
Amazon, Betting on Two Competitors Simultaneously
The most intriguing sentence in this memo is Dresser’s direct characterization of the Microsoft partnership, writing that it “also limits our ability to reach customers where they are.”
OpenAI’s pivot towards Amazon is already very clear: according to CNBC, in February of this year, Amazon announced a $50 billion investment in OpenAI, while securing exclusive third-party cloud distribution rights for OpenAI’s enterprise Agent management platform, Frontier.
This is an active switch from the Microsoft orbit to the Amazon orbit. The logic behind it is straightforward: many enterprise customers’ AI infrastructure is already built on AWS’s Bedrock platform, and Microsoft’s exclusivity clauses make it difficult for OpenAI to sell directly there.
But the other side of Amazon’s role in this competition is equally noteworthy: it is currently Anthropic’s largest cloud infrastructure partner and strategic investor, with cumulative investments of $8 billion. Their collaborative Project Rainier cluster deploys approximately 500,000 Trainium 2 chips. Amazon’s total bet in the entire AI race amounts to $58 billion, flowing simultaneously to two opponents currently clashing head-on in the enterprise market.

This isn’t just a hyperscale cloud vendor diversifying its bets; it’s a more precise structure: Amazon is both Anthropic’s “strategic ally and largest backer” and the new cloud foundation OpenAI is using to “replace Microsoft.”
When the two companies compete for the same pool of enterprise customers, the channel they are fighting over happens to be Amazon’s Bedrock platform, which simultaneously distributes both companies’ models. Whichever achieves higher conversion rates on Bedrock, Amazon profits, but OpenAI and Anthropic lose to each other.
Under pressure from the ongoing erosion of enterprise market share and the structural cracks in the Microsoft partnership, OpenAI chose to rebuild its narrative through a carefully calculated numbers war, while leveraging Amazon to reconfigure its distribution channels. Taken apart, these three sets of numbers reveal a competition more complex than either side wants you to see.
This article is sourced from the internet: A four-page internal letter, what is OpenAI’s game plan?