
Why is OpenAI Catching Up to Claude Code Instead?

Analysis · Published 12 hours ago by Wyatt

Original Author: Maxwell Zeff, Wired

Original Compilation: Peggy, BlockBeats

Editor’s Note: In the current era of rapidly rising AI coding agents, OpenAI, which once led the generative AI wave with ChatGPT, has unexpectedly become a “chaser” in this critical race. In stark contrast, Anthropic, founded by former OpenAI members, has rapidly gained popularity in the developer community and enterprise market with Claude Code, becoming one of the key leaders in the AI programming tools field.

Through interviews with OpenAI executives, engineers, and multiple developers, this article pieces together the real story behind this competition: from the early disbanding of the OpenAI Codex team and the shift of resources towards ChatGPT and multimodal models, to the reintegration of internal teams and the accelerated launch of AI programming products, OpenAI is undergoing a transition from strategic neglect to full-scale catch-up. In a sense, this is not a lag in technical capability but a misalignment in strategic rhythm: the explosion of ChatGPT changed the company’s priorities, the partnership with Microsoft constrained product pathways, and Anthropic bet on the AI programming track earlier.

Behind this race, deeper questions are gradually emerging: as AI agents begin to take on more and more cognitive work, software development processes, and perhaps white-collar labor itself, may be redefined.

The following is the original text:

OpenAI CEO Sam Altman sits with his legs crossed on his office chair, looking up at the ceiling as if pondering an answer not yet formed. To some extent, the surroundings encourage it.

OpenAI’s new headquarters in San Francisco’s Mission Bay is a modern structure of glass and light wood, its atmosphere almost that of a “tech temple.” On the display shelf behind the reception desk are brochures introducing the “Eras of AI,” seemingly depicting a path to technological revelation. The stairwell walls are covered with posters marking milestones in AI development, one of which records a moment when thousands of viewers watched via livestream as a machine defeated a top-tier esports team in a *Dota 2* match. In the corridors, researchers come and go wearing team merch with slogans; one shirt reads: “Good research takes time.” Ideally, of course, not too long.

We are sitting in a huge conference room. The question I posed to Altman concerns the AI programming revolution sweeping the industry, and why OpenAI seems not to be leading this wave.

Today, millions of software engineers have already begun handing over parts of their programming work to AI, making many in Silicon Valley truly face a reality for the first time: automation might touch their own jobs. Coding agents have thus become one of the few application scenarios where companies are willing to pay a premium for AI. Logically, such a moment could, and perhaps should, have become the next “victory moment” on OpenAI’s stairwell posters. But now, the name dominating the headlines is not theirs.

The company’s rival is Anthropic, an AI company founded by former OpenAI members. With its coding agent product Claude Code, Anthropic has experienced explosive growth. The company disclosed in February that the product already contributes nearly one-fifth of its business scale, corresponding to an annualized revenue exceeding $25 billion. In contrast, according to a person familiar with the matter, as of the end of January, OpenAI’s own programming product, OpenAI Codex, had an annualized revenue of just over $10 billion.

The question is: Why is OpenAI lagging behind in this AI programming race?

“First-mover advantage is incredibly valuable,” Sam Altman says after a moment of thought. “We’ve experienced that with ChatGPT.”

However, in his view, now is precisely the time for OpenAI to go all-in on AI programming. He believes the company’s existing model capabilities are powerful enough to support highly complex coding agents. Of course, such capability is no accident; the company has invested tens of billions of dollars in model training for this purpose.

“This is going to be a huge business,” Altman says, “not just for the economic value it brings itself, but for the general productivity that programming unlocks.” He pauses, then adds: “I rarely use this word lightly, but I think this is likely one of those multi-trillion-dollar markets.”

Going further, he believes OpenAI Codex might be the “most likely path” to Artificial General Intelligence (AGI). According to OpenAI’s definition, AGI is an AI system capable of surpassing human performance in the vast majority of economically valuable work.


Sam Altman, CEO of OpenAI. Photo: Mark Jayson Quines.

However, despite Altman’s confident assessment delivered with an unhurried demeanor, the internal reality within the company over the past few years has been far more complex. To understand the fuller internal story, I interviewed over 30 people familiar with the matter, including current OpenAI executives and employees who spoke under company approval, as well as some former employees who described the company’s internal workings under conditions of anonymity. Piecing together these accounts reveals an uncommon situation: OpenAI is scrambling to catch up.

Rewind to 2021. Back then, Altman and other OpenAI executives invited *WIRED* reporter Steven Levy to their early office in San Francisco’s Mission District to witness a demo of a new technology. It was a project derived from GPT-3, trained on a massive amount of open-source code from GitHub.

In the live demo, the executives showed how this tool, named OpenAI Codex, could take natural language instructions and generate simple code snippets.

“It can actually perform actions for you in the computer world,” explained OpenAI President and co-founder Greg Brockman at the time. “What you have is a system that can truly execute commands.” Even then, OpenAI researchers widely believed Codex would be key technology for building a “super assistant.”

During that period, Altman and Brockman’s schedules were almost filled with meetings with Microsoft—the software giant being OpenAI’s largest investor. Microsoft planned to use Codex to power one of its first commercialized AI products: a code completion tool called GitHub Copilot that could be embedded directly into the development environments programmers use daily.

An early OpenAI employee recalls that at that stage, Codex “basically only did autocomplete.” But Microsoft executives still saw it as a significant signal of the AI era’s arrival.

When GitHub Copilot officially launched publicly in June 2022, it attracted hundreds of thousands of users within just a few months.


Greg Brockman, President of OpenAI. Photo: Mark Jayson Quines.

The OpenAI team initially responsible for Codex was subsequently reassigned to other projects. An early employee recalls the company’s judgment at the time: future models themselves would inherently possess programming capabilities, so there was no need to maintain a dedicated Codex project team long-term. Some engineers were moved to work on DALL-E 2, while others shifted to training GPT-4. At the time, this seemed the key path to bring OpenAI closer to AGI.

Then, in November 2022, ChatGPT launched, gaining over 100 million users within two months. Virtually every other project within the company was forced to pause. For the next few years, OpenAI effectively had no dedicated team for AI programming products. A former member who worked on the Codex project said that after ChatGPT’s popularity, AI programming no longer seemed to fit within the company’s new “consumer product-first” strategy. Meanwhile, the prevailing industry view was that this field was already “covered” by GitHub Copilot, which was essentially Microsoft’s turf. OpenAI was mainly just providing the underlying model support.

Thus, in 2023 and 2024, OpenAI’s resources flowed more towards multimodal AI models and intelligent agents. These systems were designed to simultaneously understand text, images, video, and audio, and operate cursors and keyboards like humans. This direction seemed more aligned with industry trends at the time: Midjourney’s image generation models rapidly gained popularity on social networks, and the industry widely believed that large language models must be able to “see” and “hear” the world to truly advance towards higher levels of intelligence.

In contrast, Anthropic chose a different path. While the company was also developing chatbots and multimodal models, it seemed to recognize the potential of programming capabilities earlier. In a recent podcast, Brockman also acknowledged that Anthropic was “highly focused on programming capabilities” from a very early stage. He noted that Anthropic trained its models not only on complex programming problems from academic competitions but also on a large amount of “messy” code problems from real code repositories.

“That’s a lesson we learned later,” Brockman said.

In early 2024, Anthropic began using this real repository data to train Claude 3.5 Sonnet. When the model was released in June, many users were impressed by its programming capabilities.

This performance was particularly validated by a startup named Cursor. Founded by a group of people in their twenties, the company developed an AI programming tool that allows developers to describe requirements in natural language, with the AI directly modifying code. When Cursor integrated Anthropic’s new model, its user base grew rapidly, according to a person close to the company.

A few months later, Anthropic began internally testing its own coding agent product, Claude Code.

As Cursor’s popularity grew, OpenAI once attempted to acquire the startup. But according to multiple sources close to the company, Cursor’s founding team rejected the proposal before negotiations deepened. They believed the AI programming industry had immense potential and wished to remain independent.


Andrey Mishchenko, OpenAI Codex Research Lead. Photo: Mark Jayson Quines.

At the time, OpenAI was training its first so-called “reasoning model,” OpenAI o1. This type of model can perform step-by-step reasoning on a problem before giving an answer. OpenAI stated at its release that the model performed particularly well in “accurately generating and debugging complex code.”

Mishchenko explains that a key reason for the marked progress in AI models’ programming capabilities is that programming is a “verifiable task.” Code either runs or it doesn’t, providing very clear feedback signals to the model. When errors occur, the system quickly knows where the problem lies. OpenAI leveraged this feedback loop to continuously train o1 on increasingly complex programming problems.

“Without the ability to freely explore codebases, implement modifications, and test its own results—these are all part of ‘reasoning’ capabilities—today’s coding agents wouldn’t be at the level they are,” he says.
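The feedback loop Mishchenko describes can be sketched in a few lines: run a candidate solution together with its tests in a subprocess and hand back a binary reward. This is an illustrative sketch, not OpenAI's training code; the `verify` helper and the toy solution and tests are invented for the example.

```python
import subprocess
import sys
import tempfile

def verify(candidate_code: str, test_code: str, timeout: int = 10) -> int:
    """Reward signal for a 'verifiable task': 1 if the candidate passes its
    tests, 0 otherwise. Code either runs or it doesn't."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_code + "\n\n" + test_code)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path], capture_output=True, timeout=timeout
        )
        return 1 if result.returncode == 0 else 0
    except subprocess.TimeoutExpired:
        return 0  # hung code is treated as a failure

# A correct solution earns reward 1; a buggy one earns 0.
tests = "assert add(2, 3) == 5"
print(verify("def add(a, b):\n    return a + b", tests))  # → 1
print(verify("def add(a, b):\n    return a - b", tests))  # → 0
```

A training loop can then use this 0/1 signal as the reward when sampling solutions from a model, which is why, as the article notes, errors immediately tell the system where the problem lies.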

By December 2024, multiple small teams within OpenAI had begun focusing on AI coding agents. One team was co-led by Mishchenko and Thibault Sottiaux. Sottiaux, formerly of Google DeepMind, is now OpenAI’s Codex lead.

Initially, their interest in coding agents stemmed primarily from internal R&D needs, hoping to use AI to automate large amounts of repetitive engineering work, such as managing model training tasks and monitoring GPU cluster operations.

A parallel effort was led by Alexander Embiricos. Previously responsible for OpenAI’s multimodal agent project, he now serves as Codex’s product lead. Embiricos had developed a demo project called Jam, which spread rapidly within the company.


Thibault Sottiaux, OpenAI Codex Lead. Photo: Mark Jayson Quines.

Unlike agents that control a computer via mouse and keyboard, Jam could directly access the computer’s command line. The 2021 Codex demo only showed AI generating code for humans to run manually; Embiricos’s version could execute that code itself. He recalls being almost stunned as he watched a webpage on his laptop refresh in real time, logging Jam’s actions as they happened.

“For a while, I thought multimodal interaction might be the path to our mission. Like humans sharing screens and working alongside AI all day,” Embiricos says. “Then it suddenly became very clear: perhaps giving models direct programmatic access to the computer is the real way to achieve that.”

These scattered projects took months to gradually converge into a unified direction. By early 2025, when OpenAI completed training OpenAI o3—a model further optimized for programming tasks compared to OpenAI o1—the company finally had the technical foundation to build a true AI programming product. But by then, Anthropic’s Claude Code was already preparing for public release.

Before Claude Code’s launch (released as a “limited research preview” in February 2025, fully launched in May), the dominant paradigm in AI programming was still called “vibe coding.” Developers advanced projects with AI-assisted tools, with humans steering the direction and AI filling in specific implementations. Such tools had already attracted hundreds of millions in investment.

But Anthropic’s new product changed this model. Like the Jam demo, Claude Code could run directly via a computer’s command line, meaning it could access all of a developer’s files and applications. Programming was no longer just “AI-assisted”; developers could hand entire tasks over to an AI agent to complete.

Faced with this shift, OpenAI began accelerating the launch of a competing product. Sottiaux recalls forming a “sprint team” in March 2025, tasked with integrating multiple internal teams within weeks to launch an AI programming product as quickly as possible.

Simultaneously, Altman also attempted to leapfrog the competition through acquisition, aiming to buy the AI programming startup Windsurf for $3 billion. OpenAI leadership believed the deal would bring the company a mature AI programming product, an experienced team, and an existing enterprise customer base.

But the acquisition subsequently stalled. According to *The Wall Street Journal*, the issue lay with OpenAI’s largest partner, Microsoft. Microsoft wanted access to Windsurf’s intellectual property. Since 2021, Microsoft had been using OpenAI’s models to power GitHub Copilot, a product that had become a highlight in Microsoft’s earnings calls. But as Cursor, Windsurf, and Claude Code introduced new AI coding agent experiences, GitHub Copilot began to seem like a previous-generation AI tool. If OpenAI launched a new programming product, it might not be good news for Microsoft.

This acquisition negotiation coincided with the most tense period in OpenAI’s relationship with Microsoft. The two sides were renegotiating their cooperation agreement, with OpenAI attempting to reduce Microsoft’s control over its AI products and compute resources. Ultimately, the Windsurf deal became a casualty in this power struggle. By July, OpenAI abandoned the transaction. Subsequently, Google hired Windsurf’s founding team, while the remaining employees were acquired by another AI programming company, Cognition.

“I certainly wanted that deal to happen at the time,” Altman says. “But not every deal is within one’s control.” He states that while he originally hoped the Windsurf acquisition “would accelerate our progress to some extent,” he is equally impressed by the momentum of the Codex team. While negotiations were ongoing, Sottiaux and Embiricos continued developing the product and rolling out updates.

By August, Altman decided to push forward with full acceleration.


Alexander Embiricos, OpenAI Codex Product Lead. Photo: Mark Jayson Quines.

One of Greg Brockman’s favorite ways to measure AI capability is a little game he designed himself, the “Reverse Turing Test.” He wrote the code for this game years ago, and now tasks AI agents with re-implementing it from scratch.

The rules are simple: two human players sit at different computers, each seeing two chat windows on their screen. One window connects to the other human player, the other to an AI. Players must guess which window is the AI while also trying to trick their opponent into thinking they are the AI.

Brockman says that for most of last year, OpenAI’s strongest models would take hours to build such a game, requiring extensive explicit human instructions and assistance throughout the process. But by last December, Codex could generate a fully functional version directly from a carefully crafted prompt, using the new GPT-5.2 model under the hood.

This change wasn’t noticed only by Brockman. Developers worldwide also began realizing that AI coding agent capabilities had suddenly made a significant leap. Discussions around AI programming, initially focused on Claude Code, quickly broke out of Silicon Valley tech circles to become a topic of mainstream media attention.

Even some ordinary users with no programming experience began using AI to directly create their own software projects.

This surge in usage was no accident. During this period, both Anthropic and OpenAI invested heavily to acquire more AI coding agent users. Multiple developers told *WIRED* that their $200 monthly subscription plans for Codex or Claude Code actually provided usage credits worth over $1,000. This rather “generous” allowance was essentially a market strategy: first get developers accustomed to using AI programming tools in their daily work, then charge enterprises based on usage volume.

According to multiple sources familiar with the matter, in September 2025, Codex’s usage was only about 5% of Claude Code’s. But by January 2026, Codex’s user base had grown to about 40% of Claude Code’s.

George Pickett, a developer with 10 years of experience in tech startups, recently even started organizing in-person meetups themed around Codex.

“I think it’s pretty clear we’re replacing white-collar jobs with AI agents,” Pickett says. “As for what that means for society, honestly, no one knows. It’s definitely going to be a huge shock, but I’m generally quite optimistic about the future.”

Meanwhile, Simon Last, co-founder of the efficiency software company Notion, valued at


