Anthropic Has Created the Most Powerful AI Model in History, But Dares Not Release It…
Author|Azuma (@azuma_eth)

On April 8th, Anthropic, the AI development company behind Claude, officially announced a new initiative called “Project Glasswing.” This project will be jointly advanced with several leading industry giants including Amazon, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks.
Anthropic stated that this is an urgent initiative aimed at protecting the world’s most critical software. The parties will jointly use the Mythos Preview version to discover and fix potential flaws in the systems the world currently relies on.

Mythos is the next-generation AI model currently under development by Anthropic. It is the first model in human history to exceed ten trillion parameters (by contrast, current mainstream models range from hundreds of billions to one trillion parameters), with a staggering training cost of $10 billion. Compared to Anthropic's current flagship model, Claude Opus 4.6, Mythos scores significantly higher on tests of software coding, academic reasoning, and cybersecurity.
Rumors about Mythos began circulating in the market last week. The prevailing concern at the time was: Would Mythos, with its specialized cybersecurity capabilities, disrupt the current security offense-defense landscape? If maliciously exploited, could it cause larger-scale security incidents? Odaily also reported on this matter and discussed the potential impact on security offense-defense in the cryptocurrency industry with security expert Yu Xian, founder of SlowMist (see “Odaily Interview with Yu Xian: Anthropic’s Nuclear-Level New Model Leak, How Will It Affect Crypto Security Offense-Defense?“). However, at that time, Anthropic had not publicly acknowledged the existence of Mythos, so relevant information remained limited.
On April 8th, with the announcement of the “Project Glasswing” initiative, Anthropic also disclosed more details about Mythos. Judging from the real-world test cases released by Anthropic, the company’s claims about Mythos’s capabilities are not exaggerated. In fact, the company is hesitant to release the model directly to the public, fearing it could be maliciously exploited by hacker groups. Instead, it plans to first let leading companies test and troubleshoot through the “Project Glasswing” initiative, patching potential vulnerabilities in advance.
Mythos Shows Its Muscle: Thousands of “Zero-Day Vulnerabilities” Uncovered in Weeks
When discussing Mythos’s capabilities, Anthropic stated bluntly that the birth of this model signals the arrival of a harsh reality — AI models’ coding capabilities have reached an extremely high level. In discovering and exploiting software vulnerabilities, they can now surpass all but the most skilled humans.
According to Anthropic’s disclosure, within just a few weeks, Anthropic used Mythos to identify thousands of zero-day vulnerabilities (i.e., flaws previously unknown even to the software developers themselves). Many of these are high-risk vulnerabilities, affecting all major operating systems and mainstream browsers, and impacting a range of other critical software.
Anthropic provided several representative examples:
- Mythos discovered a 27-year-old vulnerability in OpenBSD, a system renowned for being “extremely secure” and widely used in critical infrastructure such as firewalls. The vulnerability allows attackers to crash the system remotely;
- In the widely used video processing library FFmpeg, Mythos found a 16-year-old vulnerability. The code containing this issue had been triggered over 5 million times by automated testing but remained undiscovered;
- Mythos can also automatically chain multiple vulnerabilities in the Linux kernel to escalate privileges from a regular user to complete server control.
More worryingly, Anthropic stated that most of these vulnerabilities were discovered, and their exploitation paths constructed, autonomously by Mythos with almost no human intervention. This suggests that AI has begun to possess automated offense-defense capabilities comparable to top-tier hacker teams.
On evaluation benchmarks, Mythos also shows a generational leap compared to Opus 4.6. For example, in cybersecurity vulnerability reproduction tests, Mythos achieved 83.1%, while Opus 4.6 scored 66.6%; in multiple coding and reasoning tests, Mythos’s scores also showed significant leads.


Perhaps precisely because Mythos’s capabilities are too powerful, Anthropic did not choose to release the model directly. Instead, it first launched the “Project Glasswing” initiative to allow the entire internet to “fortify” in advance.
Through this initiative, Anthropic will provide early access to the Mythos Preview version to participating parties for discovering and fixing vulnerabilities or weaknesses in their foundational systems — focusing on tasks such as local vulnerability detection, binary program black-box testing, endpoint security hardening, and system penetration testing.
Anthropic also pledged to provide participating parties with a total of $100 million in model usage credits to support usage throughout the research preview phase. Thereafter, the Mythos Preview version will be available to participants at a price of $25 per million input tokens / $125 per million output tokens (participants can also access the model via Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry). In addition to the model usage credits, Anthropic will donate $2.5 million to Alpha-Omega and OpenSSF through the Linux Foundation and $1.5 million to the Apache Software Foundation to help open-source software maintainers cope with the evolving security landscape.
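At the announced rates, per-request cost scales linearly with token counts. A minimal sketch of that arithmetic in Python (the request sizes below are illustrative examples, not figures from the announcement):

```python
# Mythos Preview pricing from the announcement (USD per million tokens).
INPUT_PRICE_PER_M = 25.0
OUTPUT_PRICE_PER_M = 125.0

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single request at the announced rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Example: a large code-audit prompt of 200k input tokens producing
# a 20k-token report costs $5.00 (input) + $2.50 (output) = $7.50.
print(f"${request_cost(200_000, 20_000):.2f}")
```

By this math, the $100 million in pledged credits corresponds to a very large volume of audit work: even output-heavy usage at $125 per million tokens buys hundreds of billions of output tokens across participants.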
Anthropic plans to gradually expand the participation scope of “Project Glasswing” and continue the initiative for several months, while sharing experiences as much as possible so other organizations can apply relevant lessons to their own security construction. Within 90 days, Anthropic will publicly report phased results, including patched vulnerabilities and disclosable security improvements.
Technology Will Only Keep Advancing, But There’s No Need for Excessive Worry
AI is irreversibly changing the world we know, including the cybersecurity field this article focuses on. As the threshold for discovering and exploiting vulnerabilities drops significantly, a natural concern arises: Will AI become a sharp blade in the hands of malicious actors, threatening the existing cybersecurity balance? (PS: For cryptocurrency users, who must entrust real assets to wallet systems and on-chain protocols, this concern is particularly acute.)
Addressing this issue, Anthropic believes “there are still reasons for optimism.” AI models are dangerous precisely because they have the potential to cause harm in the hands of bad actors. However, at the same time, AI also holds immeasurable value in discovering and fixing critical software defects and developing more secure new software.
It is foreseeable that AI capabilities will continue to evolve rapidly in the coming years. However, as new attack methods emerge, new defense mechanisms will also appear simultaneously. Technological advancement is inevitable, but this does not mean risks will necessarily spiral out of control — as long as defense systems evolve in sync, they can even leverage AI to build higher-strength security moats.







