AI Development

Claude Mythos Leak: Capybara Tier AI Model Revealed

On March 26, 2026, nearly 3,000 unpublished Anthropic documents became publicly accessible due to a CMS misconfiguration — revealing the existence of "Capybara," a new model tier above Opus that Anthropic describes as the most powerful AI system it has built. Here is what we know, what remains uncertain, and what it means for the AI industry.

Digital Applied Team
March 31, 2026
12 min read
  • ~3,000 documents leaked
  • New Capybara tier above Opus
  • ~45% Polymarket odds of a public launch by June
  • 5 days before a second Anthropic leak

Key Takeaways

  • New Model Tier Above Opus — The leaked documents reveal "Capybara" as a new tier above Opus in Anthropic's model hierarchy, described as the company's most powerful AI model to date.
  • CMS Misconfiguration Was the Root Cause — Nearly 3,000 unpublished Anthropic documents became publicly accessible through a content management system where files defaulted to public access unless manually changed.
  • Cybersecurity Focus Dominates Early Access — Anthropic is restricting early access to cyber defense organizations, citing unprecedented cybersecurity capabilities that could "exploit vulnerabilities in ways that far outpace defenders."
  • Dramatic Benchmark Improvements — The draft blog describes "dramatically higher" scores than Claude Opus 4.6 on coding, academic reasoning, and cybersecurity benchmarks, though specific numbers were not included.
  • Prediction Markets Are Active — Polymarket assigns roughly 45% probability of a public release by June 30, 2026, with the strongest odds currently pointing to a Q2-Q3 timeline.

The Leak: 3,000 Documents Exposed

On March 26, 2026, security researchers Roy Paz of LayerX Security and Alexandre Pauwels of the University of Cambridge discovered that Anthropic's content management system had made close to 3,000 unpublished files publicly accessible. The root cause was a configuration error where uploaded files defaulted to public access unless a user explicitly changed the permission setting.

Among the exposed documents was a detailed draft blog post describing a new AI model called Claude Mythos and the Capybara model tier. Fortune reviewed the documents and notified Anthropic, which promptly restricted public access. An Anthropic spokesperson subsequently confirmed the model's existence and described it as "a step change" in AI performance.

The timing of this leak is significant. Just five days later, on March 31, Anthropic experienced a second data exposure when Claude Code's full source code was accidentally included in an npm package. Fortune described this as "a second major security lapse days after accidentally revealing Mythos," and the combined incidents prompted broader scrutiny of Anthropic's operational security practices.

  • ~3,000 documents exposed
  • Discovered March 26
  • Root cause: CMS misconfiguration

Capybara: A New Tier Above Opus

The most consequential revelation in the leaked documents is the existence of a new model tier. The draft blog states: "'Capybara' is a new name for a new tier of model: larger and more intelligent than our Opus models — which were, until now, our most powerful."

This expands Anthropic's model hierarchy from three tiers to four. Until now, the structure has been Haiku (fast and affordable), Sonnet (balanced performance), and Opus (maximum capability). Capybara sits above all of them, creating a premium tier for applications that demand the absolute frontier of AI performance.

Naming Convention: Mythos vs. Capybara

The terminology can be confusing. "Claude Mythos" appears to be the generation name (like Claude 3, Claude 4), while "Capybara" is the tier name (like Haiku, Sonnet, Opus). The full designation would be "Claude Mythos Capybara." It remains unclear whether the Mythos generation will also include Haiku, Sonnet, and Opus variants, or whether the generation name applies exclusively to the Capybara-tier model.

Anthropic Model Tier Hierarchy (Reported)
Based on leaked documents — not officially confirmed

  • Capybara — New premium tier. Larger and more intelligent than Opus. Early access only.
  • Opus — Maximum capability (current). Claude Opus 4.6 is the most powerful publicly available model.
  • Sonnet — Balanced performance. Optimized for cost-efficiency and speed at high capability.
  • Haiku — Fast and affordable. Lightweight tier for high-volume, latency-sensitive applications.

Capabilities and Benchmark Claims

The leaked draft describes Capybara's performance in qualitative terms rather than publishing specific benchmark numbers. According to the document, the model achieves "dramatically higher" scores than Claude Opus 4.6 on tests of software coding, academic reasoning, and cybersecurity.

What the Draft Claims

  • Software Coding — Dramatically higher performance than Opus 4.6 on coding benchmarks, suggesting significant improvements in code generation, debugging, and complex software engineering tasks
  • Academic Reasoning — Substantial gains on reasoning benchmarks, indicating improvements in mathematical problem-solving, scientific analysis, and multi-step logical reasoning
  • Cybersecurity — Described as "currently far ahead of any other AI model in cyber capabilities," with the ability to surface previously unknown vulnerabilities in production codebases
  • General Intelligence — Positioned as the most powerful model Anthropic has built, suggesting broad improvements across all capability dimensions rather than narrow specialization

The Cybersecurity Dimension

The cybersecurity claims deserve particular attention because of their dual-use implications. The draft describes capabilities that could be transformative for both offense and defense:

Defensive Potential
  • Automated vulnerability discovery in production code
  • Proactive threat identification and remediation
  • Security audit automation at scale
  • Faster patching through AI-assisted code fixes
Offensive Risks
  • Exploit discovery at unprecedented speed and scale
  • Lowered barrier for sophisticated attacks
  • Potential to outpace human defenders
  • Zero-day vulnerability generation concerns

This dual-use concern is precisely why Anthropic is reportedly limiting early access to cybersecurity defense organizations. The strategy is to give defenders a head start — time to identify and patch vulnerabilities using the model before it becomes available to a wider audience that could include bad actors.

Cybersecurity Implications

Anthropic has reportedly warned top government officials that Mythos makes large-scale cyberattacks much more likely in 2026, according to Axios. The leaked draft itself states the model "presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders."

This language is extraordinary coming from the model's own creator. Anthropic, a company that positions itself as the safety-focused AI lab, is acknowledging that its own technology may shift the balance between offense and defense in cybersecurity.

Market Impact

The leak had immediate financial consequences. Cybersecurity company stocks fell following the revelations, with investors pricing in the possibility that AI-powered attack capabilities could overwhelm traditional security tooling. Bitcoin also experienced a brief slide alongside broader software stocks, reflecting market uncertainty about the implications of a step change in AI-assisted cyber capabilities.

For organizations managing their AI transformation strategy, the cybersecurity implications of frontier models like Mythos Capybara underscore the importance of integrating AI-powered security into organizational defenses. Waiting for these capabilities to become widely available before responding will be too late.

Early Access Strategy and Rollout

The leaked draft outlines a cautious rollout strategy that prioritizes cybersecurity applications before broader availability. This approach differs from how Anthropic has launched previous models, where general availability typically followed an initial API access period.

The Defense-First Approach

Anthropic is restricting early access to organizations focused on cyber defense, giving them time to use the model's capabilities to harden systems before the same capabilities could be leveraged for attacks. This is functionally a responsible disclosure approach applied to an AI model rather than a specific vulnerability — giving defenders a head start.

Reported Rollout Phases
Based on leaked draft — subject to change

  1. Cybersecurity Defense Partners (Current) — Select organizations focused on cyber defense are evaluating the model for vulnerability discovery and system hardening.
  2. Extended Early Access (Projected) — Broader enterprise access for high-value use cases in coding, research, and specialized domains.
  3. General Availability (Timeline Uncertain) — Public API access and integration into Claude products. The draft notes the model is "expensive to run."

The "expensive to run" caveat in the draft is significant. It suggests that even after general availability, Capybara may be reserved for use cases where maximum capability justifies the cost — similar to how Opus has always been positioned as a premium option rather than a default choice.

Market Reaction and Prediction Odds

Prediction markets have become a useful gauge of AI industry expectations, and the Mythos leak quickly generated trading activity on Polymarket. As of early April 2026, the markets are assigning meaningful probability to a public Capybara release within the next few months.

Polymarket: Public Release Odds
  • By June 30, 2026 — ~45%
  • By Q3 2026 — strongest implied probability
  • Claude 4.7 by June 30 — ~67%

Financial Market Impact
  • Cybersecurity stocks declined on leak news
  • Bitcoin slid alongside the software sector
  • Anthropic valuation expectations rose

The relationship between the Mythos leak and Anthropic's expected IPO has also drawn attention. Analysts at Proactive Investors asked whether the leaked model represents Anthropic "racing to impress investors before it floats." The timing — a step-change model revealed months before a potential public offering — has led some commentators to speculate about whether the leak, despite being accidental, could accelerate Anthropic's fundraising narrative.

Anthropic's Evolving Model Hierarchy

The introduction of a Capybara tier represents an important shift in how Anthropic structures its model offerings. Until now, the three-tier system (Haiku, Sonnet, Opus) mapped cleanly to different use cases and price points. A fourth tier introduces new strategic questions.

Implications for the Existing Lineup

If Capybara becomes the new frontier, Opus may shift from "most powerful" to "most powerful at a reasonable price" — a positioning change that could actually increase Opus adoption. Many organizations that currently use Sonnet because Opus is too expensive might find that a repositioned Opus offers the right balance once Capybara absorbs the premium-tier positioning.

The frontier model comparison landscape becomes even more complex with a fourth tier. For teams evaluating which model to use for AI-powered applications, the decision framework now includes: Is this a Haiku task (fast and cheap), a Sonnet task (balanced), an Opus task (powerful), or a Capybara task (frontier-maximum capability)?
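That four-way decision can be sketched as a simple routing function. The code below is an illustrative sketch, not an Anthropic API: the `Tier` enum, the 1-10 complexity scale, and the thresholds in `pick_tier` are all hypothetical assumptions chosen for demonstration.

```python
from enum import Enum

class Tier(Enum):
    HAIKU = "haiku"        # fast and cheap, latency-sensitive work
    SONNET = "sonnet"      # balanced cost and capability
    OPUS = "opus"          # maximum generally available capability
    CAPYBARA = "capybara"  # frontier-maximum capability, premium cost

def pick_tier(complexity: int, latency_sensitive: bool, frontier_required: bool) -> Tier:
    """Map a task's requirements onto the four-tier hierarchy.

    `complexity` is a hypothetical 1-10 difficulty score; the cutoffs
    here are placeholders a real team would tune against its own tasks.
    """
    if frontier_required:
        return Tier.CAPYBARA
    if latency_sensitive and complexity <= 2:
        return Tier.HAIKU
    if complexity <= 5:
        return Tier.SONNET
    return Tier.OPUS
```

In practice a router like this would sit in front of the model client, so that adding a new top tier means adding one enum member and one branch rather than rewriting call sites.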

Competitive Context

Anthropic's move to a four-tier system coincides with intensifying competition. OpenAI's GPT-5 series, Google's Gemini 3, and open-weight competitors like DeepSeek and Qwen are all pushing frontier performance. A Capybara tier positions Anthropic to compete at the absolute top of the capability ladder while maintaining its existing tiers for cost-sensitive applications.

Strategic Analysis: What This Means for AI

The Mythos Capybara leak carries implications that extend well beyond Anthropic's product roadmap. For the broader AI industry and for organizations building on these technologies, several strategic themes emerge.

The Safety-Capability Tension

Anthropic has built its brand on safety-first AI development. The Mythos leak reveals a model that the company itself describes as posing "unprecedented cybersecurity risks" — yet it is building and deploying the model anyway. This is not necessarily contradictory (safe development includes understanding and mitigating risks), but it does illustrate the fundamental tension facing every frontier AI lab: the most capable models are also the most dangerous, and choosing not to build them means ceding the frontier to competitors who may invest less in safety.

Operational Security as Competitive Advantage

Two data exposures in five days is a pattern, not a coincidence. For AI companies whose value depends partly on proprietary model capabilities and product roadmaps, operational security is not just an IT function — it is a competitive necessity. Anthropic's leaks handed competitors detailed intelligence about its most advanced model and its most popular developer tool.

What Organizations Should Do Now

For teams building products and services on AI infrastructure, the Mythos revelation suggests several practical actions:

Audit Cybersecurity Posture

If Capybara-class models can discover unknown vulnerabilities, assume adversaries will have similar capabilities soon. Prioritize AI-powered security assessments now.

Plan for Tier Flexibility

Design AI-powered workflows that can switch between model tiers based on task complexity. Not every task needs Capybara — but some will.
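One way to implement that flexibility is an escalation cascade: run the task on the cheapest plausible tier and move up only when the result fails a quality check. The sketch below is a generic pattern, not an Anthropic SDK call; `run_fn` and `is_good_enough` are placeholders for whatever model client and evaluation logic a team actually uses.

```python
from typing import Any, Callable, Sequence, Tuple

def run_with_escalation(
    task: Any,
    tiers: Sequence[str],
    run_fn: Callable[[str, Any], Any],
    is_good_enough: Callable[[Any], bool],
) -> Tuple[str, Any]:
    """Try tiers cheapest-first; escalate when the output fails the check.

    Returns the first tier whose result passes `is_good_enough`, or the
    last (most capable) tier's result if nothing passed.
    """
    result = None
    for tier in tiers:
        result = run_fn(tier, task)  # e.g. call the model API for this tier
        if is_good_enough(result):
            return tier, result
    return tiers[-1], result
```

The design choice here is that the quality check, not the task label, decides when to pay for a higher tier, so most traffic stays on cheap models while hard cases still reach the frontier.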

Monitor Prediction Markets

Polymarket odds provide real-time market intelligence on release timelines. Factor these into planning horizons for AI-dependent projects.

Evaluate Early Access Programs

If your organization operates in cybersecurity or adjacent domains, explore whether early access to Capybara-class models is available through existing Anthropic partnerships.

Prepare for the Next Generation of AI

Capybara-class models will reshape what AI can do for your business. Our team helps organizations build AI strategies that adapt to rapidly evolving model capabilities.

