Anthropic is once again making headlines, and this time the buzz isn't about the standard Claude AI. It's about a new, unreleased model called Mythos, which the company claims is not just immensely powerful but capable of independently discovering security flaws and turning them into working cyberattacks. Anthropic says Mythos is too dangerous to be released to the public, and the US government is reportedly worried that the model could break the digital security locks that banks and others rely on.

In its official blog post, Anthropic introduced Mythos as a breakthrough in autonomous cybersecurity, warning that the model could be dangerous if it fell into the wrong hands. So far, Mythos has not been released to the public, even though it is apparently finished and ready. Instead, Anthropic is stress-testing the model through a closed initiative called “Project Glasswing”, giving access to Mythos only to select researchers and organisations.

According to Anthropic, making it widely available would be like handing advanced hacking capabilities to anyone with a laptop.

Since Mythos first appeared on the tech radar, there has been non-stop chatter. While some are impressed and call it a big leap in cybersecurity, others are questioning whether the model is a strategic move by Anthropic to capture attention. The buzz created by some US government meetings has not helped either. Just a few days ago, Bloomberg reported that US government officials had met with Wall Street banks, telling them to get ready for Mythos and conduct a thorough review of their digital security.

Separately, the Wall Street Journal reported that the US government has formed a team led by National Cyber Director Sean Cairncross to fortify government systems before Mythos is released. According to the report, it is a race against time.

So, what exactly is Mythos? Is this level of caution and paranoia justified, or is it a clever way to create buzz around a new AI product?

What is Mythos and why is it different?

We don’t have many details about Mythos in the public domain yet; almost everything we know comes from what Anthropic itself is saying. Still, some things are clear.

The new AI model from Anthropic isn’t your typical tool for summarising documents or answering questions. Instead, Mythos is designed to explore software independently, identify vulnerabilities, and chain them together into full-blown exploits.

In simple terms: instead of just flagging a potential problem, Mythos can actually dig into the software, find hidden flaws, and figure out exactly how to break into a system, all by itself.

Anthropic highlights that during internal testing, Mythos reportedly discovered and exploited multiple vulnerabilities that had remained hidden for years. In one case, Mythos found a 27-year-old bug in OpenBSD that had gone unnoticed despite years of security audits. In another, it uncovered a long-standing vulnerability in FFmpeg, a tool used by countless streaming platforms.

It also reportedly uncovered vulnerabilities in FreeBSD’s NFS server and constructed a full exploit capable of granting root access, all without human intervention.

Note that Mythos did not discover these flaws through basic scanning. The AI model reportedly created working exploits on its own, chaining vulnerabilities together and bypassing system protections.

And while this sounds impressive, this very level of autonomy has raised concerns across the cybersecurity community.

Anthropic keeping Mythos locked away

For now, Anthropic is keeping Mythos under tight control. The model is not publicly available, and only a limited set of partners are permitted to test it. The company says this is a deliberate move to prevent misuse. If Mythos can genuinely discover and weaponise vulnerabilities at scale, it could be a powerful tool for both defenders and attackers.

However, the entire chatter around Mythos’ capabilities — and Anthropic’s warnings that it could be dangerous if not kept under strict control — is now leaving the tech community and the internet divided.

Some are impressed and argue that Anthropic’s decision to keep it tightly controlled is responsible behaviour. However, others aren’t entirely convinced and are questioning the reality and capabilities of Mythos.

Is Anthropic's Mythos overhyped?

One reason for the doubt is the scale of resources Anthropic reportedly poured into Mythos to find these security flaws.

According to reports, discovering the OpenBSD bug cost Anthropic roughly $20,000 in compute and required over 1,000 recursive testing sessions. “The process of discovering the OpenBSD bug which existed for 27 years required around a thousand testing sessions and 20000 dollars worth of computational resources. The findings indicate that researchers employed extensive computational brute force techniques instead of using AI systems to obtain immediate results,” one user wrote on Reddit.

In other words, some argue that if an AI needs this much money and compute for a single discovery, it is hardly going to be useful at scale.

Then there is the timing. With Anthropic recently valued at $380 billion and eyeing a potential IPO, some are calling Mythos its ace card against competitors. By framing the conversation around safety, critics claim, Anthropic is positioning itself against the likes of OpenAI and Google while attracting investors and massive enterprise contracts from corporations terrified of the next big breach.

So is the Mythos hype real?

The hype is real, but it might not be rooted in the actual capabilities of Mythos. We have been hearing that AI is dangerous and hazardous for humans for over five years now. Back in 2019, when Anthropic founder Dario Amodei was still with OpenAI, similar hype surrounded GPT-2, which was initially deemed too dangerous to release in full. Since then, AI companies — most notably OpenAI and Anthropic — have used scary stories about their AI to create more and more buzz.

This is not to say that Mythos cannot be real. It may well be. But we will only find out once it sees a wider release and is deployed beyond Anthropic's labs. Until then, some doubt remains.

Whether Anthropic is using fear as a strategic moat remains unclear. What we do know is that Anthropic is pushing hard to stay at the top of the AI race, and by talking up a model that is not just capable but autonomously powerful, it is certainly putting significant pressure on its rivals.

- Ends

Published By:

Divya Bhati

Published On:

Apr 13, 2026 18:29 IST