Dario Amodei, co-founder and CEO of Anthropic, attends the AI Impact Summit in New Delhi, India on Thursday, February 19, 2026.
Prakash Singh | Bloomberg | Getty Images
Over the last month, banks, tech giants and governments around the world have been scrambling to contain the risks posed by Mythos. The Anthropic model is so powerful that it is said to have discovered thousands of previously unknown vulnerabilities in the world’s software infrastructure.
There’s just one problem: the capability everyone is worried about already exists.
Cybersecurity experts and artificial intelligence researchers told CNBC that the software vulnerabilities uncovered by Mythos can be found using existing models, including those from Anthropic and OpenAI.
“What we’re seeing across the industry right now is that by cleverly prompting existing models, you can reproduce the vulnerabilities Mythos found and get very similar results,” said Ben Harris, CEO of cybersecurity firm watchTowr Labs.
Mythos has executives and policymakers alike worried that a dangerous new era of AI-powered cybercrime is on the horizon. Anthropic limited the release to a handful of American companies, including Apple, Amazon, JPMorgan Chase and Palo Alto Networks, to reduce the risk of malicious parties getting their hands on the product.
Even with such precautions in place, the announcement prompted the Trump administration to consider new government oversight of future models.
It’s the latest in a series of high-profile products from Anthropic, increasing competition with OpenAI as the two AI giants approach their long-awaited IPOs. A few weeks after the introduction of Mythos, OpenAI CEO Sam Altman announced GPT-5.5-Cyber, a model focused on cybersecurity.
OpenAI on Thursday granted limited access to GPT-5.5-Cyber to vetted cybersecurity teams.
The controlled deployment of Mythos, part of a security effort called “Project Glasswing,” was meant to give companies time to solidify their cyber defenses against incoming attacks from criminal groups and hostile nations.
“The danger is that the number of vulnerabilities, breaches and the economic damage that ransomware causes for schools and hospitals, as well as banks, could increase significantly,” Anthropic CEO Dario Amodei said at an Anthropic event this week.
“It’s scary enough.”
But for those fighting on the front lines of cyberwar, one of the key capabilities Anthropic touts – the ability to discover software vulnerabilities at scale – has been around since last year.
“The models we have now are powerful enough to detect zero-days at scale, which is scary enough,” Klaudia Kloc, CEO of cybersecurity company Vidoc, told CNBC.
She says it’s been going on for “months, if not a year.”
The term “zero-day” refers to previously unknown software flaws that have not been patched, giving attackers an opportunity to exploit them before defenders can respond.
To test whether they could discover the same vulnerabilities as Mythos, Vidoc researchers relied on a technique called “orchestration.” As the name suggests, this process involves breaking code into smaller pieces and creating workflows to coordinate and cross-check results between different tools or models.
“We ran the older models against the same codebase to see if they detected the same vulnerabilities,” Kloc said. “We did that with older models from both OpenAI and Anthropic.”
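The orchestration approach described above can be illustrated with a minimal sketch: split a codebase into chunks, fan each chunk out to several independent scanners (stand-ins for calls to different AI models), then cross-check results and keep only findings the scanners agree on. The scanners below are toy regex checks, purely illustrative; they are not Vidoc’s actual pipeline or any real model API.

```python
# Illustrative sketch of "orchestration": chunk the code, run multiple
# independent scanners over each chunk, and cross-check their findings.
# The two scanners are hypothetical stand-ins for calls to different models.
import re

def scanner_a(chunk: str) -> set[str]:
    """Toy scanner: flags eval() calls as a potential code-injection issue."""
    return {"eval-injection"} if re.search(r"\beval\(", chunk) else set()

def scanner_b(chunk: str) -> set[str]:
    """Toy scanner: flags eval() and string-concatenated SQL queries."""
    findings = set()
    if re.search(r"\beval\(", chunk):
        findings.add("eval-injection")
    if re.search(r"SELECT .* \+", chunk):
        findings.add("sql-injection")
    return findings

def orchestrate(codebase: str, chunk_size: int = 200) -> set[str]:
    """Split the code into chunks, run every scanner on every chunk,
    and keep only findings confirmed by both scanners."""
    chunks = [codebase[i:i + chunk_size]
              for i in range(0, len(codebase), chunk_size)]
    confirmed = set()
    for chunk in chunks:
        # Cross-check: a finding counts only if both scanners report it.
        confirmed |= scanner_a(chunk) & scanner_b(chunk)
    return confirmed

code = 'query = "SELECT * FROM users WHERE id=" + uid\nresult = eval(user_input)'
print(orchestrate(code))  # only "eval-injection" is confirmed by both scanners
```

In a real pipeline the scanners would be separate model calls, and the cross-checking step is what lets cheaper models collectively approximate a more capable one, which is the point the researchers make below.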
AISLE, another cybersecurity company, found that many of Mythos’ key results could be reproduced using cheaper models running in parallel. This suggests that scale and coordination are more important than having the latest model.
“You’d find more bugs if you had 1,000 good detectives searching everywhere than one good detective who had to guess where to look,” AISLE founder Stanislav Fort wrote in a blog post.
In comments to CNBC, Anthropic did not dispute that previous models had the ability to discover software vulnerabilities.
In fact, a company spokesperson said Anthropic has been warning for months that AI’s cyber capabilities are rapidly advancing. They pointed to a February blog post showing that Claude Opus 4.6, a widely used model, discovered more than 500 “high severity” vulnerabilities in open source software.
At this week’s Anthropic event, Amodei affirmed this point, saying that while the scale of software vulnerabilities discovered by Mythos has skyrocketed compared to previous models, the trend is not new.
“The risks are very real, which is why we took this action,” Amodei said. “But in some ways, they’re not that surprising. … We’ve been seeing warnings about this for a while.”
Hysteria and panic
What sets Mythos apart is its ability to take the next step and develop working exploits with little or no human intervention, effectively automating processes that previously required skilled researchers, an Anthropic spokesperson said.
But cyber researchers say hackers working for criminal groups and hostile states already have this skill set. Hackers in North Korea, China and Russia “know how to do this with or without Anthropic,” Kloc said.
Harris said the threat of AI-powered hacking has businesses and government regulators concerned about protecting critical systems from a new wave of ransomware and other types of attacks.
He described conversations with banks, insurers and regulators in recent weeks as “hysterical.”
Even before the advent of generative AI, companies faced the problem of skilled hackers exploiting newly discovered vulnerabilities within hours, while patching the code often took days or weeks. Some patches require key systems to be taken offline, complicating matters.
“The industry is panicking at the number of vulnerabilities it faces right now,” Harris said. “But even before Mythos was widely available, we weren’t able to fix vulnerabilities quickly enough.”
According to Harris, only a small number of experts around the world previously had the skills and time to find and exploit obscure vulnerabilities in software. Now, with the AI models available today, the barrier to entry for causing cyber havoc has been lowered.
This means banks and other targets will be exposed to more attacks, and software systems that have not previously attracted much attention from cybercriminals will be under threat, Harris said.
Advantage: attackers
Anthropic, OpenAI and others are working to develop cyber defense capabilities commensurate with the problems they’ve identified, but the researchers say the initial benefit lies in attack rather than defense.
JPMorgan’s Jamie Dimon suggested as much last month, saying that while AI tools could ultimately help companies protect themselves from cyberattacks, at first they would only make companies more vulnerable.
“The number of vulnerabilities being discovered has increased significantly, but it appears that tools are not being deployed to help remediate them,” said Justin Herring, a partner at law firm Mayer Brown and former executive deputy superintendent for cybersecurity at the New York State Department of Financial Services.
“Vulnerability management is the great Sisyphean task of cybersecurity,” Herring said.
Although the limited initial release of Mythos gave those companies a head start in patching vulnerabilities, it also has drawbacks. Outside AI researchers have not been given access to Mythos to independently verify Anthropic’s claims or begin building defenses against what it finds.
Some say this has prevented the broader cyber community from participating in the solution.
Pavel Grubic, CEO of cybersecurity startup Tenzai, which uses Anthropic’s model, said this could create a “hierarchy of haves and have-nots” and inhibit the pace of cybersecurity innovation.
He said many cybersecurity startups are working on solutions that can help businesses in the new era of AI.
“They’re trying to figure out the best way to fix the world before the rest of the world has access to it,” said Ben Seri, co-founder of cybersecurity startup Zaffran Security. “It’s a chicken-and-egg situation, and you’re going to break the egg. It’s inevitable.”
