Treasury summoned bank chiefs Tuesday over Anthropic’s Mythos.

What Anthropic announced

Anthropic said it wouldn't release Mythos to the public because, the company warned, the model can find and exploit flaws in major operating systems and browsers when instructed. Instead, Anthropic rolled out a limited "Claude Mythos Preview" to 11 outside organizations as part of what it calls Project Glasswing. The list of partners includes big tech and finance names — Google, Microsoft, Amazon Web Services, JPMorganChase and Nvidia among them.

Anthropic says it was acting out of caution: defenders, the company argues, should get early access to what it calls powerful offensive capabilities so they can harden systems before the broader public, including bad actors, can use similar tools.

Why regulators rushed in

Washington reacted quickly to those warnings. Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell called an urgent meeting at Treasury on Tuesday with leaders of the country's largest banks, according to people briefed on the discussions. The invitees were firms regulators class as systemically important — the kind whose failure would ripple through markets.

Arranging the meeting on short notice showed just how worried officials were. "We're taking every step we can to make sure that everybody is safe from these potential risks, including Anthropic agreeing to hold back the public release of the model until our officials have figured everything out," Kevin Hassett, National Economic Council director, said on Fox News.

The presence of Powell suggested officials were treating the problem as more than a tech policy debate — they saw potential systemic risk.

How big is the threat?

Anthropic and some security researchers warn that Mythos represents a leap in AI-assisted cyber offense — not because the model invents entirely new techniques, but because it could let non-experts chain together exploits and automate complex attack steps. Tasks that once needed deep specialist skill — scanning code for vulnerabilities, assembling multiple exploits, running large-scale probes — can now be sped up or partly automated, the advocates of caution say.

The real worry for many experts is less raw capability than accessibility. If groups with limited skills can use AI to run sophisticated campaigns across thousands of targets, defenders could be overwhelmed.

Still, not everyone accepts the dire reading. Some AI commentators have said Mythos doesn't look drastically ahead of other top models and that Anthropic may be leaning into dramatic language for PR value. Independent researchers have argued that some of the attack capabilities touted as novel may already be available through smaller, cheaper models or combinations of tools.

Are banks uniquely exposed?

Regulators clearly think banks are a special case. A Financial Services Forum meeting in town made it easier to gather executives, but Treasury and the Fed focused the conversation on whether lenders understood the risks and were taking steps to protect their systems.

Banks are heavy users of third-party software, host critical customer data, and operate infrastructure that criminals prize — all reasons Treasury and the Fed wanted to ensure everyone was on the same page.

International regulators moved too. The Bank of Canada convened major lenders and financial firms to talk about Mythos-related cybersecurity risks, and the Bank of England scheduled meetings with top banks and insurers to review preparations. So the concern quickly crossed borders.

Why limiting Mythos may not be enough

Anthropic's limited release — and its stated goal of giving defenders time to harden systems — is one approach. But some security researchers say it might be too little, too late. Reporting and recent tests suggest that even public models can already help attackers do sophisticated work in minutes. A study by AI security firm AISLE indicated several of the vulnerabilities Anthropic flagged could be exploited using smaller models or existing tool chains.

That finding complicates the case for gating: if smaller models can already do much of the heavy lifting, then locking down one frontier model only delays the inevitable. And the tech industry appears to be moving in similar directions: Axios reported that OpenAI is developing a model nicknamed "Spud," aimed at advanced cybersecurity tasks and likely to see a phased, partner-first rollout — a plan that echoes Anthropic's caution-and-control posture.

What banks can do

There are practical steps banks can take, and a few were on the agenda at Treasury. Patching known vulnerabilities faster, running red-team exercises that assume AI-assisted adversaries, and expanding threat-hunting operations were all discussed in public reporting about the meetings. Regulators also stressed the value of sharing indicators of compromise and of collaborating with the cloud and software vendors now part of the limited preview programs.
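For readers unfamiliar with the mechanics, the indicator-sharing step boils down to checking your own telemetry against threat data shared by peers. A minimal sketch, using entirely hypothetical indicator values and log lines (the IPs below are reserved documentation addresses, not real threat data):

```python
# Sketch: matching shared indicators of compromise (IOCs) against
# access-log entries. All values here are illustrative placeholders.

# A set of source IPs a peer institution has flagged (hypothetical).
SHARED_IOC_IPS = {"203.0.113.42", "198.51.100.7"}

def scan_logs(log_lines, ioc_ips):
    """Return log lines whose source IP matches a shared indicator.

    Assumes the source IP is the first whitespace-separated field,
    a common layout for web access logs.
    """
    hits = []
    for line in log_lines:
        src_ip = line.split()[0]
        if src_ip in ioc_ips:
            hits.append(line)
    return hits

logs = [
    "203.0.113.42 GET /login 200",
    "192.0.2.10 GET /index 200",
]
print(scan_logs(logs, SHARED_IOC_IPS))  # flags only the first line
```

Real programs exchange richer indicators (file hashes, domains, behavioral patterns) over standardized formats such as STIX, but the core value is the same: one firm's detection becomes every firm's detection.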

Defenders are chasing a shifting threat: as attackers adopt more automation and AI tools, defenders will need faster detection and automated responses to keep up.

That means more investment in detection, better supplier risk management, and deeper cooperation between banks, cloud providers and government agencies.

Debate over disclosure and control

The Anthropic episode lays bare a thorny policy question: When is it better to gate a powerful tool so defenders get a head start, and when does gating simply concentrate power and delay needed public scrutiny? Advocates of gradual, partner-first releases say the risks of a public dump are too high. Critics say secrecy can hinder third-party vetting and leave the public less informed.

Either way, Anthropic has forced this debate into public view. Regulators and industry now must decide whether to press for stricter controls, mandate reporting, or simply accelerate defensive upgrades. The choices will shape how banks and other critical sectors deal with AI-driven cyber risk.

What happens next depends on coordination — among governments, banks and major tech providers who build and host the software banks rely on. The models are improving fast. So are the stakes.
