Senior US officials convened an urgent meeting with leading bank executives to address mounting concerns over the cybersecurity implications of a powerful new artificial intelligence model developed by Anthropic.

The discussions, led by Treasury Secretary Scott Bessent and attended by Federal Reserve Chair Jerome Powell, brought together the heads of major Wall Street banks at the Treasury Department in Washington, D.C., amid fears that the technology could expose critical vulnerabilities across financial systems.

At the centre of the concerns is Anthropic's latest model, known as Claude Mythos, which has demonstrated an ability to identify software weaknesses that, in some cases, exceeds that of human experts.

At the meeting, officials reportedly urged banks to deploy the model to probe their own systems to identify security vulnerabilities and bolster their defences.

Following a leak involving portions of Claude's code, the company published a blog post earlier in April warning that AI systems had already surpassed "all but the most skilled humans at finding and exploiting software vulnerabilities". The post cautioned that "The fallout — for economies, public safety and national security — could be severe."

Access to the model has already been restricted, with Anthropic limiting its availability to a small group of trusted organisations for controlled testing under an initiative known as Project Glasswing.

The programme brings together major technology firms and financial institutions to test the AI in a controlled environment, with the goal of identifying and fixing weaknesses before similar tools become more widely available.

So far, JPMorgan is the only bank to have been publicly named as a partner testing Mythos internally, though others, including Goldman Sachs, Citigroup, Bank of America and Morgan Stanley, are reportedly expected to follow as part of a broader effort to assess and reinforce their cyber defences.