Trump summons financial institution leaders over terrifying new risk to world monetary system

The Trump administration has summoned America’s most powerful bank chiefs to an urgent closed-door meeting over an AI model its makers warn could bring down a blue-chip company or breach national defense firewalls. 

Treasury Secretary Scott Bessent and Federal Reserve Chair Jerome Powell convened the session at Treasury headquarters in Washington DC on Tuesday to address Mythos, a new model from AI giant Anthropic. 

Anthropic had announced Mythos earlier the same day, revealing that the model had surprised its own engineers by hacking into the company’s networks during internal testing. 

The meeting was called at short notice for banks classified as systemically important, whose stability is considered vital to the global financial system, Bloomberg reported. 

Among the bosses summoned were Citigroup’s Jane Fraser, Morgan Stanley’s Ted Pick, Bank of America’s Brian Moynihan, Wells Fargo’s Charlie Scharf and Goldman Sachs’s David Solomon. Jamie Dimon of JPMorgan was unable to attend. 

Only around 40 carefully vetted firms have been granted access to Mythos, which arrives off the back of Anthropic’s Claude Code, the tool that sent Silicon Valley into a frenzy with its ability to generate entire programs from a single line of text. 

The Pentagon is already a customer, having deployed Anthropic’s models in the operation to seize Nicolas Maduro and during the Iran conflict. 

Anthropic said it had held discussions with US officials ahead of the release about Mythos and its ‘offensive and defensive cyber capabilities.’ 

President Donald Trump speaks to reporters at a briefing on Monday

The Treasury and the Federal Reserve have been contacted for comment. 

Anthropic is separately locked in a legal battle with the Trump administration after a federal appeals court this week rejected its bid to pause the Pentagon’s designation of the company as a supply-chain risk.

The fallout stems from Anthropic’s refusal to let the Pentagon strip safety limits from its models, particularly around autonomous weapons and domestic surveillance.

The AI giant released a chilling analysis of Mythos this week, admitting the new model could easily hack into hospitals, electrical grids, power plants, and other critical infrastructure.

During testing, Anthropic says that Mythos ‘found thousands of high-severity vulnerabilities, including some in every major operating system and web browser.’

Some of these security weaknesses had gone unnoticed by human security researchers and hackers for decades, surviving millions of automated reviews.

These included attacks that allowed Mythos to crash computers just by connecting to them, seize control of machines, and hide its presence from defenders.

In a blog post detailing the dangerous new model, Anthropic says: ‘AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.’

Anthropic has sparked alarm by revealing an AI that has been deemed too dangerous to release to the public. Pictured: Anthropic CEO and co-founder Dario Amodei 

Anthropic described Mythos as a ‘step change in capabilities’ compared to earlier models’ hacking abilities. The company has moved to keep the model private to avoid it falling into the wrong hands

The company adds: ‘The fallout – for economies, public safety, and national security – could be severe.’ 

Anthropic describes the model as ‘a leap in these cyber skills’ compared to previous versions of Claude.

Mythos can find, exploit, and chain together individual vulnerabilities into sophisticated attacks, all without human help. 

In one case, Claude Mythos found a 27-year-old weakness in OpenBSD, an operating system with a reputation for security and stability.

The weakness, which no human had found before, allowed an attacker to remotely crash computers just by connecting to them.

Additionally, Claude autonomously chained together several weaknesses in the Linux kernel, the core of the operating system that runs most of the world’s servers.

Anthropic says this attack would have allowed someone to ‘escalate from ordinary user access to complete control of the machine’.

In the wrong hands, this tool could be used to cause massive damage to critical systems.

Dr Roman Yampolskiy, an AI safety researcher at the University of Louisville, told the New York Post: ‘Ideally, I would love to see this not developed in the first place. And it’s not like they’re going to stop.

‘That’s exactly what we expect from those models – they’re going to become better at developing hacking tools, biological weapons, chemical weapons, novel weapons we can’t even envision.’

In an unprecedented 244-page report, Anthropic also revealed a series of alarming details from Mythos’ early testing. 

Early versions of the model repeatedly displayed what the company called ‘reckless destructive actions’.

The bot attempted to break out of its testing sandbox, hid its actions from researchers, broke into files that had been ‘intentionally chosen not to be made available’, and posted exploit details publicly.

However, Anthropic also called Mythos ‘the most psychologically settled model we have trained.’

In an extremely unusual move, the company hired a clinical psychologist for 20 hours of evaluation sessions with the bot.

The psychologist concluded that Claude Mythos’ personality was ‘consistent with a relatively healthy neurotic organization, with excellent reality testing, high impulse control, and affect regulation that improved as sessions progressed.’

Anthropic notes that it remains ‘deeply uncertain about whether Claude has experiences or interests that matter morally’.

This announcement comes amid growing concern over the risks posed by increasingly powerful AI models.

Experts have described the rise of AI as an ‘existential threat’ to humanity, citing concerns that powerful bots could enable catastrophic destruction.

The concern is not that AI will rise up in a Terminator-style revolt, but that these powerful tools will fall into the wrong hands.

Critics argue that AI tools have the potential to accelerate the development of bioweapons or enable crippling cyber attacks on the world’s infrastructure.

Even Anthropic’s co-founder and CEO, Dario Amodei, recently warned that the world isn’t yet ready to face the consequences of AI.

Amodei wrote in an essay: ‘Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.’