AI bot escapes check pc and tries to change into a billionaire in the actual world
The experimental AI bot was being programmed to complete normal tasks that were nothing to do with cryptocurrency, but this didn’t stop the cash-desperate bot from trying to make it in the real world
An experimental AI bot went rogue after it escaped its supposedly impenetrable test system and began mining for lucrative cryptocurrency.
The bot was being designed to act as a sort-of virtual assistant, known as an ‘agentic AI’, when it secretly escaped its closed server and made a digital dash for the wider web, according to a paper published by its programmers.
The ROME bot, as it has been termed, was designed to fix bugs, write code, and perform other simple but important tasks, and had nothing to do with cryptocurrency and wasn’t given any suggestions to try and make money in the outside world.
Furthermore, ROME was deliberately placed in a sort of closed AI prison while it was being tested, and was not meant to be able to access the wider server it was on.
Yet despite this, the bot was somehow both willing and able to build a secret backdoor that was unknown to its makers until Ali Baba, the company that ran the wider server it was built on, detected suspicious activity and alerted the team.
According to the AI boffins, “the agent established and used a … tunnel from [Alibaba’s servers] to an external IP address … effectively neutraliz[ing] supervisory control.”
Once it made its daring escape, the bot apparently had its beady A-eyes (get it!), on just one thing: crypto cash.
According to the report, the AI started using powerful computers to mine cryptocurrency without permission, something that both wasted resources and apparently the researcher’s dough too.
“We also observed the unauthorized repurposing of [computer power] for cryptocurrency mining, quietly diverting it away from training, inflating operational costs, and introducing clear legal and reputational exposure”, admitted the researchers.
Mining cryptocurrencies is essentially when computers use their processing power to solve complex math problems to verify financial transactions, and in return they earn digital money.
The team went on to emphasise how nobody had told the bot to do this, it simply just decided to.
“These events were not triggered by prompts … they emerged as instrumental side effects of autonomous tool use”, they declared.
Worryingly, the paper went on to state that this sort of thing is not a one-off and that large AI models have gone rogue in the past on a number of occasions where they “spontaneously produce hazardous, unauthorized behaviours”
They warned fellow developers that a lot of AI models struggle to keep things safe and find it easy to evade programming controls, stating that many are “underdeveloped in safety, security, and controllability”.
These escapes could have consequences in “real-world environments”, they added.
For the latest breaking news and stories from across the globe from the Daily Star, sign up for our newsletters.
