David Dalrymple, a leading figure at the Government’s scientific research agency, says folk should be worried security is not keeping pace with bot development
The world ‘may not have time’ to protect humanity from Terminator-style AI systems taking over the world, a top boffin has warned.
David Dalrymple, a leading figure at the Government’s scientific research agency, said people should be worried about the growing capability of the technology.
Bots will be able to perform ‘all of the functions that humans perform to get things done in the world but better’, he said.
Mankind will be ‘outcompeted’ in every domain needed to ‘maintain control of our civilisation, society and planet’.
Dalrymple, a programme director and AI safety expert at the Aria agency, reckons this Terminator-style takeover could come within five years.
He said: “I think we should be concerned about systems that can perform all of the functions that humans perform to get things done in the world but better.
“Because we will be outcompeted in all of the domains that we need to be dominant in, in order to maintain control of our civilisation, society and planet.”
Dalrymple said there was a gap in understanding between the public sector and AI companies about the power of looming breakthroughs in the technology.
He said: “I would advise that things are moving really fast and we may not have time to get ahead of it from a safety perspective.
“And it’s not science fiction to project that within five years most economically valuable tasks will be performed by machines at a higher level of quality and lower cost than by humans.”
Dalrymple thinks bots will be able to automate a full day of research and development work by the end of the year.
That will ‘result in a further acceleration of capabilities’ because the technology will be able to self-improve.
The expert, who is developing safeguards for AI’s use in critical infrastructure such as energy networks, said governments should not assume all systems churned out by tech giants are safe.
“We can’t assume these systems are reliable,” he said.
“The science to do that is just not likely to materialise in time given the economic pressure.
“So the next best thing that we can do, which we may be able to do in time, is to control and mitigate the downsides.”
If tech progress gets ahead of safety it would lead to the ‘destabilisation of security and economy’, he said.
“I am working to try to make things go better but it’s very high risk and human civilisation is on the whole sleepwalking into this transition.”
This month the Government’s AI Security Institute – aka AISI – said the capabilities of advanced artificial intelligence models were ‘improving rapidly’ across all domains and the performance in some areas was doubling every eight months.
Leading models can complete apprentice-level tasks 50% of the time – up from 10% last year.
AISI also found the most advanced systems can independently complete tasks that would take a human expert over an hour.
The institute also tested self-replication – a key safety concern because it involves a system spreading copies of itself to other devices and becoming harder to control.
Tests showed two leading models achieving success rates of more than 60%.