Michael Wooldridge, professor of AI at the University of Oxford, warned that firms are racing too quickly to produce the new technology
Boffins have warned AI will burn up like the Hindenburg amid the mad rush to roll out the new technology.
He fears it will go horribly wrong as it’s simple to override AI safety protocols and warned of a Hindenburg-style disaster.
The 804-foot-long hydrogen-filled airship, designed for luxury transatlantic travel, burst into flames on May 6, 1937 – killing 36 crew, passengers and ground staff.
Mr Wooldridge said: “You’ve got a technology that’s very, very promising, but not as rigorously tested as you would like it to be. And the commercial pressure behind it is unbearable.”
He added: “The Hindenburg disaster destroyed global interest in airships. It was a dead technology from that point on, and a similar moment is a real risk for AI.”
As AI is embedded in so many systems, a major incident could strike almost any sector.
The scenarios Wooldridge imagines include a deadly software update for self-driving cars, an AI-powered hack that grounds global airlines, or a Barings Bank-style collapse of a major company triggered by AI doing something stupid.
“These are very, very plausible scenarios,” he said. “There are all sorts of ways AI could very publicly go wrong.”
Despite the concerns, Wooldridge said he did not intend to attack modern AI. His starting point is the gap between what researchers expected and what has emerged.
Many experts anticipated AI that computed solutions to problems and provided answers that were sound and complete.
“Contemporary AI is neither sound nor complete: it’s very, very approximate,” he said.