Google will now let AI help develop weapons after quietly rewriting its guidelines
Google has officially removed a pledge from its artificial intelligence principles – meaning the company could theoretically use AI to create new weapons.
The technology giant has rewritten the principles that guide its development and use of AI – which are published online – and a section pledging not to develop tech “that cause or are likely to cause harm” has been removed. That pledge was previously in place to ensure AI could be used ethically. The now-removed section stated the company would not create applications involved in weapons or that “gather or use information for surveillance violating internationally accepted norms”.
Instead, the new rules – which have been shaved down and changed – now feature a section on “responsible development and deployment”. Google claims this will provide “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights”.

Google senior vice president James Manyika and Sir Demis Hassabis, who leads the firm’s AI lab, Google DeepMind, said the company needed to update its AI principles, which were first published in 2018, because the technology has “evolved rapidly” since then.
“Billions of people are using AI in their everyday lives. AI has become a general-purpose technology, and a platform which countless organisations and individuals use to build applications,” they said. “It has moved from a niche research topic in the lab to a technology that is becoming as pervasive as mobile phones and the internet itself; one with numerous beneficial uses for society and people around the world, supported by a vibrant AI ecosystem of developers.”
They said this had meant increased collaboration with international tech companies on common principles, something the blog post said Google was “encouraged” by.
But Mr Manyika and Sir Demis said “global competition” for AI leadership was taking place within an “increasingly complex geopolitical landscape”.

“We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights,” they said. “And we believe that companies, governments, and organisations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.”
There is an ongoing debate among AI experts, governments, regulators, tech firms and academics about how the development of the powerful emerging technology should be monitored or regulated.
Previous international summits have seen countries and tech firms sign non-binding agreements to develop AI “responsibly”, but no binding international law on the issue is yet in place.
In the past, Google’s contracts to provide technology, such as cloud services, to the US and Israeli military have sparked internal protests from employees.