Neo-Nazis Are All-In on AI

Extremists across the US have weaponized artificial intelligence tools to help them spread hate speech more efficiently, recruit new members, and radicalize online supporters at an unprecedented speed and scale, according to a new report from the Middle East Media Research Institute (MEMRI), an American non-profit press monitoring organization.

The report found that AI-generated content is now a mainstay of extremists’ output: They are developing their own extremist-infused AI models, and are already experimenting with novel ways to leverage the technology, including producing blueprints for 3D-printed weapons and recipes for making bombs.

Researchers at the Domestic Terrorism Threat Monitor, a group within the institute that specifically tracks US-based extremists, lay out in stark detail the scale and scope of AI use among domestic actors, including neo-Nazis, white supremacists, and anti-government extremists.

“There initially was a bit of hesitation around this technology and we saw a lot of debate and discussion among [extremists] online about whether this technology could be used for their purposes,” Simon Purdue, director of the Domestic Terrorism Threat Monitor at MEMRI, told reporters in a briefing earlier this week. “In the last few years we’ve gone from seeing occasional AI content to AI being a significant portion of hateful propaganda content online, particularly when it comes to video and visual propaganda. So as this technology develops, we’ll see extremists use it more.”

As the US election approaches, Purdue’s team is tracking a number of troubling developments in extremists’ use of AI technology, including the widespread adoption of AI video tools.

“The biggest trend we’ve noticed [in 2024] is the rise of video,” says Purdue. “Last year, AI-generated video content was very basic. This year, with the release of OpenAI’s Sora, and other video generation or manipulation platforms, we’ve seen extremists using these as a means of producing video content. We’ve seen a lot of excitement about this as well, a lot of individuals are talking about how this could allow them to produce feature length films.”

Extremists have already used this technology to create videos depicting President Joe Biden using racial slurs during a speech and actress Emma Watson reading Mein Kampf aloud while dressed in a Nazi uniform.

Last year, WIRED reported on how extremists linked to Hamas and Hezbollah were leveraging generative AI tools to undermine the hash-sharing database that allows Big Tech platforms to quickly remove terrorist content in a coordinated fashion. There is currently no available solution to this problem.
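The reason generated content undermines that system is structural: hash-sharing works by fingerprinting known harmful files and checking new uploads against those fingerprints, so a file that has never existed before matches nothing. The sketch below illustrates that matching logic under simplifying assumptions; it uses an exact-match hash set and made-up function names for illustration (real shared databases rely on perceptual hashing), and it is not any platform's actual code.

```python
import hashlib

# Minimal illustration of hash-based content matching. Names here are
# illustrative; real shared databases use perceptual hashes rather than
# cryptographic ones, but the evasion dynamic is the same.

# In practice this set would be populated from the shared industry list.
KNOWN_HASHES: set[str] = set()

def fingerprint(media_bytes: bytes) -> str:
    """Exact-match fingerprint of an uploaded file's bytes."""
    return hashlib.sha256(media_bytes).hexdigest()

def is_previously_flagged(media_bytes: bytes) -> bool:
    """True only if this exact file was flagged and shared before."""
    return fingerprint(media_bytes) in KNOWN_HASHES

# A re-upload of an already-flagged file matches immediately, but an
# AI-generated variant is a file that has never existed before: its
# fingerprint appears nowhere in the shared set, so exact-hash matching
# alone cannot catch it.
```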

Adam Hadley, the executive director of Tech Against Terrorism, says he and his colleagues have already archived tens of thousands of AI-generated images created by far-right extremists.

“This technology is being utilized in two primary ways,” Hadley tells WIRED. “Firstly, generative AI is used to create and manage bots that operate fake accounts, and secondly, just as generative AI is revolutionizing productivity, it is also being used to generate text, images, and videos through open-source tools. Both these uses illustrate the significant risk that terrorist and violent content can be produced and disseminated on a large scale.”
