The Emergence of Generative AI Tools in Extremist Propaganda: A Rising Security Concern


Extremist groups increasingly use generative AI tools to create propaganda, posing a formidable threat to digital security efforts. This article discusses the implications of this trend and the countermeasures being developed to combat it.

Extremist groups are increasingly harnessing the power of artificial intelligence, specifically generative AI, to produce and circulate propaganda. This rising trend could potentially undermine the efforts of Big Tech companies in recent years to curb the spread of extremist content online.

“Our biggest concern is that if terrorists start using gen AI to manipulate imagery at scale, this could well destroy hash-sharing as a solution,” says Adam Hadley, executive director of Tech Against Terrorism.

For years, major tech platforms have built databases of known violent extremist content, called hashing databases, which are shared across platforms so that such material can be rapidly and automatically removed. However, Hadley and his colleagues now identify roughly 5,000 pieces of AI-generated content each week. This includes images recently circulated by groups affiliated with Hezbollah and Hamas that appear designed to shape the narrative around the Israel-Hamas conflict.
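The mechanics of hash-sharing help explain Hadley's concern. A minimal, purely illustrative sketch of the idea is below; real systems such as Microsoft's PhotoDNA use far more robust perceptual hashes, and the 8x8 average-hash, the images, and the distance threshold here are invented for demonstration only. A lightly re-encoded copy of a known image hashes to nearly the same fingerprint and is caught, but an image regenerated from scratch by a generative model shares no pixel structure with the original, so its hash lands far from every database entry.

```python
# Toy "hash-sharing" demo: NOT any platform's production system.
# An average-hash turns an 8x8 grayscale grid into a 64-bit fingerprint;
# matching is done by Hamming distance against a shared database.

def average_hash(pixels):
    """Compute a 64-bit perceptual hash from an 8x8 grayscale grid."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # Each bit records whether one pixel is brighter than the mean.
    return sum(1 << i for i, p in enumerate(flat) if p > mean)

def hamming(h1, h2):
    """Number of differing bits between two fingerprints."""
    return bin(h1 ^ h2).count("1")

def matches(image, db, threshold=10):
    """True if the image is within `threshold` bits of a known hash."""
    h = average_hash(image)
    return any(hamming(h, known_h) <= threshold for known_h in db)

# Synthetic stand-ins: a "known" image, a mildly re-encoded re-upload,
# and an unrelated image (as a regenerated one would effectively be).
known     = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
reupload  = [[min(255, v + 3) for v in row] for row in known]
unrelated = [[(255 - (r * 31 + c * 17)) % 256 for c in range(8)]
             for r in range(8)]

db = {average_hash(known)}
```

Here `matches(reupload, db)` is true because mild re-encoding barely moves the fingerprint, while `matches(unrelated, db)` is false: content with the same message but freshly generated pixels defeats the lookup, which is exactly the evasion scenario Hadley describes.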

The Threat of Generative AI in the Hands of Extremists

Hadley is concerned that within six months, extremist groups could be using generative AI to manipulate imagery at scale and circumvent the hashing databases. He noted, “The tech sector has done so well to build automated technology, terrorists could well start using gen AI to evade what’s already been done.”

Researchers at Tech Against Terrorism have recently discovered a neo-Nazi messaging channel sharing AI-generated imagery created with racist and antisemitic prompts on an app available on the Google Play store. They also found far-right figures producing a “guide to memetic warfare,” advising others how to use AI-generated image tools to create extremist memes. Other discoveries include a tech support guide published by the Islamic State on securely using generative AI tools and a pro-al-Qaeda outlet publishing several posters likely created with a generative AI platform.

Generative AI: A Double-Edged Sword

Alongside the threat posed by generative AI tools themselves, Tech Against Terrorism has identified other ways the technology could aid extremist groups: autotranslation tools that can swiftly convert propaganda into multiple languages, or personalized messages generated at scale for online recruitment. However, Hadley believes AI also offers an opportunity to preempt extremist groups and use the technology to counteract their efforts.

He announced a partnership with Microsoft to explore creating a gen AI detection system to counter the emerging threat of generative AI being used to produce terrorist content at scale. “We’re confident that gen AI can be used to defend against hostile uses of gen AI,” Hadley said.

The partnership was unveiled on the eve of the Christchurch Call Leaders’ Summit, an initiative designed to eradicate terrorism and extremist content from the internet. Brad Smith, vice chair and president at Microsoft, emphasized the issue’s urgency: “By combining Tech Against Terrorism’s capabilities with AI, we hope to help create a safer world both online and off.”

Addressing the Threat on Smaller Platforms

While tech giants like Microsoft, Google, and Facebook have their own AI research divisions and are likely already deploying resources to tackle this issue, the new initiative will ultimately assist smaller platforms that lack the resources to combat these threats independently. As Hadley noted, smaller platforms can quickly become overwhelmed by extremist content, even with the hashing databases.

The perils of AI-generated content are not confined to extremist groups. A recent report by the Internet Watch Foundation, a UK-based nonprofit working to eliminate child exploitation content from the internet, highlighted the growing presence of child sexual abuse material (CSAM) created by AI tools on the dark web. The researchers discovered over 20,000 AI-generated images posted to a dark web CSAM forum in a single month, many of which they assessed as most likely criminal.

As the threat of generative AI continues to grow and evolve, the digital security community must stay ahead of the curve, developing innovative solutions to neutralize these risks and ensure AI technology’s safe and responsible use.